diff --git a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/quora/README.md b/spaces/101-5/gpt4free/g4f/.v1/gpt4free/quora/README.md
deleted file mode 100644
index 88fd0093edb7e7d511bbde9d84d6839b2fd082bd..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/quora/README.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-> ⚠ Warning !!!
-poe.com has added security measures and can detect automated requests, so your account may be banned if you use this API.
-The regular (non-driver) API is also currently not very stable.
-
-
-### Example: `quora (poe)` (use like openai pypi package) - GPT-4
-
-```python
-# quora model names: (use left key as argument)
-models = {
- 'sage' : 'capybara',
- 'gpt-4' : 'beaver',
- 'claude-v1.2' : 'a2_2',
- 'claude-instant-v1.0' : 'a2',
- 'gpt-3.5-turbo' : 'chinchilla'
-}
-```
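-
-For illustration, the left-hand key is what you pass as the `model` argument in the completion calls shown below (a minimal sketch only; `token` is assumed to come from `quora.Account.create`, demonstrated in the next section):
-
-```python
-# minimal usage sketch: 'sage' is one of the keys from the mapping above;
-# `token` is assumed to have been created via quora.Account.create(...)
-response = quora.Completion.create(model='sage', prompt='hello world', token=token)
-print(response.completion.choices[0].text)
-```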
-
-### New: bot creation
-
-```python
-# import quora (poe) package
-from gpt4free import quora
-
-# create account
-# make sure to set enable_bot_creation to True
-token = quora.Account.create(logging=True, enable_bot_creation=True)
-
-model = quora.Model.create(
- token=token,
- model='gpt-3.5-turbo', # or claude-instant-v1.0
- system_prompt='you are ChatGPT a large language model ...'
-)
-
-print(model.name) # gptx....
-
-# streaming response
-for response in quora.StreamingCompletion.create(
- custom_model=model.name,
- prompt='hello world',
- token=token):
- print(response.completion.choices[0].text)
-```
-
-### Normal Response:
-```python
-
-response = quora.Completion.create(model = 'gpt-4',
- prompt = 'hello world',
- token = token)
-
-print(response.completion.choices[0].text)
-```
-
-### Update: Use This for Poe
-```python
-from gpt4free.quora import Poe
-
-# available models: ['Sage', 'GPT-4', 'Claude+', 'Claude-instant', 'ChatGPT', 'Dragonfly', 'NeevaAI']
-
-poe = Poe(model='ChatGPT', driver='firefox', cookie_path='cookie.json', driver_path='path_of_driver')
-poe.chat('who won the football world cup most?')
-
-# new bot creation
-poe.create_bot('new_bot_name', prompt='You are new test bot', base_model='gpt-3.5-turbo')
-
-# delete account
-poe.delete_account()
-```
-
-### Deleting the Poe Account
-```python
-from gpt4free import quora
-
-quora.Account.delete(token='')
-```
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Buku Ipa Kelas 9 Penerbit Erlangga BEST.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Buku Ipa Kelas 9 Penerbit Erlangga BEST.md
deleted file mode 100644
index 4732bb7780eb9e2b3b6671fde39923066cb74e8f..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Buku Ipa Kelas 9 Penerbit Erlangga BEST.md
+++ /dev/null
@@ -1,159 +0,0 @@
-
-
Download the Grade 9 Science (IPA) Textbook from Penerbit Erlangga
-
IPA Terpadu 3 SMP/MTs Kelas IX is one of the textbooks used by grade 9 students at schools that follow the revised 2013 Curriculum (Kurikulum 2013 Revisi). It was written by Tim Abdi Guru and published by Erlangga, one of Indonesia's leading educational publishers. The book presents science material in a complete, integrated way, with a scientific approach and a set of engaging features. How can you download it for free and legally? What are the advantages of using e-Library Erlangga as an online learning resource? Read the full review in this article.
IPA Terpadu 3 SMP/MTs Kelas IX is aimed at grade 9 students at schools using the revised 2013 Curriculum. It covers the knowledge, skill, and attitude competencies students must master when studying science. The book consists of eight chapters:
-
-
Chapter 1: The Movement System in Humans and Animals
-
Chapter 2: The Excretory System in Humans and Animals
-
Chapter 3: The Reproductive System in Humans and Plants
-
Chapter 4: The Regulatory System in Humans and Plants
-
Chapter 5: The Immune System in Humans
-
Chapter 6: The Coordination System in Humans
-
Chapter 7: The Digestive System in Humans
-
Chapter 8: The Circulatory System in Humans
-
-
Features of IPA Terpadu 3 SMP/MTs Kelas IX
-
IPA Terpadu 3 SMP/MTs Kelas IX has several features that set it apart from other textbooks, including:
-
-
A Concept Map (Peta Konsep) at the start of each chapter, helping students see how the concepts in that chapter relate to one another.
-
Learning material presented with a scientific approach covering observing, questioning, gathering information, associating, and communicating, so students can easily grasp the concepts, principles, and theories being studied.
-
Jelajah Konsep (Concept Exploration), containing experiments on the material being studied.
-
Fokus IPA (Science Focus), containing important, up-to-date information related to the material.
-
Bintang IPA (Science Stars), profiling scientists and inventors closely tied to the material.
-
Uji Kompetensi (Competency Tests), with questions on the basic competencies students must master.
-
Terampil IPA (Science Skills), covering how to present, process, and analyze data from science experiments related to the concepts studied, including writing experiment and observation reports and producing scientific work.
-
End-of-chapter tests consisting of three types of assessment: written, project, and product. These cover three aspects: knowledge, skills, and attitude.
-
Character Development sections listing character values that can be built up during learning.
-
-
Benefits of IPA Terpadu 3 SMP/MTs Kelas IX
-
IPA Terpadu 3 SMP/MTs Kelas IX offers many benefits for both students and teachers, including:
-
-
Enriching students' knowledge of science in a complete, integrated way.
-
Improving students' skills in carrying out experiments, presenting and analyzing data, writing reports, and producing scientific work.
-
Developing students' scientific attitude towards studying science.
-
Fostering students' interest and talent in science.
-
Encouraging students to learn independently and actively.
-
Helping teachers prepare lesson plans in line with the revised 2013 Curriculum.
-
Giving teachers complete, engaging teaching material.
-
Giving teachers varied, comprehensive assessment material.
-
-
How Do You Download IPA Terpadu 3 SMP/MTs Kelas IX?
-
IPA Terpadu 3 SMP/MTs Kelas IX is now also available as an e-Book (digital book), which can be downloaded free of charge and legally through e-Library Erlangga. e-Library Erlangga is a digital library that provides Erlangga's e-Book collection for every level of education and can be accessed via its website or mobile app. Downloading IPA Terpadu 3 SMP/MTs Kelas IX from e-Library Erlangga takes a few steps.
-
Steps to Download IPA Terpadu 3 SMP/MTs Kelas IX from e-Library Erlangga
-
As a Library Administrator
-
If you are a librarian at a school or educational institution and want to download IPA Terpadu 3 SMP/MTs Kelas IX for your library's collection, follow these steps:
-
-
Visit the e-Library Erlangga website at https://e-library.erlanggaonline.co.id
-
Click Daftar (Register) on the e-Library Erlangga administrator login page.
-
Fill in the form using your school's or institution's official email address (not a personal one), then click Register Sekarang (Register Now).
-
Contact customer service at 0819-1150-0885 to have the account verified.
-
Log in once the account has been verified.
-
Open the Kode Aktivasi (Activation Code) menu listed on the KBEL (Kartu Berlangganan E-Library, the e-Library subscription card). Select the e-Book title printed on the card, enter the activation code, and click Aktifkan (Activate).
-
IPA Terpadu 3 SMP/MTs Kelas IX will then appear in your e-Library Erlangga catalog.
-
-
As a Library Member
-
If you are a student or teacher who wants to download IPA Terpadu 3 SMP/MTs Kelas IX from your school's e-Library Erlangga, follow these steps:
-
-
Install the e-Library Erlangga app from the Play Store or App Store. Download it here: https://erlangga.co.id/ebook/
-
Create an account using your school's e-Library Erlangga email address.
-
Wait until the account has been verified by your school's librarian.
-
Log in once it has been verified.
-
Borrow an e-Book from the e-Library Erlangga catalog: search for IPA Terpadu 3 SMP/MTs Kelas IX and click Pinjam (Borrow).
-
The book will be saved to your digital bookshelf. Click Baca (Read) to open it.
-
You can read the book online or offline. To read it offline, make sure you download it first by clicking Unduh (Download).
-
When the loan period ends, you can return the book by clicking Kembalikan (Return) or extend the loan by clicking Perpanjang (Extend).
-
-
Tips and Tricks for Downloading IPA Terpadu 3 SMP/MTs Kelas IX Quickly and Easily
-
Here are a few tips and tricks that can help you download the book quickly and easily:
-
-
Make sure you have a fast, stable internet connection when downloading the book.
-
Make sure your device has enough free storage space for the download.
-
Use your school's or institution's official email when registering an e-Library Erlangga account so the librarian can verify it easily.
-
Borrow the book within the loan quota set by your school's or institution's library.
-
Return the book or extend the loan before the loan period runs out, so you are not hit with fines or penalties.
-
Keep your e-Library Erlangga account details confidential and do not share them with anyone else.
-
-
What Are the Advantages of Using e-Library Erlangga?
-
e-Library Erlangga is a digital library that provides Erlangga's e-Book collection for every level of education. It offers many advantages for students, teachers, and schools, including the following.
-
Better School Quality Management
-
e-Library Erlangga can help schools improve the quality of their education management by:
-
-
-
Cutting the cost of buying printed books, which are expensive and easily damaged.
-
Saving the limited storage space that printed books need, along with their special upkeep.
-
Saving the time otherwise spent on difficult, tedious book searches.
-
Making books easier and more flexible to access.
-
Improving the availability of books that are always up to date and complete.
-
Improving the quality of the books on offer, with titles that match the curriculum.
-
-
Technology-Based Learning
-
e-Library Erlangga can support technology-based learning by:
-
-
Providing an interactive, engaging online learning platform.
-
Providing useful and varied online learning features.
-
Providing online learning media that are easy to access and use.
-
Encouraging the use of technology in teaching and learning.
-
Encouraging creativity and innovation in teaching and learning.
-
Encouraging collaboration and communication in teaching and learning.
-
-
School Accreditation / School-Library Accreditation
-
e-Library Erlangga can help a school earn school or school-library accreditation by:
-
-
Meeting one of the school-accreditation requirements, namely owning e-Books (as regulated by law).
-
Meeting the national school-library standard that digital items make up at least 10% of the total collection (as regulated by ministerial decree).
-
Improving the school library's performance in services, collection development, human-resource management, budgeting, facilities management, and partnerships and networking.
-
Supporting the “Gemar Baca” (Love of Reading) and Literacy Programs
-
e-Library Erlangga can support the “Gemar Baca” and literacy programs by:
-
-
Providing an attractive, varied e-Book collection for students' different reading interests.
-
Providing e-Book features that support reading, such as zoom, bookmark, highlight, note, and dictionary tools.
-
Providing e-Book features that support writing, such as copy-paste, share, and print.
-
Running reading-promotion programs such as reading competitions, reading discussions, and book reviews.
-
Running literacy programs such as writing workshops, literacy seminars, and literacy festivals.
-
-
What Other Features Does e-Library Erlangga Offer?
-
Besides Erlangga's e-Book collection for every level of education, e-Library Erlangga has other features that can help students and teachers in the teaching and learning process, including:
-
Platform Learning Management System (LMS)
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Audi Update Software Cd V 5570 Mmi 2g High A6 4f Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Audi Update Software Cd V 5570 Mmi 2g High A6 4f Download.md
deleted file mode 100644
index f3acbcef2585bb4f83b223c306f700139e806f2f..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Audi Update Software Cd V 5570 Mmi 2g High A6 4f Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
audi update software cd v 5570 mmi 2g high a6 4f download
-
-Audi MMI V5570/20w MMI Update CD. Audi MMI V5570/20w MMI Update. No problems at all; you can upgrade without any trouble. AUDI Update Software CD V 5570 MMI. 22 Mar 2018. Audi MMI 2G Software Update 3 CD.
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer The Ultimate Simulation Game with Free Open World and Racing.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer The Ultimate Simulation Game with Free Open World and Racing.md
deleted file mode 100644
index 4a6df51664e2edd7b80c9d4577bcafd71a2cfc32..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer The Ultimate Simulation Game with Free Open World and Racing.md
+++ /dev/null
@@ -1,162 +0,0 @@
-
-
Car Parking Free: Tips, Benefits, and Statistics
-
Are you tired of wasting time, money, and energy looking for a parking spot? Do you want to enjoy the convenience and peace of mind of parking your car for free? If yes, then this article is for you.
In this article, we will explain what car parking free is, why it is important, and how you can find it. We will also share some useful tips, benefits, and statistics on car parking free that will help you make the most of this opportunity. Let's get started!
-
Introduction
-
What is car parking free?
-
Car parking free is a term that refers to any parking space that does not require you to pay a fee or a ticket. It can be found in various places, such as public streets, residential areas, shopping malls, parks, airports, or other locations that offer free parking for their customers or visitors.
-
Car parking free can also be provided by some employers, organizations, or institutions that have their own parking lots or garages. They may offer free parking for their employees, members, students, or guests as a perk or a benefit.
-
Why is car parking free important?
-
Car parking free is important because it can make your life easier and happier in many ways. Here are some of the reasons why car parking free matters:
-
-
It can save you time and money. You don't have to spend hours searching for a parking spot or paying expensive fees or fines. You can use that time and money for other things that matter more to you.
-
It can reduce stress and frustration. You don't have to deal with traffic jams, crowded lots, or rude drivers. You can park your car quickly and easily without any hassle or worry.
-
It can protect your car from damage and theft. You don't have to worry about your car getting scratched, dented, or broken into by other vehicles or people. You can park your car in a safe and secure location that minimizes the risk of accidents or crimes.
-
It can improve your driving skills and confidence. You don't have to worry about parallel parking, reversing, or maneuvering in tight spaces. You can park your car in a spacious and comfortable spot that suits your level of experience and comfort.
-
-
Tips for car parking free
-
Use smart parking apps and solutions
-
One of the best ways to find car parking free is to use smart parking apps and solutions that can help you locate, reserve, and pay for parking spaces online or on your smartphone. Some examples of these apps and solutions are:
-
-
Car Parking Multiplayer: This is a simulation game that lets you park your car in various scenarios and modes. You can also customize your car, compete with other players, and explore an open world with real gas stations and car services.
-
BlueStacks: This is a platform that allows you to play Android games on your PC or Mac. You can use it to play Car Parking Multiplayer on a bigger screen with better graphics and performance.
-
Car Parking Multiplayer on the App Store: This is the iOS version of Car Parking Multiplayer that you can download on your iPhone or iPad. You can enjoy the same features as the Android version of the game, with some minor differences in the interface and controls.
-
Parkopedia: This is a website and app that provides information on parking availability, prices, and reviews for over 70 million parking spaces in 15,000 cities around the world. You can use it to find the best and cheapest parking options near your destination.
-
SpotHero: This is a website and app that allows you to book and pay for parking spaces in advance at discounted rates. You can use it to reserve a spot in a garage, lot, or valet service in over 300 cities across the US and Canada.
-
-
Follow the signs and rules
-
Another way to find car parking free is to follow the signs and rules that indicate where you can and cannot park your car. Some examples of these signs and rules are:
-
-
-
No Parking: This means that you cannot park your car at any time in this area. You may be fined or towed if you do so.
-
No Parking Except Sundays: This means that you can park your car on Sundays only in this area. You may be fined or towed if you park on any other day.
-
2 Hour Parking: This means that you can park your car for up to two hours in this area. You may be fined or towed if you exceed the time limit.
-
Resident Parking Only: This means that only residents with a valid permit can park their car in this area. You may be fined or towed if you are not a resident or do not have a permit.
-
Handicapped Parking Only: This means that only drivers with a valid handicapped placard or license plate can park their car in this area. You may be fined or towed if you are not handicapped or do not have the proper identification.
-
-
Park in a safe and secure location
-
A third way to find car parking free is to park your car in a safe and secure location that protects your car from damage and theft. Some examples of these locations are:
-
-
Well-lit areas: These are areas that have adequate lighting at night or during low-visibility conditions. They can deter potential thieves or vandals from targeting your car.
-
Busy areas: These are areas that have a lot of people or traffic around them. They can provide witnesses or help in case of an emergency or a crime involving your car.
-
Legal areas: These are areas that are not restricted or prohibited by law or by private property owners. They can prevent you from getting into trouble with the authorities or the owners of the land.
-
Covered areas: These are areas that have some form of shelter or protection from the elements, such as a roof, a canopy, or a tree. They can prevent your car from getting damaged by rain, snow, hail, sun, or wind.
-
Locked areas: These are areas that have some form of security or access control, such as a gate, a fence, or a guard. They can prevent unauthorized people from entering or exiting the area where your car is parked.
-
-
Benefits of car parking free
-
Save time and money
-
One of the main benefits of car parking free is that it can save you time and money. Here are some of the ways how:
-
-
You don't have to waste time looking for a parking spot or waiting in line to pay for one. You can park your car as soon as you find a free space and leave as soon as you are done with your business.
-
You don't have to spend money on parking fees or fines. You can park your car for free without worrying about paying anything or getting penalized for violating any rules.
-
You don't have to spend money on gas or maintenance. You can park your car closer to your destination and reduce the distance and time you have to drive. You can also avoid driving on rough or uneven surfaces that can damage your car's tires, suspension, or engine.
-
-
Reduce stress and frustration
-
Another benefit of car parking free is that it can reduce stress and frustration. Here are some of the ways how:
-
-
You don't have to deal with traffic jams, crowded lots, or rude drivers. You can park your car in a less congested and more courteous environment.
-
You don't have to worry about finding a parking spot or paying for one. You can park your car with ease and peace of mind.
-
You don't have to worry about your car getting damaged or stolen. You can park your car in a safe and secure location that minimizes the risk of accidents or crimes.
-
You don't have to worry about your driving skills or confidence. You can park your car in a spot that suits your level of experience and comfort.
-
-
Protect your car from damage and theft
-
A third benefit of car parking free is that it can protect your car from damage and theft. Here are some of the ways how:
-
-
You don't have to expose your car to harsh weather conditions, such as rain, snow, hail, sun, or wind. You can park your car in a covered or sheltered area that protects it from the elements.
-
You don't have to expose your car to dirt, dust, or debris. You can park your car in a clean or paved area that prevents it from getting dirty or scratched.
-
You don't have to expose your car to other vehicles or people. You can park your car in a spacious or isolated area that prevents it from getting bumped or broken into.
-
-
Improve your driving skills and confidence
-
A fourth benefit of car parking free is that it can improve your driving skills and confidence. Here are some of the ways how:
-
-
You don't have to rely on technology or assistance. You can park your car by yourself using your own judgment and intuition.
-
You don't have to follow a fixed or predetermined route. You can park your car wherever you want using your own creativity and flexibility.
-
You don't have to conform to a standard or expectation. You can park your car however you want using your own style and personality.
-
-
Statistics on car parking free
-
Global smart parking market size and growth
-
One of the statistics on car parking free is the global smart parking market size and growth. According to a report by Grand View Research, the global smart parking market size was valued at USD 5.7 billion in 2020 and is expected to grow at a compound annual growth rate (CAGR) of 18.4% from 2021 to 2028. The report attributes this growth to the increasing demand for efficient and convenient parking solutions, the rising adoption of Internet of Things (IoT) and artificial intelligence (AI) technologies, and the growing environmental and social awareness among consumers and governments.
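-
To make the growth figure concrete, here is a rough back-of-the-envelope projection (my own illustrative arithmetic, not a number quoted in the report; it assumes the 18.4% CAGR compounds the USD 5.7 billion 2020 base for the eight years through 2028):
-
```python
# Illustrative sketch only: compounds the reported 2020 base at the reported CAGR.
# The ~USD 22 billion result is an implied figure, not one stated by the report.
base_2020_usd_bn = 5.7      # reported 2020 market size, in USD billion
cagr = 0.184                # reported compound annual growth rate
years = 8                   # 2020 through 2028
implied_2028 = base_2020_usd_bn * (1 + cagr) ** years
print(f"Implied 2028 market size: ~USD {implied_2028:.1f} billion")  # ~22.0
```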
-
Car parking trends and challenges in different regions
-
Another statistic on car parking free is the car parking trends and challenges in different regions. According to a report by Parkopedia, the average parking price for two hours in 2020 was USD 5.46 globally, USD 8.95 in North America, USD 6.15 in Europe, USD 2.69 in Asia-Pacific, USD 1.77 in Latin America, and USD 1.28 in Africa. The report also identifies some of the key trends and challenges in each region, such as:
-
-
North America: The rise of contactless payments, the impact of COVID-19 on parking demand and supply, and the need for more efficient and sustainable parking management.
-
Europe: The expansion of low-emission zones, the adoption of mobility-as-a-service (MaaS) platforms, and the regulation of private parking operators.
-
Asia-Pacific: The rapid urbanization and motorization, the development of smart cities and connected vehicles, and the emergence of new mobility modes and services.
-
Latin America: The lack of parking infrastructure and enforcement, the high level of informality and corruption, and the social inequality and insecurity.
-
Africa: The low penetration of digital technologies, the high cost and scarcity of land, and the poor quality and safety of public transportation.
-
-
Car parking innovations and opportunities in the future
-
A third statistic on car parking free is the car parking innovations and opportunities in the future. According to a report by Frost & Sullivan, some of the key innovations and opportunities in the car parking industry by 2030 are:
-
-
The emergence of autonomous valet parking (AVP) systems that can park cars without human intervention.
-
The integration of blockchain technology that can enable secure and transparent transactions between parking providers and users.
-
The adoption of dynamic pricing models that can adjust parking fees based on demand and supply factors.
-
The implementation of green parking initiatives that can reduce carbon emissions and energy consumption from parking facilities.
-
The creation of smart parking ecosystems that can connect parking spaces with other mobility services and solutions.
-
-
Conclusion
-
Summary of the main points
-
In conclusion, car parking free is a term that refers to any parking space that does not require you to pay a fee or a ticket. It is important because it can save you time and money, reduce stress and frustration, protect your car from damage and theft, and improve your driving skills and confidence. You can find car parking free by using smart parking apps and solutions, following the signs and rules, and parking in a safe and secure location. You can also learn more about car parking free by looking at some of the statistics on the global smart parking market size and growth, the car parking trends and challenges in different regions, and the car parking innovations and opportunities in the future.
-
Call to action for the readers
-
Now that you know more about car parking free, we hope that you will take advantage of this opportunity and enjoy the benefits that it offers. If you want to play Car Parking Multiplayer on your PC or Mac, you can download BlueStacks for free and start playing today. If you want to find the best and cheapest parking options near your destination, you can use Parkopedia or SpotHero to search, compare, and book parking spaces online or on your smartphone. If you want to share your thoughts or experiences on car parking free, you can leave a comment below or contact us through our website. Thank you for reading and happy parking!
-
FAQs
-
Here are some of the frequently asked questions (FAQs) on car parking free:
-
-
Q: How can I tell if a parking space is free or not?
-
A: You can tell if a parking space is free or not by looking at the signs, markings, meters, or machines that indicate the parking rules and regulations in that area. You can also use smart parking apps and solutions that can show you the availability and price of parking spaces in real time.
-
Q: What are the risks or disadvantages of car parking free?
-
A: Some of the risks or disadvantages of car parking free are that it may be hard to find, especially in busy or popular areas; it may be subject to time limits or restrictions that may change depending on the day or hour; it may be located in remote or unsafe areas that may expose your car to damage or theft; or it may be illegal or unethical, especially if you park on private property without permission or on public property without paying taxes or fees.
-
Q: What are some of the best practices or tips for car parking free?
-
A: Some of the best practices or tips for car parking free are to plan ahead and do some research before you go; to use smart parking apps and solutions that can help you find, reserve, and pay for parking spaces online or on your smartphone; to follow the signs and rules that indicate where you can and cannot park your car; to park in a safe and secure location that protects your car from damage and theft; and to be courteous and respectful to other drivers and pedestrians.
-
Q: What are some of the trends or innovations in car parking free?
-
A: Some of the trends or innovations in car parking free are the emergence of autonomous valet parking (AVP) systems that can park cars without human intervention; the integration of blockchain technology that can enable secure and transparent transactions between parking providers and users; the adoption of dynamic pricing models that can adjust parking fees based on demand and supply factors; the implementation of green parking initiatives that can reduce carbon emissions and energy consumption from parking facilities; and the creation of smart parking ecosystems that can connect parking spaces with other mobility services and solutions.
-
Q: Where can I learn more about car parking free?
-
A: You can learn more about car parking free by reading this article, visiting our website, or following us on social media. You can also contact us through our website if you have any questions, comments, or suggestions.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator for Android The Best Way to Play Basara 2 Heroes - Download Link Inside.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator for Android The Best Way to Play Basara 2 Heroes - Download Link Inside.md
deleted file mode 100644
index 2dbaffc1c60b66aef5789c7f586d541666bf5261..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator for Android The Best Way to Play Basara 2 Heroes - Download Link Inside.md
+++ /dev/null
@@ -1,238 +0,0 @@
-
-
Download Link Basara 2 Heroes Dolphin Emulator Android
-
Do you love hack and slash games with historical and fantasy elements? Do you want to play one of the best games in the genre on your Android device? If so, you might be interested in downloading Basara 2 Heroes for Dolphin Emulator.
-
Basara 2 Heroes is a follow-up to Sengoku Basara 2, a game developed by Capcom for PlayStation 2 in 2006. It adds numerous new features and content, such as new playable characters, new game modes, and co-op multiplayer. The game is set in the Sengoku period of Japan, where you can control one of the many warlords and fight against hundreds of enemies in epic battles.
-
Dolphin Emulator is an app that allows you to play GameCube and Wii games on your Android device. It is an open-source project that has been in development since 2003, and it has improved significantly over the years. It supports many games with high compatibility and performance, as well as various enhancements and features.
-
In this article, we will show you how to download Basara 2 Heroes for Dolphin Emulator, how to install and configure Dolphin Emulator on your Android device, and how to play Basara 2 Heroes on Dolphin Emulator. Follow these steps carefully and you will be able to enjoy this amazing game on your Android device.
-
How to Download Basara 2 Heroes for Dolphin Emulator
-
Download the Game ISO File
-
The first thing you need to do is to download the game ISO file for Basara 2 Heroes. This is a file that contains all the data of the game disc, which you can use with Dolphin Emulator. However, you must own and acquire your own game legally, as downloading games that you do not own is illegal and unethical.
-
There are many websites that offer game ISO files for download, but not all of them are safe and reliable. Some of them may contain viruses, malware, or fake files that can harm your device or compromise your privacy. Therefore, you should only download game ISO files from reputable sources that have positive reviews and feedback from users.
-
One of the best sources for downloading game ISO files is [CoolROM], a website that has a large collection of game ISO files for various platforms, including GameCube and Wii. You can search for Basara 2 Heroes on the website and find the game ISO file that matches your region and language. You can also check the ratings, comments, and screenshots of the game before downloading it.
-
To download the game ISO file from CoolROM, you need to follow these steps:
-
-
Go to [CoolROM] and type "Basara 2 Heroes" in the search box. Press enter or click on the magnifying glass icon.
-
On the search results page, find the game that matches your region and language. For example, if you want to play the English version of the game, you should look for "Sengoku Basara 2 - Heroes (Japan) (En,Ja)". Click on the game title to go to the game page.
-
On the game page, scroll down and click on the "Download Now" button. This will take you to another page where you need to wait for a few seconds and then click on the "Download Your File" button. This will start the download of the game ISO file.
-
Save the game ISO file to a folder on your device or external storage. Make sure you remember the location of the file, as you will need it later.
-
-
Download the Dolphin Emulator App
-
The next thing you need to do is to download the Dolphin Emulator app for Android. This is an app that allows you to play GameCube and Wii games on your Android device. It is an open-source project that has been in development since 2003, and it has improved significantly over the years. It supports many games with high compatibility and performance, as well as various enhancements and features.
-
The best way to download the Dolphin Emulator app for Android is to get it from the official Google Play Store page. This way, you can ensure that you get the latest and most stable version of the app, as well as receive automatic updates and notifications. You can also avoid any potential risks of downloading fake or malicious apps from unknown sources.
-
-
To download the Dolphin Emulator app from the Google Play Store, you need to follow these steps:
-
-
Go to [Dolphin Emulator] on the Google Play Store and tap on the "Install" button. This will start the download and installation of the app on your device.
-
Wait for the app to finish installing and then open it. You will see a welcome screen with some information and tips about using Dolphin Emulator. Tap on "Next" to proceed.
-
You will see a screen asking you to grant Dolphin Emulator access to your device's storage. This is necessary for Dolphin Emulator to scan for game files and save your settings and progress. Tap on "Allow" to grant permission.
-
You will see a screen asking you to enable controller support for Dolphin Emulator. This is optional, but recommended if you want to use an external controller to play games. Tap on "Enable" to grant permission.
-
-
How to Install and Configure Dolphin Emulator on Android
-
Install the App and Grant Permissions
-
You have already installed the app and granted permissions in the previous section, so you can skip this step if you have done so. However, if you have not installed the app or granted permissions yet, you need to do so before proceeding.
-
To install the app and grant permissions, follow these steps:
-
-
Go to [Dolphin Emulator] on the Google Play Store and tap on the "Install" button. This will start the download and installation of the app on your device.
-
Wait for the app to finish installing and then open it. You will see a welcome screen with some information and tips about using Dolphin Emulator. Tap on "Next" to proceed.
-
You will see a screen asking you to grant Dolphin Emulator access to your device's storage. This is necessary for Dolphin Emulator to scan for game files and save your settings and progress. Tap on "Allow" to grant permission.
-
You will see a screen asking you to enable controller support for Dolphin Emulator. This is optional, but recommended if you want to use an external controller to play games. Tap on "Enable" to grant permission.
-
-
Scan for Game Files and Add Basara 2 Heroes to Library
-
The next thing you need to do is to scan for game files and add Basara 2 Heroes to the Dolphin Emulator library. This will allow you to launch and play the game from the app. To do this, you need to have the game ISO file that you downloaded in the previous section.
-
To scan for game files and add Basara 2 Heroes to the library, follow these steps:
-
-
On the main screen of Dolphin Emulator, tap on the "+" icon at the top right corner. This will open a file browser where you can navigate to the folder where you saved the game ISO file.
-
Find and select the game ISO file for Basara 2 Heroes. Tap on "OK" to confirm. This will add the game to the Dolphin Emulator library.
-
You will see a thumbnail of the game on the main screen of Dolphin Emulator. Tap on it to see more details and options for the game.
-
-
Adjust the Settings for Optimal Performance and Compatibility
-
The last thing you need to do before playing Basara 2 Heroes on Dolphin Emulator is to adjust the settings for optimal performance and compatibility. This will ensure that you get the best possible gaming experience on your Android device. However, keep in mind that different devices may have different capabilities and limitations, so you may need to experiment with different settings to find what works best for you.
-
To adjust the settings for optimal performance and compatibility, follow these steps:
-
-
On the main screen of Dolphin Emulator, tap on the menu icon at the top left corner. This will open a sidebar menu where you can access various options and settings.
-
Tap on "Settings" to open the settings menu. Here you can adjust various settings for graphics, audio, controls, and enhancements.
-
For graphics settings, we recommend the following:
-
-
Video Backend: Choose "OpenGL" or "Vulkan" depending on your device's support and preference.
-
Aspect Ratio: Choose "Auto" or "Stretch to Window" depending on your preference.
-
Show FPS: Enable this option if you want to see the frames per second (FPS) of the game.
-
Internal Resolution: Choose a resolution that matches your device's screen resolution or lower. Higher resolutions may improve the image quality, but they may also reduce the performance and cause lag or stuttering.
-
Anisotropic Filtering: Choose a level of filtering that improves the texture quality without affecting the performance too much. We recommend 2x or 4x.
-
Anti-Aliasing: Choose a level of anti-aliasing that smooths out the jagged edges without affecting the performance too much. We recommend None or 2x MSAA.
-
-
For audio settings, we recommend the following:
-
-
Audio Backend: Choose "OpenSL ES" or "Cubeb" depending on your device's support and preference.
-
Audio Stretching: Enable this option if you want to reduce audio crackling and sync issues.
-
Volume: Adjust the volume level according to your preference.
-
-
For control settings, we recommend the following:
-
-
Input Device: Choose "Emulated Wii Remote" or "Emulated GameCube Controller" depending on the game's controller support.
-
Edit Layout: Tap on this option if you want to customize the layout of the on-screen controller. You can resize, reposition, and rearrange the buttons according to your preference.
-
Configure Controller: Tap on this option if you want to configure an external controller for Dolphin Emulator. You can map the buttons and axes of your controller to match the game's controls.
-
-
For enhancement settings, we recommend the following:
-
-
Scaled EFB Copy: Enable this option if you want to improve some effects and textures in some games.
-
Force Texture Filtering: Enable this option if you want to improve some textures in some games.
-
Disable Fog: Disable this option if you want to preserve some atmospheric effects in some games.
-
Widescreen Hack: Enable this option if you want to play games in widescreen mode. However, this may cause some graphical glitches or distortions in some games.
-
-
-
How to Play Basara 2 Heroes on Dolphin Emulator
-
Choose a Game Mode and Character
-
Now that you have downloaded, installed, and configured Dolphin Emulator and Basara 2 Heroes, you are ready to play the game. To start playing, follow these steps:
-
-
On the main screen of Dolphin Emulator, tap on the thumbnail of Basara 2 Heroes. This will launch the game and show the title screen.
-
On the title screen, press the "Start" button to go to the main menu. Here you can choose from different game modes, such as Story Mode, Free Mode, Versus Mode, and Survival Mode. Each mode has its own objectives and challenges.
-
For example, if you choose Story Mode, you can select one of the 16 playable characters and follow their story through a series of stages. Each character has their own personality, skills, weapons, and allies. You can also unlock more characters and content by completing certain conditions.
-
After choosing a game mode and a character, you can also customize some options, such as the difficulty level, the number of lives, the time limit, and the sound settings.
-
-
Use the On-Screen or External Controller
-
To play Basara 2 Heroes on Dolphin Emulator, you can use either the on-screen controller or an external controller. The on-screen controller is a virtual controller that appears on your device's screen and mimics the original GameCube or Wii controller. The external controller is a physical controller that you can connect to your device via Bluetooth or USB.
-
To use the on-screen controller, you need to tap on the buttons and move the analog sticks on your device's screen. You can also customize the layout of the on-screen controller by tapping on the "Edit Layout" option in the control settings menu.
-
To use an external controller, you need to pair it with your device and configure it in Dolphin Emulator. You can do this by tapping on the "Configure Controller" option in the control settings menu. You can also map the buttons and axes of your external controller to match the game's controls.
-
The basic controls for Basara 2 Heroes are as follows:
-
-
-
| Button | Function |
| --- | --- |
| A | Normal Attack |
| B | Special Attack |
| X | Jump |
| Y | Basara Attack (when gauge is full) |
| Z | Taunt (increase Basara gauge) |
| L | Guard / Evade (with analog stick) |
| R | Lock-on / Change Target (with analog stick) |
| D-Pad | Select Ally / Order Ally (with A button) |
| Start | Pause / Menu |
| Analog Stick | Move Character / Camera (when locked-on) |
| C-Stick | Move Camera (when not locked-on) |
-
-
Enjoy the Game on Your Android Device
-
You are now ready to enjoy Basara 2 Heroes on your Android device. You can experience the thrilling and fast-paced action of hacking and slashing through hundreds of enemies with your favorite character. You can also explore the rich and colorful history and culture of Japan during the Sengoku period.
-
Playing Basara 2 Heroes on Dolphin Emulator has some benefits and drawbacks compared to playing it on a console. Some of the benefits are:
-
-
You can play the game anytime and anywhere on your Android device.
-
You can enhance the game's graphics and audio with Dolphin Emulator's features.
-
You can save and load your progress with Dolphin Emulator's save states.
You can use an external controller to play the game more comfortably.
-
-
Some of the drawbacks are:
-
-
You may encounter some compatibility or performance issues with some devices or games.
-
You may need to adjust the settings for each game to get the best results.
-
You may need to download and install additional files or apps to play the game.
-
You may need to own and acquire the game legally, as downloading games that you do not own is illegal and unethical.
-
-
Despite these drawbacks, playing Basara 2 Heroes on Dolphin Emulator is still a great way to enjoy this classic game on your Android device. You can have fun and learn something new at the same time.
-
Conclusion
-
In this article, we have shown you how to download link Basara 2 Heroes Dolphin Emulator Android. We have explained what Basara 2 Heroes and Dolphin Emulator are, how to download them, how to install and configure them, and how to play them. We have also provided some tips and tricks for getting the best gaming experience on your Android device.
-
Basara 2 Heroes is a hack and slash game that lets you control one of the many warlords of Japan during the Sengoku period. You can fight against hundreds of enemies in epic battles, using your skills, weapons, and allies. You can also choose from different game modes, such as Story Mode, Free Mode, Versus Mode, and Survival Mode.
-
Dolphin Emulator is an app that lets you play GameCube and Wii games on your Android device. It is an open-source project that has been in development since 2003, and it has improved significantly over the years. It supports many games with high compatibility and performance, as well as various enhancements and features.
-
If you want to play Basara 2 Heroes on your Android device, you need to follow these steps:
-
-
Download the game ISO file for Basara 2 Heroes from a reputable source.
-
Download the Dolphin Emulator app from the Google Play Store.
-
Install the app and grant permissions for storage access and controller support.
-
Scan for game files and add Basara 2 Heroes to the Dolphin Emulator library.
-
Adjust the settings for optimal performance and compatibility.
-
Choose a game mode and character in Basara 2 Heroes.
-
Use the on-screen or external controller to play Basara 2 Heroes on Dolphin Emulator.
-
Enjoy the game on your Android device.
-
-
We hope that this article has helped you learn how to download link Basara 2 Heroes Dolphin Emulator Android. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!
-
FAQs
-
Q: Is Dolphin Emulator legal?
-
A: Dolphin Emulator is legal, as it is an open-source project that does not violate any laws or copyrights. However, downloading games that you do not own is illegal and unethical. You should only download games that you own and acquire them legally and safely. You should also respect the developers and publishers of the games and support them if you can.
-
Q: What are the system requirements for Dolphin Emulator?
-
A: Dolphin Emulator does not have a fixed set of system requirements, as different games and settings may have different demands. However, in general, you need a device that has the following specifications or higher:
-
-
Android 5.0 (Lollipop) or later
-
64-bit processor (ARMv8 or x86_64)
-
OpenGL ES 3.0 or Vulkan support
-
2 GB of RAM or more
-
8 GB of storage or more
-
-
You can check your device's specifications by going to the "Settings" app and tapping on "About Phone" or "About Device". You can also use a third-party app like [CPU-Z] to get more detailed information about your device.
-
Q: How can I improve the performance of Dolphin Emulator?
-
A: There are several ways to improve the performance of Dolphin Emulator, such as:
-
-
Lowering the internal resolution and anti-aliasing settings in the graphics settings menu.
-
Disabling some enhancements and features that are not essential for the game.
-
Closing other apps and background processes that may consume resources and battery.
-
Using a device cooler or fan to prevent overheating and throttling.
-
Updating the app and the device's software to the latest versions.
-
-
However, keep in mind that some games may be more demanding than others, and some devices may have more limitations than others. Therefore, you may not be able to achieve a smooth and stable performance for every game on every device.
-
Q: How can I transfer my save data and settings from one device to another?
-
A: If you want to transfer your save data and settings from one device to another, you need to copy the Dolphin Emulator folder from your device's storage or external storage to the other device's storage or external storage. The Dolphin Emulator folder contains all your save data, settings, screenshots, and other files related to Dolphin Emulator. You can use a file manager app or a USB cable to copy the folder.
-
To locate the Dolphin Emulator folder, follow these steps:
-
-
On the main screen of Dolphin Emulator, tap on the menu icon at the top left corner. This will open a sidebar menu where you can access various options and settings.
-
Tap on "Settings" to open the settings menu. Here you can adjust various settings for graphics, audio, controls, and enhancements.
-
Tap on "Paths" to open the paths menu. Here you can see the location of the Dolphin Emulator folder on your device's storage or external storage. You can also change the location if you want.
-
-
Q: How can I contact the developers of Dolphin Emulator?
-
A: If you want to contact the developers of Dolphin Emulator, you can do so by visiting their official website at [Dolphin Emulator]. Here you can find more information about Dolphin Emulator, such as its features, history, compatibility list, FAQ, wiki, blog, forums, and social media links. You can also report bugs, request features, submit feedback, or donate to support the project.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Candy Crush Friends Saga APK and Experience the New Levels and Modes.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Candy Crush Friends Saga APK and Experience the New Levels and Modes.md
deleted file mode 100644
index eebdec04d38cb5540bd538135245153ea86a056b..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Candy Crush Friends Saga APK and Experience the New Levels and Modes.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Download Candy Crush Friends Saga APK
-
If you are a fan of color-match puzzle games, you have probably heard of Candy Crush Saga, one of the most popular and addictive games of its genre. But did you know that there is a new version of Candy Crush that is even more fun and exciting? It's called Candy Crush Friends Saga, and it's available for Android devices. In this article, we will tell you what Candy Crush Friends Saga is, what features it has, why you should download the APK file, and how to do it. Read on and get ready to enjoy this sweet adventure.
-
What is Candy Crush Friends Saga?
-
Candy Crush Friends Saga is a color-match puzzle game from King, the makers of the original Candy Crush Saga and other popular games like Farm Heroes Saga and Bubble Witch Saga. In this game, you have to switch and match candies to clear the board and complete the level. But there is a twist: you are not alone in this journey. You have the help of your friends, who are adorable characters from the Candy Kingdom. Each friend has a special power that can help you in different ways, such as creating special candies, removing obstacles, or boosting your score. You can collect and customize your friends as you progress through the game.
Candy Crush Friends Saga has many features that make it stand out from other color-match puzzle games. Here are some of them:
-
- Hundreds of levels to play
-
The game has hundreds of levels for you to enjoy, each with different goals and challenges. You can play classic levels where you have to reach a certain score or collect a certain number of candies, or try new game modes like Dunk the Cookie, where you have to dunk cookies into chocolate, or Free the Octopuses, where you have to free cute octopuses from jelly. The game also has special events and rewards that keep things fresh and exciting.
-
- New game modes and challenges
-
Candy Crush Friends Saga introduces new game modes and challenges that test your skills and strategy. For example, you can play Boss Levels, where you have to face off against a boss character who will try to stop you from completing the level. You can also play Levels with Friends, where you can team up with your friends or other players online and work together to clear the board. You can also compete with other players in Leaderboards and Tournaments, where you can show off your skills and win prizes.
-
- Collect and customize your friends
-
One of the most unique features of Candy Crush Friends Saga is that you can collect and customize your friends, who are adorable characters from the Candy Kingdom. Each friend has a special power that can help you in different ways, such as creating special candies, removing obstacles, or boosting your score. You can unlock new friends as you progress through the game, and you can also customize their outfits and accessories. You can choose from a variety of friends, such as Tiffi, Yeti, Nutcracker, Misty, Red Rabbit, Olivia, Dachs, Odus, and more.
-
- Explore the sweet world of Candy Kingdom
-
Candy Crush Friends Saga takes you on a journey through the sweet world of Candy Kingdom, where you can discover new places and meet new characters. You can explore locations such as Lollipop Meadow, Lemonade Lake, Chocolate Mountains, Ice Cream Alps, Cotton Candy Clouds, and more. The game has stunning graphics and animations that make the game more immersive and enjoyable. You can also listen to the catchy and cheerful music and sound effects that accompany the game.
-
Why download Candy Crush Friends Saga APK?
-
If you are wondering why you should download the APK file of Candy Crush Friends Saga, here are some reasons:
-
-
- Enjoy the game without ads or in-app purchases
-
One of the advantages of downloading the APK file is that you can enjoy the game without any interruptions or limitations. You don't have to worry about annoying ads popping up on your screen or tempting you to buy extra lives or boosters. You can play the game as much as you want and have a smooth and satisfying experience.
-
- Get access to the latest updates and features
-
Another benefit of downloading the APK file is that you can get access to the latest updates and features of the game. You don't have to wait for the official release of the game on the Google Play Store, which may take some time or be unavailable in your region. You can get the newest version of the game as soon as it is available and enjoy the new levels, game modes, friends, and more.
-
- Play offline or online with your friends
-
A third reason to download the APK file is that you can play the game offline or online with your friends. You don't need an internet connection to play the game, which is great if you are traveling or in a place with poor network coverage. You can also play online with your friends or other players and share your progress and achievements. You can also sync your game across different devices using your Facebook account.
-
How to download Candy Crush Friends Saga APK?
-
If you are interested in downloading Candy Crush Friends Saga APK, here are the steps you need to follow:
-
Step 1: Enable unknown sources on your device
-
The first step is to enable unknown sources on your device, which will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message, but don't worry, it is safe to proceed.
-
Step 2: Download the APK file from a trusted source
-
The next step is to download the APK file from a trusted source. You can use any web browser to search for Candy Crush Friends Saga APK and choose a reliable website that offers it. Make sure to check the reviews and ratings of the website and avoid any suspicious links or pop-ups. Once you find a suitable website, click on the download button and save the file on your device.
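One extra precaution worth taking: if the download site publishes a checksum for the APK, you can confirm that the file you saved matches it before installing anything. The sketch below is generic, and both the file name and the expected hash are placeholders you would replace with your own values.
```python
# Sketch: verify a downloaded APK against a checksum published by the site.
import hashlib

apk_path = "candy-crush-friends-saga.apk"           # placeholder file name
expected_sha256 = "<hash published by the site>"    # placeholder value

digest = hashlib.sha256()
with open(apk_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB pieces
        digest.update(chunk)

print("OK" if digest.hexdigest() == expected_sha256 else "Checksum mismatch - do not install!")
```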
-
Step 3: Install the APK file on your device
-
The third step is to install the APK file on your device. To do this, locate the file on your device using a file manager app or your notification bar. Tap on the file and follow the instructions on the screen. You may see a confirmation message asking if you want to install this app, just tap on Install and wait for the process to finish.
-
Step 4: Launch the game and have fun
-
The final step is to launch the game and have fun. To do this, find the app icon on your home screen or app drawer and tap on it. You will see a welcome screen with some information about the game and its features. You can also connect your Facebook account if you want to sync your game across different devices or play with your friends. Then, you can start playing Candy Crush Friends Saga and enjoy this sweet adventure.
-
Conclusion
-
Candy Crush Friends Saga is a color-match puzzle game that is fun, exciting, and addictive. It has many features that make it stand out from other games of its genre, such as hundreds of levels, new game modes, collectible friends, and a beautiful candy world. You can download Candy Crush Friends Saga APK from a trusted source and enjoy the game without ads or in-app purchases, get access to the latest updates and features, and play offline or online with your friends. If you are looking for a new game to play on your Android device, Candy Crush Friends Saga is a great choice.
-
FAQs
-
Here are some frequently asked questions about Candy Crush Friends Saga:
-
- What is the difference between Candy Crush Saga and Candy Crush Friends Saga?
-
Candy Crush Saga and Candy Crush Friends Saga are both color-match puzzle games from King, but they have some differences. Candy Crush Friends Saga has more features, such as new game modes, collectible friends, boss levels, levels with friends, leaderboards, tournaments, and more. Candy Crush Friends Saga also has better graphics, animations, music, and sound effects that make the game more immersive and enjoyable. Candy Crush Friends Saga also has a different story and characters, where you have to help your friends from the Candy Kingdom in their quests.
-
- How many friends can I collect and customize in Candy Crush Friends Saga?
-
You can collect and customize up to 40 friends in Candy Crush Friends Saga, each with their own special power and personality. You can unlock new friends as you progress through the game, and you can also customize their outfits and accessories. You can choose from a variety of friends, such as Tiffi, Yeti, Nutcracker, Misty, Red Rabbit, Olivia, Dachs, Odus, and more.
-
- How can I play with my friends in Candy Crush Friends Saga?
-
You can play with your friends in Candy Crush Friends Saga in different ways. You can play Levels with Friends, where you can team up with your friends or other players online and work together to clear the board. You can also compete with your friends in Leaderboards and Tournaments, where you can show off your skills and win prizes. You can also send and receive lives and gifts from your friends, and chat with them using stickers and emojis.
-
- How can I get more lives and boosters in Candy Crush Friends Saga?
-
You can get more lives and boosters in Candy Crush Friends Saga by doing the following things:
-
Wait for your lives to refill over time. You get one life every 30 minutes, up to a maximum of five lives.
-
Ask your friends to send you lives. You can send and receive up to five lives per day from your friends.
-
Buy lives and boosters with gold bars. You can earn gold bars by completing levels, events, or achievements, or you can buy them with real money.
-
Spin the Daily Booster Wheel. You can spin the wheel once a day for a chance to win a free booster or other rewards.
-
Participate in special events and rewards. You can earn free lives and boosters by playing special levels, completing quests, or joining tournaments.
-
- Is Candy Crush Friends Saga safe to download and play?
-
Yes, Candy Crush Friends Saga is safe to download and play, as long as you download the APK file from a trusted source. You should avoid any websites that offer fake or modified versions of the game that may contain viruses or malware. You should also check the permissions that the app requests before installing it on your device. Candy Crush Friends Saga is a fun and harmless game that does not collect or share any personal information from its users.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Angry Birds Blast MOD APK How to Get Unlimited Moves and Boosters.md b/spaces/1phancelerku/anime-remove-background/Angry Birds Blast MOD APK How to Get Unlimited Moves and Boosters.md
deleted file mode 100644
index 8fc0048958405d50c68675e3ba9fb0dbed73b80b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Angry Birds Blast MOD APK How to Get Unlimited Moves and Boosters.md
+++ /dev/null
@@ -1,175 +0,0 @@
-
-
Angry Birds Blast Mod APK Unlimited Moves: A Review
-
Angry Birds Blast is a popular puzzle game from Rovio Entertainment, the creators of the iconic Angry Birds franchise. In this game, you have to tap matching balloons to blast them and free the birds trapped inside. You also have to outsmart the pigs who are behind this evil scheme and save the eggs from being turned into omelettes. With over 4500 levels, daily challenges, weekly events, puzzle pieces, and global leaderboards, Angry Birds Blast is a fun and addictive game for fans of the genre.
But what if you want to enjoy the game without any limitations? What if you want to have unlimited moves, coins, boosters, and power-ups? What if you want to unlock all levels and features without spending any money? If you are looking for a way to enhance your gaming experience, then you might be interested in Angry Birds Blast Mod APK Unlimited Moves. This is a modified version of the original game that gives you access to unlimited resources and features. In this article, we will review this mod apk and tell you how to download and install it on your device. We will also give you some tips and tricks on how to play the game with this mod apk.
-
What is Angry Birds Blast?
-
Gameplay
-
Angry Birds Blast is a tile-matching puzzle game that follows the same basic mechanics as other games of this genre. You have to tap groups of two or more balloons of the same color to pop them and clear them from the board. The more balloons you pop at once, the more points you score and the more boosters you create. Boosters are special items that can help you clear more balloons or obstacles in one move. Some examples of boosters are rockets, bombs, and laser guns.
-
The game has different types of levels with different goals and challenges. Some levels require you to free a certain number of birds, some require you to defeat pigs by popping balloons next to them, some require you to clear bubbles or wood or glass panels from the board, and some require you to help a hot air balloon reach the top of the board. You have a limited number of moves to complete each level. If you run out of moves before reaching the goal, you lose a life and have to try again. You can earn up to three stars per level depending on your score.
-
Features
-
Angry Birds Blast has many features that make it an enjoyable and engaging game. Some of these features are:
-
-
4500+ fun levels with more added weekly
-
Pick up and play anytime, anywhere even offline
-
Tease your brain with challenging and strategic gameplay
-
Create boosters like rockets, bombs, laser guns
-
Earn free rewards and boosters in daily challenges
-
Join weekly events like Mighty League and Treasure Hunt
-
Find pieces to new puzzles every month in Puzzle Chase
-
Play with friends by connecting to Facebook
-
Earn your spot in global leaderboards with high scores
-
Use iMessage stickers for a little blast in your chats
-
-
What is Angry Birds Blast Mod APK Unlimited Moves?
-
Benefits
-
Angry Birds Blast Mod APK Unlimited Moves is a modified version of the original game that gives you access to unlimited resources and features. Some of the benefits of this mod apk are:
-
-
-
Unlimited moves: You can play as long as you want without worrying about running out of moves. You can also retry any level without losing a life.
-
Unlimited coins: You can use coins to buy more boosters, power-ups, and lives. You can also use coins to unlock new puzzles and levels.
-
Unlimited boosters: You can use boosters like rockets, bombs, laser guns, and more to clear more balloons and obstacles in one move. You can also create more boosters by popping more balloons.
-
Unlimited power-ups: You can use power-ups like hammers, slingshots, magnets, and more to help you complete the level goals. You can also activate power-ups by tapping on them.
-
All levels unlocked: You can play any level you want without having to complete the previous ones. You can also skip any level you find too hard or boring.
-
All features unlocked: You can enjoy all the features of the game like daily challenges, weekly events, puzzle pieces, global leaderboards, iMessage stickers, and more.
-
-
Risks
-
Angry Birds Blast Mod APK Unlimited Moves is not an official version of the game and it is not endorsed by Rovio Entertainment. Therefore, there are some risks involved in using this mod apk. Some of the risks are:
-
-
Malware: The mod apk file may contain viruses, spyware, or other malicious software that can harm your device or steal your personal information. You should always download the mod apk from a trusted source and scan it with an antivirus before installing it.
-
Ban: The mod apk may violate the terms of service of the game and get detected by the game servers. This may result in your account being banned or suspended from playing the game. You should always use the mod apk at your own risk and discretion.
-
Crash: The mod apk may not be compatible with your device or the latest version of the game. This may cause the game to crash or freeze frequently. You should always backup your game data before installing the mod apk and update it regularly.
-
-
How to Download and Install Angry Birds Blast Mod APK Unlimited Moves?
-
Requirements
-
To download and install Angry Birds Blast Mod APK Unlimited Moves, you need to have the following requirements:
-
-
An Android device with Android 4.4 or higher
-
At least 100 MB of free storage space
-
A stable internet connection
-
A file manager app
-
-
Steps
-
To download and install Angry Birds Blast Mod APK Unlimited Moves, you need to follow these steps:
-
-
Download the mod apk file from a trusted source. You can use this link as an example.
-
Go to your device settings and enable the installation of apps from unknown sources.
-
Locate the downloaded mod apk file using your file manager app and tap on it to install it.
-
Wait for the installation to finish and launch the game from your app drawer or home screen.
-
Enjoy playing Angry Birds Blast Mod APK Unlimited Moves with unlimited resources and features.
-
-
How to Play Angry Birds Blast Mod APK Unlimited Moves?
-
Tips and Tricks
-
To play Angry Birds Blast Mod APK Unlimited Moves effectively, you need to know some tips and tricks that can help you score higher and complete more levels. Some of these tips and tricks are:
-
-
Tap on groups of four or more balloons of the same color to create boosters. The bigger the group, the better the booster.
-
Use boosters wisely and strategically. Aim for balloons that are near pigs, obstacles, or birds to clear them faster.
-
Combine boosters for more powerful effects. For example, a rocket and a bomb can create a huge explosion that clears a large area of balloons.
-
Save your power-ups for difficult levels or situations. Power-ups can help you overcome challenges like limited moves, tricky goals, or hard-to-reach areas.
-
Collect puzzle pieces every month in Puzzle Chase to unlock new puzzles and rewards. Puzzle pieces are hidden in random levels throughout the game.
-
Participate in daily challenges, weekly events, Mighty League, and Treasure Hunt to earn free rewards and boosters. You can also compete with other players around the world for high scores and prizes.
-
-
Power-ups and Boosters
Angry Birds Blast Mod APK Unlimited Moves has various power-ups and boosters that can help you clear more balloons and obstacles in one move. Some of these power-ups and boosters are:
-
| Power-up/Booster | Description | How to get/use |
| --- | --- | --- |
| Hammer | Pops any balloon or obstacle on the board | Tap on the power-up icon and then tap on the target |
| Slingshot | Shoots a bird at any balloon or obstacle on the board | Tap on the power-up icon and then drag and release to aim and shoot |
| Magnet | Attracts all balloons of the same color to one spot | Tap on the power-up icon and then tap on the color you want to attract |
| Rocket | Blasts a column or a row of balloons and obstacles | Pop four balloons of the same color or tap on the booster icon and then swipe to choose the direction |
| Bomb | Explodes and clears a 3x3 area of balloons and obstacles | Pop five balloons of the same color in an L or T shape or tap on the booster icon and then tap on the target area |
| Laser Gun | Zaps and clears all balloons and obstacles of the same color | Pop seven or more balloons of the same color or tap on the booster icon and then tap on the color you want to zap |
-
Conclusion
-
Angry Birds Blast Mod APK Unlimited Moves is a great way to enjoy Angry Birds Blast without any limitations. You can have unlimited moves, coins, boosters, power-ups, and access to all levels and features. You can also play the game offline and with your friends. However, you should also be aware of the risks involved in using this mod apk, such as malware, ban, or crash. You should always download the mod apk from a trusted source and scan it with an antivirus before installing it. You should also backup your game data before installing the mod apk and update it regularly. You should also use the mod apk at your own risk and discretion.
-
If you are looking for a fun and challenging puzzle game with cute graphics, catchy music, and addictive gameplay, then you should give Angry Birds Blast a try. And if you want to enhance your gaming experience, then you should try Angry Birds Blast Mod APK Unlimited Moves. We hope this article has helped you learn more about this mod apk and how to download and install it on your device. We also hope you have enjoyed playing Angry Birds Blast Mod APK Unlimited Moves with our tips and tricks. Have fun blasting those balloons and saving those birds!
-
FAQs
-
Here are some frequently asked questions about Angry Birds Blast Mod APK Unlimited Moves:
-
-
Is Angry Birds Blast Mod APK Unlimited Moves safe to use?
-Angry Birds Blast Mod APK Unlimited Moves is not an official version of the game and it is not endorsed by Rovio Entertainment. Therefore, there are some risks involved in using this mod apk, such as malware, ban, or crash. You should always download the mod apk from a trusted source and scan it with an antivirus before installing it. You should also backup your game data before installing the mod apk and update it regularly. You should also use the mod apk at your own risk and discretion.
-
Is Angry Birds Blast Mod APK Unlimited Moves free to download?
-Yes, Angry Birds Blast Mod APK Unlimited Moves is free to download from various sources online. However, you should always be careful about where you download it from and what permissions it asks for. You should also avoid clicking on any ads or pop-ups that may appear while downloading it.
-
Can I play Angry Birds Blast Mod APK Unlimited Moves offline?
-Yes, you can play Angry Birds Blast Mod APK Unlimited Moves offline without any internet connection. However, some features like daily challenges, weekly events, global leaderboards, iMessage stickers, etc., may not work offline.
-
Can I play Angry Birds Blast Mod APK Unlimited Moves with my friends?
-Yes, you can play Angry Birds Blast Mod APK Unlimited Moves with your friends by connecting to Facebook. You can see your friends' scores, send and receive gifts, invite them to play, etc.
-
How can I update Angry Birds Blast Mod APK Unlimited Moves?
-You can update Angry Birds Blast Mod APK Unlimited Moves by downloading the latest version of the mod apk from the same source you downloaded it from. You can also check for updates within the game settings. However, you should always back up your game data before updating the mod apk and make sure that the new version is compatible with your device and the original game.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Dot Connect A Free and Relaxing Dots Puzzle Game for All Ages.md b/spaces/1phancelerku/anime-remove-background/Dot Connect A Free and Relaxing Dots Puzzle Game for All Ages.md
deleted file mode 100644
index a77b1ea74b2cb045fcfdf891ecea5d1c312723e1..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Dot Connect A Free and Relaxing Dots Puzzle Game for All Ages.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
Connect the Dots Game Download: A Fun and Relaxing Puzzle Game for All Ages
-
If you are looking for a simple yet addictive puzzle game that can keep your mind sharp and entertained, you should try connect the dots game. This game is suitable for all ages and can be played offline or online. In this article, we will tell you what connect the dots game is, how to play it, why you should download it, and which are the best connect the dots game apps for Android and iOS. We will also share some tips and tricks to master this game and answer some frequently asked questions.
-
What is Connect the Dots Game?
-
Connect the dots game is a type of puzzle game that involves connecting dots of the same color with lines. The goal is to fill up the entire board with lines without crossing or overlapping them. The game is also known as numberlink, flow, or pipe puzzle. Connect the dots game is based on a mathematical concept called graph theory, which studies how networks of points and lines can be arranged.
The gameplay of connect the dots game is very simple and intuitive. You just need to tap on a dot and drag your finger to another dot of the same color. You can also use your mouse or stylus if you are playing on a computer or tablet. You can only draw horizontal or vertical lines, not diagonal ones. You have to connect all the dots of each color and cover every square on the board. You can undo or restart your moves if you make a mistake or want to try a different strategy.
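To make these rules concrete, here is a small illustrative sketch (not taken from any particular app) of how a board and a candidate solution can be represented and checked: each color's line must join its two dots using only horizontal or vertical steps, and together the lines must cover every square exactly once.
```python
# Illustrative sketch of the connect-the-dots (numberlink) rules.
def is_solved(size, endpoints, paths):
    """endpoints: {color: (dot1, dot2)}; paths: {color: [cells in drawing order]}."""
    covered = []
    for color, path in paths.items():
        start, end = endpoints[color]
        if path[0] != start or path[-1] != end:
            return False                      # a line must join that color's two dots
        for (r1, c1), (r2, c2) in zip(path, path[1:]):
            if abs(r1 - r2) + abs(c1 - c2) != 1:
                return False                  # only horizontal or vertical steps
        covered.extend(path)
    # no crossings or overlaps, and every square on the board is filled
    return len(covered) == len(set(covered)) == size * size

endpoints = {"red": ((0, 0), (0, 1)), "blue": ((1, 0), (1, 1))}
paths = {"red": [(0, 0), (0, 1)], "blue": [(1, 0), (1, 1)]}
print(is_solved(2, endpoints, paths))  # True for this trivial 2x2 board
```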
-
Why You Should Download Connect the Dots Game
-
There are many reasons why you should download connect the dots game on your device. Here are some of them:
-
-
Connect the dots game is fun and relaxing. You can enjoy solving puzzles at your own pace and listen to soothing music and sound effects.
-
Connect the dots game is challenging and rewarding. You can choose from different difficulty levels, ranging from 5x5 to 14x14 grids. You can also compete against the clock in time trial mode or test your skills with daily puzzles.
-
Connect the dots game is good for your brain. It can improve your logic, concentration, memory, and spatial reasoning skills. It can also help you reduce stress and boredom.
-
Connect the dots game is free to download and play. You don't need to pay anything to enjoy this game. There are some ads and in-app purchases, but they are not intrusive or necessary.
-
-
The Best Connect the Dots Game Apps for Android and iOS
-
There are many connect the dots game apps available on Google Play Store and App Store, but not all of them are worth your time and attention. We have selected three of the best connect the dots game apps for Android and iOS based on their ratings, reviews, features, and popularity. Here they are:
-
Connect The Dots - Color Line by MOOTOY Game
-
This app is one of the most popular connect the dots game apps on Google Play Store, with over 1 million downloads and 4.5 stars rating. It offers over 1000 free puzzles, free play and time trial modes, user-friendly interface and graphics, fun sound effects, hints, and more. You can also adjust the board size, color scheme, day/night mode, and color blind setting according to your preference.
-
Dot Link - Connect the Dots by Playvalve
-
This app is another highly rated connect the dots game app on Google Play Store, with over 500,000 downloads and 4.6 stars rating. It features over 2000 free puzzles, various themes and backgrounds, smooth animation and sound effects, hints, undo, and zoom functions, and more. You can also customize the game settings, such as the grid size, dot size, line thickness, and color mode.
-
Connect Dots - Dot Puzzle Game by Bigman
-
This app is one of the best connect the dots game apps on App Store, with over 100,000 downloads and 4.7 stars rating. It provides over 3000 free puzzles, different game modes, such as classic, hexa, triangle, and square, beautiful graphics and music, hints, undo, and shuffle options, and more. You can also challenge yourself with daily missions and achievements.
-
Tips and Tricks to Master Connect the Dots Game
-
Connect the dots game may seem easy at first glance, but it can get tricky and frustrating as you progress to higher levels. Here are some tips and tricks to help you master this game and have more fun:
-
-
Choose the Right Difficulty Level
-
One of the most important things to do before you start playing connect the dots game is to choose the right difficulty level for your skill and mood. If you are a beginner or just want to relax, you can start with the easy levels that have smaller grids and fewer colors. If you are an expert or want to challenge yourself, you can try the hard levels that have larger grids and more colors. You can also switch between different difficulty levels anytime you want.
-
Plan Your Moves Ahead
-
Another key to success in connect the dots game is to plan your moves ahead and think strategically. You should not just connect the dots randomly or impulsively, but rather look at the whole board and see which dots are easier or harder to connect. You should also try to avoid creating dead ends or loops that will prevent you from completing the puzzle. You can use some techniques, such as starting from the corners or edges, connecting the longest lines first, or following a pattern or sequence.
-
Use Hints and Undo Features Wisely
-
Sometimes, you may get stuck or make a mistake in connect the dots game. In that case, you can use the hints and undo features that are available in most connect the dots game apps. However, you should not rely on them too much or abuse them. You should only use them when you really need them or when you want to learn from your errors. You should also be aware that some hints and undo features may cost you coins or tokens that you have to earn or buy in the game.
-
Challenge Yourself with Time Trial and Daily Puzzles
-
If you want to spice up your connect the dots game experience and test your skills further, you can try some of the special modes that are offered in some connect the dots game apps. For example, you can play time trial mode where you have to solve as many puzzles as possible within a limited time. Or you can play daily puzzles where you have to solve a new puzzle every day with different themes and rewards. These modes can help you improve your speed, accuracy, and creativity in connect the dots game.
-
Conclusion
-
Connect the dots game is a fun and relaxing puzzle game that can be enjoyed by anyone regardless of age or background. It is easy to play but hard to master. It can also benefit your brain health and well-being in many ways. If you are interested in playing this game, you can download one of the best connect the dots game apps for Android and iOS that we have recommended in this article. You can also follow some of our tips and tricks to master this game and have more fun.
-
FAQs
-
Here are some of the frequently asked questions about connect the dots game:
-
-
Q: How many levels are there in connect the dots game?
-
A: The number of levels in connect the dots game depends on the app that you are using. Some apps may have hundreds or thousands of levels, while others may have unlimited levels that are generated randomly.
-
Q: How do I unlock more levels in connect the dots game?
-
A: To unlock more levels in connect the dots game, you usually have to complete the previous levels or achieve certain goals or scores. Some apps may also require you to watch ads or make in-app purchases to unlock more levels.
-
Q: How do I save my progress in connect the dots game?
-
A: To save your progress in connect the dots game, you need to have an internet connection and sign in with your Google Play or Apple ID account. Some apps may also allow you to sync your progress with Facebook or other platforms.
-
Q: How do I share my results in connect the dots game?
-
A: To share your results in connect the dots game, you can use the share button that is usually located on the top or bottom of the screen. You can then choose which app or platform you want to share your results with, such as WhatsApp, Instagram, Twitter, or email.
-
Q: How do I get more coins or tokens in connect the dots game?
-
A: To get more coins or tokens in connect the dots game, you can do one of the following things:
-
-
Watch ads that are offered in the app.
-
Complete daily missions or achievements that reward you with coins or tokens.
-
Buy coins or tokens with real money through in-app purchases.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Best Attack on Titan Tribute Game with Unity - No Ads No Hassle.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Best Attack on Titan Tribute Game with Unity - No Ads No Hassle.md
deleted file mode 100644
index 0a80c330b16c44c3db020b3fc00501234f9670c4..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy the Best Attack on Titan Tribute Game with Unity - No Ads No Hassle.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
Attack on Titan Tribute Game: How to Download and Play on Unity
-
If you are a fan of Attack on Titan, the popular anime and manga series that depicts a world where humanity is under siege by giant humanoid creatures called titans, you might be interested in playing Attack on Titan Tribute Game, a fan-made game that lets you experience the thrill of fighting titans in various modes, characters, maps, and difficulty levels.
However, before you can play this game, you need to download and install Unity Web Player, a plugin that allows you to run games made with Unity engine on your browser. In this article, we will show you how to download and install Unity Web Player, how to download and play Attack on Titan Tribute Game, and some tips and tricks for playing the game.
-
What is Attack on Titan Tribute Game?
-
Attack on Titan Tribute Game is a fan-made game developed by Feng Lee using Unity engine. It is based on Attack on Titan, a Japanese anime and manga series created by Hajime Isayama that follows the story of Eren Yeager, Mikasa Ackerman, Armin Arlert, and other members of the Scout Regiment who fight against titans that have invaded their world.
-
The game features various modes such as single-player, multiplayer, custom map, and training mode. You can choose from different characters such as Eren, Mikasa, Levi, Armin, Jean, Sasha, and more. You can also customize your character's appearance, equipment, and skills. You can explore different maps such as the city, the forest, the castle, and the underground. You can also face different types of titans such as normal, abnormal, crawler, punk, colossal, and armored.
-
The game requires Unity Web Player to run on your browser. Unity Web Player is a plugin that enables you to play games and interactive content created with Unity engine. Unity is a cross-platform game engine that is used to create games for various platforms such as Windows, Mac, Linux, iOS, Android, and more.
-
How to Download and Install Unity Web Player?
-
To download and install Unity Web Player, you need to follow these steps:
-
-
Visit the official website of Unity at https://unity.com/ and click on the Download button at the top right corner.
-
On the download page, scroll down and find the Unity Web Player section. Click on the Download (Windows) or Download (Mac) button depending on your operating system.
-
A pop-up window will appear asking you to save the file. Choose a location where you want to save the file and click on Save.
-
Once the download is complete, locate the file and double-click on it to run the installer.
-
Follow the instructions on the installer to complete the installation process. You may need to agree to the terms and conditions and choose a destination folder for the plugin.
-
Restart your browser and enable the plugin if prompted. You can also check if the plugin is installed by visiting https://unity3d.com/webplayer/setup.
-
-
How to Download and Play Attack on Titan Tribute Game?
-
To download and play Attack on Titan Tribute Game, you need to follow these steps:
-
Visit the official website of Attack on Titan Tribute Game in your browser.
-
On the website, you will see a list of servers that host the game. Choose a server that has a good ping and click on it.
-
A new tab will open with the game loading screen. Wait for the game to load completely.
-
Create a username for yourself by typing it in the box at the top left corner. You can also change your language by clicking on the flag icon at the top right corner.
-
Choose a mode that you want to play by clicking on one of the buttons at the bottom left corner. You can choose from single-player, multiplayer, custom map, and training mode. Each mode has different objectives and rules.
-
Choose a character that you want to play by clicking on one of the buttons at the bottom right corner. You can choose from Eren, Mikasa, Levi, Armin, Jean, Sasha, and more. You can also customize your character's appearance, equipment, and skills by clicking on the Customize button.
-
Choose a map that you want to play by clicking on one of the buttons at the top center. You can choose from the city, the forest, the castle, and the underground. You can also create your own map by clicking on the Create Map button.
-
Choose a difficulty level that you want to play by clicking on one of the buttons at the top right corner. You can choose from easy, normal, hard, and abnormal. The difficulty level affects the number and behavior of the titans.
-
Click on the Start Game button at the bottom center to start playing the game.
-
Use the keyboard and mouse controls to move, attack, and interact with the game. The basic controls are as follows:
-
-
WASD: Move forward, backward, left, and right.
-
Space: Jump.
-
Shift: Dash.
-
Left mouse button: Attack with your blades.
-
Right mouse button: Use your omni-directional mobility gear (ODM) to hook onto objects and swing around.
-
Q and E: Hook onto objects with your left and right ODM respectively.
-
R: Reload your blades or gas.
-
T: Chat with other players (in multiplayer mode).
-
P: Pause the game and access the menu.
-
Esc: Quit the game.
-
-
-
Tips and Tricks for Playing Attack on Titan Tribute Game
-
Playing Attack on Titan Tribute Game can be fun and challenging, but also frustrating if you don't know what you are doing. Here are some tips and tricks that can help you improve your skills and enjoy the game more:
-
-
Learn how to use the omni-directional mobility gear (ODM) effectively. The ODM is your main tool for moving around and fighting titans. You need to master how to hook onto objects, swing around, release, and re-hook in order to maneuver quickly and smoothly. You also need to know when to use your dash and jump abilities to gain speed and height. The ODM is also useful for dodging titan attacks and escaping from dangerous situations.
-
Aim for the nape of the titans to kill them. The nape is the weak spot of the titans that is located at the back of their necks. You need to slash it with your blades in order to kill them. However, this is easier said than done, as titans can move unpredictably and protect their napes with their hands or hair. You need to find an opening or create one by distracting or stunning them with your teammates or objects. You can also use your ODM to get behind them or above them and strike them from there.
-
Use teamwork and communication with other players. In multiplayer mode, you can join or create a server and play with other players online. You can cooperate with them to complete objectives, such as killing all titans in a map, defending a base, or capturing a flag. You can also chat with them using the T key or voice chat (if enabled). Teamwork and communication are essential for surviving and winning in multiplayer mode.
-
Customize your settings and preferences for optimal performance. You can access the settings menu by pressing P or Esc during the game. You can adjust various options such as graphics quality, sound volume, camera sensitivity, key bindings, language, and more. You can also enable or disable some features such as blood effects, damage indicators, crosshair, minimap, etc. You should customize your settings according to your preferences and your device's capabilities for optimal performance.
-
-
Conclusion
-
Attack on Titan Tribute Game is a fun and challenging fan-made game that lets you experience the thrill of fighting titans in various modes, characters, maps, and difficulty levels. You need to download and install Unity Web Player to play the game on your browser. You can download and play the game for free from its official website. If you are a fan of Attack on Titan allow you to mod or create your own maps for the game. You can find some of them at https://aotrc.weebly.com/ or https://www.youtube.com/watch?v=Zi8vJ_lMxQI. However, you need to have some basic knowledge of Unity and coding to use these tools. You also need to follow the rules and guidelines of the game and its developer when modding or creating your own maps.
-
I hope this article has helped you learn more about Attack on Titan Tribute Game and how to download and play it on Unity. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have fun playing the game!
-
-
\ No newline at end of file
diff --git a/spaces/7hao/bingo/src/components/theme-toggle.tsx b/spaces/7hao/bingo/src/components/theme-toggle.tsx
deleted file mode 100644
index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/components/theme-toggle.tsx
+++ /dev/null
@@ -1,31 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { useTheme } from 'next-themes'
-
-import { Button } from '@/components/ui/button'
-import { IconMoon, IconSun } from '@/components/ui/icons'
-
-export function ThemeToggle() {
- const { setTheme, theme } = useTheme()
- const [_, startTransition] = React.useTransition()
-
- return (
-   <Button
-     variant="ghost"
-     size="icon"
-     onClick={() => {
-       // Flip between light and dark without blocking the UI.
-       startTransition(() => {
-         setTheme(theme === 'light' ? 'dark' : 'light')
-       })
-     }}
-   >
-     {!theme ? null : theme === 'dark' ? (
-       <IconMoon className="transition-all" />
-     ) : (
-       <IconSun className="transition-all" />
-     )}
-     <span className="sr-only">Toggle theme</span>
-   </Button>
- )
-}
diff --git a/spaces/7hao/bingo/src/pages/api/blob.ts b/spaces/7hao/bingo/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
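-// Proxies a Bing-hosted image blob (identified by the `bcid` query parameter)
-// through this API route, forwarding the upstream content type and length and
-// streaming the response body back to the client unchanged.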
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/801artistry/RVC801/tools/infer/train-index.py b/spaces/801artistry/RVC801/tools/infer/train-index.py
deleted file mode 100644
index 44b447ef32148c181eb4bcd9013a22a82371b82c..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/tools/infer/train-index.py
+++ /dev/null
@@ -1,42 +0,0 @@
-"""
-Format: cid is used directly as the built-in index position; aid does not fit, so it is looked up through a dict (there are only about 50k entries anyway).
-"""
-import os
-import logging
-
-logger = logging.getLogger(__name__)
-
-import faiss
-import numpy as np
-
-# ########### if starting from raw features, save them first
-inp_root = r"E:\codes\py39\dataset\mi\2-co256"
-npys = []
-for name in sorted(list(os.listdir(inp_root))):
- phone = np.load("%s/%s" % (inp_root, name))
- npys.append(phone)
-big_npy = np.concatenate(npys, 0)
-logger.debug(big_npy.shape) # (6196072, 192)#fp32#4.43G
-np.save("infer/big_src_feature_mi.npy", big_npy)
-
-##################train+add
-# big_npy=np.load("/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/inference_f0/big_src_feature_mi.npy")
-logger.debug(big_npy.shape)
-index = faiss.index_factory(256, "IVF512,Flat") # mi
-logger.info("Training...")
-index_ivf = faiss.extract_index_ivf(index) #
-index_ivf.nprobe = 9
-index.train(big_npy)
-faiss.write_index(index, "infer/trained_IVF512_Flat_mi_baseline_src_feat.index")
-logger.info("Adding...")
-index.add(big_npy)
-faiss.write_index(index, "infer/added_IVF512_Flat_mi_baseline_src_feat.index")
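-
-# Example query (sketch): the written index can be reloaded and searched for
-# the nearest neighbours of new 256-dim feature vectors, mirroring the nprobe
-# value set above.
-# loaded = faiss.read_index("infer/added_IVF512_Flat_mi_baseline_src_feat.index")
-# faiss.extract_index_ivf(loaded).nprobe = 9
-# distances, ids = loaded.search(big_npy[:4], 8)  # 8 nearest neighbours each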
-"""
-Sizes (all FP32)
-big_src_feature 2.95G
- (3098036, 256)
-big_emb 4.43G
- (6196072, 192)
-big_emb is twice as large because the features are repeated and then pitch is appended
-
-"""
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/wav_evaluation/models/audio.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/wav_evaluation/models/audio.py
deleted file mode 100644
index 0980d729dd3b579fee0380d0b9d7055e6843ba12..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/wav_evaluation/models/audio.py
+++ /dev/null
@@ -1,179 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchlibrosa.stft import Spectrogram, LogmelFilterBank
-
-def get_audio_encoder(name: str):
- if name == "Cnn14":
- return Cnn14
- else:
- raise Exception('The audio encoder name {} is incorrect or not supported'.format(name))
-
-
-class ConvBlock(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3), stride=(1, 1),
- padding=(1, 1), bias=False)
-
- self.conv2 = nn.Conv2d(in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3), stride=(1, 1),
- padding=(1, 1), bias=False)
-
- self.bn1 = nn.BatchNorm2d(out_channels)
- self.bn2 = nn.BatchNorm2d(out_channels)
-
-
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- x = F.relu_(self.bn2(self.conv2(x)))
- if pool_type == 'max':
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg':
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg+max':
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
- raise Exception('Incorrect argument!')
-
- return x
-
-
-class ConvBlock5x5(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock5x5, self).__init__()
-
- self.conv1 = nn.Conv2d(in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(5, 5), stride=(1, 1),
- padding=(2, 2), bias=False)
-
- self.bn1 = nn.BatchNorm2d(out_channels)
-
-
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- if pool_type == 'max':
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg':
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg+max':
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
- raise Exception('Incorrect argument!')
-
- return x
-
-
-class AttBlock(nn.Module):
- def __init__(self, n_in, n_out, activation='linear', temperature=1.):
- super(AttBlock, self).__init__()
-
- self.activation = activation
- self.temperature = temperature
- self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
- self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
-
- self.bn_att = nn.BatchNorm1d(n_out)
-
- def forward(self, x):
- # x: (n_samples, n_in, n_time)
- norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
- cla = self.nonlinear_transform(self.cla(x))
- x = torch.sum(norm_att * cla, dim=2)
- return x, norm_att, cla
-
- def nonlinear_transform(self, x):
- if self.activation == 'linear':
- return x
- elif self.activation == 'sigmoid':
- return torch.sigmoid(x)
-
-
-class Cnn14(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num, out_emb):
-
- super(Cnn14, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- self.bn0 = nn.BatchNorm2d(64)
-
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
-
- # out_emb is 2048 for best Cnn14
- self.fc1 = nn.Linear(2048, out_emb, bias=True)
- self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True)
-
- def forward(self, input, mixup_lambda=None):
- """
- Input: (batch_size, data_length)
- """
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
-
- (x1, _) = torch.max(x, dim=2)
- x2 = torch.mean(x, dim=2)
- x = x1 + x2
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- embedding = F.dropout(x, p=0.5, training=self.training)
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
- output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}
-
- return output_dict
\ No newline at end of file
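
For orientation, here is a minimal sketch of how the `Cnn14` encoder deleted above can be driven. The hyper-parameters (32 kHz audio, 1024-sample window, 320-sample hop, 64 mel bins, 527 classes, 2048-dim embedding) are typical CNN14/AudioSet settings assumed for illustration, not values taken from this repository, and the import path simply mirrors the deleted file's location.

```python
import torch
from wav_evaluation.models.audio import Cnn14  # import path assumed from the file above

model = Cnn14(sample_rate=32000, window_size=1024, hop_size=320,
              mel_bins=64, fmin=50, fmax=14000,
              classes_num=527, out_emb=2048)
model.eval()

waveform = torch.randn(2, 32000)  # (batch_size, data_length): two 1-second clips at 32 kHz
with torch.no_grad():
    out = model(waveform)

print(out['embedding'].shape)        # torch.Size([2, 2048])
print(out['clipwise_output'].shape)  # torch.Size([2, 527])
```
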
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/discriminator/multi_window_disc.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/discriminator/multi_window_disc.py
deleted file mode 100644
index 1aef6493c90c7cf5206ff92f7fe8831a0821664f..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/discriminator/multi_window_disc.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-
-
-class Discriminator2DFactory(nn.Module):
- def __init__(self, time_length, freq_length=80, kernel=(3, 3), c_in=1, hidden_size=128,
- norm_type='bn', reduction='sum'):
- super(Discriminator2DFactory, self).__init__()
- padding = (kernel[0] // 2, kernel[1] // 2)
-
- def discriminator_block(in_filters, out_filters, first=False):
- """
- Input: (B, in, 2H, 2W)
- Output:(B, out, H, W)
- """
- conv = nn.Conv2d(in_filters, out_filters, kernel, (2, 2), padding)
- if norm_type == 'sn':
- conv = nn.utils.spectral_norm(conv)
- block = [
- conv, # padding = kernel//2
- nn.LeakyReLU(0.2, inplace=True),
- nn.Dropout2d(0.25)
- ]
- if norm_type == 'bn' and not first:
- block.append(nn.BatchNorm2d(out_filters, 0.8))
- if norm_type == 'in' and not first:
- block.append(nn.InstanceNorm2d(out_filters, affine=True))
- block = nn.Sequential(*block)
- return block
-
- self.model = nn.ModuleList([
- discriminator_block(c_in, hidden_size, first=True),
- discriminator_block(hidden_size, hidden_size),
- discriminator_block(hidden_size, hidden_size),
- ])
-
- self.reduction = reduction
- ds_size = (time_length // 2 ** 3, (freq_length + 7) // 2 ** 3)
- if reduction != 'none':
- # The height and width of downsampled image
- self.adv_layer = nn.Linear(hidden_size * ds_size[0] * ds_size[1], 1)
- else:
- self.adv_layer = nn.Linear(hidden_size * ds_size[1], 1)
-
- def forward(self, x):
- """
-
- :param x: [B, C, T, n_bins]
- :return: validity: [B, 1], h: List of hiddens
- """
- h = []
- for l in self.model:
- x = l(x)
- h.append(x)
- if self.reduction != 'none':
- x = x.view(x.shape[0], -1)
- validity = self.adv_layer(x) # [B, 1]
- else:
- B, _, T_, _ = x.shape
- x = x.transpose(1, 2).reshape(B, T_, -1)
- validity = self.adv_layer(x)[:, :, 0] # [B, T]
- return validity, h
-
-
-class MultiWindowDiscriminator(nn.Module):
- def __init__(self, time_lengths, cond_size=0, freq_length=80, kernel=(3, 3),
- c_in=1, hidden_size=128, norm_type='bn', reduction='sum'):
- super(MultiWindowDiscriminator, self).__init__()
- self.win_lengths = time_lengths
- self.reduction = reduction
-
- self.conv_layers = nn.ModuleList()
- if cond_size > 0:
- self.cond_proj_layers = nn.ModuleList()
- self.mel_proj_layers = nn.ModuleList()
- for time_length in time_lengths:
- conv_layer = [
- Discriminator2DFactory(
- time_length, freq_length, kernel, c_in=c_in, hidden_size=hidden_size,
- norm_type=norm_type, reduction=reduction)
- ]
- self.conv_layers += conv_layer
- if cond_size > 0:
- self.cond_proj_layers.append(nn.Linear(cond_size, freq_length))
- self.mel_proj_layers.append(nn.Linear(freq_length, freq_length))
-
- def forward(self, x, x_len, cond=None, start_frames_wins=None):
- '''
- Args:
- x (tensor): input mel, (B, c_in, T, n_bins).
-            x_len (tensor): length of each mel, (B,).
-
- Returns:
- tensor : (B).
- '''
- validity = []
- if start_frames_wins is None:
- start_frames_wins = [None] * len(self.conv_layers)
- h = []
- for i, start_frames in zip(range(len(self.conv_layers)), start_frames_wins):
- x_clip, c_clip, start_frames = self.clip(
- x, cond, x_len, self.win_lengths[i], start_frames) # (B, win_length, C)
- start_frames_wins[i] = start_frames
- if x_clip is None:
- continue
- if cond is not None:
- x_clip = self.mel_proj_layers[i](x_clip) # (B, 1, win_length, C)
- c_clip = self.cond_proj_layers[i](c_clip)[:, None] # (B, 1, win_length, C)
- x_clip = x_clip + c_clip
- x_clip, h_ = self.conv_layers[i](x_clip)
- h += h_
- validity.append(x_clip)
- if len(validity) != len(self.conv_layers):
- return None, start_frames_wins, h
- if self.reduction == 'sum':
- validity = sum(validity) # [B]
- elif self.reduction == 'stack':
- validity = torch.stack(validity, -1) # [B, W_L]
- elif self.reduction == 'none':
- validity = torch.cat(validity, -1) # [B, W_sum]
- return validity, start_frames_wins, h
-
- def clip(self, x, cond, x_len, win_length, start_frames=None):
-        '''Randomly clip x to win_length.
- Args:
- x (tensor) : (B, c_in, T, n_bins).
- cond (tensor) : (B, T, H).
- x_len (tensor) : (B,).
- win_length (int): target clip length
-
- Returns:
- (tensor) : (B, c_in, win_length, n_bins).
-
- '''
- T_start = 0
- T_end = x_len.max() - win_length
- if T_end < 0:
- return None, None, start_frames
- T_end = T_end.item()
- if start_frames is None:
- start_frame = np.random.randint(low=T_start, high=T_end + 1)
- start_frames = [start_frame] * x.size(0)
- else:
- start_frame = start_frames[0]
- x_batch = x[:, :, start_frame: start_frame + win_length]
- c_batch = cond[:, start_frame: start_frame + win_length] if cond is not None else None
- return x_batch, c_batch, start_frames
-
-
-class Discriminator(nn.Module):
- def __init__(self, time_lengths=[32, 64, 128], freq_length=80, cond_size=0, kernel=(3, 3), c_in=1,
- hidden_size=128, norm_type='bn', reduction='sum', uncond_disc=True):
- super(Discriminator, self).__init__()
- self.time_lengths = time_lengths
- self.cond_size = cond_size
- self.reduction = reduction
- self.uncond_disc = uncond_disc
- if uncond_disc:
- self.discriminator = MultiWindowDiscriminator(
- freq_length=freq_length,
- time_lengths=time_lengths,
- kernel=kernel,
- c_in=c_in, hidden_size=hidden_size, norm_type=norm_type,
- reduction=reduction
- )
- if cond_size > 0:
- self.cond_disc = MultiWindowDiscriminator(
- freq_length=freq_length,
- time_lengths=time_lengths,
- cond_size=cond_size,
- kernel=kernel,
- c_in=c_in, hidden_size=hidden_size, norm_type=norm_type,
- reduction=reduction
- )
-
- def forward(self, x, cond=None, start_frames_wins=None):
- """
-
- :param x: [B, T, 80]
- :param cond: [B, T, cond_size]
- :return:
- """
- if len(x.shape) == 3:
- x = x[:, None, :, :]
- x_len = x.sum([1, -1]).ne(0).int().sum([-1])
- ret = {'y_c': None, 'y': None}
- if self.uncond_disc:
- ret['y'], start_frames_wins, ret['h'] = self.discriminator(
- x, x_len, start_frames_wins=start_frames_wins)
- if self.cond_size > 0 and cond is not None:
- ret['y_c'], start_frames_wins, ret['h_c'] = self.cond_disc(
- x, x_len, cond, start_frames_wins=start_frames_wins)
- ret['start_frames_wins'] = start_frames_wins
- return ret
\ No newline at end of file
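
A minimal sketch of driving the unconditional multi-window discriminator above: shapes follow its docstrings, the constructor arguments are just the defaults, and the import path mirrors the deleted file's location.

```python
import torch
from ldm.modules.discriminator.multi_window_disc import Discriminator  # path assumed

disc = Discriminator(time_lengths=[32, 64, 128], freq_length=80, uncond_disc=True)
mel = torch.randn(4, 160, 80)  # [B, T, 80] mel spectrograms, T must cover the largest window

ret = disc(mel)
print(ret['y'].shape)  # validity summed over the three windows: torch.Size([4, 1])
```
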
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/act.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/act.py
deleted file mode 100644
index 028debd697dd60458aae75010057df038bd3518a..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/act.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
-# LICENSE is in incl_licenses directory.
-
-import torch.nn as nn
-from .resample import UpSample1d, DownSample1d
-
-
-class Activation1d(nn.Module):
- def __init__(self,
- activation,
- up_ratio: int = 2,
- down_ratio: int = 2,
- up_kernel_size: int = 12,
- down_kernel_size: int = 12):
- super().__init__()
- self.up_ratio = up_ratio
- self.down_ratio = down_ratio
- self.act = activation
- self.upsample = UpSample1d(up_ratio, up_kernel_size)
- self.downsample = DownSample1d(down_ratio, down_kernel_size)
-
- # x: [B,C,T]
- def forward(self, x):
- x = self.upsample(x)
- x = self.act(x)
- x = self.downsample(x)
-
- return x
\ No newline at end of file
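
A minimal sketch of wrapping an activation in the anti-aliased `Activation1d` above. It assumes the sibling `resample` module (`UpSample1d` / `DownSample1d`) from the same `alias_free_torch` package is importable, and uses `nn.SiLU` purely as a stand-in activation (BigVGAN itself uses Snake).

```python
import torch
import torch.nn as nn
from vocoder.bigvgan.alias_free_torch.act import Activation1d  # path assumed

act = Activation1d(activation=nn.SiLU(), up_ratio=2, down_ratio=2)
x = torch.randn(1, 8, 1024)  # [B, C, T]
y = act(x)                   # upsample -> activation -> downsample; T is preserved by design
print(y.shape)
```
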
diff --git a/spaces/ARTeLab/DTM_Estimation_SRandD/app.py b/spaces/ARTeLab/DTM_Estimation_SRandD/app.py
deleted file mode 100644
index bd1151705717abea6940b49f863eade904aba537..0000000000000000000000000000000000000000
--- a/spaces/ARTeLab/DTM_Estimation_SRandD/app.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import gradio as gr
-import os
-from PIL import Image
-import torchvision
-from torchvision import transforms
-import torch
-import matplotlib.pyplot as plt
-import numpy as np
-from models.modelNetA import Generator as GA
-from models.modelNetB import Generator as GB
-from models.modelNetC import Generator as GC
-
-scale_size = 128
-scale_sizes = [128, 256, 512]
-# load model
-modeltype2path = {
- 'ModelA': 'DTM_exp_train10%_model_a/g-best.pth',
- 'ModelB': 'DTM_exp_train10%_model_b/g-best.pth',
- 'ModelC': 'DTM_exp_train10%_model_c/g-best.pth',
-}
-DEVICE='cpu'
-MODELS_TYPE = list(modeltype2path.keys())
-generators = [GA(), GB(), GC()]
-
-for i in range(len(generators)):
- generators[i] = torch.nn.DataParallel(generators[i])
- state_dict = torch.load(modeltype2path[MODELS_TYPE[i]], map_location=torch.device('cpu'))
- generators[i].load_state_dict(state_dict)
- generators[i] = generators[i].module.to(DEVICE)
- generators[i].eval()
-
-preprocess = transforms.Compose([
- transforms.Grayscale(),
- transforms.ToTensor()
-])
-
-def predict(input_image, model_name, input_scale_factor):
- pil_image = Image.fromarray(input_image.astype('uint8'), 'RGB')
- pil_image = transforms.Resize((input_scale_factor, input_scale_factor))(pil_image)
- # transform image to torch and do preprocessing
-    torch_img = preprocess(pil_image).unsqueeze(0).to(DEVICE)
- torch_img = (torch_img - torch.min(torch_img)) / (torch.max(torch_img) - torch.min(torch_img))
- # model predict
- with torch.no_grad():
- output = generators[MODELS_TYPE.index(model_name)](torch_img)
- sr, sr_dem_selected = output[0], output[1]
- # transform torch to image
- sr = sr.squeeze(0).cpu()
- torchvision.utils.save_image(sr, 'sr_pred.png')
- sr = np.array(Image.open('sr_pred.png'))
-
- sr_dem_selected = sr_dem_selected.squeeze().cpu().detach().numpy()
- fig, ax = plt.subplots()
- im = ax.imshow(sr_dem_selected, cmap='jet', vmin=0, vmax=np.max(sr_dem_selected))
- plt.colorbar(im, ax=ax)
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- # return correct image and info
- info = f"{model_name} with {sum(p.numel() for p in generators[MODELS_TYPE.index(model_name)].parameters())} parameters"
- return info, sr, data
-
-iface = gr.Interface(
- fn=predict,
- inputs=[
- gr.Image(),
-        gr.Radio(MODELS_TYPE),
-        gr.Radio(scale_sizes)
- ],
- outputs=[
- gr.Text(label='Model info'),
- gr.Image(label='Super Resolution'),
- gr.Image(label='DTM')
- ],
- examples=[
- [f"demo_imgs/{name}", MODELS_TYPE[0], 128] for name in os.listdir('demo_imgs')
- ],
- title="Super Resolution and DTM Estimation",
-    description="This demo predicts a Super Resolution image and a (Super Resolution) DTM from a grayscale image (RGB inputs are converted automatically)."
-)
-iface.launch()
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/shufflenet-v2-1x_4xb32_2000e_3c_noF/shufflenet-v2-1x_4xb32_2000e_3c_noF.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/shufflenet-v2-1x_4xb32_2000e_3c_noF/shufflenet-v2-1x_4xb32_2000e_3c_noF.py
deleted file mode 100644
index 498df9518f20b41383851a1253aba27bd9fdeca6..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/shufflenet-v2-1x_4xb32_2000e_3c_noF/shufflenet-v2-1x_4xb32_2000e_3c_noF.py
+++ /dev/null
@@ -1,155 +0,0 @@
-model = dict(
- type='ImageClassifier',
- backbone=dict(type='ShuffleNetV2', widen_factor=1.0),
- neck=dict(type='GlobalAveragePooling'),
- head=dict(
- type='LinearClsHead',
- num_classes=7,
- in_channels=1024,
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
- topk=(
- 1,
- 3,
- )))
-dataset_type = 'CustomDataset'
-data_preprocessor = dict(
- num_classes=7,
- mean=[
- 123.675,
- 116.28,
- 103.53,
- ],
- std=[
- 58.395,
- 57.12,
- 57.375,
- ],
- to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='RandomResizedCrop', scale=224, backend='pillow'),
- dict(type='RandomFlip', prob=0.5, direction='horizontal'),
- dict(type='PackInputs'),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='ResizeEdge', scale=256, edge='short', backend='pillow'),
- dict(type='CenterCrop', crop_size=224),
- dict(type='PackInputs'),
-]
-train_dataloader = dict(
- pin_memory=True,
- persistent_workers=True,
- collate_fn=dict(type='default_collate'),
- batch_size=32,
- num_workers=5,
- dataset=dict(
- type='CustomDataset',
- data_root='data',
- with_label=True,
- ann_file='',
- data_prefix='train',
- pipeline=[
- dict(type='LoadImageFromFile'),
- dict(type='RandomResizedCrop', scale=224, backend='pillow'),
- dict(type='RandomFlip', prob=0.5, direction='horizontal'),
- dict(type='PackInputs'),
- ]),
- sampler=dict(type='DefaultSampler', shuffle=True))
-val_dataloader = dict(
- pin_memory=True,
- persistent_workers=True,
- collate_fn=dict(type='default_collate'),
- batch_size=32,
- num_workers=5,
- dataset=dict(
- type='CustomDataset',
- data_root='data',
- with_label=True,
- ann_file='',
- data_prefix='val',
- pipeline=[
- dict(type='LoadImageFromFile'),
- dict(type='ResizeEdge', scale=256, edge='short', backend='pillow'),
- dict(type='CenterCrop', crop_size=224),
- dict(type='PackInputs'),
- ]),
- sampler=dict(type='DefaultSampler', shuffle=False))
-val_evaluator = dict(
- type='Accuracy', topk=(
- 1,
- 3,
- ))
-test_dataloader = dict(
- pin_memory=True,
- persistent_workers=True,
- collate_fn=dict(type='default_collate'),
- batch_size=32,
- num_workers=5,
- dataset=dict(
- type='CustomDataset',
- data_root='data',
- with_label=True,
- ann_file='',
- data_prefix='val',
- pipeline=[
- dict(type='LoadImageFromFile'),
- dict(type='ResizeEdge', scale=256, edge='short', backend='pillow'),
- dict(type='CenterCrop', crop_size=224),
- dict(type='PackInputs'),
- ]),
- sampler=dict(type='DefaultSampler', shuffle=False))
-test_evaluator = dict(
-    type='Accuracy', topk=(
-        1,
-        3,
-    ))
-optim_wrapper = dict(
- optimizer=dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001),
- paramwise_cfg=dict(norm_decay_mult=0))
-param_scheduler = dict(type='StepLR', by_epoch=True, step_size=10, gamma=0.98)
-train_cfg = dict(by_epoch=True, max_epochs=2000, val_interval=10)
-val_cfg = dict()
-test_cfg = dict()
-auto_scale_lr = dict(base_batch_size=1024)
-default_scope = 'mmpretrain'
-default_hooks = dict(
- timer=dict(type='IterTimerHook'),
- logger=dict(type='LoggerHook', interval=10),
- param_scheduler=dict(type='ParamSchedulerHook'),
- checkpoint=dict(type='CheckpointHook', save_best='auto', interval=10),
- sampler_seed=dict(type='DistSamplerSeedHook'),
- visualization=dict(type='VisualizationHook', enable=False))
-env_cfg = dict(
- cudnn_benchmark=False,
- mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
- dist_cfg=dict(backend='nccl'))
-vis_backends = [
- dict(type='LocalVisBackend'),
-]
-visualizer = dict(
- type='UniversalVisualizer',
- vis_backends=[
- dict(type='LocalVisBackend'),
- dict(type='WandbVisBackend'),
- ])
-log_level = 'INFO'
-load_from = None
-resume = False
-randomness = dict(seed=None, deterministic=False)
-launcher = 'pytorch'
-work_dir = './work_dirs/shufflenet-v2-1x_4xb32_2000e_3c_noF'
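
The dumped config above can be consumed programmatically. A minimal sketch, assuming mmpretrain (>= 1.0) and mmengine are installed and the dump is saved locally under the name used here:

```python
from mmengine.config import Config
from mmpretrain.registry import MODELS
import mmpretrain.models  # noqa: F401  (registers ImageClassifier, ShuffleNetV2, ...)

cfg = Config.fromfile('shufflenet-v2-1x_4xb32_2000e_3c_noF.py')
model = MODELS.build(cfg.model)  # ImageClassifier: ShuffleNetV2 backbone + LinearClsHead
print(type(model).__name__)      # 'ImageClassifier'
```
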
diff --git a/spaces/Aadhithya/Binance-Crypto-Tracker/README.md b/spaces/Aadhithya/Binance-Crypto-Tracker/README.md
deleted file mode 100644
index 62a407395393000bf0b341b02115d245f0913b7f..0000000000000000000000000000000000000000
--- a/spaces/Aadhithya/Binance-Crypto-Tracker/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Binance Crypto Tracker
-emoji: 🏢
-colorFrom: pink
-colorTo: green
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Abhilashvj/planogram-compliance/setup.sh b/spaces/Abhilashvj/planogram-compliance/setup.sh
deleted file mode 100644
index f0ab2585fe12edf5a8ea8eb3a8614ba23ed52e7f..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/setup.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-mkdir -p ~/.streamlit/
-echo "\
-[server]\n\
-headless = true\n\
-port = $PORT\n\
-enableCORS = false\n\
-\n\
-" > ~/.streamlit/config.toml
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/PostResolveSize.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/PostResolveSize.js
deleted file mode 100644
index 996827f7696a8435741fcb2b4e89a80fbab4001c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/PostResolveSize.js
+++ /dev/null
@@ -1,46 +0,0 @@
-import ResizeGameObject from '../../../plugins/utils/size/ResizeGameObject.js';
-
-var PostResolveSize = function (width, height) {
- if (this.hasRatioFitChild) {
- // Resize child for ratio-fit
- var innerHeight, innerWidth;
- if (this.orientation === 0) {
- innerHeight = height - this.getInnerPadding('top') - this.getInnerPadding('bottom');
- } else {
- innerWidth = width - this.getInnerPadding('left') - this.getInnerPadding('right');
- }
-
- var children = this.sizerChildren,
- childWidth, childHeight;
- for (var i = 0, cnt = children.length; i < cnt; i++) {
- var child = children[i];
- if (child.rexSizer.hidden) {
- continue;
- }
-
- var fitRatio = child.rexSizer.fitRatio;
- if (!fitRatio) {
- continue;
- }
-
- if (this.orientation === 0) {
- childHeight = innerHeight - this.getChildOuterPadding(child, 'top') - this.getChildOuterPadding(child, 'bottom');
- childWidth = childHeight * fitRatio;
- } else {
-                childWidth = innerWidth - this.getChildOuterPadding(child, 'left') - this.getChildOuterPadding(child, 'right');
- childHeight = childWidth / fitRatio;
- }
-
- ResizeGameObject(child, childWidth, childHeight);
- if (child.isRexSizer) {
- child.setMinSize(childWidth, childHeight)
- }
- }
-
- this.proportionLength = undefined;
- this._childrenWidth = undefined;
- this.resolveWidth(width, true);
- }
-}
-
-export default PostResolveSize;
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/bin/gen_mask_dataset.py b/spaces/AlexWang/lama/bin/gen_mask_dataset.py
deleted file mode 100644
index 6e2ce3a9bc9708fd46641cab815113508af32d02..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/bin/gen_mask_dataset.py
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/usr/bin/env python3
-
-import glob
-import os
-import shutil
-import traceback
-
-import PIL.Image as Image
-import numpy as np
-from joblib import Parallel, delayed
-
-from saicinpainting.evaluation.masks.mask import SegmentationMask, propose_random_square_crop
-from saicinpainting.evaluation.utils import load_yaml, SmallMode
-from saicinpainting.training.data.masks import MixedMaskGenerator
-
-
-class MakeManyMasksWrapper:
- def __init__(self, impl, variants_n=2):
- self.impl = impl
- self.variants_n = variants_n
-
- def get_masks(self, img):
- img = np.transpose(np.array(img), (2, 0, 1))
- return [self.impl(img)[0] for _ in range(self.variants_n)]
-
-
-def process_images(src_images, indir, outdir, config):
- if config.generator_kind == 'segmentation':
- mask_generator = SegmentationMask(**config.mask_generator_kwargs)
- elif config.generator_kind == 'random':
- variants_n = config.mask_generator_kwargs.pop('variants_n', 2)
- mask_generator = MakeManyMasksWrapper(MixedMaskGenerator(**config.mask_generator_kwargs),
- variants_n=variants_n)
- else:
- raise ValueError(f'Unexpected generator kind: {config.generator_kind}')
-
- max_tamper_area = config.get('max_tamper_area', 1)
-
- for infile in src_images:
- try:
- file_relpath = infile[len(indir):]
- img_outpath = os.path.join(outdir, file_relpath)
- os.makedirs(os.path.dirname(img_outpath), exist_ok=True)
-
- image = Image.open(infile).convert('RGB')
-
- # scale input image to output resolution and filter smaller images
- if min(image.size) < config.cropping.out_min_size:
- handle_small_mode = SmallMode(config.cropping.handle_small_mode)
- if handle_small_mode == SmallMode.DROP:
- continue
- elif handle_small_mode == SmallMode.UPSCALE:
- factor = config.cropping.out_min_size / min(image.size)
- out_size = (np.array(image.size) * factor).round().astype('uint32')
- image = image.resize(out_size, resample=Image.BICUBIC)
- else:
- factor = config.cropping.out_min_size / min(image.size)
- out_size = (np.array(image.size) * factor).round().astype('uint32')
- image = image.resize(out_size, resample=Image.BICUBIC)
-
- # generate and select masks
- src_masks = mask_generator.get_masks(image)
-
- filtered_image_mask_pairs = []
- for cur_mask in src_masks:
- if config.cropping.out_square_crop:
- (crop_left,
- crop_top,
- crop_right,
- crop_bottom) = propose_random_square_crop(cur_mask,
- min_overlap=config.cropping.crop_min_overlap)
- cur_mask = cur_mask[crop_top:crop_bottom, crop_left:crop_right]
- cur_image = image.copy().crop((crop_left, crop_top, crop_right, crop_bottom))
- else:
- cur_image = image
-
- if len(np.unique(cur_mask)) == 0 or cur_mask.mean() > max_tamper_area:
- continue
-
- filtered_image_mask_pairs.append((cur_image, cur_mask))
-
- mask_indices = np.random.choice(len(filtered_image_mask_pairs),
- size=min(len(filtered_image_mask_pairs), config.max_masks_per_image),
- replace=False)
-
- # crop masks; save masks together with input image
- mask_basename = os.path.join(outdir, os.path.splitext(file_relpath)[0])
- for i, idx in enumerate(mask_indices):
- cur_image, cur_mask = filtered_image_mask_pairs[idx]
- cur_basename = mask_basename + f'_crop{i:03d}'
- Image.fromarray(np.clip(cur_mask * 255, 0, 255).astype('uint8'),
- mode='L').save(cur_basename + f'_mask{i:03d}.png')
- cur_image.save(cur_basename + '.png')
- except KeyboardInterrupt:
- return
- except Exception as ex:
- print(f'Could not make masks for {infile} due to {ex}:\n{traceback.format_exc()}')
-
-
-def main(args):
- if not args.indir.endswith('/'):
- args.indir += '/'
-
- os.makedirs(args.outdir, exist_ok=True)
-
- config = load_yaml(args.config)
-
- in_files = list(glob.glob(os.path.join(args.indir, '**', f'*.{args.ext}'), recursive=True))
- if args.n_jobs == 0:
- process_images(in_files, args.indir, args.outdir, config)
- else:
- in_files_n = len(in_files)
- chunk_size = in_files_n // args.n_jobs + (1 if in_files_n % args.n_jobs > 0 else 0)
- Parallel(n_jobs=args.n_jobs)(
- delayed(process_images)(in_files[start:start+chunk_size], args.indir, args.outdir, config)
- for start in range(0, len(in_files), chunk_size)
- )
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('config', type=str, help='Path to config for dataset generation')
- aparser.add_argument('indir', type=str, help='Path to folder with images')
- aparser.add_argument('outdir', type=str, help='Path to folder to store aligned images and masks to')
- aparser.add_argument('--n-jobs', type=int, default=0, help='How many processes to use')
- aparser.add_argument('--ext', type=str, default='jpg', help='Input image extension')
-
- main(aparser.parse_args())
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/options/__init__.py b/spaces/Alpaca233/SadTalker/src/face3d/options/__init__.py
deleted file mode 100644
index e7eedebe54aa70169fd25951b3034d819e396c90..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/options/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""This package options includes option modules: training options, test options, and basic options (used in both training and test)."""
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/image_processor.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/image_processor.py
deleted file mode 100644
index 6ccf9b465ebd4cd6ce48a40dfe45bbc70d1f3416..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/image_processor.py
+++ /dev/null
@@ -1,366 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import warnings
-from typing import List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-from PIL import Image
-
-from .configuration_utils import ConfigMixin, register_to_config
-from .utils import CONFIG_NAME, PIL_INTERPOLATION, deprecate
-
-
-class VaeImageProcessor(ConfigMixin):
- """
- Image processor for VAE.
-
- Args:
- do_resize (`bool`, *optional*, defaults to `True`):
- Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept
- `height` and `width` arguments from [`image_processor.VaeImageProcessor.preprocess`] method.
- vae_scale_factor (`int`, *optional*, defaults to `8`):
- VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- resample (`str`, *optional*, defaults to `lanczos`):
- Resampling filter to use when resizing the image.
- do_normalize (`bool`, *optional*, defaults to `True`):
- Whether to normalize the image to [-1,1].
- do_convert_rgb (`bool`, *optional*, defaults to be `False`):
- Whether to convert the images to RGB format.
- """
-
- config_name = CONFIG_NAME
-
- @register_to_config
- def __init__(
- self,
- do_resize: bool = True,
- vae_scale_factor: int = 8,
- resample: str = "lanczos",
- do_normalize: bool = True,
- do_convert_rgb: bool = False,
- ):
- super().__init__()
-
- @staticmethod
- def numpy_to_pil(images: np.ndarray) -> PIL.Image.Image:
- """
- Convert a numpy image or a batch of images to a PIL image.
- """
- if images.ndim == 3:
- images = images[None, ...]
- images = (images * 255).round().astype("uint8")
- if images.shape[-1] == 1:
- # special case for grayscale (single channel) images
- pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images]
- else:
- pil_images = [Image.fromarray(image) for image in images]
-
- return pil_images
-
- @staticmethod
- def pil_to_numpy(images: Union[List[PIL.Image.Image], PIL.Image.Image]) -> np.ndarray:
- """
- Convert a PIL image or a list of PIL images to NumPy arrays.
- """
- if not isinstance(images, list):
- images = [images]
- images = [np.array(image).astype(np.float32) / 255.0 for image in images]
- images = np.stack(images, axis=0)
-
- return images
-
- @staticmethod
- def numpy_to_pt(images: np.ndarray) -> torch.FloatTensor:
- """
- Convert a NumPy image to a PyTorch tensor.
- """
- if images.ndim == 3:
- images = images[..., None]
-
- images = torch.from_numpy(images.transpose(0, 3, 1, 2))
- return images
-
- @staticmethod
- def pt_to_numpy(images: torch.FloatTensor) -> np.ndarray:
- """
- Convert a PyTorch tensor to a NumPy image.
- """
- images = images.cpu().permute(0, 2, 3, 1).float().numpy()
- return images
-
- @staticmethod
- def normalize(images):
- """
- Normalize an image array to [-1,1].
- """
- return 2.0 * images - 1.0
-
- @staticmethod
- def denormalize(images):
- """
- Denormalize an image array to [0,1].
- """
- return (images / 2 + 0.5).clamp(0, 1)
-
- @staticmethod
- def convert_to_rgb(image: PIL.Image.Image) -> PIL.Image.Image:
- """
- Converts an image to RGB format.
- """
- image = image.convert("RGB")
- return image
-
- def resize(
- self,
- image: PIL.Image.Image,
- height: Optional[int] = None,
- width: Optional[int] = None,
- ) -> PIL.Image.Image:
- """
- Resize a PIL image. Both height and width are downscaled to the next integer multiple of `vae_scale_factor`.
- """
- if height is None:
- height = image.height
- if width is None:
- width = image.width
-
- width, height = (
- x - x % self.config.vae_scale_factor for x in (width, height)
- ) # resize to integer multiple of vae_scale_factor
- image = image.resize((width, height), resample=PIL_INTERPOLATION[self.config.resample])
- return image
-
- def preprocess(
- self,
- image: Union[torch.FloatTensor, PIL.Image.Image, np.ndarray],
- height: Optional[int] = None,
- width: Optional[int] = None,
- ) -> torch.Tensor:
- """
- Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors.
- """
- supported_formats = (PIL.Image.Image, np.ndarray, torch.Tensor)
- if isinstance(image, supported_formats):
- image = [image]
- elif not (isinstance(image, list) and all(isinstance(i, supported_formats) for i in image)):
- raise ValueError(
- f"Input is in incorrect format: {[type(i) for i in image]}. Currently, we only support {', '.join(supported_formats)}"
- )
-
- if isinstance(image[0], PIL.Image.Image):
- if self.config.do_convert_rgb:
- image = [self.convert_to_rgb(i) for i in image]
- if self.config.do_resize:
- image = [self.resize(i, height, width) for i in image]
- image = self.pil_to_numpy(image) # to np
- image = self.numpy_to_pt(image) # to pt
-
- elif isinstance(image[0], np.ndarray):
- image = np.concatenate(image, axis=0) if image[0].ndim == 4 else np.stack(image, axis=0)
- image = self.numpy_to_pt(image)
- _, _, height, width = image.shape
- if self.config.do_resize and (
- height % self.config.vae_scale_factor != 0 or width % self.config.vae_scale_factor != 0
- ):
- raise ValueError(
- f"Currently we only support resizing for PIL image - please resize your numpy array to be divisible by {self.config.vae_scale_factor}"
- f"currently the sizes are {height} and {width}. You can also pass a PIL image instead to use resize option in VAEImageProcessor"
- )
-
- elif isinstance(image[0], torch.Tensor):
- image = torch.cat(image, axis=0) if image[0].ndim == 4 else torch.stack(image, axis=0)
- _, channel, height, width = image.shape
-
- # don't need any preprocess if the image is latents
- if channel == 4:
- return image
-
- if self.config.do_resize and (
- height % self.config.vae_scale_factor != 0 or width % self.config.vae_scale_factor != 0
- ):
- raise ValueError(
- f"Currently we only support resizing for PIL image - please resize your pytorch tensor to be divisible by {self.config.vae_scale_factor}"
- f"currently the sizes are {height} and {width}. You can also pass a PIL image instead to use resize option in VAEImageProcessor"
- )
-
- # expected range [0,1], normalize to [-1,1]
- do_normalize = self.config.do_normalize
- if image.min() < 0:
- warnings.warn(
- "Passing `image` as torch tensor with value range in [-1,1] is deprecated. The expected value range for image tensor is [0,1] "
- f"when passing as pytorch tensor or numpy Array. You passed `image` with value range [{image.min()},{image.max()}]",
- FutureWarning,
- )
- do_normalize = False
-
- if do_normalize:
- image = self.normalize(image)
-
- return image
-
- def postprocess(
- self,
- image: torch.FloatTensor,
- output_type: str = "pil",
- do_denormalize: Optional[List[bool]] = None,
- ):
- if not isinstance(image, torch.Tensor):
- raise ValueError(
- f"Input for postprocessing is in incorrect format: {type(image)}. We only support pytorch tensor"
- )
- if output_type not in ["latent", "pt", "np", "pil"]:
- deprecation_message = (
- f"the output_type {output_type} is outdated and has been set to `np`. Please make sure to set it to one of these instead: "
- "`pil`, `np`, `pt`, `latent`"
- )
- deprecate("Unsupported output_type", "1.0.0", deprecation_message, standard_warn=False)
- output_type = "np"
-
- if output_type == "latent":
- return image
-
- if do_denormalize is None:
- do_denormalize = [self.config.do_normalize] * image.shape[0]
-
- image = torch.stack(
- [self.denormalize(image[i]) if do_denormalize[i] else image[i] for i in range(image.shape[0])]
- )
-
- if output_type == "pt":
- return image
-
- image = self.pt_to_numpy(image)
-
- if output_type == "np":
- return image
-
- if output_type == "pil":
- return self.numpy_to_pil(image)
-
-
-class VaeImageProcessorLDM3D(VaeImageProcessor):
- """
- Image processor for VAE LDM3D.
-
- Args:
- do_resize (`bool`, *optional*, defaults to `True`):
- Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- vae_scale_factor (`int`, *optional*, defaults to `8`):
- VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- resample (`str`, *optional*, defaults to `lanczos`):
- Resampling filter to use when resizing the image.
- do_normalize (`bool`, *optional*, defaults to `True`):
- Whether to normalize the image to [-1,1].
- """
-
- config_name = CONFIG_NAME
-
- @register_to_config
- def __init__(
- self,
- do_resize: bool = True,
- vae_scale_factor: int = 8,
- resample: str = "lanczos",
- do_normalize: bool = True,
- ):
- super().__init__()
-
- @staticmethod
- def numpy_to_pil(images):
- """
- Convert a NumPy image or a batch of images to a PIL image.
- """
- if images.ndim == 3:
- images = images[None, ...]
- images = (images * 255).round().astype("uint8")
- if images.shape[-1] == 1:
- # special case for grayscale (single channel) images
- pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images]
- else:
- pil_images = [Image.fromarray(image[:, :, :3]) for image in images]
-
- return pil_images
-
- @staticmethod
- def rgblike_to_depthmap(image):
- """
- Args:
- image: RGB-like depth image
-
- Returns: depth map
-
- """
- return image[:, :, 1] * 2**8 + image[:, :, 2]
-
- def numpy_to_depth(self, images):
- """
- Convert a NumPy depth image or a batch of images to a PIL image.
- """
- if images.ndim == 3:
- images = images[None, ...]
- images_depth = images[:, :, :, 3:]
- if images.shape[-1] == 6:
- images_depth = (images_depth * 255).round().astype("uint8")
- pil_images = [
- Image.fromarray(self.rgblike_to_depthmap(image_depth), mode="I;16") for image_depth in images_depth
- ]
- elif images.shape[-1] == 4:
- images_depth = (images_depth * 65535.0).astype(np.uint16)
- pil_images = [Image.fromarray(image_depth, mode="I;16") for image_depth in images_depth]
- else:
- raise Exception("Not supported")
-
- return pil_images
-
- def postprocess(
- self,
- image: torch.FloatTensor,
- output_type: str = "pil",
- do_denormalize: Optional[List[bool]] = None,
- ):
- if not isinstance(image, torch.Tensor):
- raise ValueError(
- f"Input for postprocessing is in incorrect format: {type(image)}. We only support pytorch tensor"
- )
- if output_type not in ["latent", "pt", "np", "pil"]:
- deprecation_message = (
- f"the output_type {output_type} is outdated and has been set to `np`. Please make sure to set it to one of these instead: "
- "`pil`, `np`, `pt`, `latent`"
- )
- deprecate("Unsupported output_type", "1.0.0", deprecation_message, standard_warn=False)
- output_type = "np"
-
- if do_denormalize is None:
- do_denormalize = [self.config.do_normalize] * image.shape[0]
-
- image = torch.stack(
- [self.denormalize(image[i]) if do_denormalize[i] else image[i] for i in range(image.shape[0])]
- )
-
- image = self.pt_to_numpy(image)
-
- if output_type == "np":
- if image.shape[-1] == 6:
- image_depth = np.stack([self.rgblike_to_depthmap(im[:, :, 3:]) for im in image], axis=0)
- else:
- image_depth = image[:, :, :, 3:]
- return image[:, :, :, :3], image_depth
-
- if output_type == "pil":
- return self.numpy_to_pil(image), self.numpy_to_depth(image)
- else:
- raise Exception(f"This type {output_type} is not supported")
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_vq_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_vq_diffusion.py
deleted file mode 100644
index 74437ad4548074a488917d3ea9b5eef4f0ac1532..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_vq_diffusion.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import torch
-import torch.nn.functional as F
-
-from diffusers import VQDiffusionScheduler
-
-from .test_schedulers import SchedulerCommonTest
-
-
-class VQDiffusionSchedulerTest(SchedulerCommonTest):
- scheduler_classes = (VQDiffusionScheduler,)
-
- def get_scheduler_config(self, **kwargs):
- config = {
- "num_vec_classes": 4097,
- "num_train_timesteps": 100,
- }
-
- config.update(**kwargs)
- return config
-
- def dummy_sample(self, num_vec_classes):
- batch_size = 4
- height = 8
- width = 8
-
- sample = torch.randint(0, num_vec_classes, (batch_size, height * width))
-
- return sample
-
- @property
- def dummy_sample_deter(self):
- assert False
-
- def dummy_model(self, num_vec_classes):
- def model(sample, t, *args):
- batch_size, num_latent_pixels = sample.shape
- logits = torch.rand((batch_size, num_vec_classes - 1, num_latent_pixels))
- return_value = F.log_softmax(logits.double(), dim=1).float()
- return return_value
-
- return model
-
- def test_timesteps(self):
- for timesteps in [2, 5, 100, 1000]:
- self.check_over_configs(num_train_timesteps=timesteps)
-
- def test_num_vec_classes(self):
- for num_vec_classes in [5, 100, 1000, 4000]:
- self.check_over_configs(num_vec_classes=num_vec_classes)
-
- def test_time_indices(self):
- for t in [0, 50, 99]:
- self.check_over_forward(time_step=t)
-
- def test_add_noise_device(self):
- pass
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/legacy_1.x/mask_rcnn_r50_fpn_1x_coco_v1.py b/spaces/Andy1621/uniformer_image_detection/configs/legacy_1.x/mask_rcnn_r50_fpn_1x_coco_v1.py
deleted file mode 100644
index 04581bbc901d0fda0ec8c6b4a8078ae04f21473a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/legacy_1.x/mask_rcnn_r50_fpn_1x_coco_v1.py
+++ /dev/null
@@ -1,34 +0,0 @@
-_base_ = [
- '../_base_/models/mask_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- rpn_head=dict(
- anchor_generator=dict(type='LegacyAnchorGenerator', center_offset=0.5),
- bbox_coder=dict(type='LegacyDeltaXYWHBBoxCoder'),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
- roi_head=dict(
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(
- type='RoIAlign',
- output_size=7,
- sampling_ratio=2,
- aligned=False)),
- mask_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(
- type='RoIAlign',
- output_size=14,
- sampling_ratio=2,
- aligned=False)),
- bbox_head=dict(
- bbox_coder=dict(type='LegacyDeltaXYWHBBoxCoder'),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))),
-
- # model training and testing settings
- train_cfg=dict(
- rpn_proposal=dict(max_per_img=2000),
- rcnn=dict(assigner=dict(match_low_quality=True))))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_2x_coco.py
deleted file mode 100644
index 612490b4342a1b6fc164ec80bbe0a6c6df147d76..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_2x_coco.py
+++ /dev/null
@@ -1,3 +0,0 @@
-_base_ = './faster_rcnn_regnetx-3.2GF_fpn_1x_coco.py'
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r101_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r101_fpn_1x_coco.py
deleted file mode 100644
index b2af6119319c03a8e213b2c352fc48e66bc8a822..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r101_fpn_1x_coco.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './rpn_r50_fpn_1x_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/emanet_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/emanet_r50-d8.py
deleted file mode 100644
index 26adcd430926de0862204a71d345f2543167f27b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/emanet_r50-d8.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='EMAHead',
- in_channels=2048,
- in_index=3,
- channels=256,
- ema_channels=512,
- num_bases=64,
- num_stages=3,
- momentum=0.1,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/chase_db1.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/chase_db1.py
deleted file mode 100644
index 298594ea925f87f22b37094a2ec50e370aec96a0..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/chase_db1.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# dataset settings
-dataset_type = 'ChaseDB1Dataset'
-data_root = 'data/CHASE_DB1'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-img_scale = (960, 999)
-crop_size = (128, 128)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg'])
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale,
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img'])
- ])
-]
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type='RepeatDataset',
- times=40000,
- dataset=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline)),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
diff --git a/spaces/Araloak/fz/app.py b/spaces/Araloak/fz/app.py
deleted file mode 100644
index 996b392df9d3306d7c0db13132d4ef8259bf59fe..0000000000000000000000000000000000000000
--- a/spaces/Araloak/fz/app.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# import gradio as gr
-# import os, openai
-#
-#
-# conversation = []
-#
-# class ChatGPT:
-#
-#
-# def __init__(self):
-# self.api_key = ""
-# self.messages = conversation
-# self.model = os.getenv("OPENAI_MODEL", default = "gpt-3.5-turbo")
-#
-# def save_api_key(self, user_input0):
-# self.api_key = user_input0
-#
-# def get_response(self, user_input):
-# openai.api_key = self.api_key
-# conversation.append({"role": "user", "content": user_input})
-#
-#
-# response = openai.ChatCompletion.create(
-# model=self.model,
-# messages = self.messages
-#
-# )
-#
-# conversation.append({"role": "assistant", "content": response['choices'][0]['message']['content']})
-#
-#         print("AI reply:")
-# print(response['choices'][0]['message']['content'].strip())
-#
-#
-#
-# return response['choices'][0]['message']['content'].strip()
-#
-#
-# chatgpt = ChatGPT()
-#
-#
-# def greet(prompt, api_key):
-# chatgpt.save_api_key(api_key)
-#
-# reply_text = chatgpt.get_response(prompt)
-#
-# greeting = f"{reply_text}"
-#
-# return greeting
-#
-# demo = gr.Interface(
-# fn=greet,
-# inputs=["text", "text"],
-# outputs=["text"],
-# )
-#
-# demo.launch()
-
-import argparse
-
-import gradio as gr
-from loguru import logger
-
-from chat_completion import ChatCompletion
-
-parser = argparse.ArgumentParser()
-parser.add_argument('--api_key_path', type=str, default='./openai_api_key')
-parser.add_argument('--log_path', type=str, default='./log.txt')
-parser.add_argument('--share', action='store_true', default=False)
-parser.add_argument('--welcome', type=str, default='Say something to ChatGPT here ...')
-parser.add_argument('--title', type=str, default='ChatGPT')
-parser.add_argument('--setting', type=str, default=None)
-args = parser.parse_args()
-
-bot = ChatCompletion(api_key_path=args.api_key_path)
-logger.add(args.log_path)
-
-with gr.Blocks(title=args.title) as demo:
- chatbot = gr.Chatbot(show_label=False)
- msg = gr.TextArea(show_label=False, placeholder=args.welcome)
- send_btn = gr.Button('Send')
- retry_btn = gr.Button('Retry')
- reset_btn = gr.Button('Reset')
-
- def send(user_message, history):
- if not user_message:
- return '', history
-
- logger.info(f'[MSG] {user_message}')
- response = bot(user_message, setting=args.setting) if user_message != 'retry' else bot.retry()
- logger.info(f'[ANS] {response}')
- return '', history + [[user_message, response]]
-
- def reset():
- bot.reset()
- logger.info('[RESET]')
- return None, [[None, None]]
-
- def retry(history):
- return send('retry', history)
-
- send_btn.click(send, inputs=[msg, chatbot], outputs=[msg, chatbot], show_progress=True)
- reset_btn.click(reset, inputs=None, outputs=[msg, chatbot])
- retry_btn.click(retry, inputs=chatbot, outputs=[msg, chatbot])
-
-
-demo.launch(share=args.share)
diff --git a/spaces/Aspik101/Polish_Llama2/README.md b/spaces/Aspik101/Polish_Llama2/README.md
deleted file mode 100644
index 5d4e79b61712ffd013b5cdfce6193b89ea6800fd..0000000000000000000000000000000000000000
--- a/spaces/Aspik101/Polish_Llama2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Polish Llama2
-emoji: 📚
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/getting_started.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/getting_started.md
deleted file mode 100644
index e90bde77a3197b77f4cfdce86ca8f96491650acd..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/getting_started.md
+++ /dev/null
@@ -1 +0,0 @@
-../../GETTING_STARTED.md
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/stores/pendingMessageIdToRetry.ts b/spaces/BetterAPI/BetterChat_new/src/lib/stores/pendingMessageIdToRetry.ts
deleted file mode 100644
index 47eec8770ae561b2c4881c5d001a3d46ee699b3b..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/stores/pendingMessageIdToRetry.ts
+++ /dev/null
@@ -1,4 +0,0 @@
-import type { Message } from "$lib/types/Message";
-import { writable } from "svelte/store";
-
-export const pendingMessageIdToRetry = writable(null);
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/auth.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/auth.py
deleted file mode 100644
index 9733686ddb36b826ead4f4666d42311397fa6fec..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/auth.py
+++ /dev/null
@@ -1,315 +0,0 @@
-"""
-requests.auth
-~~~~~~~~~~~~~
-
-This module contains the authentication handlers for Requests.
-"""
-
-import hashlib
-import os
-import re
-import threading
-import time
-import warnings
-from base64 import b64encode
-
-from ._internal_utils import to_native_string
-from .compat import basestring, str, urlparse
-from .cookies import extract_cookies_to_jar
-from .utils import parse_dict_header
-
-CONTENT_TYPE_FORM_URLENCODED = "application/x-www-form-urlencoded"
-CONTENT_TYPE_MULTI_PART = "multipart/form-data"
-
-
-def _basic_auth_str(username, password):
- """Returns a Basic Auth string."""
-
- # "I want us to put a big-ol' comment on top of it that
- # says that this behaviour is dumb but we need to preserve
- # it because people are relying on it."
- # - Lukasa
- #
- # These are here solely to maintain backwards compatibility
- # for things like ints. This will be removed in 3.0.0.
- if not isinstance(username, basestring):
- warnings.warn(
- "Non-string usernames will no longer be supported in Requests "
- "3.0.0. Please convert the object you've passed in ({!r}) to "
- "a string or bytes object in the near future to avoid "
- "problems.".format(username),
- category=DeprecationWarning,
- )
- username = str(username)
-
- if not isinstance(password, basestring):
- warnings.warn(
- "Non-string passwords will no longer be supported in Requests "
- "3.0.0. Please convert the object you've passed in ({!r}) to "
- "a string or bytes object in the near future to avoid "
- "problems.".format(type(password)),
- category=DeprecationWarning,
- )
- password = str(password)
- # -- End Removal --
-
- if isinstance(username, str):
- username = username.encode("latin1")
-
- if isinstance(password, str):
- password = password.encode("latin1")
-
- authstr = "Basic " + to_native_string(
- b64encode(b":".join((username, password))).strip()
- )
-
- return authstr
-
-
-class AuthBase:
- """Base class that all auth implementations derive from"""
-
- def __call__(self, r):
- raise NotImplementedError("Auth hooks must be callable.")
-
-
-class HTTPBasicAuth(AuthBase):
- """Attaches HTTP Basic Authentication to the given Request object."""
-
- def __init__(self, username, password):
- self.username = username
- self.password = password
-
- def __eq__(self, other):
- return all(
- [
- self.username == getattr(other, "username", None),
- self.password == getattr(other, "password", None),
- ]
- )
-
- def __ne__(self, other):
- return not self == other
-
- def __call__(self, r):
- r.headers["Authorization"] = _basic_auth_str(self.username, self.password)
- return r
-
-
-class HTTPProxyAuth(HTTPBasicAuth):
- """Attaches HTTP Proxy Authentication to a given Request object."""
-
- def __call__(self, r):
- r.headers["Proxy-Authorization"] = _basic_auth_str(self.username, self.password)
- return r
-
-
-class HTTPDigestAuth(AuthBase):
- """Attaches HTTP Digest Authentication to the given Request object."""
-
- def __init__(self, username, password):
- self.username = username
- self.password = password
- # Keep state in per-thread local storage
- self._thread_local = threading.local()
-
- def init_per_thread_state(self):
- # Ensure state is initialized just once per-thread
- if not hasattr(self._thread_local, "init"):
- self._thread_local.init = True
- self._thread_local.last_nonce = ""
- self._thread_local.nonce_count = 0
- self._thread_local.chal = {}
- self._thread_local.pos = None
- self._thread_local.num_401_calls = None
-
- def build_digest_header(self, method, url):
- """
- :rtype: str
- """
-
- realm = self._thread_local.chal["realm"]
- nonce = self._thread_local.chal["nonce"]
- qop = self._thread_local.chal.get("qop")
- algorithm = self._thread_local.chal.get("algorithm")
- opaque = self._thread_local.chal.get("opaque")
- hash_utf8 = None
-
- if algorithm is None:
- _algorithm = "MD5"
- else:
- _algorithm = algorithm.upper()
- # lambdas assume digest modules are imported at the top level
- if _algorithm == "MD5" or _algorithm == "MD5-SESS":
-
- def md5_utf8(x):
- if isinstance(x, str):
- x = x.encode("utf-8")
- return hashlib.md5(x).hexdigest()
-
- hash_utf8 = md5_utf8
- elif _algorithm == "SHA":
-
- def sha_utf8(x):
- if isinstance(x, str):
- x = x.encode("utf-8")
- return hashlib.sha1(x).hexdigest()
-
- hash_utf8 = sha_utf8
- elif _algorithm == "SHA-256":
-
- def sha256_utf8(x):
- if isinstance(x, str):
- x = x.encode("utf-8")
- return hashlib.sha256(x).hexdigest()
-
- hash_utf8 = sha256_utf8
- elif _algorithm == "SHA-512":
-
- def sha512_utf8(x):
- if isinstance(x, str):
- x = x.encode("utf-8")
- return hashlib.sha512(x).hexdigest()
-
- hash_utf8 = sha512_utf8
-
- KD = lambda s, d: hash_utf8(f"{s}:{d}") # noqa:E731
-
- if hash_utf8 is None:
- return None
-
- # XXX not implemented yet
- entdig = None
- p_parsed = urlparse(url)
- #: path is request-uri defined in RFC 2616 which should not be empty
- path = p_parsed.path or "/"
- if p_parsed.query:
- path += f"?{p_parsed.query}"
-
- A1 = f"{self.username}:{realm}:{self.password}"
- A2 = f"{method}:{path}"
-
- HA1 = hash_utf8(A1)
- HA2 = hash_utf8(A2)
-
- if nonce == self._thread_local.last_nonce:
- self._thread_local.nonce_count += 1
- else:
- self._thread_local.nonce_count = 1
- ncvalue = f"{self._thread_local.nonce_count:08x}"
- s = str(self._thread_local.nonce_count).encode("utf-8")
- s += nonce.encode("utf-8")
- s += time.ctime().encode("utf-8")
- s += os.urandom(8)
-
- cnonce = hashlib.sha1(s).hexdigest()[:16]
- if _algorithm == "MD5-SESS":
- HA1 = hash_utf8(f"{HA1}:{nonce}:{cnonce}")
-
- if not qop:
- respdig = KD(HA1, f"{nonce}:{HA2}")
- elif qop == "auth" or "auth" in qop.split(","):
- noncebit = f"{nonce}:{ncvalue}:{cnonce}:auth:{HA2}"
- respdig = KD(HA1, noncebit)
- else:
- # XXX handle auth-int.
- return None
-
- self._thread_local.last_nonce = nonce
-
- # XXX should the partial digests be encoded too?
- base = (
- f'username="{self.username}", realm="{realm}", nonce="{nonce}", '
- f'uri="{path}", response="{respdig}"'
- )
- if opaque:
- base += f', opaque="{opaque}"'
- if algorithm:
- base += f', algorithm="{algorithm}"'
- if entdig:
- base += f', digest="{entdig}"'
- if qop:
- base += f', qop="auth", nc={ncvalue}, cnonce="{cnonce}"'
-
- return f"Digest {base}"
-
- def handle_redirect(self, r, **kwargs):
- """Reset num_401_calls counter on redirects."""
- if r.is_redirect:
- self._thread_local.num_401_calls = 1
-
- def handle_401(self, r, **kwargs):
- """
- Takes the given response and tries digest-auth, if needed.
-
- :rtype: requests.Response
- """
-
- # If response is not 4xx, do not auth
- # See https://github.com/psf/requests/issues/3772
- if not 400 <= r.status_code < 500:
- self._thread_local.num_401_calls = 1
- return r
-
- if self._thread_local.pos is not None:
- # Rewind the file position indicator of the body to where
- # it was to resend the request.
- r.request.body.seek(self._thread_local.pos)
- s_auth = r.headers.get("www-authenticate", "")
-
- if "digest" in s_auth.lower() and self._thread_local.num_401_calls < 2:
-
- self._thread_local.num_401_calls += 1
- pat = re.compile(r"digest ", flags=re.IGNORECASE)
- self._thread_local.chal = parse_dict_header(pat.sub("", s_auth, count=1))
-
- # Consume content and release the original connection
- # to allow our new request to reuse the same one.
- r.content
- r.close()
- prep = r.request.copy()
- extract_cookies_to_jar(prep._cookies, r.request, r.raw)
- prep.prepare_cookies(prep._cookies)
-
- prep.headers["Authorization"] = self.build_digest_header(
- prep.method, prep.url
- )
- _r = r.connection.send(prep, **kwargs)
- _r.history.append(r)
- _r.request = prep
-
- return _r
-
- self._thread_local.num_401_calls = 1
- return r
-
- def __call__(self, r):
- # Initialize per-thread state, if needed
- self.init_per_thread_state()
- # If we have a saved nonce, skip the 401
- if self._thread_local.last_nonce:
- r.headers["Authorization"] = self.build_digest_header(r.method, r.url)
- try:
- self._thread_local.pos = r.body.tell()
- except AttributeError:
- # In the case of HTTPDigestAuth being reused and the body of
- # the previous request was a file-like object, pos has the
- # file position of the previous body. Ensure it's set to
- # None.
- self._thread_local.pos = None
- r.register_hook("response", self.handle_401)
- r.register_hook("response", self.handle_redirect)
- self._thread_local.num_401_calls = 1
-
- return r
-
- def __eq__(self, other):
- return all(
- [
- self.username == getattr(other, "username", None),
- self.password == getattr(other, "password", None),
- ]
- )
-
- def __ne__(self, other):
- return not self == other
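
For orientation, a minimal usage sketch of how these auth classes are normally attached to a request, assuming the upstream `requests` package; the URLs and credentials are placeholders:

```python
import requests
from requests.auth import HTTPBasicAuth, HTTPDigestAuth

# Basic auth: the Authorization header is computed once from username/password.
r = requests.get("https://example.com/protected", auth=HTTPBasicAuth("user", "pass"))

# Digest auth: the first attempt gets a 401, handle_401() parses the
# WWW-Authenticate challenge, and the request is resent with a Digest header.
r = requests.get("https://example.com/digest", auth=HTTPDigestAuth("user", "pass"))
print(r.status_code)
```
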
diff --git a/spaces/Blessin/drama-director/README.md b/spaces/Blessin/drama-director/README.md
deleted file mode 100644
index dc053d5b1d157db242a71ee5bac6d8483520b3fc..0000000000000000000000000000000000000000
--- a/spaces/Blessin/drama-director/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Drama Director
-emoji: 👁
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Bonosa2/parrot-chat-bot/app.py b/spaces/Bonosa2/parrot-chat-bot/app.py
deleted file mode 100644
index dd618ae2c013c1195414d15e89b255773c4e7d3c..0000000000000000000000000000000000000000
--- a/spaces/Bonosa2/parrot-chat-bot/app.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import gradio as gr
-import openai
-import os
-openai.api_key = os.environ['key3']
-
-def answer_query(prompt):
-    # Only spend an API call on parrot-related queries.
-    if 'parrot' not in prompt.lower():
-        return "This service is only for parrot-related queries."
-
-    response = openai.Completion.create(
-        engine="text-davinci-003",
-        prompt=prompt,
-        max_tokens=150
-    )
-    message = response.choices[0].text.strip()
-
-    # Disclaimer for vet info
-    if 'vet' in prompt.lower() or 'veterinarian' in prompt.lower() or 'medical' in prompt.lower():
-        return f"{message}\n\nPlease note that while I strive to provide accurate information, I'm an AI and not a veterinarian. Always consult with a professional for medical advice."
-
-    return message
-
-iface = gr.Interface(fn=answer_query, inputs="text", outputs="text")
-iface.launch()
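
Note that `text-davinci-003` and the legacy `Completion` endpoint have since been retired by OpenAI. A rough equivalent of the same guard-then-call pattern on the chat endpoint of the pre-1.0 `openai` client is sketched below; the model name is an assumption, not something this Space used:

```python
import openai

def answer_query_chat(prompt: str) -> str:
    # Same guard before spending an API call, then a chat completion request.
    if 'parrot' not in prompt.lower():
        return "This service is only for parrot-related queries."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed replacement model
        messages=[{"role": "user", "content": prompt}],
        max_tokens=150,
    )
    return response.choices[0].message["content"].strip()
```
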
diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/base_assigner.py b/spaces/CVPR/WALT/mmdet/core/bbox/assigners/base_assigner.py
deleted file mode 100644
index 1ff0160dbb4bfbf53cb40d1d5cb29bcc3d197a59..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/base_assigner.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-
-class BaseAssigner(metaclass=ABCMeta):
- """Base assigner that assigns boxes to ground truth boxes."""
-
- @abstractmethod
- def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
-        """Assign each box to either a ground truth box or mark it as negative."""
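
A concrete assigner only needs to implement `assign`. The sketch below is a simplified illustration, not mmdet's actual `MaxIoUAssigner`: it returns a plain tensor of assigned ground-truth indices (-1 for negatives) instead of an `AssignResult`.

```python
import torch
from torchvision.ops import box_iou


class SimpleIoUAssigner(BaseAssigner):
    """Toy assigner: each box gets the best-overlapping GT, or -1 (negative)."""

    def __init__(self, pos_iou_thr=0.5):
        self.pos_iou_thr = pos_iou_thr

    def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
        ious = box_iou(bboxes, gt_bboxes)         # (num_bboxes, num_gts)
        max_iou, gt_inds = ious.max(dim=1)        # best GT per box
        gt_inds = gt_inds.clone()
        gt_inds[max_iou < self.pos_iou_thr] = -1  # below threshold -> negative
        return gt_inds
```
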
diff --git a/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/modules/batchnorm.py b/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/modules/batchnorm.py
deleted file mode 100644
index 18318965335b37cc671004a6aceda3229dc7b477..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/modules/batchnorm.py
+++ /dev/null
@@ -1,329 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : batchnorm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import collections
-
-import torch
-import torch.nn.functional as F
-
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
-
-from .comm import SyncMaster
-
-__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d']
-
-
-def _sum_ft(tensor):
-    """sum over the first and last dimension"""
- return tensor.sum(dim=0).sum(dim=-1)
-
-
-def _unsqueeze_ft(tensor):
-    """add new dimensions at the front and the tail"""
- return tensor.unsqueeze(0).unsqueeze(-1)
-
-
-_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
-_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
-
-
-class _SynchronizedBatchNorm(_BatchNorm):
- def __init__(self, num_features, eps=1e-5, momentum=0.001, affine=True):
- super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)
-
- self._sync_master = SyncMaster(self._data_parallel_master)
-
- self._is_parallel = False
- self._parallel_id = None
- self._slave_pipe = None
-
-        # custom batch norm statistics
- self._moving_average_fraction = 1. - momentum
- self.register_buffer('_tmp_running_mean', torch.zeros(self.num_features))
- self.register_buffer('_tmp_running_var', torch.ones(self.num_features))
- self.register_buffer('_running_iter', torch.ones(1))
- self._tmp_running_mean = self.running_mean.clone() * self._running_iter
- self._tmp_running_var = self.running_var.clone() * self._running_iter
-
- def forward(self, input):
- # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
- if not (self._is_parallel and self.training):
- return F.batch_norm(
- input, self.running_mean, self.running_var, self.weight, self.bias,
- self.training, self.momentum, self.eps)
-
- # Resize the input to (B, C, -1).
- input_shape = input.size()
- input = input.view(input.size(0), self.num_features, -1)
-
- # Compute the sum and square-sum.
- sum_size = input.size(0) * input.size(2)
- input_sum = _sum_ft(input)
- input_ssum = _sum_ft(input ** 2)
-
- # Reduce-and-broadcast the statistics.
- if self._parallel_id == 0:
- mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
- else:
- mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
-
- # Compute the output.
- if self.affine:
- # MJY:: Fuse the multiplication for speed.
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
- else:
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
-
- # Reshape it.
- return output.view(input_shape)
-
- def __data_parallel_replicate__(self, ctx, copy_id):
- self._is_parallel = True
- self._parallel_id = copy_id
-
- # parallel_id == 0 means master device.
- if self._parallel_id == 0:
- ctx.sync_master = self._sync_master
- else:
- self._slave_pipe = ctx.sync_master.register_slave(copy_id)
-
- def _data_parallel_master(self, intermediates):
- """Reduce the sum and square-sum, compute the statistics, and broadcast it."""
- intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
-
- to_reduce = [i[1][:2] for i in intermediates]
- to_reduce = [j for i in to_reduce for j in i] # flatten
- target_gpus = [i[1].sum.get_device() for i in intermediates]
-
- sum_size = sum([i[1].sum_size for i in intermediates])
- sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
-
- mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
-
- broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
-
- outputs = []
- for i, rec in enumerate(intermediates):
- outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
-
- return outputs
-
- def _add_weighted(self, dest, delta, alpha=1, beta=1, bias=0):
- """return *dest* by `dest := dest*alpha + delta*beta + bias`"""
- return dest * alpha + delta * beta + bias
-
- def _compute_mean_std(self, sum_, ssum, size):
- """Compute the mean and standard-deviation with sum and square-sum. This method
- also maintains the moving average on the master device."""
- assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
- mean = sum_ / size
- sumvar = ssum - sum_ * mean
- unbias_var = sumvar / (size - 1)
- bias_var = sumvar / size
-
- self._tmp_running_mean = self._add_weighted(self._tmp_running_mean, mean.data, alpha=self._moving_average_fraction)
- self._tmp_running_var = self._add_weighted(self._tmp_running_var, unbias_var.data, alpha=self._moving_average_fraction)
- self._running_iter = self._add_weighted(self._running_iter, 1, alpha=self._moving_average_fraction)
-
- self.running_mean = self._tmp_running_mean / self._running_iter
- self.running_var = self._tmp_running_var / self._running_iter
-
- return mean, bias_var.clamp(self.eps) ** -0.5
-
-
-class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
- r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
- mini-batch.
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm1d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of size
- `batch_size x num_features [x width]`
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C)` or :math:`(N, C, L)`
- - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 2 and input.dim() != 3:
- raise ValueError('expected 2D or 3D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm1d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
- of 3d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm2d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, H, W)`
- - Output: :math:`(N, C, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 4:
- raise ValueError('expected 4D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm2d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
- of 4d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm3d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
- or Spatio-temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x depth x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, D, H, W)`
- - Output: :math:`(N, C, D, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 5:
- raise ValueError('expected 5D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm3d, self)._check_input_dim(input)
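
A minimal usage sketch. Synchronization only happens when the module is replicated through the companion data-parallel wrapper from the same Synchronized-BatchNorm-PyTorch package; the import paths below are assumptions about this repo's layout. Under plain `nn.DataParallel` the `__data_parallel_replicate__` hook is never called and each replica falls back to per-device statistics.

```python
import torch
import torch.nn as nn

# Assumed import paths for this repo's vendored copy of Synchronized-BatchNorm-PyTorch.
from models.ade20k.segm_lib.nn import SynchronizedBatchNorm2d
from models.ade20k.segm_lib.nn.parallel import DataParallelWithCallback

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    SynchronizedBatchNorm2d(16),   # statistics reduced across all replicas in training
    nn.ReLU(inplace=True),
)
model = DataParallelWithCallback(model, device_ids=[0, 1]).cuda()

x = torch.randn(8, 3, 64, 64).cuda()
y = model(x)   # mean/var computed over the whole batch, not per GPU
```
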
diff --git a/spaces/ChrisCaviar/ControlNet-v1-1/README.md b/spaces/ChrisCaviar/ControlNet-v1-1/README.md
deleted file mode 100644
index 6233ca211cfefb5d2dc8a4be6fbc2412af2d3568..0000000000000000000000000000000000000000
--- a/spaces/ChrisCaviar/ControlNet-v1-1/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: ControlNet V1.1
-emoji: 📉
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.34.0
-python_version: 3.10.11
-app_file: app.py
-pinned: false
-license: mit
-suggested_hardware: t4-medium
-duplicated_from: hysts/ControlNet-v1-1
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/preprocessing/process_pipeline.py b/spaces/ChrisPreston/diff-svc_minato_aqua/preprocessing/process_pipeline.py
deleted file mode 100644
index 50bef79be2aa8fcdbdfa36b4a1e3e5005f9d9c26..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/preprocessing/process_pipeline.py
+++ /dev/null
@@ -1,247 +0,0 @@
-import hashlib
-import json
-import os
-import time
-import traceback
-import warnings
-from pathlib import Path
-
-import numpy as np
-import parselmouth
-import resampy
-import torch
-import torchcrepe
-
-import utils
-from modules.vocoders.nsf_hifigan import nsf_hifigan
-from utils.hparams import hparams
-from utils.pitch_utils import f0_to_coarse
-
-warnings.filterwarnings("ignore")
-
-
-class BinarizationError(Exception):
- pass
-
-
-def get_md5(content):
- return hashlib.new("md5", content).hexdigest()
-
-
-def read_temp(file_name):
- if not os.path.exists(file_name):
- with open(file_name, "w") as f:
- f.write(json.dumps({"info": "temp_dict"}))
- return {}
- else:
- try:
- with open(file_name, "r") as f:
- data = f.read()
- data_dict = json.loads(data)
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
- f_name = file_name.split("/")[-1]
- print(f"clean {f_name}")
- for wav_hash in list(data_dict.keys()):
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
- del data_dict[wav_hash]
- except Exception as e:
- print(e)
-            print(f"{file_name} error, rebuilding the file automatically")
- data_dict = {"info": "temp_dict"}
- return data_dict
-
-
-def write_temp(file_name, data):
- with open(file_name, "w") as f:
- f.write(json.dumps(data))
-
-
-f0_dict = read_temp("./infer_tools/f0_temp.json")
-
-
-def get_pitch_parselmouth(wav_data, mel, hparams):
- """
-
- :param wav_data: [T]
- :param mel: [T, 80]
- :param hparams:
- :return:
- """
- time_step = hparams['hop_size'] / hparams['audio_sample_rate']
- f0_min = hparams['f0_min']
- f0_max = hparams['f0_max']
-
- f0 = parselmouth.Sound(wav_data, hparams['audio_sample_rate']).to_pitch_ac(
- time_step=time_step, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
- pad_size = (int(len(wav_data) // hparams['hop_size']) - len(f0) + 1) // 2
- f0 = np.pad(f0, [[pad_size, len(mel) - len(f0) - pad_size]], mode='constant')
- pitch_coarse = f0_to_coarse(f0, hparams)
- return f0, pitch_coarse
-
-
-def get_pitch_crepe(wav_data, mel, hparams, threshold=0.05):
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- # device = torch.device("cuda")
-    # crepe only supports a 16 kHz sample rate, so resample first
- wav16k = resampy.resample(wav_data, hparams['audio_sample_rate'], 16000)
- wav16k_torch = torch.FloatTensor(wav16k).unsqueeze(0).to(device)
-
-    # frequency range
- f0_min = hparams['f0_min']
- f0_max = hparams['f0_max']
-
-    # after resampling, analyze f0 with hop_size=80, i.e. one frame every 5 ms
- f0, pd = torchcrepe.predict(wav16k_torch, 16000, 80, f0_min, f0_max, pad=True, model='full', batch_size=1024,
- device=device, return_periodicity=True)
-
-    # filter out silence and set the uv threshold; see the original repo's README
- pd = torchcrepe.filter.median(pd, 3)
- pd = torchcrepe.threshold.Silence(-60.)(pd, wav16k_torch, 16000, 80)
- f0 = torchcrepe.threshold.At(threshold)(f0, pd)
- f0 = torchcrepe.filter.mean(f0, 3)
-
-    # convert NaN frequencies (unvoiced parts) to 0
- f0 = torch.where(torch.isnan(f0), torch.full_like(f0, 0), f0)
-
-    # drop zero frequencies and interpolate linearly
- nzindex = torch.nonzero(f0[0]).squeeze()
- f0 = torch.index_select(f0[0], dim=0, index=nzindex).cpu().numpy()
- time_org = 0.005 * nzindex.cpu().numpy()
- time_frame = np.arange(len(mel)) * hparams['hop_size'] / hparams['audio_sample_rate']
- if f0.shape[0] == 0:
- f0 = torch.FloatTensor(time_frame.shape[0]).fill_(0)
- print('f0 all zero!')
- else:
- f0 = np.interp(time_frame, time_org, f0, left=f0[0], right=f0[-1])
- pitch_coarse = f0_to_coarse(f0, hparams)
- return f0, pitch_coarse
-
-
-class File2Batch:
- '''
- pipeline: file -> temporary_dict -> processed_input -> batch
- '''
-
- @staticmethod
- def file2temporary_dict(raw_data_dir, ds_id):
- '''
- read from file, store data in temporary dicts
- '''
- raw_data_dir = Path(raw_data_dir)
- utterance_labels = []
- utterance_labels.extend(list(raw_data_dir.rglob(f"*.wav")))
- utterance_labels.extend(list(raw_data_dir.rglob(f"*.ogg")))
-
- all_temp_dict = {}
- for utterance_label in utterance_labels:
- item_name = str(utterance_label)
- temp_dict = {'wav_fn': str(utterance_label), 'spk_id': ds_id}
- all_temp_dict[item_name] = temp_dict
- return all_temp_dict
-
- @staticmethod
- def temporary_dict2processed_input(item_name, temp_dict, encoder, infer=False, **kwargs):
- '''
- process data in temporary_dicts
- '''
-
- def get_pitch(wav, mel):
- # get ground truth f0 by self.get_pitch_algorithm
- global f0_dict
- use_crepe = hparams['use_crepe'] if not infer else kwargs['use_crepe']
- if use_crepe:
- md5 = get_md5(wav)
- if infer and md5 in f0_dict.keys():
- print("load temp crepe f0")
- gt_f0 = np.array(f0_dict[md5]["f0"])
- coarse_f0 = np.array(f0_dict[md5]["coarse"])
- else:
- torch.cuda.is_available() and torch.cuda.empty_cache()
- gt_f0, coarse_f0 = get_pitch_crepe(wav, mel, hparams, threshold=0.05)
- if infer:
- f0_dict[md5] = {"f0": gt_f0.tolist(), "coarse": coarse_f0.tolist(), "time": int(time.time())}
- write_temp("./infer_tools/f0_temp.json", f0_dict)
- else:
- gt_f0, coarse_f0 = get_pitch_parselmouth(wav, mel, hparams)
- if sum(gt_f0) == 0:
- raise BinarizationError("Empty **gt** f0")
- processed_input['f0'] = gt_f0
- processed_input['pitch'] = coarse_f0
-
- def get_align(mel, phone_encoded):
- mel2ph = np.zeros([mel.shape[0]], int)
- start_frame = 0
- ph_durs = mel.shape[0] / phone_encoded.shape[0]
- for i_ph in range(phone_encoded.shape[0]):
- end_frame = int(i_ph * ph_durs + ph_durs + 0.5)
- mel2ph[start_frame:end_frame + 1] = i_ph + 1
- start_frame = end_frame + 1
-
- processed_input['mel2ph'] = mel2ph
-
- wav, mel = nsf_hifigan.wav2spec(temp_dict['wav_fn'])
- processed_input = {
- 'item_name': item_name, 'mel': mel,
- 'sec': len(wav) / hparams['audio_sample_rate'], 'len': mel.shape[0]
- }
- processed_input = {**temp_dict, **processed_input,
- 'spec_min': np.min(mel, axis=0),
- 'spec_max': np.max(mel, axis=0)} # merge two dicts
- try:
- get_pitch(wav, mel)
- try:
- hubert_encoded = processed_input['hubert'] = encoder.encode(temp_dict['wav_fn'])
- except:
- traceback.print_exc()
- raise Exception(f"hubert encode error")
- get_align(mel, hubert_encoded)
- except Exception as e:
- print(f"| Skip item ({e}). item_name: {item_name}, wav_fn: {temp_dict['wav_fn']}")
- return None
- if hparams['use_energy_embed']:
- max_frames = hparams['max_frames']
- spec = torch.Tensor(processed_input['mel'])[:max_frames]
- processed_input['energy'] = (spec.exp() ** 2).sum(-1).sqrt()
- return processed_input
-
- @staticmethod
- def processed_input2batch(samples):
- '''
- Args:
- samples: one batch of processed_input
- NOTE:
- the batch size is controlled by hparams['max_sentences']
- '''
- if len(samples) == 0:
- return {}
- id = torch.LongTensor([s['id'] for s in samples])
- item_names = [s['item_name'] for s in samples]
- hubert = utils.collate_2d([s['hubert'] for s in samples], 0.0)
- f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
- pitch = utils.collate_1d([s['pitch'] for s in samples])
- uv = utils.collate_1d([s['uv'] for s in samples])
- mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
- if samples[0]['mel2ph'] is not None else None
- mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
- mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
-
- batch = {
- 'id': id,
- 'item_name': item_names,
- 'nsamples': len(samples),
- 'hubert': hubert,
- 'mels': mels,
- 'mel_lengths': mel_lengths,
- 'mel2ph': mel2ph,
- 'pitch': pitch,
- 'f0': f0,
- 'uv': uv,
- }
- if hparams['use_energy_embed']:
- batch['energy'] = utils.collate_1d([s['energy'] for s in samples], 0.0)
- if hparams['use_spk_id']:
- spk_ids = torch.LongTensor([s['spk_id'] for s in samples])
- batch['spk_ids'] = spk_ids
- return batch
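
The parselmouth branch above boils down to Praat's autocorrelation pitch tracker. A standalone sketch of that core call follows; the sample rate, hop size and pitch bounds are illustrative values, not this project's hparams:

```python
import numpy as np
import parselmouth

sr, hop_size = 44100, 512
t = np.arange(sr) / sr
wav = 0.5 * np.sin(2 * np.pi * 220.0 * t)       # one second of a 220 Hz tone

snd = parselmouth.Sound(wav, sampling_frequency=sr)
f0 = snd.to_pitch_ac(
    time_step=hop_size / sr,     # one pitch frame per hop
    voicing_threshold=0.6,
    pitch_floor=40.0,
    pitch_ceiling=1100.0,
).selected_array['frequency']    # 0.0 wherever the frame is judged unvoiced

print(f0.shape, f0[f0 > 0].mean())              # mean should be close to 220 Hz
```
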
diff --git a/spaces/CofAI/chat.b4/client/css/sidebar.css b/spaces/CofAI/chat.b4/client/css/sidebar.css
deleted file mode 100644
index 310887c60443abd491c3162f62e44b5ec333e50d..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/client/css/sidebar.css
+++ /dev/null
@@ -1,197 +0,0 @@
-.sidebar {
- max-width: 260px;
- padding: var(--section-gap);
- flex-shrink: 0;
- display: flex;
- flex-direction: column;
- justify-content: space-between;
-}
-
-.sidebar .title {
- font-size: 14px;
- font-weight: 500;
-}
-
-.sidebar .conversation-sidebar {
- padding: 8px 12px;
- display: flex;
- gap: 18px;
- align-items: center;
- user-select: none;
- justify-content: space-between;
-}
-
-.sidebar .conversation-sidebar .left {
- cursor: pointer;
- display: flex;
- align-items: center;
- gap: 10px;
-}
-
-.sidebar i {
- color: var(--conversations);
- cursor: pointer;
-}
-
-.sidebar .top {
- display: flex;
- flex-direction: column;
- overflow: hidden;
- gap: 16px;
- padding-right: 8px;
-}
-
-.sidebar .top:hover {
- overflow: auto;
-}
-
-.sidebar .info {
- padding: 8px 12px 0px 12px;
- display: flex;
- align-items: center;
- justify-content: center;
- user-select: none;
- background: transparent;
- width: 100%;
- border: none;
- text-decoration: none;
-}
-
-.sidebar .info span {
- color: var(--conversations);
- line-height: 1.5;
- font-size: 0.75rem;
-}
-
-.sidebar .info i::before {
- margin-right: 8px;
-}
-
-.sidebar-footer {
- width: 100%;
- margin-top: 16px;
- display: flex;
- flex-direction: column;
-}
-
-.sidebar-footer button {
- cursor: pointer;
- user-select: none;
- background: transparent;
-}
-
-.sidebar.shown {
- position: fixed;
- top: 0;
- left: 0;
- width: 100%;
- height: 100%;
- z-index: 1000;
-}
-
-.sidebar.shown .box {
- background-color: #16171a;
- width: 80%;
- height: 100%;
- overflow-y: auto;
-}
-
-@keyframes spinner {
- to {
- transform: rotate(360deg);
- }
-}
-
-/* scrollbar */
-.sidebar .top::-webkit-scrollbar {
- width: 4px;
- padding: 8px 0px;
-}
-
-.sidebar .top::-webkit-scrollbar-track {
- background-color: #ffffff00;
-}
-
-.sidebar .top::-webkit-scrollbar-thumb {
- background-color: #555555;
- border-radius: 10px;
-}
-
-.spinner:before {
- content: "";
- box-sizing: border-box;
- position: absolute;
- top: 50%;
- left: 45%;
- width: 20px;
- height: 20px;
- border-radius: 50%;
- border: 1px solid var(--conversations);
- border-top-color: white;
- animation: spinner 0.6s linear infinite;
-}
-
-.menu-button {
- display: none !important;
- position: absolute;
- z-index: 100000;
- top: 0;
- left: 0;
- margin: 10px;
- font-size: 1rem;
- cursor: pointer;
- width: 30px;
- height: 30px;
- justify-content: center;
- align-items: center;
- transition: 0.33s;
-}
-
-.menu-button i {
- transition: 0.33s;
-}
-
-.rotated {
- transform: rotate(360deg);
-}
-
-.menu-button.rotated {
- position: fixed;
- top: 10px;
- left: 10px;
- z-index: 1001;
-}
-
-@media screen and (max-width: 990px) {
- .sidebar {
- display: none;
- width: 100%;
- max-width: none;
- }
-
- .menu-button {
- display: flex !important;
- }
-}
-
-@media (max-width: 990px) {
- .sidebar .top {
- padding-top: 48px;
- }
-}
-
-@media (min-width: 768px) {
- .sidebar.shown {
- position: static;
- width: auto;
- height: auto;
- background-color: transparent;
- }
-
- .sidebar.shown .box {
- background-color: #16171a;
- width: auto;
- height: auto;
- overflow-y: auto;
- }
-}
diff --git a/spaces/Coweed/GoodTrip/greeting.md b/spaces/Coweed/GoodTrip/greeting.md
deleted file mode 100644
index 113d519d3321816664a72512cd1666437566bc54..0000000000000000000000000000000000000000
--- a/spaces/Coweed/GoodTrip/greeting.md
+++ /dev/null
@@ -1,4 +0,0 @@
-
-
-
-T H A T ' S   I T !!!
diff --git a/spaces/Curranj/Words_To_SQL/app.py b/spaces/Curranj/Words_To_SQL/app.py
deleted file mode 100644
index 3b1a258639b0beaa3e6877855f364faf63c9fac0..0000000000000000000000000000000000000000
--- a/spaces/Curranj/Words_To_SQL/app.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import openai
-import gradio as gr
-import os
-
-#OpenAi call
-def gpt3(texts):
- openai.api_key = os.environ["Secret"]
- response = openai.Completion.create(
- engine="text-davinci-003",
- prompt= texts,
- temperature=0,
- max_tokens=750,
- top_p=1,
- frequency_penalty=0.0,
- presence_penalty=0.0,
- stop = (";", "/*", "")
- )
- x = response.choices[0].text
-
- return x
-
-# Function to elicit sql response from model
-def greet(prompt):
-    txt = (f'''/*Prompt: {prompt}*/ \n --SQL Code:\n''')
- sql = gpt3(txt)
- return sql
-
-
-#Code to set up Gradio UI
-iface = gr.Interface(greet, inputs = ["text"], outputs = "text",title="Natural Language to SQL", description="Enter any prompt and get a SQL statement back! For better results, give it more context")
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/layers/roi_align.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/layers/roi_align.py
deleted file mode 100644
index 170c8f18696aed19c4b9533a51933264530a1530..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/layers/roi_align.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-
-from maskrcnn_benchmark import _C
-
-
-class _ROIAlign(Function):
- @staticmethod
- def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio):
- ctx.save_for_backward(roi)
- ctx.output_size = _pair(output_size)
- ctx.spatial_scale = spatial_scale
- ctx.sampling_ratio = sampling_ratio
- ctx.input_shape = input.size()
- output = _C.roi_align_forward(
- input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio
- )
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- rois, = ctx.saved_tensors
- output_size = ctx.output_size
- spatial_scale = ctx.spatial_scale
- sampling_ratio = ctx.sampling_ratio
- bs, ch, h, w = ctx.input_shape
- grad_input = _C.roi_align_backward(
- grad_output,
- rois,
- spatial_scale,
- output_size[0],
- output_size[1],
- bs,
- ch,
- h,
- w,
- sampling_ratio,
- )
- return grad_input, None, None, None, None
-
-
-roi_align = _ROIAlign.apply
-
-
-class ROIAlign(nn.Module):
- def __init__(self, output_size, spatial_scale, sampling_ratio):
- super(ROIAlign, self).__init__()
- self.output_size = output_size
- self.spatial_scale = spatial_scale
- self.sampling_ratio = sampling_ratio
-
- def forward(self, input, rois):
- return roi_align(
- input, rois, self.output_size, self.spatial_scale, self.sampling_ratio
- )
-
- def __repr__(self):
- tmpstr = self.__class__.__name__ + "("
- tmpstr += "output_size=" + str(self.output_size)
- tmpstr += ", spatial_scale=" + str(self.spatial_scale)
- tmpstr += ", sampling_ratio=" + str(self.sampling_ratio)
- tmpstr += ")"
- return tmpstr
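
A quick shape check for the wrapper above. ROIs are passed as `(batch_index, x1, y1, x2, y2)` rows in image coordinates (the maskrcnn_benchmark convention); `torchvision.ops.roi_align` offers an equivalent interface and is used here as a stand-in, since the compiled `_C` extension is only available in the built package:

```python
import torch
from torchvision.ops import roi_align as tv_roi_align  # stand-in for _C.roi_align_forward

feat = torch.randn(2, 256, 50, 50)                      # stride-16 feature map
rois = torch.tensor([[0, 16.0, 16.0, 160.0, 160.0],     # (batch_idx, x1, y1, x2, y2)
                     [1, 32.0, 48.0, 96.0, 200.0]])

pooled = tv_roi_align(feat, rois, output_size=(7, 7),
                      spatial_scale=1.0 / 16, sampling_ratio=2)
print(pooled.shape)                                     # torch.Size([2, 256, 7, 7])
```
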
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/commands/scan_cache.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/commands/scan_cache.py
deleted file mode 100644
index ff26fa9de50f607ca78a24c5041010b4d629c148..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/commands/scan_cache.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# coding=utf-8
-# Copyright 2022-present, the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Contains command to scan the HF cache directory.
-
-Usage:
- huggingface-cli scan-cache
- huggingface-cli scan-cache -v
- huggingface-cli scan-cache -vvv
- huggingface-cli scan-cache --dir ~/.cache/huggingface/hub
-"""
-import time
-from argparse import _SubParsersAction
-from typing import Optional
-
-from ..utils import CacheNotFound, HFCacheInfo, scan_cache_dir
-from . import BaseHuggingfaceCLICommand
-from ._cli_utils import ANSI, tabulate
-
-
-class ScanCacheCommand(BaseHuggingfaceCLICommand):
- @staticmethod
- def register_subcommand(parser: _SubParsersAction):
- scan_cache_parser = parser.add_parser("scan-cache", help="Scan cache directory.")
-
- scan_cache_parser.add_argument(
- "--dir",
- type=str,
- default=None,
-            help="cache directory to scan (optional). Defaults to the default HuggingFace cache.",
- )
- scan_cache_parser.add_argument(
- "-v",
- "--verbose",
- action="count",
- default=0,
- help="show a more verbose output",
- )
- scan_cache_parser.set_defaults(func=ScanCacheCommand)
-
- def __init__(self, args):
- self.verbosity: int = args.verbose
- self.cache_dir: Optional[str] = args.dir
-
- def run(self):
- try:
- t0 = time.time()
- hf_cache_info = scan_cache_dir(self.cache_dir)
- t1 = time.time()
- except CacheNotFound as exc:
- cache_dir = exc.cache_dir
- print(f"Cache directory not found: {cache_dir}")
- return
-
- self._print_hf_cache_info_as_table(hf_cache_info)
-
- print(
- f"\nDone in {round(t1-t0,1)}s. Scanned {len(hf_cache_info.repos)} repo(s)"
- f" for a total of {ANSI.red(hf_cache_info.size_on_disk_str)}."
- )
- if len(hf_cache_info.warnings) > 0:
- message = f"Got {len(hf_cache_info.warnings)} warning(s) while scanning."
- if self.verbosity >= 3:
- print(ANSI.gray(message))
- for warning in hf_cache_info.warnings:
- print(ANSI.gray(warning))
- else:
- print(ANSI.gray(message + " Use -vvv to print details."))
-
- def _print_hf_cache_info_as_table(self, hf_cache_info: HFCacheInfo) -> None:
- if self.verbosity == 0:
- print(
- tabulate(
- rows=[
- [
- repo.repo_id,
- repo.repo_type,
- "{:>12}".format(repo.size_on_disk_str),
- repo.nb_files,
- repo.last_accessed_str,
- repo.last_modified_str,
- ", ".join(sorted(repo.refs)),
- str(repo.repo_path),
- ]
- for repo in sorted(hf_cache_info.repos, key=lambda repo: repo.repo_path)
- ],
- headers=[
- "REPO ID",
- "REPO TYPE",
- "SIZE ON DISK",
- "NB FILES",
- "LAST_ACCESSED",
- "LAST_MODIFIED",
- "REFS",
- "LOCAL PATH",
- ],
- )
- )
- else:
- print(
- tabulate(
- rows=[
- [
- repo.repo_id,
- repo.repo_type,
- revision.commit_hash,
- "{:>12}".format(revision.size_on_disk_str),
- revision.nb_files,
- revision.last_modified_str,
- ", ".join(sorted(revision.refs)),
- str(revision.snapshot_path),
- ]
- for repo in sorted(hf_cache_info.repos, key=lambda repo: repo.repo_path)
- for revision in sorted(repo.revisions, key=lambda revision: revision.commit_hash)
- ],
- headers=[
- "REPO ID",
- "REPO TYPE",
- "REVISION",
- "SIZE ON DISK",
- "NB FILES",
- "LAST_MODIFIED",
- "REFS",
- "LOCAL PATH",
- ],
- )
- )
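
The same scan is available programmatically; the command above is a thin wrapper around `scan_cache_dir`. A short sketch (output depends on the local cache):

```python
from huggingface_hub import scan_cache_dir

info = scan_cache_dir()   # defaults to the standard HF cache, e.g. ~/.cache/huggingface/hub
print(f"{len(info.repos)} repo(s), {info.size_on_disk_str} on disk, {len(info.warnings)} warning(s)")

for repo in sorted(info.repos, key=lambda r: r.repo_path):
    print(repo.repo_id, repo.repo_type, repo.size_on_disk_str, repo.nb_files)
```
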
diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/pretrained_example.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/pretrained_example.py
deleted file mode 100644
index 63baef08bfa4bf34f52a0cf63e10a0b6783ac316..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/pretrained_example.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
-#
-# This work is licensed under the Creative Commons Attribution-NonCommercial
-# 4.0 International License. To view a copy of this license, visit
-# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to
-# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
-
-"""Minimal script for generating an image using pre-trained StyleGAN generator."""
-
-import os
-import pickle
-import numpy as np
-import PIL.Image
-import dnnlib
-import dnnlib.tflib as tflib
-import config
-
-def main():
- # Initialize TensorFlow.
- tflib.init_tf()
-
- # Load pre-trained network.
- url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' # karras2019stylegan-ffhq-1024x1024.pkl
- with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f:
- _G, _D, Gs = pickle.load(f)
- # _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run.
- # _D = Instantaneous snapshot of the discriminator. Mainly useful for resuming a previous training run.
- # Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot.
-
- # Print network details.
- Gs.print_layers()
-
- # Pick latent vector.
- rnd = np.random.RandomState(5)
- latents = rnd.randn(1, Gs.input_shape[1])
-
- # Generate image.
- fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
- images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt)
-
- # Save image.
- os.makedirs(config.result_dir, exist_ok=True)
- png_filename = os.path.join(config.result_dir, 'example.png')
- PIL.Image.fromarray(images[0], 'RGB').save(png_filename)
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Duskfallcrew/Gambit_and_Rogue/README.md b/spaces/Duskfallcrew/Gambit_and_Rogue/README.md
deleted file mode 100644
index 2eff328836488587143eb2ba2da96635247017cf..0000000000000000000000000000000000000000
--- a/spaces/Duskfallcrew/Gambit_and_Rogue/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Gambit And Rogue
-emoji: 🐢
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/maskformer_transformer_decoder.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/maskformer_transformer_decoder.py
deleted file mode 100644
index 79f09fa43f2f5a33c3422a6bb999b20763ab8b5e..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/maskformer_transformer_decoder.py
+++ /dev/null
@@ -1,188 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py
-import fvcore.nn.weight_init as weight_init
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d
-from detectron2.utils.registry import Registry
-
-from .position_encoding import PositionEmbeddingSine
-from .transformer import Transformer
-
-
-TRANSFORMER_DECODER_REGISTRY = Registry("TRANSFORMER_MODULE")
-TRANSFORMER_DECODER_REGISTRY.__doc__ = """
-Registry for transformer module in MaskFormer.
-"""
-
-
-def build_transformer_decoder(cfg, in_channels, mask_classification=True):
- """
-    Build a transformer decoder from `cfg.MODEL.MASK_FORMER.TRANSFORMER_DECODER_NAME`.
- """
- name = cfg.MODEL.MASK_FORMER.TRANSFORMER_DECODER_NAME
- return TRANSFORMER_DECODER_REGISTRY.get(name)(cfg, in_channels, mask_classification)
-
-
-@TRANSFORMER_DECODER_REGISTRY.register()
-class StandardTransformerDecoder(nn.Module):
- @configurable
- def __init__(
- self,
- in_channels,
- mask_classification=True,
- *,
- num_classes: int,
- hidden_dim: int,
- num_queries: int,
- nheads: int,
- dropout: float,
- dim_feedforward: int,
- enc_layers: int,
- dec_layers: int,
- pre_norm: bool,
- deep_supervision: bool,
- mask_dim: int,
- enforce_input_project: bool,
- ):
- """
- NOTE: this interface is experimental.
- Args:
- in_channels: channels of the input features
- mask_classification: whether to add mask classifier or not
- num_classes: number of classes
- hidden_dim: Transformer feature dimension
- num_queries: number of queries
- nheads: number of heads
- dropout: dropout in Transformer
- dim_feedforward: feature dimension in feedforward network
- enc_layers: number of Transformer encoder layers
- dec_layers: number of Transformer decoder layers
- pre_norm: whether to use pre-LayerNorm or not
- deep_supervision: whether to add supervision to every decoder layers
- mask_dim: mask feature dimension
- enforce_input_project: add input project 1x1 conv even if input
-                channels and hidden dim are identical
- """
- super().__init__()
-
- self.mask_classification = mask_classification
-
- # positional encoding
- N_steps = hidden_dim // 2
- self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
-
- transformer = Transformer(
- d_model=hidden_dim,
- dropout=dropout,
- nhead=nheads,
- dim_feedforward=dim_feedforward,
- num_encoder_layers=enc_layers,
- num_decoder_layers=dec_layers,
- normalize_before=pre_norm,
- return_intermediate_dec=deep_supervision,
- )
-
- self.num_queries = num_queries
- self.transformer = transformer
- hidden_dim = transformer.d_model
-
- self.query_embed = nn.Embedding(num_queries, hidden_dim)
-
- if in_channels != hidden_dim or enforce_input_project:
- self.input_proj = Conv2d(in_channels, hidden_dim, kernel_size=1)
- weight_init.c2_xavier_fill(self.input_proj)
- else:
- self.input_proj = nn.Sequential()
- self.aux_loss = deep_supervision
-
- # output FFNs
- if self.mask_classification:
- self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
- self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
-
- @classmethod
- def from_config(cls, cfg, in_channels, mask_classification):
- ret = {}
- ret["in_channels"] = in_channels
- ret["mask_classification"] = mask_classification
-
- ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES
- ret["hidden_dim"] = cfg.MODEL.MASK_FORMER.HIDDEN_DIM
- ret["num_queries"] = cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES
- # Transformer parameters:
- ret["nheads"] = cfg.MODEL.MASK_FORMER.NHEADS
- ret["dropout"] = cfg.MODEL.MASK_FORMER.DROPOUT
- ret["dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD
- ret["enc_layers"] = cfg.MODEL.MASK_FORMER.ENC_LAYERS
- ret["dec_layers"] = cfg.MODEL.MASK_FORMER.DEC_LAYERS
- ret["pre_norm"] = cfg.MODEL.MASK_FORMER.PRE_NORM
- ret["deep_supervision"] = cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION
- ret["enforce_input_project"] = cfg.MODEL.MASK_FORMER.ENFORCE_INPUT_PROJ
-
- ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
-
- return ret
-
- def forward(self, x, mask_features, mask=None):
- if mask is not None:
- mask = F.interpolate(mask[None].float(), size=x.shape[-2:]).to(torch.bool)[0]
- pos = self.pe_layer(x, mask)
-
- src = x
- hs, memory = self.transformer(self.input_proj(src), mask, self.query_embed.weight, pos)
-
- if self.mask_classification:
- outputs_class = self.class_embed(hs)
- out = {"pred_logits": outputs_class[-1]}
- else:
- out = {}
-
- if self.aux_loss:
- # [l, bs, queries, embed]
- mask_embed = self.mask_embed(hs)
- outputs_seg_masks = torch.einsum("lbqc,bchw->lbqhw", mask_embed, mask_features)
- out["pred_masks"] = outputs_seg_masks[-1]
- out["aux_outputs"] = self._set_aux_loss(
- outputs_class if self.mask_classification else None, outputs_seg_masks
- )
- else:
- # FIXME h_boxes takes the last one computed, keep this in mind
- # [bs, queries, embed]
- mask_embed = self.mask_embed(hs[-1])
- outputs_seg_masks = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features)
- out["pred_masks"] = outputs_seg_masks
- return out
-
- @torch.jit.unused
- def _set_aux_loss(self, outputs_class, outputs_seg_masks):
- # this is a workaround to make torchscript happy, as torchscript
- # doesn't support dictionary with non-homogeneous values, such
- # as a dict having both a Tensor and a list.
- if self.mask_classification:
- return [
- {"pred_logits": a, "pred_masks": b}
- for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1])
- ]
- else:
- return [{"pred_masks": b} for b in outputs_seg_masks[:-1]]
-
-
-class MLP(nn.Module):
- """Very simple multi-layer perceptron (also called FFN)"""
-
- def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
- super().__init__()
- self.num_layers = num_layers
- h = [hidden_dim] * (num_layers - 1)
- self.layers = nn.ModuleList(
- nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])
- )
-
- def forward(self, x):
- for i, layer in enumerate(self.layers):
- x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
- return x
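
The mask head is just a dot product between per-query embeddings and per-pixel mask features; a toy sketch of the deep-supervision einsum with the shapes spelled out (all sizes illustrative):

```python
import torch

L, B, Q, C, H, W = 6, 2, 100, 256, 32, 32   # decoder layers, batch, queries, channels, height, width

mask_embed = torch.randn(L, B, Q, C)        # MLP output over the stacked decoder states
mask_features = torch.randn(B, C, H, W)     # per-pixel features from the pixel decoder

# One mask logit per (layer, batch, query, pixel): contract over the channel dim.
masks = torch.einsum("lbqc,bchw->lbqhw", mask_embed, mask_features)
print(masks.shape)                          # torch.Size([6, 2, 100, 32, 32])
```
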
diff --git a/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/equilibrium_and_welfare.py b/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/equilibrium_and_welfare.py
deleted file mode 100644
index 360cd29cb50381c4af50a466e133bcbcb530a3da..0000000000000000000000000000000000000000
--- a/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/equilibrium_and_welfare.py
+++ /dev/null
@@ -1,138 +0,0 @@
-import matplotlib.pyplot as plt
-from shiny import module, reactive, render, req, ui
-from sympy import integrate, latex, plot, simplify, solve, symbols
-
-from module import demand_supply_ui, demand_supply_server
-from util import latex_approx
-
-
-@module.ui
-def equilibrium_and_welfare_ui():
- return ui.nav(
- "Equilibrium and welfare",
- ui.h1("Equilibrium and welfare"),
- demand_supply_ui("ds"),
- ui.h2("Equilibrium"),
- ui.p(r"""A market is in equilibrium if, at some market price, the
- quantity \(Q_d\) demanded by consumers equals the quantity \(Q_s\)
- supplied by firms. The price at which this occurs is called the
- market-clearing price (or equilibrium price), denoted \(P^*\)."""),
- ui.output_text("equilibrium_text"),
- ui.h2("Welfare"),
- ui.p("""We can measure the observed changes in the benefits consumers
- and firms gain in the markets using welfare analysis."""),
- ui.h3("Consumer surplus"),
- ui.p("""Consumer surplus (CS) is the welfare consumers receive from
- buying units of goods or services in the market. It is given by the
- consumer’s willingness to pay, minus the price paid, for each unit
- bought. We can find an individual’s CS by calculating the area
- between the demand curve and the price line."""),
- ui.output_text("CS_text"),
- ui.h3("Producer surplus"),
- ui.p("""Producer surplus (PS) is the welfare producers (usually firms)
- receive from selling units of a good or service in the market. It
- is given by the price the producer receives, minus the cost of
- production, for each unit of the good or service bought. We can
- find a firm’s PS by calculating the area between the price line and
- the firm’s supply curve."""),
- ui.output_text("PS_text"),
- ui.h3("Total surplus"),
- ui.p(r"""The total surplus (TS) is the sum of consumer and producer
- surplus in the market equilibrium. TS is the area between the
- demand and supply curves, up to the market equilibrium, quantity
- \(Q^*\)."""),
- ui.output_text("TS_text"),
- ui.output_plot("welfare"),
- value="equilibrium_and_welfare"
- )
-
-
-@module.server
-def equilibrium_and_welfare_server(input, output, session, settings):
- symbol_P, symbol_Q = symbols("P, Q", positive=True)
-
- demand, supply, P_d, P_s = demand_supply_server("ds", settings)
-
- @reactive.Calc
- def equilibrium():
- solutions = solve([demand(), supply()], symbol_P, symbol_Q, dict=True)
- req(len(solutions) == 1)
- return solutions[0]
-
- @reactive.Calc
- def P_optimal():
- return equilibrium()[symbol_P]
-
- @reactive.Calc
- def Q_optimal():
- return equilibrium()[symbol_Q]
-
- @reactive.Calc
- def CS():
- return simplify(integrate(P_d() - P_optimal(),
- (symbol_Q, 0, Q_optimal())))
-
- @reactive.Calc
- def PS():
- return simplify(integrate(P_optimal() - P_s(),
- (symbol_Q, 0, Q_optimal())))
-
- @reactive.Calc
- def TS():
- return simplify(CS() + PS())
-
- @render.text
- def equilibrium_text():
- return (
- r"$$\begin{cases}"
- + latex(demand()) + r"\\"
- + latex(supply())
- + r"\end{cases} \implies \begin{cases}"
- + "P^* ="
- + latex_approx(P_optimal(), settings.perc(), settings.approx())
- + r"\\"
- + "Q^* ="
- + latex_approx(Q_optimal(), settings.perc(), settings.approx())
- + r"\end{cases}$$")
-
- @render.text
- def CS_text():
- return (r"$$CS = \int_0^{Q^*}P_d - P^*\,dQ ="
- + latex_approx(CS(), settings.perc(), settings.approx())
- + "$$")
-
- @render.text
- def PS_text():
- return (r"$$PS = \int_0^{Q^*}P^* - P_s\,dQ ="
- + latex_approx(PS(), settings.perc(), settings.approx())
- + "$$")
-
- @render.text
- def TS_text():
- return (r"$$TS = CS + PS ="
- + latex_approx(TS(), settings.perc(), settings.approx())
- + "$$")
-
- @render.plot(height=400)
- def welfare():
- ax = plt.subplot()
- plot_d, plot_s = plot(P_d(), P_s(),
- (symbol_Q, 0, Q_optimal() * 2),
- show=False)
- plot_cs, plot_ps = plot(P_d(), P_s(),
- (symbol_Q, 0 ,Q_optimal()),
- show=False)
- ax.plot(*plot_d.get_points(), label="Demand")
- ax.plot(*plot_s.get_points(), label="Supply")
- ax.scatter(Q_optimal(), P_optimal(), s=50, c="tab:green", marker="o",
- label="Equilibrium", zorder=100)
- ax.fill_between(*plot_cs.get_points(), float(P_optimal()),
- alpha=.5, label="CS")
- ax.fill_between(*plot_ps.get_points(), float(P_optimal()),
- alpha=.5, label="PS")
- ax.set_xlim(0)
- ax.set_ylim(0)
- ax.set_xlabel("$Q$")
- ax.set_ylabel("$P$")
- ax.legend()
- return ax
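
A worked example of the computations this module performs, using an illustrative linear demand and supply pair (not app defaults): demand P = 10 - Q and supply P = 2 + Q give P* = 6, Q* = 4, CS = PS = 8 and TS = 16.

```python
from sympy import Eq, integrate, solve, symbols

P, Q = symbols("P Q", positive=True)
demand = Eq(P, 10 - Q)   # P_d: willingness to pay
supply = Eq(P, 2 + Q)    # P_s: marginal cost

eq = solve([demand, supply], P, Q, dict=True)[0]
P_star, Q_star = eq[P], eq[Q]                        # 6, 4

CS = integrate((10 - Q) - P_star, (Q, 0, Q_star))    # 8
PS = integrate(P_star - (2 + Q), (Q, 0, Q_star))     # 8
print(P_star, Q_star, CS, PS, CS + PS)               # 6 4 8 8 16
```
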
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r50dcnv2_fpnc_100k_iters_synthtext.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r50dcnv2_fpnc_100k_iters_synthtext.py
deleted file mode 100644
index 0ccd22c9b0675062571ed971a16dd75958ac03e0..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r50dcnv2_fpnc_100k_iters_synthtext.py
+++ /dev/null
@@ -1,61 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_sgd_100k_iters.py',
- '../../_base_/det_models/dbnet_r50dcnv2_fpnc.py',
- '../../_base_/det_datasets/synthtext.py',
- '../../_base_/det_pipelines/dbnet_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-img_norm_cfg_r50dcnv2 = dict(
- mean=[122.67891434, 116.66876762, 104.00698793],
- std=[58.395, 57.12, 57.375],
- to_rgb=True)
-train_pipeline_r50dcnv2 = [
- dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
- dict(
- type='LoadTextAnnotations',
- with_bbox=True,
- with_mask=True,
- poly2mask=False),
- dict(type='ColorJitter', brightness=32.0 / 255, saturation=0.5),
- dict(type='Normalize', **img_norm_cfg_r50dcnv2),
- dict(
- type='ImgAug',
- args=[['Fliplr', 0.5],
- dict(cls='Affine', rotate=[-10, 10]), ['Resize', [0.5, 3.0]]],
- clip_invalid_ploys=False),
- dict(type='EastRandomCrop', target_size=(640, 640)),
- dict(type='DBNetTargets', shrink_ratio=0.4),
- dict(type='Pad', size_divisor=32),
- dict(
- type='CustomFormatBundle',
- keys=['gt_shrink', 'gt_shrink_mask', 'gt_thr', 'gt_thr_mask'],
- visualize=dict(flag=False, boundary_key='gt_shrink')),
- dict(
- type='Collect',
- keys=['img', 'gt_shrink', 'gt_shrink_mask', 'gt_thr', 'gt_thr_mask'])
-]
-test_pipeline_4068_1024 = {{_base_.test_pipeline_4068_1024}}
-
-data = dict(
- samples_per_gpu=16,
- workers_per_gpu=8,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline_r50dcnv2),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_4068_1024),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_4068_1024))
-
-evaluation = dict(interval=999999, metric='hmean-iou') # do not evaluate
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500.py
deleted file mode 100644
index 44bbfcd55a2efc29f441e06fb33079a48de61905..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500.py
+++ /dev/null
@@ -1,33 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_sgd_1500e.py',
- '../../_base_/det_models/fcenet_r50dcnv2_fpn.py',
- '../../_base_/det_datasets/ctw1500.py',
- '../../_base_/det_pipelines/fcenet_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline_ctw1500 = {{_base_.train_pipeline_ctw1500}}
-test_pipeline_ctw1500 = {{_base_.test_pipeline_ctw1500}}
-
-data = dict(
- samples_per_gpu=6,
- workers_per_gpu=2,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline_ctw1500),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_ctw1500),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_ctw1500))
-
-evaluation = dict(interval=10, metric='hmean-iou')
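
Both detection configs rely on MMCV's `_base_` inheritance plus `{{_base_.xxx}}` templating, so these short files expand into full training configs at load time. A sketch of inspecting the merged result, assuming an MMOCR checkout with mmcv-full 1.x on the path:

```python
from mmcv import Config  # mmcv-full 1.x, as used by this generation of MMOCR

cfg = Config.fromfile("configs/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500.py")

# _base_ files are merged and {{_base_.*}} placeholders substituted by this point.
print(cfg.data.samples_per_gpu)   # 6
print(cfg.evaluation)             # {'interval': 10, 'metric': 'hmean-iou'}
print(cfg.model.type)             # model definition pulled in from the det_models base
```
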
diff --git a/spaces/FL33TW00D/whisper-turbo/_next/static/css/af45b391324561cd.css b/spaces/FL33TW00D/whisper-turbo/_next/static/css/af45b391324561cd.css
deleted file mode 100644
index 0107d86cd0df941f586c97a32f5d7991bb86adea..0000000000000000000000000000000000000000
--- a/spaces/FL33TW00D/whisper-turbo/_next/static/css/af45b391324561cd.css
+++ /dev/null
@@ -1,3 +0,0 @@
-/*
-! tailwindcss v3.3.5 | MIT License | https://tailwindcss.com
-*/*,:after,:before{box-sizing:border-box;border:0 solid #e5e7eb}:after,:before{--tw-content:""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,pre,samp{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-feature-settings:inherit;font-variation-settings:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dd,dl,figure,h1,h2,h3,h4,h5,h6,hr,p,pre{margin:0}fieldset{margin:0}fieldset,legend{padding:0}menu,ol,ul{list-style:none;margin:0;padding:0}dialog{padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}[role=button],button{cursor:pointer}:disabled{cursor:default}audio,canvas,embed,iframe,img,object,svg,video{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:after,:before{--tw-border-spacing-x:0;--tw-border-spacing-y:0;--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness:proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(59,130,246,.5);--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x:0;--tw-border-spacing-y:0;--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-pan-x: ;--tw-pan-y: 
;--tw-pinch-zoom: ;--tw-scroll-snap-strictness:proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(59,130,246,.5);--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.absolute{position:absolute}.relative{position:relative}.bottom-0{bottom:0}.-z-20{z-index:-20}.z-10{z-index:10}.mx-8{margin-left:2rem;margin-right:2rem}.mx-auto{margin-left:auto;margin-right:auto}.my-4{margin-top:1rem;margin-bottom:1rem}.my-auto{margin-top:auto;margin-bottom:auto}.mb-2{margin-bottom:.5rem}.mr-1{margin-right:.25rem}.mt-2{margin-top:.5rem}.mt-8{margin-top:2rem}.block{display:block}.inline-block{display:inline-block}.flex{display:flex}.inline-flex{display:inline-flex}.hidden{display:none}.h-3{height:.75rem}.h-4{height:1rem}.h-6{height:1.5rem}.h-8{height:2rem}.h-full{height:100%}.h-screen{height:100vh}.min-h-screen{min-height:100vh}.w-1\/2{width:50%}.w-4{width:1rem}.w-6{width:1.5rem}.w-8{width:2rem}.w-full{width:100%}.flex-1{flex:1 1 0%}.cursor-pointer{cursor:pointer}.flex-row{flex-direction:row}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-end{justify-content:flex-end}.justify-between{justify-content:space-between}.gap-2{gap:.5rem}.gap-4{gap:1rem}.gap-6{gap:1.5rem}.gap-8{gap:2rem}.overflow-hidden{overflow:hidden}.overflow-scroll{overflow:scroll}.overflow-x-hidden{overflow-x:hidden}.rounded{border-radius:.25rem}.rounded-b-md{border-bottom-right-radius:.375rem;border-bottom-left-radius:.375rem}.\!bg-pop-orange{--tw-bg-opacity:1!important;background-color:rgb(249 60 38/var(--tw-bg-opacity))!important}.bg-emerald-500{--tw-bg-opacity:1;background-color:rgb(16 185 129/var(--tw-bg-opacity))}.bg-gray-200{--tw-bg-opacity:1;background-color:rgb(229 231 235/var(--tw-bg-opacity))}.bg-green-500{--tw-bg-opacity:1;background-color:rgb(34 197 94/var(--tw-bg-opacity))}.bg-orange-500{--tw-bg-opacity:1;background-color:rgb(249 115 22/var(--tw-bg-opacity))}.bg-pop-orange{--tw-bg-opacity:1;background-color:rgb(249 60 38/var(--tw-bg-opacity))}.bg-sky-500{--tw-bg-opacity:1;background-color:rgb(14 165 233/var(--tw-bg-opacity))}.bg-white{--tw-bg-opacity:1;background-color:rgb(255 255 
255/var(--tw-bg-opacity))}.fill-current{fill:currentColor}.p-0{padding:0}.p-4{padding:1rem}.px-3{padding-left:.75rem;padding-right:.75rem}.px-4{padding-left:1rem;padding-right:1rem}.px-6{padding-left:1.5rem;padding-right:1.5rem}.px-8{padding-left:2rem;padding-right:2rem}.py-12{padding-top:3rem;padding-bottom:3rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.py-2\.5{padding-top:.625rem;padding-bottom:.625rem}.py-3{padding-top:.75rem;padding-bottom:.75rem}.py-4{padding-top:1rem}.pb-4,.py-4{padding-bottom:1rem}.pt-8{padding-top:2rem}.text-center{text-align:center}.text-right{text-align:right}.text-2xl{font-size:1.5rem;line-height:2rem}.text-lg{font-size:1.125rem}.text-lg,.text-xl{line-height:1.75rem}.text-xl{font-size:1.25rem}.font-bold{font-weight:700}.font-semibold{font-weight:600}.\!text-white{--tw-text-opacity:1!important;color:rgb(255 255 255/var(--tw-text-opacity))!important}.text-green-700{--tw-text-opacity:1;color:rgb(21 128 61/var(--tw-text-opacity))}.text-red-700{--tw-text-opacity:1;color:rgb(185 28 28/var(--tw-text-opacity))}.text-slate-900{--tw-text-opacity:1;color:rgb(15 23 42/var(--tw-text-opacity))}.text-stone-50{--tw-text-opacity:1;color:rgb(250 250 249/var(--tw-text-opacity))}.text-white{--tw-text-opacity:1;color:rgb(255 255 255/var(--tw-text-opacity))}.antialiased{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.shadow-lg{--tw-shadow:0 10px 15px -3px rgba(0,0,0,.1),0 4px 6px -4px rgba(0,0,0,.1);--tw-shadow-colored:0 10px 15px -3px var(--tw-shadow-color),0 4px 6px -4px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.\!outline{outline-style:solid!important}.outline{outline-style:solid}.outline-2{outline-width:2px}.outline-black{outline-color:#000}.outline-white{outline-color:#fff}body,html{padding:0;margin:0;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif}a{color:inherit;text-decoration:none}*{box-sizing:border-box}:root{--foreground-rgb:0,0,0;--background-start-rgb:214,219,220;--background-end-rgb:255,255,255}@media (prefers-color-scheme:dark){:root{--foreground-rgb:255,255,255;--background-start-rgb:0,0,0;--background-end-rgb:0,0,0}}body{color:rgb(var(--foreground-rgb));background:linear-gradient(to bottom,transparent,rgb(var(--background-end-rgb))) rgb(var(--background-start-rgb))}audio::-webkit-media-controls-current-time-display,audio::-webkit-media-controls-time-remaining-display{font-family:__VT323_2a9463;font-size:1.2rem}audio::-webkit-media-controls-enclosure{border-radius:0;border:2px solid #000}.loader{animation:spin 1s linear infinite;height:10px;width:10px;margin:-5px;scale:.5}@keyframes spin{0%{box-shadow:0 -30px #fff,10px -30px #fff,20px -20px #fff,30px -10px #fff,30px 0 #fff,30px 10px #fff,20px 20px #fff,10px 30px #fff,0 30px transparent,-10px 30px transparent,-20px 20px transparent,-30px 10px transparent,-30px 0 transparent,-30px -10px transparent,-20px -20px transparent,-10px -30px transparent}6.25%{box-shadow:0 -30px transparent,10px -30px #fff,20px -20px #fff,30px -10px #fff,30px 0 #fff,30px 10px #fff,20px 20px #fff,10px 30px #fff,0 30px #fff,-10px 30px transparent,-20px 20px transparent,-30px 10px transparent,-30px 0 transparent,-30px -10px transparent,-20px -20px transparent,-10px -30px transparent}12.5%{box-shadow:0 -30px transparent,10px -30px transparent,20px -20px #fff,30px -10px #fff,30px 0 #fff,30px 10px #fff,20px 20px #fff,10px 30px #fff,0 30px #fff,-10px 30px #fff,-20px 
20px transparent,-30px 10px transparent,-30px 0 transparent,-30px -10px transparent,-20px -20px transparent,-10px -30px transparent}18.75%{box-shadow:0 -30px transparent,10px -30px transparent,20px -20px transparent,30px -10px #fff,30px 0 #fff,30px 10px #fff,20px 20px #fff,10px 30px #fff,0 30px #fff,-10px 30px #fff,-20px 20px #fff,-30px 10px transparent,-30px 0 transparent,-30px -10px transparent,-20px -20px transparent,-10px -30px transparent}25%{box-shadow:0 -30px transparent,10px -30px transparent,20px -20px transparent,30px -10px transparent,30px 0 #fff,30px 10px #fff,20px 20px #fff,10px 30px #fff,0 30px #fff,-10px 30px #fff,-20px 20px #fff,-30px 10px #fff,-30px 0 transparent,-30px -10px transparent,-20px -20px transparent,-10px -30px transparent}31.25%{box-shadow:0 -30px transparent,10px -30px transparent,20px -20px transparent,30px -10px transparent,30px 0 transparent,30px 10px #fff,20px 20px #fff,10px 30px #fff,0 30px #fff,-10px 30px #fff,-20px 20px #fff,-30px 10px #fff,-30px 0 #fff,-30px -10px transparent,-20px -20px transparent,-10px -30px transparent}37.5%{box-shadow:0 -30px transparent,10px -30px transparent,20px -20px transparent,30px -10px transparent,30px 0 transparent,30px 10px transparent,20px 20px #fff,10px 30px #fff,0 30px #fff,-10px 30px #fff,-20px 20px #fff,-30px 10px #fff,-30px 0 #fff,-30px -10px #fff,-20px -20px transparent,-10px -30px transparent}43.75%{box-shadow:0 -30px transparent,10px -30px transparent,20px -20px transparent,30px -10px transparent,30px 0 transparent,30px 10px transparent,20px 20px transparent,10px 30px #fff,0 30px #fff,-10px 30px #fff,-20px 20px #fff,-30px 10px #fff,-30px 0 #fff,-30px -10px #fff,-20px -20px #fff,-10px -30px transparent}50%{box-shadow:0 -30px transparent,10px -30px transparent,20px -20px transparent,30px -10px transparent,30px 0 transparent,30px 10px transparent,20px 20px transparent,10px 30px transparent,0 30px #fff,-10px 30px #fff,-20px 20px #fff,-30px 10px #fff,-30px 0 #fff,-30px -10px #fff,-20px -20px #fff,-10px -30px #fff}56.25%{box-shadow:0 -30px #fff,10px -30px transparent,20px -20px transparent,30px -10px transparent,30px 0 transparent,30px 10px transparent,20px 20px transparent,10px 30px transparent,0 30px transparent,-10px 30px #fff,-20px 20px #fff,-30px 10px #fff,-30px 0 #fff,-30px -10px #fff,-20px -20px #fff,-10px -30px #fff}62.5%{box-shadow:0 -30px #fff,10px -30px #fff,20px -20px transparent,30px -10px transparent,30px 0 transparent,30px 10px transparent,20px 20px transparent,10px 30px transparent,0 30px transparent,-10px 30px transparent,-20px 20px #fff,-30px 10px #fff,-30px 0 #fff,-30px -10px #fff,-20px -20px #fff,-10px -30px #fff}68.75%{box-shadow:0 -30px #fff,10px -30px #fff,20px -20px #fff,30px -10px transparent,30px 0 transparent,30px 10px transparent,20px 20px transparent,10px 30px transparent,0 30px transparent,-10px 30px transparent,-20px 20px transparent,-30px 10px #fff,-30px 0 #fff,-30px -10px #fff,-20px -20px #fff,-10px -30px #fff}75%{box-shadow:0 -30px #fff,10px -30px #fff,20px -20px #fff,30px -10px #fff,30px 0 transparent,30px 10px transparent,20px 20px transparent,10px 30px transparent,0 30px transparent,-10px 30px transparent,-20px 20px transparent,-30px 10px transparent,-30px 0 #fff,-30px -10px #fff,-20px -20px #fff,-10px -30px #fff}81.25%{box-shadow:0 -30px #fff,10px -30px #fff,20px -20px #fff,30px -10px #fff,30px 0 #fff,30px 10px transparent,20px 20px transparent,10px 30px transparent,0 30px transparent,-10px 30px transparent,-20px 20px transparent,-30px 10px transparent,-30px 0 transparent,-30px 
-10px #fff,-20px -20px #fff,-10px -30px #fff}87.5%{box-shadow:0 -30px #fff,10px -30px #fff,20px -20px #fff,30px -10px #fff,30px 0 #fff,30px 10px #fff,20px 20px transparent,10px 30px transparent,0 30px transparent,-10px 30px transparent,-20px 20px transparent,-30px 10px transparent,-30px 0 transparent,-30px -10px transparent,-20px -20px #fff,-10px -30px #fff}93.75%{box-shadow:0 -30px #fff,10px -30px #fff,20px -20px #fff,30px -10px #fff,30px 0 #fff,30px 10px #fff,20px 20px #fff,10px 30px transparent,0 30px transparent,-10px 30px transparent,-20px 20px transparent,-30px 10px transparent,-30px 0 transparent,-30px -10px transparent,-20px -20px transparent,-10px -30px #fff}to{box-shadow:0 -30px #fff,10px -30px #fff,20px -20px #fff,30px -10px #fff,30px 0 #fff,30px 10px #fff,20px 20px #fff,10px 30px #fff,0 30px transparent,-10px 30px transparent,-20px 20px transparent,-30px 10px transparent,-30px 0 transparent,-30px -10px transparent,-20px -20px transparent,-10px -30px transparent}}.hover\:bg-green-700:hover{--tw-bg-opacity:1;background-color:rgb(21 128 61/var(--tw-bg-opacity))}.hover\:bg-pop-orange:hover{--tw-bg-opacity:1;background-color:rgb(249 60 38/var(--tw-bg-opacity))}.hover\:text-blue-600:hover{--tw-text-opacity:1;color:rgb(37 99 235/var(--tw-text-opacity))}.hover\:underline:hover{text-decoration-line:underline}.active\:bg-pop-orange-dark:active{--tw-bg-opacity:1;background-color:rgb(204 25 5/var(--tw-bg-opacity))}.group:hover .group-hover\:block{display:block}@media (min-width:768px){.md\:w-1\/2{width:50%}}@media (min-width:1280px){.xl\:w-1\/3{width:33.333333%}.xl\:w-3\/4{width:75%}.xl\:pl-32{padding-left:8rem}.xl\:pr-32{padding-right:8rem}}@media (min-width:1536px){.\32xl\:w-1\/2{width:50%}.\32xl\:w-1\/4{width:25%}}.react-responsive-modal-root{position:fixed;top:0;bottom:0;left:0;right:0;z-index:1000}.react-responsive-modal-overlay{background:rgba(0,0,0,.5);position:fixed;top:0;bottom:0;left:0;right:0;z-index:-1}.react-responsive-modal-container{height:100%;outline:0;overflow-x:hidden;overflow-y:auto;text-align:center}.react-responsive-modal-containerCenter:after{width:0;height:100%;content:"";display:inline-block;vertical-align:middle}.react-responsive-modal-modal{max-width:800px;display:inline-block;text-align:left;vertical-align:middle;background:#fff;box-shadow:0 12px 15px 0 rgba(0,0,0,.25);margin:1.2rem;padding:1.2rem;position:relative;overflow-y:auto}.react-responsive-modal-closeButton{position:absolute;top:14px;right:14px;border:none;padding:0;cursor:pointer;background-color:transparent;display:flex}.react-responsive-modal-container,.react-responsive-modal-modal,.react-responsive-modal-overlay{animation-fill-mode:forwards!important}@keyframes react-responsive-modal-overlay-in{0%{opacity:0}to{opacity:1}}@keyframes react-responsive-modal-overlay-out{0%{opacity:1}to{opacity:0}}@keyframes react-responsive-modal-modal-in{0%{transform:scale(.96);opacity:0}to{transform:scale(100%);opacity:1}}@keyframes react-responsive-modal-modal-out{0%{transform:scale(100%);opacity:1}to{transform:scale(.96);opacity:0}}
\ No newline at end of file
diff --git a/spaces/Fengbinbin/gpt-academic/request_llm/bridge_all.py b/spaces/Fengbinbin/gpt-academic/request_llm/bridge_all.py
deleted file mode 100644
index fddc9a756f062b68610737123ea39b6a83698a42..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/request_llm/bridge_all.py
+++ /dev/null
@@ -1,240 +0,0 @@
-
-"""
- 该文件中主要包含2个函数,是所有LLM的通用接口,它们会继续向下调用更底层的LLM模型,处理多模型并行等细节
-
- 不具备多线程能力的函数:正常对话时使用,具备完备的交互功能,不可多线程
- 1. predict(...)
-
- 具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁
- 2. predict_no_ui_long_connection(...)
-"""
-import tiktoken
-from functools import lru_cache
-from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf, trimmed_format_exc
-
-from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
-from .bridge_chatgpt import predict as chatgpt_ui
-
-from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
-from .bridge_chatglm import predict as chatglm_ui
-
-from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
-from .bridge_newbing import predict as newbing_ui
-
-# from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
-# from .bridge_tgui import predict as tgui_ui
-
-colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
-
-class LazyloadTiktoken(object):
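-    # Defers loading of the tiktoken encoder and caches it (lru_cache), so the possibly slow
-    # first-time download only happens when encode/decode is first called.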
- def __init__(self, model):
- self.model = model
-
- @staticmethod
- @lru_cache(maxsize=128)
- def get_encoder(model):
- print('正在加载tokenizer,如果是第一次运行,可能需要一点时间下载参数')
- tmp = tiktoken.encoding_for_model(model)
- print('加载tokenizer完毕')
- return tmp
-
- def encode(self, *args, **kwargs):
- encoder = self.get_encoder(self.model)
- return encoder.encode(*args, **kwargs)
-
- def decode(self, *args, **kwargs):
- encoder = self.get_encoder(self.model)
- return encoder.decode(*args, **kwargs)
-
-# Endpoint redirection
-API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
-openai_endpoint = "https://api.openai.com/v1/chat/completions"
-api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
-newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
-# Backward compatibility with the legacy configuration
-try:
- API_URL, = get_conf("API_URL")
- if API_URL != "https://api.openai.com/v1/chat/completions":
- openai_endpoint = API_URL
- print("警告!API_URL配置选项将被弃用,请更换为API_URL_REDIRECT配置")
-except:
- pass
-# New-style configuration
-if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
-if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
-if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
-
-
-# Get the tokenizers
-tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
-tokenizer_gpt4 = LazyloadTiktoken("gpt-4")
-get_token_num_gpt35 = lambda txt: len(tokenizer_gpt35.encode(txt, disallowed_special=()))
-get_token_num_gpt4 = lambda txt: len(tokenizer_gpt4.encode(txt, disallowed_special=()))
-
-
-model_info = {
- # openai
- "gpt-3.5-turbo": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": openai_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-
- "gpt-4": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": openai_endpoint,
- "max_token": 8192,
- "tokenizer": tokenizer_gpt4,
- "token_cnt": get_token_num_gpt4,
- },
-
- # api_2d
- "api2d-gpt-3.5-turbo": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": api2d_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-
- "api2d-gpt-4": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": api2d_endpoint,
- "max_token": 8192,
- "tokenizer": tokenizer_gpt4,
- "token_cnt": get_token_num_gpt4,
- },
-
- # chatglm
- "chatglm": {
- "fn_with_ui": chatglm_ui,
- "fn_without_ui": chatglm_noui,
- "endpoint": None,
- "max_token": 1024,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
- # newbing
- "newbing": {
- "fn_with_ui": newbing_ui,
- "fn_without_ui": newbing_noui,
- "endpoint": newbing_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-}
-
-
-def LLM_CATCH_EXCEPTION(f):
- """
- 装饰器函数,将错误显示出来
- """
- def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience):
- try:
- return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
- except Exception as e:
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- observe_window[0] = tb_str
- return tb_str
- return decorated
-
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False):
- """
- 发送至LLM,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
- inputs:
- 是本次问询的输入
- sys_prompt:
- 系统静默prompt
- llm_kwargs:
- LLM的内部调优参数
- history:
- 是之前的对话列表
- observe_window = None:
- 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗
- """
- import threading, time, copy
-
- model = llm_kwargs['llm_model']
- n_model = 1
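-    # The llm_model string may combine several backends with '&' (e.g. "gpt-3.5-turbo&chatglm") to query them in parallel.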
- if '&' not in model:
- assert not model.startswith("tgui"), "TGUI不支持函数插件的实现"
-
-        # If only one LLM is queried:
- method = model_info[model]["fn_without_ui"]
- return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
- else:
-        # If multiple LLMs are queried at the same time:
- executor = ThreadPoolExecutor(max_workers=4)
- models = model.split('&')
- n_model = len(models)
-
- window_len = len(observe_window)
- assert window_len==3
- window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True]
-
- futures = []
- for i in range(n_model):
- model = models[i]
- method = model_info[model]["fn_without_ui"]
- llm_kwargs_feedin = copy.deepcopy(llm_kwargs)
- llm_kwargs_feedin['llm_model'] = model
- future = executor.submit(LLM_CATCH_EXCEPTION(method), inputs, llm_kwargs_feedin, history, sys_prompt, window_mutex[i], console_slience)
- futures.append(future)
-
- def mutex_manager(window_mutex, observe_window):
- while True:
- time.sleep(0.25)
- if not window_mutex[-1]: break
-                # Watchdog
- for i in range(n_model):
- window_mutex[i][1] = observe_window[1]
-                # Observation window
- chat_string = []
- for i in range(n_model):
- chat_string.append( f"【{str(models[i])} 说】: {window_mutex[i][0]} " )
-                res = '\n\n---\n\n'.join(chat_string)
- # # # # # # # # # # #
- observe_window[0] = res
-
- t_model = threading.Thread(target=mutex_manager, args=(window_mutex, observe_window), daemon=True)
- t_model.start()
-
- return_string_collect = []
- while True:
- worker_done = [h.done() for h in futures]
- if all(worker_done):
- executor.shutdown()
- break
- time.sleep(1)
-
- for i, future in enumerate(futures): # wait and get
- return_string_collect.append( f"【{str(models[i])} 说】: {future.result()} " )
-
- window_mutex[-1] = False # stop mutex thread
-        res = '\n\n---\n\n'.join(return_string_collect)
- return res
-
-
-def predict(inputs, llm_kwargs, *args, **kwargs):
- """
- 发送至LLM,流式获取输出。
- 用于基础的对话功能。
- inputs 是本次问询的输入
- top_p, temperature是LLM的内部调优参数
- history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误)
- chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
- additional_fn代表点击的哪个按钮,按钮见functional.py
- """
-
- method = model_info[llm_kwargs['llm_model']]["fn_with_ui"]
- yield from method(inputs, llm_kwargs, *args, **kwargs)
-
diff --git a/spaces/FranklinWillemen/TARS/gradio-ui.py b/spaces/FranklinWillemen/TARS/gradio-ui.py
deleted file mode 100644
index 245dfdf0f85417a970b8cd7935e57c74baa32c88..0000000000000000000000000000000000000000
--- a/spaces/FranklinWillemen/TARS/gradio-ui.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import gradio as gr
-import discourse as d
-
-# set a custom theme
-theme = gr.themes.Default().set(
- body_background_fill="#000000",
-)
-
-with gr.Blocks(theme=theme) as ui:
- with gr.Row():
- with gr.Column(scale=1):
- message = gr.Audio(source="microphone", type="filepath")
- with gr.Row():
-                btn1 = gr.Button("Generate Response")
- with gr.Row():
- with gr.Column(scale=1):
- audio_response = gr.Audio()
-
- btn1.click(fn=d.respond, inputs=message, outputs=audio_response)
-
-ui.launch()
diff --git a/spaces/GT4SD/regression_transformer/utils.py b/spaces/GT4SD/regression_transformer/utils.py
deleted file mode 100644
index ab4e6efca0ab51fae41fc9e5dc90435f6af45e54..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/regression_transformer/utils.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import json
-import logging
-import os
-from collections import defaultdict
-from typing import Dict, List, Tuple
-
-import mols2grid
-import pandas as pd
-from gt4sd.algorithms import (
- RegressionTransformerMolecules,
- RegressionTransformerProteins,
-)
-from gt4sd.algorithms.core import AlgorithmConfiguration
-from rdkit import Chem
-from terminator.selfies import decoder
-
-logger = logging.getLogger(__name__)
-logger.addHandler(logging.NullHandler())
-
-
-def get_application(application: str) -> AlgorithmConfiguration:
- """
- Convert application name to AlgorithmConfiguration.
-
- Args:
- application: Molecules or Proteins
-
- Returns:
- The corresponding AlgorithmConfiguration
- """
- if application == "Molecules":
- application = RegressionTransformerMolecules
- elif application == "Proteins":
- application = RegressionTransformerProteins
- else:
- raise ValueError(
- "Currently only models for molecules and proteins are supported"
- )
- return application
-
-
-def get_inference_dict(
- application: AlgorithmConfiguration, algorithm_version: str
-) -> Dict:
- """
- Get inference dictionary for a given application and algorithm version.
-
- Args:
- application: algorithm application (Molecules or Proteins)
- algorithm_version: algorithm version (e.g. qed)
-
- Returns:
- A dictionary with the inference parameters.
- """
- config = application(algorithm_version=algorithm_version)
- with open(os.path.join(config.ensure_artifacts(), "inference.json"), "r") as f:
- data = json.load(f)
- return data
-
-
-def get_rt_name(x: Dict) -> str:
- """
- Get the UI display name of the regression transformer.
-
- Args:
- x: dictionary with the inference parameters
-
- Returns:
- The display name
- """
- return (
- x["algorithm_application"].split("Transformer")[-1]
- + ": "
- + x["algorithm_version"].capitalize()
- )
-
-
-def draw_grid_predict(prediction: str, target: str, domain: str) -> str:
- """
-    Uses mols2grid to draw an HTML grid for the prediction.
-
- Args:
- prediction: Predicted sequence.
- target: Target molecule
- domain: Domain of the prediction (molecules or proteins)
-
- Returns:
- HTML to display
- """
-
- if domain not in ["Molecules", "Proteins"]:
- raise ValueError(f"Unsupported domain {domain}")
-
- seq = target.split("|")[-1]
- converter = (
- decoder
- if domain == "Molecules"
- else lambda x: Chem.MolToSmiles(Chem.MolFromFASTA(x))
- )
- try:
- seq = converter(seq)
- except Exception:
- logger.warning(f"Could not draw sequence {seq}")
-
- result = {"SMILES": [seq], "Name": ["Target"]}
- # Add properties
- for prop in prediction.split("<")[1:]:
- result[
- prop.split(">")[0]
- ] = f"{prop.split('>')[0].capitalize()} = {prop.split('>')[1]}"
- result_df = pd.DataFrame(result)
- obj = mols2grid.display(
- result_df,
- tooltip=list(result.keys()),
- height=900,
- n_cols=1,
- name="Results",
- size=(600, 700),
- )
- return obj.data
-
-
-def draw_grid_generate(
- samples: List[Tuple[str]], domain: str, n_cols: int = 5, size=(140, 200)
-) -> str:
- """
-    Uses mols2grid to draw an HTML grid for the generated molecules.
-
- Args:
- samples: The generated samples (with properties)
- domain: Domain of the prediction (molecules or proteins)
- n_cols: Number of columns in grid. Defaults to 5.
- size: Size of molecule in grid. Defaults to (140, 200).
-
- Returns:
- HTML to display
- """
-
- if domain not in ["Molecules", "Proteins"]:
- raise ValueError(f"Unsupported domain {domain}")
-
- if domain == "Proteins":
- try:
- smis = list(
- map(lambda x: Chem.MolToSmiles(Chem.MolFromFASTA(x[0])), samples)
- )
- except Exception:
- logger.warning(f"Could not convert some sequences {samples}")
- else:
- smis = [s[0] for s in samples]
-
- result = defaultdict(list)
- result.update({"SMILES": smis, "Name": [f"sample_{i}" for i in range(len(smis))]})
-
- # Create properties
- properties = [s.split("<")[1] for s in samples[0][1].split(">")[:-1]]
- # Fill properties
- for sample in samples:
- for prop in properties:
- value = float(sample[1].split(prop)[-1][1:].split("<")[0])
- result[prop].append(f"{prop} = {value}")
-
- result_df = pd.DataFrame(result)
- obj = mols2grid.display(
- result_df,
- tooltip=list(result.keys()),
- height=1100,
- n_cols=n_cols,
- name="Results",
- size=size,
- )
- return obj.data
diff --git a/spaces/Gen-Sim/Gen-Sim/misc/analyze_stats.py b/spaces/Gen-Sim/Gen-Sim/misc/analyze_stats.py
deleted file mode 100644
index 465c508f6ab2069ec3db4366045dc9569c6f2611..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/misc/analyze_stats.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import matplotlib as mpl
-
-mpl.use("Agg")
-import argparse
-import os
-import pandas as pd
-import seaborn as sns
-import matplotlib.pyplot as plt
-import matplotlib
-import IPython
-
-font = {
- "size": 22,
-}
-matplotlib.rc("font", **font)
-sns.set_context("paper", font_scale=2.0)
-
-
-def mkdir_if_missing(dst_dir):
- if not os.path.exists(dst_dir):
- os.makedirs(dst_dir)
-
-
-def save_figure(name, title=""):
- if len(title) > 0:
- plt.title(title)
- plt.tight_layout()
- print(f"output/output_figures/{name[:30]}")
- mkdir_if_missing(f"output/output_figures/{name[:30]}")
- plt.savefig(f"output/output_figures/{name[:30]}/output.png")
- plt.clf()
-
-
-def main(multirun_out, title):
- dfs = []
- suffix = ""
- run_num = 0
-
- for rundir in (sorted(multirun_out.split(","))):
- runpath = os.path.join('output/output_stats', rundir)
- statspath = os.path.join(runpath, "eval_results.csv")
- if os.path.exists(statspath):
- run_num += 1
- df = pd.read_csv(statspath)
- # print(df)
- # df.drop(df.iloc[-1], axis=0, inplace=True)
- # df.drop('diversity', axis=1)
- dfs.append(df)
- else:
- print("skip:", statspath)
-
- # merge dfs, which have shared column names
- df = pd.concat(dfs)
- print(df.iloc)
- title += f" run: {run_num} "
-
- # rewards
- fig, ax = plt.subplots(figsize=(16, 8))
- sns_plot = sns.barplot(
- data=df, x="metric", y="success", hue='model', errorbar=("sd", 1), palette="deep"
- )
-
- # label texts
- for container in ax.containers:
- ax.bar_label(container, label_type="center", fontsize="x-large", fmt="%.2f")
-
- # save plot
- save_figure(f"{multirun_out}_{title}{suffix}", title)
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--multirun_out", type=str)
- parser.add_argument("--title", type=str, default="")
-
- args = parser.parse_args()
- main(args.multirun_out, args.title)
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport3_new_pickplace_demo10.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport3_new_pickplace_demo10.sh
deleted file mode 100644
index 50cf40c1d466724c8f4123fbfb1f59d6bd4a9718..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport3_new_pickplace_demo10.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-#SBATCH -c 10
-#SBATCH -n 1
-#SBATCH -o logs/%j.out
-#SBATCH --exclusive
-STEPS=${1-'50000'}
-now=$(date "+%Y-%m-%d_%H-%M-%S")
-
-sh scripts/traintest_scripts/train_test_multi_task_goal_demo10.sh data \
- "[stack-block-pyramid,color-coordinated-sphere-insertion,rainbow-stack,put-block-in-bowl,vertical-insertion-blocks,stack-blocks-in-container]" \
- "[stack-block-pyramid,put-block-in-bowl]" \
- gpt5_mixcliport2_task_new_demo10_${now}
\ No newline at end of file
diff --git a/spaces/Giozh/openai-reverse-proxy/README.md b/spaces/Giozh/openai-reverse-proxy/README.md
deleted file mode 100644
index 653c1e497e4373c4b573e7e91b0926d14916df8d..0000000000000000000000000000000000000000
--- a/spaces/Giozh/openai-reverse-proxy/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Openai Reverse Proxy
-emoji: ⚡
-colorFrom: green
-colorTo: indigo
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GloryGranger80888/Gradio/app.py b/spaces/GloryGranger80888/Gradio/app.py
deleted file mode 100644
index 545b098386a9e89b30d121537765f2ce203c84be..0000000000000000000000000000000000000000
--- a/spaces/GloryGranger80888/Gradio/app.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel
-
-title="My First Text Generator"
-description="Input text."
-examples = [
- ["Once upon a time, "],
- ["Dr. Woo was teaching a coding workshop at Hong Kong True Light Colleage,"]
-]
-
-model1=gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-model2=gr.Interface.load("huggingface/gpt2")
-model3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-125M")
-
-Parallel(model1, model2, model3, title=title, description=description).launch()
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_512x512_20k_voc12aug.py
deleted file mode 100644
index 56345d1806482ac822d709893fe6942f44be6f74..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './upernet_r50_512x512_20k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/docs/METRICS.md b/spaces/GrandaddyShmax/AudioCraft_Plus/docs/METRICS.md
deleted file mode 100644
index e2ae9a184cbccb8bfefb4ce77afa5ddab743a051..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/docs/METRICS.md
+++ /dev/null
@@ -1,127 +0,0 @@
-# AudioCraft objective metrics
-
-In addition to training losses, AudioCraft provides a set of objective metrics
-for audio synthesis and audio generation. As these metrics may require
-extra dependencies and can be costly to train, they are often disabled by default.
-This section provides guidance for setting up and using these metrics in
-the AudioCraft training pipelines.
-
-## Available metrics
-
-### Audio synthesis quality metrics
-
-#### SI-SNR
-
-We provide an implementation of the Scale-Invariant Signal-to-Noise Ratio in PyTorch.
-No specific requirement is needed for this metric. Please activate the metric at the
-evaluation stage with the appropriate flag:
-
-```shell
-dora run <...> evaluate.metrics.sisnr=true
-```
-
-#### ViSQOL
-
-We provide a Python wrapper around the ViSQOL [official implementation](https://github.com/google/visqol)
-to conveniently run ViSQOL within the training pipelines.
-
-One must specify the path to the ViSQOL installation through the configuration in order
-to enable ViSQOL computations in AudioCraft:
-
-```shell
-# the first parameter is used to activate visqol computation while the second specify
-# the path to visqol's library to be used by our python wrapper
-dora run <...> evaluate.metrics.visqol=true metrics.visqol.bin=<path to visqol library>
-```
-
-See an example grid: [Compression with ViSQOL](../audiocraft/grids/compression/encodec_musicgen_32khz.py)
-
-To learn more about ViSQOL and how to build ViSQOL binary using bazel, please refer to the
-instructions available in the [open source repository](https://github.com/google/visqol).
-
-### Audio generation metrics
-
-#### Frechet Audio Distance
-
-Similarly to ViSQOL, we use a Python wrapper around the Frechet Audio Distance
-[official implementation](https://github.com/google-research/google-research/tree/master/frechet_audio_distance)
-in TensorFlow.
-
-Note that we had to make several changes to the actual code in order to make it work.
-Please refer to the [FrechetAudioDistanceMetric](../audiocraft/metrics/fad.py) class documentation
-for more details. We do not plan to provide further support in obtaining a working setup for the
-Frechet Audio Distance at this stage.
-
-```shell
-# the first parameter is used to activate FAD metric computation while the second specify
-# the path to FAD library to be used by our python wrapper
-dora run <...> evaluate.metrics.fad=true metrics.fad.bin=<path to FAD library>
-```
-
-See an example grid: [Evaluation with FAD](../audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py)
-
-#### Kullback-Leibler Divergence
-
-We provide a PyTorch implementation of the Kullback-Leibler Divergence computed over the probabilities
-of the labels obtained by a state-of-the-art audio classifier. We provide our implementation of the KLD
-using the [PaSST classifier](https://github.com/kkoutini/PaSST).
-
-In order to use the KLD metric over PaSST, you must install the PaSST library as an extra dependency:
-```shell
-pip install 'git+https://github.com/kkoutini/passt_hear21@0.0.19#egg=hear21passt'
-```
-
-Then similarly, you can use the metric activating the corresponding flag:
-
-```shell
-# one could extend the kld metric with additional audio classifier models that can then be picked through the configuration
-dora run <...> evaluate.metrics.kld=true metrics.kld.model=passt
-```
-
-#### Text consistency
-
-We provide a text-consistency metric, similarly to the MuLan Cycle Consistency from
-[MusicLM](https://arxiv.org/pdf/2301.11325.pdf) or the CLAP score used in
-[Make-An-Audio](https://arxiv.org/pdf/2301.12661v1.pdf).
-More specifically, we provide a PyTorch implementation of a Text consistency metric
-relying on a pre-trained [Contrastive Language-Audio Pretraining (CLAP)](https://github.com/LAION-AI/CLAP).
-
-Please install the CLAP library as an extra dependency prior to using the metric:
-```shell
-pip install laion_clap
-```
-
-Then similarly, you can use the metric activating the corresponding flag:
-
-```shell
-# one could extend the text consistency metric with additional audio classifier models that can then be picked through the configuration
-dora run ... evaluate.metrics.text_consistency=true metrics.text_consistency.model=clap
-```
-
-Note that the text consistency metric based on CLAP will require the CLAP checkpoint to be
-provided in the configuration.
-
-#### Chroma cosine similarity
-
-Finally, as introduced in MusicGen, we provide a Chroma Cosine Similarity metric in PyTorch.
-No specific requirement is needed for this metric. Please activate the metric at the
-evaluation stage with the appropriate flag:
-
-```shell
-dora run ... evaluate.metrics.chroma_cosine=true
-```
-
-#### Comparing against reconstructed audio
-
-For all the above audio generation metrics, we offer the option to compute the metric on the audio reconstructed
-by EnCodec instead of on the generated sample, using the flag `<metric>.use_gt=true`.
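-
-For instance, a minimal sketch for FAD (the `metrics.fad.use_gt` path is an assumption composed from the metric namespaces shown above):
-
-```shell
-# score the EnCodec reconstruction of the reference audio rather than the generated sample
-dora run <...> evaluate.metrics.fad=true metrics.fad.use_gt=true
-```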
-
-## Example usage
-
-You will find examples of configuration for the different metrics introduced above in:
-* The [musicgen's default solver](../config/solver/musicgen/default.yaml) for all audio generation metrics
-* The [compression's default solver](../config/solver/compression/default.yaml) for all audio synthesis metrics
-
-Similarly, we provide different examples in our grids:
-* [Evaluation with ViSQOL](../audiocraft/grids/compression/encodec_musicgen_32khz.py)
-* [Evaluation with FAD and others](../audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py)
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/network_auxi.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/network_auxi.py
deleted file mode 100644
index 7bf5bc541fe6f13f02e83e267f07490c1740af69..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/lib/network_auxi.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.init as init
-
-from lib import Resnet, Resnext_torch
-
-
-def resnet50_stride32():
- return DepthNet(backbone='resnet', depth=50, upfactors=[2, 2, 2, 2])
-
-def resnext101_stride32x8d():
- return DepthNet(backbone='resnext101_32x8d', depth=101, upfactors=[2, 2, 2, 2])
-
-
-class Decoder(nn.Module):
- def __init__(self):
- super(Decoder, self).__init__()
- self.inchannels = [256, 512, 1024, 2048]
- self.midchannels = [256, 256, 256, 512]
- self.upfactors = [2,2,2,2]
- self.outchannels = 1
-
- self.conv = FTB(inchannels=self.inchannels[3], midchannels=self.midchannels[3])
- self.conv1 = nn.Conv2d(in_channels=self.midchannels[3], out_channels=self.midchannels[2], kernel_size=3, padding=1, stride=1, bias=True)
- self.upsample = nn.Upsample(scale_factor=self.upfactors[3], mode='bilinear', align_corners=True)
-
- self.ffm2 = FFM(inchannels=self.inchannels[2], midchannels=self.midchannels[2], outchannels = self.midchannels[2], upfactor=self.upfactors[2])
- self.ffm1 = FFM(inchannels=self.inchannels[1], midchannels=self.midchannels[1], outchannels = self.midchannels[1], upfactor=self.upfactors[1])
- self.ffm0 = FFM(inchannels=self.inchannels[0], midchannels=self.midchannels[0], outchannels = self.midchannels[0], upfactor=self.upfactors[0])
-
- self.outconv = AO(inchannels=self.midchannels[0], outchannels=self.outchannels, upfactor=2)
- self._init_params()
-
- def _init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.ConvTranspose2d):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d): #NN.BatchNorm2d
- init.constant_(m.weight, 1)
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.Linear):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
-
- def forward(self, features):
- x_32x = self.conv(features[3]) # 1/32
- x_32 = self.conv1(x_32x)
- x_16 = self.upsample(x_32) # 1/16
-
- x_8 = self.ffm2(features[2], x_16) # 1/8
- x_4 = self.ffm1(features[1], x_8) # 1/4
- x_2 = self.ffm0(features[0], x_4) # 1/2
- #-----------------------------------------
- x = self.outconv(x_2) # original size
- return x
-
-class DepthNet(nn.Module):
- __factory = {
- 18: Resnet.resnet18,
- 34: Resnet.resnet34,
- 50: Resnet.resnet50,
- 101: Resnet.resnet101,
- 152: Resnet.resnet152
- }
- def __init__(self,
- backbone='resnet',
- depth=50,
- upfactors=[2, 2, 2, 2]):
- super(DepthNet, self).__init__()
- self.backbone = backbone
- self.depth = depth
- self.pretrained = False
- self.inchannels = [256, 512, 1024, 2048]
- self.midchannels = [256, 256, 256, 512]
- self.upfactors = upfactors
- self.outchannels = 1
-
- # Build model
- if self.backbone == 'resnet':
- if self.depth not in DepthNet.__factory:
- raise KeyError("Unsupported depth:", self.depth)
- self.encoder = DepthNet.__factory[depth](pretrained=self.pretrained)
- elif self.backbone == 'resnext101_32x8d':
- self.encoder = Resnext_torch.resnext101_32x8d(pretrained=self.pretrained)
- else:
- self.encoder = Resnext_torch.resnext101(pretrained=self.pretrained)
-
- def forward(self, x):
- x = self.encoder(x) # 1/32, 1/16, 1/8, 1/4
- return x
-
-
-class FTB(nn.Module):
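-    # Feature transform block: a 3x3 conv followed by a residual convolutional branch and a final ReLU.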
- def __init__(self, inchannels, midchannels=512):
- super(FTB, self).__init__()
- self.in1 = inchannels
- self.mid = midchannels
- self.conv1 = nn.Conv2d(in_channels=self.in1, out_channels=self.mid, kernel_size=3, padding=1, stride=1,
- bias=True)
- # NN.BatchNorm2d
- self.conv_branch = nn.Sequential(nn.ReLU(inplace=True), \
- nn.Conv2d(in_channels=self.mid, out_channels=self.mid, kernel_size=3,
- padding=1, stride=1, bias=True), \
- nn.BatchNorm2d(num_features=self.mid), \
- nn.ReLU(inplace=True), \
- nn.Conv2d(in_channels=self.mid, out_channels=self.mid, kernel_size=3,
- padding=1, stride=1, bias=True))
- self.relu = nn.ReLU(inplace=True)
-
- self.init_params()
-
- def forward(self, x):
- x = self.conv1(x)
- x = x + self.conv_branch(x)
- x = self.relu(x)
-
- return x
-
- def init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.ConvTranspose2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d): # NN.BatchNorm2d
- init.constant_(m.weight, 1)
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.Linear):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
-
-
-class ATA(nn.Module):
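-    # Adaptive channel attention: predicts per-channel gates from the pooled concatenation of
-    # low- and high-level features, then returns low_x * gate + high_x.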
- def __init__(self, inchannels, reduction=8):
- super(ATA, self).__init__()
- self.inchannels = inchannels
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
- self.fc = nn.Sequential(nn.Linear(self.inchannels * 2, self.inchannels // reduction),
- nn.ReLU(inplace=True),
- nn.Linear(self.inchannels // reduction, self.inchannels),
- nn.Sigmoid())
- self.init_params()
-
- def forward(self, low_x, high_x):
- n, c, _, _ = low_x.size()
- x = torch.cat([low_x, high_x], 1)
- x = self.avg_pool(x)
- x = x.view(n, -1)
- x = self.fc(x).view(n, c, 1, 1)
- x = low_x * x + high_x
-
- return x
-
- def init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- # init.normal(m.weight, std=0.01)
- init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.ConvTranspose2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- # init.normal_(m.weight, std=0.01)
- init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d): # NN.BatchNorm2d
- init.constant_(m.weight, 1)
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.Linear):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
-
-
-class FFM(nn.Module):
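-    # Feature fusion module: transforms the low-level features with an FTB, adds the high-level
-    # features, applies a second FTB and upsamples the result.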
- def __init__(self, inchannels, midchannels, outchannels, upfactor=2):
- super(FFM, self).__init__()
- self.inchannels = inchannels
- self.midchannels = midchannels
- self.outchannels = outchannels
- self.upfactor = upfactor
-
- self.ftb1 = FTB(inchannels=self.inchannels, midchannels=self.midchannels)
- # self.ata = ATA(inchannels = self.midchannels)
- self.ftb2 = FTB(inchannels=self.midchannels, midchannels=self.outchannels)
-
- self.upsample = nn.Upsample(scale_factor=self.upfactor, mode='bilinear', align_corners=True)
-
- self.init_params()
-
- def forward(self, low_x, high_x):
- x = self.ftb1(low_x)
- x = x + high_x
- x = self.ftb2(x)
- x = self.upsample(x)
-
- return x
-
- def init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.ConvTranspose2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d): # NN.Batchnorm2d
- init.constant_(m.weight, 1)
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.Linear):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
-
-
-class AO(nn.Module):
- # Adaptive output module
- def __init__(self, inchannels, outchannels, upfactor=2):
- super(AO, self).__init__()
- self.inchannels = inchannels
- self.outchannels = outchannels
- self.upfactor = upfactor
-
- self.adapt_conv = nn.Sequential(
- nn.Conv2d(in_channels=self.inchannels, out_channels=self.inchannels // 2, kernel_size=3, padding=1,
- stride=1, bias=True), \
- nn.BatchNorm2d(num_features=self.inchannels // 2), \
- nn.ReLU(inplace=True), \
- nn.Conv2d(in_channels=self.inchannels // 2, out_channels=self.outchannels, kernel_size=3, padding=1,
- stride=1, bias=True), \
- nn.Upsample(scale_factor=self.upfactor, mode='bilinear', align_corners=True))
-
- self.init_params()
-
- def forward(self, x):
- x = self.adapt_conv(x)
- return x
-
- def init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.ConvTranspose2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d): # NN.Batchnorm2d
- init.constant_(m.weight, 1)
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.Linear):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
-
-
-
-# ==============================================================================================================
-
-
-class ResidualConv(nn.Module):
- def __init__(self, inchannels):
- super(ResidualConv, self).__init__()
- # NN.BatchNorm2d
- self.conv = nn.Sequential(
- # nn.BatchNorm2d(num_features=inchannels),
- nn.ReLU(inplace=False),
- # nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=3, padding=1, stride=1, groups=inchannels,bias=True),
- # nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=1, padding=0, stride=1, groups=1,bias=True)
-            nn.Conv2d(in_channels=inchannels, out_channels=inchannels // 2, kernel_size=3, padding=1, stride=1,
-                      bias=False),
-            nn.BatchNorm2d(num_features=inchannels // 2),
-            nn.ReLU(inplace=False),
-            nn.Conv2d(in_channels=inchannels // 2, out_channels=inchannels, kernel_size=3, padding=1, stride=1,
- bias=False)
- )
- self.init_params()
-
- def forward(self, x):
- x = self.conv(x) + x
- return x
-
- def init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.ConvTranspose2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d): # NN.BatchNorm2d
- init.constant_(m.weight, 1)
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.Linear):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
-
-
-class FeatureFusion(nn.Module):
- def __init__(self, inchannels, outchannels):
- super(FeatureFusion, self).__init__()
- self.conv = ResidualConv(inchannels=inchannels)
- # NN.BatchNorm2d
- self.up = nn.Sequential(ResidualConv(inchannels=inchannels),
- nn.ConvTranspose2d(in_channels=inchannels, out_channels=outchannels, kernel_size=3,
- stride=2, padding=1, output_padding=1),
- nn.BatchNorm2d(num_features=outchannels),
- nn.ReLU(inplace=True))
-
- def forward(self, lowfeat, highfeat):
- return self.up(highfeat + self.conv(lowfeat))
-
- def init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.ConvTranspose2d):
- # init.kaiming_normal_(m.weight, mode='fan_out')
- init.normal_(m.weight, std=0.01)
- # init.xavier_normal_(m.weight)
- if m.bias is not None:
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d): # NN.BatchNorm2d
- init.constant_(m.weight, 1)
- init.constant_(m.bias, 0)
- elif isinstance(m, nn.Linear):
- init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- init.constant_(m.bias, 0)
-
-
-class SenceUnderstand(nn.Module):
- def __init__(self, channels):
- super(SenceUnderstand, self).__init__()
- self.channels = channels
- self.conv1 = nn.Sequential(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1),
- nn.ReLU(inplace=True))
- self.pool = nn.AdaptiveAvgPool2d(8)
- self.fc = nn.Sequential(nn.Linear(512 * 8 * 8, self.channels),
- nn.ReLU(inplace=True))
- self.conv2 = nn.Sequential(
- nn.Conv2d(in_channels=self.channels, out_channels=self.channels, kernel_size=1, padding=0),
- nn.ReLU(inplace=True))
- self.initial_params()
-
- def forward(self, x):
- n, c, h, w = x.size()
- x = self.conv1(x)
- x = self.pool(x)
- x = x.view(n, -1)
- x = self.fc(x)
- x = x.view(n, self.channels, 1, 1)
- x = self.conv2(x)
- x = x.repeat(1, 1, h, w)
- return x
-
- def initial_params(self, dev=0.01):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- # print torch.sum(m.weight)
- m.weight.data.normal_(0, dev)
- if m.bias is not None:
- m.bias.data.fill_(0)
- elif isinstance(m, nn.ConvTranspose2d):
- # print torch.sum(m.weight)
- m.weight.data.normal_(0, dev)
- if m.bias is not None:
- m.bias.data.fill_(0)
- elif isinstance(m, nn.Linear):
- m.weight.data.normal_(0, dev)
-
-
-if __name__ == '__main__':
-    net = DepthNet(depth=50)
- print(net)
- inputs = torch.ones(4,3,128,128)
- out = net(inputs)
- print(out.size())
-
diff --git a/spaces/HUBioDataLab/DrugGEN/new_dataloader.py b/spaces/HUBioDataLab/DrugGEN/new_dataloader.py
deleted file mode 100644
index 0fa72932817211b294b46bce48018818c02c1c68..0000000000000000000000000000000000000000
--- a/spaces/HUBioDataLab/DrugGEN/new_dataloader.py
+++ /dev/null
@@ -1,293 +0,0 @@
-import pickle
-import numpy as np
-import torch
-from rdkit import Chem
-from torch_geometric.data import (Data, InMemoryDataset)
-import os.path as osp
-from tqdm import tqdm
-import re
-from rdkit import RDLogger
-RDLogger.DisableLog('rdApp.*')
-class DruggenDataset(InMemoryDataset):
-
- def __init__(self, root, dataset_file, raw_files, max_atom, features, transform=None, pre_transform=None, pre_filter=None):
- self.dataset_name = dataset_file.split(".")[0]
- self.dataset_file = dataset_file
- self.raw_files = raw_files
- self.max_atom = max_atom
- self.features = features
- super().__init__(root, transform, pre_transform, pre_filter)
- path = osp.join(self.processed_dir, dataset_file)
- self.data, self.slices = torch.load(path)
- self.root = root
-
-
- @property
- def processed_dir(self):
-
- return self.root
-
- @property
- def raw_file_names(self):
- return self.raw_files
-
- @property
- def processed_file_names(self):
- return self.dataset_file
-
- def _generate_encoders_decoders(self, data):
-
- self.data = data
- print('Creating atoms encoder and decoder..')
- atom_labels = sorted(set([atom.GetAtomicNum() for mol in self.data for atom in mol.GetAtoms()] + [0]))
- self.atom_encoder_m = {l: i for i, l in enumerate(atom_labels)}
- self.atom_decoder_m = {i: l for i, l in enumerate(atom_labels)}
- self.atom_num_types = len(atom_labels)
- print('Created atoms encoder and decoder with {} atom types and 1 PAD symbol!'.format(
- self.atom_num_types - 1))
- print("atom_labels", atom_labels)
- print('Creating bonds encoder and decoder..')
- bond_labels = [Chem.rdchem.BondType.ZERO] + list(sorted(set(bond.GetBondType()
- for mol in self.data
- for bond in mol.GetBonds())))
- print("bond labels", bond_labels)
- self.bond_encoder_m = {l: i for i, l in enumerate(bond_labels)}
- self.bond_decoder_m = {i: l for i, l in enumerate(bond_labels)}
- self.bond_num_types = len(bond_labels)
- print('Created bonds encoder and decoder with {} bond types and 1 PAD symbol!'.format(
- self.bond_num_types - 1))
- #dataset_names = str(self.dataset_name)
- with open("data/encoders/" +"atom_" + self.dataset_name + ".pkl","wb") as atom_encoders:
- pickle.dump(self.atom_encoder_m,atom_encoders)
-
-
- with open("data/decoders/" +"atom_" + self.dataset_name + ".pkl","wb") as atom_decoders:
- pickle.dump(self.atom_decoder_m,atom_decoders)
-
-
- with open("data/encoders/" +"bond_" + self.dataset_name + ".pkl","wb") as bond_encoders:
- pickle.dump(self.bond_encoder_m,bond_encoders)
-
-
- with open("data/decoders/" +"bond_" + self.dataset_name + ".pkl","wb") as bond_decoders:
- pickle.dump(self.bond_decoder_m,bond_decoders)
-
-
-
- def _genA(self, mol, connected=True, max_length=None):
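-        # Bond-type adjacency matrix of shape (max_length, max_length); returns None when
-        # connected=True and any atom has no bonds.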
-
- max_length = max_length if max_length is not None else mol.GetNumAtoms()
-
- A = np.zeros(shape=(max_length, max_length))
-
- begin, end = [b.GetBeginAtomIdx() for b in mol.GetBonds()], [b.GetEndAtomIdx() for b in mol.GetBonds()]
- bond_type = [self.bond_encoder_m[b.GetBondType()] for b in mol.GetBonds()]
-
- A[begin, end] = bond_type
- A[end, begin] = bond_type
-
- degree = np.sum(A[:mol.GetNumAtoms(), :mol.GetNumAtoms()], axis=-1)
-
- return A if connected and (degree > 0).all() else None
-
- def _genX(self, mol, max_length=None):
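-        # Integer atom-type labels for each atom, padded with 0 (the PAD symbol) up to max_length.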
-
- max_length = max_length if max_length is not None else mol.GetNumAtoms()
-
- return np.array([self.atom_encoder_m[atom.GetAtomicNum()] for atom in mol.GetAtoms()] + [0] * (
- max_length - mol.GetNumAtoms()))
-
- def _genF(self, mol, max_length=None):
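-        # Per-atom feature vectors (degree, valences, hybridization, aromaticity, ring membership, ...),
-        # zero-padded to max_length rows.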
-
- max_length = max_length if max_length is not None else mol.GetNumAtoms()
-
- features = np.array([[*[a.GetDegree() == i for i in range(5)],
- *[a.GetExplicitValence() == i for i in range(9)],
- *[int(a.GetHybridization()) == i for i in range(1, 7)],
- *[a.GetImplicitValence() == i for i in range(9)],
- a.GetIsAromatic(),
- a.GetNoImplicit(),
- *[a.GetNumExplicitHs() == i for i in range(5)],
- *[a.GetNumImplicitHs() == i for i in range(5)],
- *[a.GetNumRadicalElectrons() == i for i in range(5)],
- a.IsInRing(),
- *[a.IsInRingSize(i) for i in range(2, 9)]] for a in mol.GetAtoms()], dtype=np.int32)
-
- return np.vstack((features, np.zeros((max_length - features.shape[0], features.shape[1]))))
-
- def decoder_load(self, dictionary_name, file):
- with open("data/decoders/" + dictionary_name + "_" + file + '.pkl', 'rb') as f:
- return pickle.load(f)
-
- def drugs_decoder_load(self, dictionary_name):
- with open("data/decoders/" + dictionary_name +'.pkl', 'rb') as f:
- return pickle.load(f)
-
- def matrices2mol(self, node_labels, edge_labels, strict=True, file_name=None):
- mol = Chem.RWMol()
- RDLogger.DisableLog('rdApp.*')
- atom_decoders = self.decoder_load("atom", file_name)
- bond_decoders = self.decoder_load("bond", file_name)
-
- for node_label in node_labels:
- mol.AddAtom(Chem.Atom(atom_decoders[node_label]))
-
- for start, end in zip(*np.nonzero(edge_labels)):
- if start > end:
- mol.AddBond(int(start), int(end), bond_decoders[edge_labels[start, end]])
- #mol = self.correct_mol(mol)
- if strict:
- try:
-
- Chem.SanitizeMol(mol)
- except:
- mol = None
-
- return mol
-
- def drug_decoder_load(self, dictionary_name, file):
-
- ''' Loading the atom and bond decoders '''
-
- with open("data/decoders/" + dictionary_name +"_" + file +'.pkl', 'rb') as f:
-
- return pickle.load(f)
- def matrices2mol_drugs(self, node_labels, edge_labels, strict=True, file_name=None):
- mol = Chem.RWMol()
- RDLogger.DisableLog('rdApp.*')
- atom_decoders = self.drug_decoder_load("atom", file_name)
- bond_decoders = self.drug_decoder_load("bond", file_name)
-
- for node_label in node_labels:
-
- mol.AddAtom(Chem.Atom(atom_decoders[node_label]))
-
- for start, end in zip(*np.nonzero(edge_labels)):
- if start > end:
- mol.AddBond(int(start), int(end), bond_decoders[edge_labels[start, end]])
- #mol = self.correct_mol(mol)
- if strict:
- try:
- Chem.SanitizeMol(mol)
- except:
- mol = None
-
- return mol
- def check_valency(self,mol):
- """
- Checks that no atoms in the mol have exceeded their possible
- valency
- :return: True if no valency issues, False otherwise
- """
- try:
- Chem.SanitizeMol(mol, sanitizeOps=Chem.SanitizeFlags.SANITIZE_PROPERTIES)
- return True, None
- except ValueError as e:
- e = str(e)
- p = e.find('#')
- e_sub = e[p:]
- atomid_valence = list(map(int, re.findall(r'\d+', e_sub)))
- return False, atomid_valence
-
-
- def correct_mol(self,x):
- xsm = Chem.MolToSmiles(x, isomericSmiles=True)
- mol = x
- while True:
- flag, atomid_valence = self.check_valency(mol)
- if flag:
- break
- else:
- assert len (atomid_valence) == 2
- idx = atomid_valence[0]
- v = atomid_valence[1]
- queue = []
- for b in mol.GetAtomWithIdx(idx).GetBonds():
- queue.append(
- (b.GetIdx(), int(b.GetBondType()), b.GetBeginAtomIdx(), b.GetEndAtomIdx())
- )
- queue.sort(key=lambda tup: tup[1], reverse=True)
- if len(queue) > 0:
- start = queue[0][2]
- end = queue[0][3]
- t = queue[0][1] - 1
- mol.RemoveBond(start, end)
-
- #if t >= 1:
-
- #mol.AddBond(start, end, self.decoder_load('bond_decoders')[t])
- # if '.' in Chem.MolToSmiles(mol, isomericSmiles=True):
- # mol.AddBond(start, end, self.decoder_load('bond_decoders')[t])
- # print(tt)
- # print(Chem.MolToSmiles(mol, isomericSmiles=True))
-
- return mol
-
-
-
- def label2onehot(self, labels, dim):
-
- """Convert label indices to one-hot vectors."""
-
- out = torch.zeros(list(labels.size())+[dim])
- out.scatter_(len(out.size())-1,labels.unsqueeze(-1),1.)
-
- return out.float()
-
- def process(self, size= None):
-
- mols = [Chem.MolFromSmiles(line) for line in open(self.raw_files, 'r').readlines()]
-
- mols = list(filter(lambda x: x is not None and x.GetNumAtoms() <= self.max_atom, mols)) # skip SMILES that RDKit failed to parse
- mols = mols[:size]
- indices = range(len(mols))
-
- self._generate_encoders_decoders(mols)
-
-
-
- pbar = tqdm(total=len(indices))
- pbar.set_description(f'Processing chembl dataset')
- max_length = max(mol.GetNumAtoms() for mol in mols)
- data_list = []
-
- self.m_dim = len(self.atom_decoder_m)
- for idx in indices:
- mol = mols[idx]
- A = self._genA(mol, connected=True, max_length=max_length)
- if A is not None:
-
-
- x = torch.from_numpy(self._genX(mol, max_length=max_length)).to(torch.long).view(1, -1)
-
- x = self.label2onehot(x,self.m_dim).squeeze()
- if self.features:
- f = torch.from_numpy(self._genF(mol, max_length=max_length)).to(torch.long).view(x.shape[0], -1)
- x = torch.concat((x,f), dim=-1)
-
- adjacency = torch.from_numpy(A)
-
- edge_index = adjacency.nonzero(as_tuple=False).t().contiguous()
- edge_attr = adjacency[edge_index[0], edge_index[1]].to(torch.long)
-
- data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
-
- if self.pre_filter is not None and not self.pre_filter(data):
- continue
-
- if self.pre_transform is not None:
- data = self.pre_transform(data)
-
- data_list.append(data)
- pbar.update(1)
-
- pbar.close()
-
- torch.save(self.collate(data_list), osp.join(self.processed_dir, self.dataset_file))
-
-
-
-
-if __name__ == '__main__':
- data = DruggenDataset("data")
-
\ No newline at end of file
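As context for the graph-to-molecule decoding that `matrices2mol` and `correct_mol` perform above, here is a minimal standalone sketch of the same decode step. The decoder dictionaries and the example matrices are illustrative placeholders; the dataset class actually loads its decoders from the pickled files under `data/decoders/`.

```python
# Minimal sketch of decoding (node_labels, edge_labels) matrices into an RDKit Mol,
# mirroring matrices2mol above. The decoder dicts are hypothetical stand-ins for
# the pickled decoders the dataset class loads.
import numpy as np
from rdkit import Chem

atom_decoder = {1: 6, 2: 7, 3: 8}                        # label -> atomic number (C, N, O)
bond_decoder = {1: Chem.BondType.SINGLE, 2: Chem.BondType.DOUBLE}

def decode(node_labels, edge_labels, strict=True):
    mol = Chem.RWMol()
    for label in node_labels:
        mol.AddAtom(Chem.Atom(atom_decoder[label]))
    for start, end in zip(*np.nonzero(edge_labels)):
        if start > end:                                   # adjacency is symmetric; add each bond once
            mol.AddBond(int(start), int(end), bond_decoder[int(edge_labels[start, end])])
    if strict:
        try:
            Chem.SanitizeMol(mol)
        except Exception:                                 # sanitization failed, e.g. a valence error
            return None
    return mol

# Example: a C-C-O fragment should decode to ethanol ("CCO")
nodes = [1, 1, 3]
edges = np.zeros((3, 3), dtype=int)
edges[0, 1] = edges[1, 0] = 1
edges[1, 2] = edges[2, 1] = 1
print(Chem.MolToSmiles(decode(nodes, edges)))
```

The `start > end` guard matters because `np.nonzero` on a symmetric adjacency matrix reports every edge twice.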
diff --git a/spaces/Hallucinate/demo/taming/modules/losses/segmentation.py b/spaces/Hallucinate/demo/taming/modules/losses/segmentation.py
deleted file mode 100644
index 4ba77deb5159a6307ed2acba9945e4764a4ff0a5..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/taming/modules/losses/segmentation.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class BCELoss(nn.Module):
- def forward(self, prediction, target):
- loss = F.binary_cross_entropy_with_logits(prediction,target)
- return loss, {}
-
-
-class BCELossWithQuant(nn.Module):
- def __init__(self, codebook_weight=1.):
- super().__init__()
- self.codebook_weight = codebook_weight
-
- def forward(self, qloss, target, prediction, split):
- bce_loss = F.binary_cross_entropy_with_logits(prediction,target)
- loss = bce_loss + self.codebook_weight*qloss
- return loss, {"{}/total_loss".format(split): loss.clone().detach().mean(),
- "{}/bce_loss".format(split): bce_loss.detach().mean(),
- "{}/quant_loss".format(split): qloss.detach().mean()
- }
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/tokenization_megatron_t5.py b/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/tokenization_megatron_t5.py
deleted file mode 100644
index d96b7e1743ae8c7ecb4aa3871907a9dc070cf74b..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/tokenization_megatron_t5.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" T5Tokenizer """
-
-from transformers import BertTokenizer
-
-
-class T5Tokenizer():
- def __init__(self, extra_id_num=118):
- self.extra_id_num = extra_id_num
-
- @classmethod
- def from_pretrained(cls, vocab_path):
- cls.extra_id_num = 118
- cls.T5_special_tokens = ['[BOS]', '[EOS]']
- for i in range(cls.extra_id_num):
- cls.T5_special_tokens.append(f'<extra_id_{i}>')
- tokenizer = BertTokenizer.from_pretrained(vocab_path, additional_special_tokens=cls.T5_special_tokens)
-
- return tokenizer
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/commonsense_qa/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/commonsense_qa/__init__.py
deleted file mode 100644
index 42d21f35eb3dd33a053dcf0edd5eadd2dff11294..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/commonsense_qa/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import commonsense_qa_task # noqa
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/bleu_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/bleu_utils.py
deleted file mode 100644
index 75cc5272d367c4f3be98d698b512a529bdb2e4f5..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/bleu_utils.py
+++ /dev/null
@@ -1,166 +0,0 @@
-"""
-
-TODO: the code is taken from the Apache-2 licensed NLTK: make sure we do this properly!
-
-
-Copied over from nltk.translate.bleu_score. This code has two major changes:
- - allows turning off the length/brevity penalty --- it makes no sense for self-BLEU,
- - allows using an arithmetic instead of a geometric mean over the n-gram precisions
-"""
-
-import math
-import sys
-from fractions import Fraction
-import warnings
-from collections import Counter
-from nltk.translate.bleu_score import modified_precision, closest_ref_length, brevity_penalty, SmoothingFunction
-
-
-def corpus_bleu(
- list_of_references,
- hypotheses,
- weights=(0.25, 0.25, 0.25, 0.25),
- smoothing_function=None,
- auto_reweigh=False,
- averaging_mode="geometric",
- no_length_penalty=False
-):
- """
- Calculate a single corpus-level BLEU score (aka. system-level BLEU) for all
- the hypotheses and their respective references.
-
- Instead of averaging the sentence level BLEU scores (i.e. macro-average
- precision), the original BLEU metric (Papineni et al. 2002) accounts for
- the micro-average precision (i.e. summing the numerators and denominators
- for each hypothesis-reference(s) pairs before the division).
-
- >>> hyp1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which',
- ... 'ensures', 'that', 'the', 'military', 'always',
- ... 'obeys', 'the', 'commands', 'of', 'the', 'party']
- >>> ref1a = ['It', 'is', 'a', 'guide', 'to', 'action', 'that',
- ... 'ensures', 'that', 'the', 'military', 'will', 'forever',
- ... 'heed', 'Party', 'commands']
- >>> ref1b = ['It', 'is', 'the', 'guiding', 'principle', 'which',
- ... 'guarantees', 'the', 'military', 'forces', 'always',
- ... 'being', 'under', 'the', 'command', 'of', 'the', 'Party']
- >>> ref1c = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the',
- ... 'army', 'always', 'to', 'heed', 'the', 'directions',
- ... 'of', 'the', 'party']
-
- >>> hyp2 = ['he', 'read', 'the', 'book', 'because', 'he', 'was',
- ... 'interested', 'in', 'world', 'history']
- >>> ref2a = ['he', 'was', 'interested', 'in', 'world', 'history',
- ... 'because', 'he', 'read', 'the', 'book']
-
- >>> list_of_references = [[ref1a, ref1b, ref1c], [ref2a]]
- >>> hypotheses = [hyp1, hyp2]
- >>> corpus_bleu(list_of_references, hypotheses) # doctest: +ELLIPSIS
- 0.5920...
-
- The example below shows that corpus_bleu() is different from averaging
- sentence_bleu() over the hypotheses
-
- >>> score1 = sentence_bleu([ref1a, ref1b, ref1c], hyp1)
- >>> score2 = sentence_bleu([ref2a], hyp2)
- >>> (score1 + score2) / 2 # doctest: +ELLIPSIS
- 0.6223...
-
- :param list_of_references: a corpus of lists of reference sentences, w.r.t. hypotheses
- :type list_of_references: list(list(list(str)))
- :param hypotheses: a list of hypothesis sentences
- :type hypotheses: list(list(str))
- :param weights: weights for unigrams, bigrams, trigrams and so on
- :type weights: list(float)
- :param smoothing_function:
- :type smoothing_function: SmoothingFunction
- :param auto_reweigh: Option to re-normalize the weights uniformly.
- :type auto_reweigh: bool
- :return: The corpus-level BLEU score.
- :rtype: float
- """
- # Before proceeding to compute BLEU, perform sanity checks.
-
- p_numerators = Counter() # Key = ngram order, and value = no. of ngram matches.
- p_denominators = Counter() # Key = ngram order, and value = no. of ngram in ref.
- hyp_lengths, ref_lengths = 0, 0
-
- assert len(list_of_references) == len(hypotheses), (
- "The number of hypotheses and their reference(s) should be the " "same "
- )
-
- # Iterate through each hypothesis and their corresponding references.
- for references, hypothesis in zip(list_of_references, hypotheses):
- # For each order of ngram, calculate the numerator and
- # denominator for the corpus-level modified precision.
- for i, _ in enumerate(weights, start=1):
- p_i = modified_precision(references, hypothesis, i)
- p_numerators[i] += p_i.numerator
- p_denominators[i] += p_i.denominator
-
- # Calculate the hypothesis length and the closest reference length.
- # Adds them to the corpus-level hypothesis and reference counts.
- hyp_len = len(hypothesis)
- hyp_lengths += hyp_len
- ref_lengths += closest_ref_length(references, hyp_len)
-
- # Calculate corpus-level brevity penalty.
- if no_length_penalty and averaging_mode == 'geometric':
- bp = 1.0
- elif no_length_penalty and averaging_mode == 'arithmetic':
- bp = 0.0
- else:
- assert not no_length_penalty
- assert averaging_mode != 'arithmetic', 'Not sure how to apply the length penalty in arithmetic mode'
- bp = brevity_penalty(ref_lengths, hyp_lengths)
-
- # Uniformly re-weighting based on maximum hypothesis lengths if largest
- # order of n-grams < 4 and weights is set at default.
- if auto_reweigh:
- if hyp_lengths < 4 and weights == (0.25, 0.25, 0.25, 0.25):
- weights = (1 / hyp_lengths,) * hyp_lengths
-
- # Collects the various precision values for the different ngram orders.
- p_n = [
- Fraction(p_numerators[i], p_denominators[i], _normalize=False)
- for i, _ in enumerate(weights, start=1)
- ]
-
- # Returns 0 if there's no matching n-grams
- # We only need to check for p_numerators[1] == 0, since if there's
- # no unigrams, there won't be any higher order ngrams.
- if p_numerators[1] == 0:
- return 0
-
- # If there's no smoothing, use method0 from the SmoothingFunction class.
- if not smoothing_function:
- smoothing_function = SmoothingFunction().method0
- # Smoothen the modified precision.
- # Note: smoothing_function() may convert values into floats;
- # it tries to retain the Fraction object as much as the
- # smoothing method allows.
- p_n = smoothing_function(
- p_n, references=references, hypothesis=hypothesis, hyp_len=hyp_lengths
- )
-
- if averaging_mode == "geometric":
- s = (w_i * math.log(p_i) for w_i, p_i in zip(weights, p_n))
- s = bp * math.exp(math.fsum(s))
- elif averaging_mode == "arithmetic":
- s = (w_i * p_i for w_i, p_i in zip(weights, p_n))
- s = math.fsum(s)
-
- return s
-
-
-def sentence_bleu(
- references,
- hypothesis,
- weights=(0.25, 0.25, 0.25, 0.25),
- smoothing_function=None,
- auto_reweigh=False,
- averaging_mode="geometric",
- no_length_penalty=False
-):
- return corpus_bleu(
- [references], [hypothesis], weights, smoothing_function, auto_reweigh, averaging_mode, no_length_penalty
- )
\ No newline at end of file
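Assuming the module above is importable (here as `bleu_utils`; the import path and the toy token lists are illustrative), a minimal sketch of how the two added switches might be used for self-BLEU, where each sentence is scored against the rest of the corpus:

```python
# Hypothetical usage sketch of the modified corpus_bleu above for self-BLEU:
# each hypothesis is scored against the other sentences in the same corpus,
# with the brevity penalty disabled and an arithmetic mean over n-gram orders.
from bleu_utils import corpus_bleu  # assumed import path

corpus = [
    "the cat sat on the mat".split(),
    "a dog sat on the mat".split(),
    "the bird flew over the mat".split(),
]

list_of_references = []
hypotheses = []
for i, sent in enumerate(corpus):
    hypotheses.append(sent)
    list_of_references.append([s for j, s in enumerate(corpus) if j != i])

score = corpus_bleu(
    list_of_references,
    hypotheses,
    weights=(0.25, 0.25, 0.25, 0.25),
    no_length_penalty=True,        # length penalty is meaningless for self-BLEU
    averaging_mode="arithmetic",   # arithmetic instead of geometric mean
)
print(f"self-BLEU: {score:.4f}")
```

With the geometric mean, a single zero higher-order precision would zero out the score; the arithmetic mode avoids that, and the brevity penalty is dropped entirely.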
diff --git a/spaces/Hellisotherpeople/DebateKG/README.md b/spaces/Hellisotherpeople/DebateKG/README.md
deleted file mode 100644
index d699d36181e20df197d3207e1cff53036766904e..0000000000000000000000000000000000000000
--- a/spaces/Hellisotherpeople/DebateKG/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DebateKG
-emoji: 💬📊
-colorFrom: indigo
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Hoodady/3DFuse/voxnerf/README.md b/spaces/Hoodady/3DFuse/voxnerf/README.md
deleted file mode 100644
index f4e4d256e5b72615f5c7ca25cf4c66980ea093df..0000000000000000000000000000000000000000
--- a/spaces/Hoodady/3DFuse/voxnerf/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-This is a custom implementation of a voxel radiance field. The codebase
-is adapted from TensoRF, but with fairly heavy changes; for simplicity we do not use tensor factorization.
-It achieves performance comparable to vanilla NeRF in the absence of view dependencies.
diff --git a/spaces/Ibtehaj10/cheating-detection/dwell_time_calculation.py b/spaces/Ibtehaj10/cheating-detection/dwell_time_calculation.py
deleted file mode 100644
index 1e3f90e092b4510522a167a880e269c3295284e6..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection/dwell_time_calculation.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import cv2
-import datetime
-import imutils
-import numpy as np
-from centroidtracker import CentroidTracker
-
-protopath = "MobileNetSSD_deploy.prototxt"
-modelpath = "MobileNetSSD_deploy.caffemodel"
-detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath)
-# Only enable it if you are using OpenVino environment
-# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
-# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
-
-
-CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
- "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
- "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
- "sofa", "train", "tvmonitor"]
-
-tracker = CentroidTracker(maxDisappeared=80, maxDistance=90)
-
-
-def non_max_suppression_fast(boxes, overlapThresh):
- try:
- if len(boxes) == 0:
- return []
-
- if boxes.dtype.kind == "i":
- boxes = boxes.astype("float")
-
- pick = []
-
- x1 = boxes[:, 0]
- y1 = boxes[:, 1]
- x2 = boxes[:, 2]
- y2 = boxes[:, 3]
-
- area = (x2 - x1 + 1) * (y2 - y1 + 1)
- idxs = np.argsort(y2)
-
- while len(idxs) > 0:
- last = len(idxs) - 1
- i = idxs[last]
- pick.append(i)
-
- xx1 = np.maximum(x1[i], x1[idxs[:last]])
- yy1 = np.maximum(y1[i], y1[idxs[:last]])
- xx2 = np.minimum(x2[i], x2[idxs[:last]])
- yy2 = np.minimum(y2[i], y2[idxs[:last]])
-
- w = np.maximum(0, xx2 - xx1 + 1)
- h = np.maximum(0, yy2 - yy1 + 1)
-
- overlap = (w * h) / area[idxs[:last]]
-
- idxs = np.delete(idxs, np.concatenate(([last],
- np.where(overlap > overlapThresh)[0])))
-
- return boxes[pick].astype("int")
- except Exception as e:
- print("Exception occurred in non_max_suppression : {}".format(e))
-
-
-def main():
- cap = cv2.VideoCapture('test_video.mp4')
-
- fps_start_time = datetime.datetime.now()
- fps = 0
- total_frames = 0
-
- object_id_list = []
- dtime = dict()
- dwell_time = dict()
-
- while True:
- ret, frame = cap.read()
- if not ret: # stop when the video ends or a frame cannot be read
- break
- frame = imutils.resize(frame, width=600)
- total_frames = total_frames + 1
-
- (H, W) = frame.shape[:2]
-
- blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5)
-
- detector.setInput(blob)
- person_detections = detector.forward()
- rects = []
- for i in np.arange(0, person_detections.shape[2]):
- confidence = person_detections[0, 0, i, 2]
- if confidence > 0.5:
- idx = int(person_detections[0, 0, i, 1])
-
- if CLASSES[idx] != "person":
- continue
-
- person_box = person_detections[0, 0, i, 3:7] * np.array([W, H, W, H])
- (startX, startY, endX, endY) = person_box.astype("int")
- rects.append(person_box)
-
- boundingboxes = np.array(rects)
- boundingboxes = boundingboxes.astype(int)
- rects = non_max_suppression_fast(boundingboxes, 0.3)
-
- objects = tracker.update(rects)
- for (objectId, bbox) in objects.items():
- x1, y1, x2, y2 = bbox
- x1 = int(x1)
- y1 = int(y1)
- x2 = int(x2)
- y2 = int(y2)
-
- if objectId not in object_id_list:
- object_id_list.append(objectId)
- dtime[objectId] = datetime.datetime.now()
- dwell_time[objectId] = 0
- else:
- curr_time = datetime.datetime.now()
- old_time = dtime[objectId]
- time_diff = curr_time - old_time
- dtime[objectId] = datetime.datetime.now()
- sec = time_diff.total_seconds()
- dwell_time[objectId] += sec
-
-
- cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
- text = "{}|{}".format(objectId, int(dwell_time[objectId]))
- cv2.putText(frame, text, (x1, y1-5), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
-
- fps_end_time = datetime.datetime.now()
- time_diff = fps_end_time - fps_start_time
- if time_diff.seconds == 0:
- fps = 0.0
- else:
- fps = (total_frames / time_diff.seconds)
-
- fps_text = "FPS: {:.2f}".format(fps)
-
- cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
-
- cv2.imshow("Application", frame)
- key = cv2.waitKey(1)
- if key == ord('q'):
- break
-
- cv2.destroyAllWindows()
-
-
-main()
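The per-object dwell-time bookkeeping in the tracking loop above is independent of the detector and tracker. A minimal sketch of the same pattern, with synthetic object IDs and timings:

```python
# Minimal sketch of the dwell-time accumulation used above: for every tracked
# object ID we remember the last time it was seen and add the elapsed seconds
# on each new observation. The IDs and sleep intervals here are synthetic.
import datetime
import time

dtime = {}       # objectId -> last time the object was seen
dwell_time = {}  # objectId -> accumulated seconds in view

def observe(object_id):
    now = datetime.datetime.now()
    if object_id not in dtime:
        dtime[object_id] = now
        dwell_time[object_id] = 0.0
    else:
        dwell_time[object_id] += (now - dtime[object_id]).total_seconds()
        dtime[object_id] = now

for _ in range(3):          # simulate three frames with the same person visible
    observe(1)
    time.sleep(0.1)

print(f"object 1 dwell time: {dwell_time[1]:.2f}s")   # roughly 0.2s
```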
diff --git a/spaces/Intel/Q8-Chat/app.py b/spaces/Intel/Q8-Chat/app.py
deleted file mode 100644
index 3dc3d14a1130fedbd19e261a929a5784f7233f36..0000000000000000000000000000000000000000
--- a/spaces/Intel/Q8-Chat/app.py
+++ /dev/null
@@ -1,453 +0,0 @@
-"""
-The gradio demo server for chatting with a single model.
-"""
-
-import datetime
-import json
-import os
-import time
-import uuid
-import logging
-
-import gradio as gr
-import requests
-
-from conversation import get_conv_template
-from gradio_patch import Chatbot as grChatbot
-from gradio_css import code_highlight_css
-from utils import (
- WORKER_API_TIMEOUT,
- ErrorCode,
- server_error_msg,
- get_window_url_params_js,
-)
-
-
-logging.basicConfig(
- format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
-logger = logging.getLogger(__name__)
-logger.setLevel(logging.INFO)
-
-
-headers = {"User-Agent": "fastchat Client"}
-
-no_change_btn = gr.Button.update()
-enable_btn = gr.Button.update(interactive=True)
-disable_btn = gr.Button.update(interactive=False)
-
-controller_url = os.environ['controller_url']
-concurrency_count = int(os.environ['concurrency_count'])
-
-learn_more_md = ("""
-### Notice
-- All the models in this demo run on 4th Generation Intel® Xeon® (Sapphire Rapids) utilizing AMX operations and mixed precision inference
-- This demo is based on the FastChat demo server. [[GitHub]](https://github.com/lm-sys/FastChat)
-
-### Terms of use
-By using this service, users are required to agree to the following terms: The service is a research preview intended for non-commercial use only. It can produce factually incorrect output, and should not be relied on to produce factually accurate information. The service only provides limited safety measures and may generate lewd, biased or otherwise offensive content. It must not be used for any illegal, harmful, violent, racist, or sexual purposes. The service may collect user dialogue data for future research.
-
-### License
-The service is a research preview intended for non-commercial use only, subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
-""")
-
-
-def get_model_list(controller_url):
- ret = requests.post(controller_url + "/refresh_all_workers")
- assert ret.status_code == 200
- ret = requests.post(controller_url + "/list_models")
- models = ret.json()["models"]
- models.sort()
- logger.info(f"Models: {models}")
- return models
-
-
-def load_demo_refresh_model_list(url_params):
- models = get_model_list(controller_url)
- selected_model = models[0] if len(models) > 0 else ""
- if "model" in url_params:
- model = url_params["model"]
- if model in models:
- selected_model = model
-
- dropdown_update = gr.Dropdown.update(
- choices=models, value=selected_model, visible=True
- )
-
- state = None
- return (
- state,
- dropdown_update,
- gr.Chatbot.update(visible=True),
- gr.Textbox.update(visible=True),
- gr.Button.update(visible=True),
- gr.Row.update(visible=True),
- gr.Accordion.update(visible=True),
- )
-
-
-def load_demo_reload_model(url_params, request: gr.Request):
- logger.info(
- f"load_demo_reload_model. ip: {request.client.host}. params: {url_params}"
- )
- return load_demo_refresh_model_list(url_params)
-
-
-def load_demo_single(models, url_params):
- dropdown_update = gr.Dropdown.update(visible=True)
- if "model" in url_params:
- model = url_params["model"]
- if model in models:
- dropdown_update = gr.Dropdown.update(value=model, visible=True)
-
- state = None
- return (
- state,
- dropdown_update,
- gr.Chatbot.update(visible=True),
- gr.Textbox.update(visible=True),
- gr.Button.update(visible=True),
- gr.Row.update(visible=True),
- gr.Accordion.update(visible=True),
- )
-
-
-def load_demo(url_params, request: gr.Request):
- logger.info(f"load_demo. ip: {request.client.host}. params: {url_params}")
- return load_demo_single(models, url_params)
-
-
-def regenerate(state, request: gr.Request):
- logger.info(f"regenerate. ip: {request.client.host}")
- state.messages[-1][-1] = None
- state.skip_next = False
- return (state, state.to_gradio_chatbot(), "") + (disable_btn,) * 5
-
-
-def clear_history(request: gr.Request):
- logger.info(f"clear_history. ip: {request.client.host}")
- state = None
- return (state, [], "") + (disable_btn,) * 5
-
-
-def add_text(state, text, request: gr.Request):
- logger.info(f"add_text. ip: {request.client.host}. len: {len(text)}")
-
- if state is None:
- state = get_conv_template("vicuna_v1.1")
-
- if len(text) <= 0:
- state.skip_next = True
- return (state, state.to_gradio_chatbot(), "") + (no_change_btn,) * 5
-
- text = text[:1536] # Hard cut-off
- state.append_message(state.roles[0], text)
- state.append_message(state.roles[1], None)
- state.skip_next = False
- return (state, state.to_gradio_chatbot(), "") + (disable_btn,) * 5
-
-
-def post_process_code(code):
- sep = "\n```"
- if sep in code:
- blocks = code.split(sep)
- if len(blocks) % 2 == 1:
- for i in range(1, len(blocks), 2):
- blocks[i] = blocks[i].replace("\\_", "_")
- code = sep.join(blocks)
- return code
-
-
-def model_worker_stream_iter(
- conv, model_name, worker_addr, prompt, temperature, top_p, max_new_tokens
-):
- # Make requests
- gen_params = {
- "model": model_name,
- "prompt": prompt,
- "temperature": temperature,
- "top_p": top_p,
- "max_new_tokens": max_new_tokens,
- "stop": conv.stop_str,
- "stop_token_ids": conv.stop_token_ids,
- "echo": False,
- }
- logger.info(f"==== request ====\n{gen_params}")
-
- # Stream output
- response = requests.post(
- worker_addr + "/worker_generate_stream",
- headers=headers,
- json=gen_params,
- stream=True,
- timeout=WORKER_API_TIMEOUT,
- )
- for chunk in response.iter_lines(decode_unicode=False, delimiter=b"\0"):
- if chunk:
- data = json.loads(chunk.decode())
- yield data
-
-
-def http_bot(
- state, model_selector, temperature, top_p, max_new_tokens, request: gr.Request
-):
- logger.info(f"http_bot. ip: {request.client.host}")
- start_tstamp = time.time()
- model_name = model_selector
- temperature = float(temperature)
- top_p = float(top_p)
- max_new_tokens = int(max_new_tokens)
-
- if state.skip_next:
- # This generate call is skipped due to invalid inputs
- yield (state, state.to_gradio_chatbot()) + (no_change_btn,) * 5
- return
-
- if len(state.messages) == state.offset + 2:
- # First round of conversation
- new_state = get_conv_template(model_name.lower())
- new_state.conv_id = uuid.uuid4().hex
- new_state.model_name = state.model_name or model_selector
- new_state.append_message(new_state.roles[0], state.messages[-2][1])
- new_state.append_message(new_state.roles[1], None)
- state = new_state
-
- # Construct prompt
- conv = state
- if "chatglm" in model_name:
- prompt = list(list(x) for x in conv.messages[conv.offset :])
- else:
- prompt = conv.get_prompt()
- stream_iter = model_worker_stream_iter(
- conv, model_name, controller_url, prompt, temperature, top_p, max_new_tokens
- )
-
- state.messages[-1][-1] = "▌"
- yield (state, state.to_gradio_chatbot()) + (disable_btn,) * 5
-
- try:
- for data in stream_iter:
- if data["error_code"] == 0:
- output = data["text"].strip()
- if "vicuna" in model_name:
- output = post_process_code(output)
- state.messages[-1][-1] = output + "▌"
- yield (state, state.to_gradio_chatbot()) + (disable_btn,) * 5
- else:
- output = data["text"] + f"\n\n(error_code: {data['error_code']})"
- state.messages[-1][-1] = output
- yield (state, state.to_gradio_chatbot()) + (
- disable_btn,
- disable_btn,
- disable_btn,
- enable_btn,
- enable_btn,
- )
- return
- time.sleep(0.02)
- except requests.exceptions.RequestException as e:
- state.messages[-1][-1] = (
- f"{server_error_msg}\n\n"
- f"(error_code: {ErrorCode.GRADIO_REQUEST_ERROR}, {e})"
- )
- yield (state, state.to_gradio_chatbot()) + (
- disable_btn,
- disable_btn,
- disable_btn,
- enable_btn,
- enable_btn,
- )
- return
- except Exception as e:
- state.messages[-1][-1] = (
- f"{server_error_msg}\n\n"
- f"(error_code: {ErrorCode.GRADIO_STREAM_UNKNOWN_ERROR}, {e})"
- )
- yield (state, state.to_gradio_chatbot()) + (
- disable_btn,
- disable_btn,
- disable_btn,
- enable_btn,
- enable_btn,
- )
- return
-
- state.messages[-1][-1] = state.messages[-1][-1][:-1]
- yield (state, state.to_gradio_chatbot()) + (enable_btn,) * 5
-
- finish_tstamp = time.time()
- logger.info(f"{output}")
-
- # TODO
- # with open(get_conv_log_filename(), "a") as fout:
- # data = {
- # "tstamp": round(finish_tstamp, 4),
- # "type": "chat",
- # "model": model_name,
- # "gen_params": {
- # "temperature": temperature,
- # "top_p": top_p,
- # "max_new_tokens": max_new_tokens,
- # },
- # "start": round(start_tstamp, 4),
- # "finish": round(start_tstamp, 4),
- # "state": state.dict(),
- # "ip": request.client.host,
- # }
- # fout.write(json.dumps(data) + "\n")
-
-
-block_css = (
- code_highlight_css
- + """
-pre {
- white-space: pre-wrap; /* Since CSS 2.1 */
- white-space: -moz-pre-wrap; /* Mozilla, since 1999 */
- white-space: -pre-wrap; /* Opera 4-6 */
- white-space: -o-pre-wrap; /* Opera 7 */
- word-wrap: break-word; /* Internet Explorer 5.5+ */
-}
-#notice_markdown th {
- display: none;
-}
-"""
-)
-
-
-def build_single_model_ui(models):
- notice_markdown = ("""
-# Chat with Intel Labs optimized Large Language Models
-
-### Choose a model to chat with
-""")
-
- state = gr.State()
- gr.Markdown(notice_markdown, elem_id="notice_markdown")
-
- with gr.Row(elem_id="model_selector_row"):
- model_selector = gr.Dropdown(
- choices=models,
- value=models[0] if len(models) > 0 else "",
- interactive=True,
- show_label=False,
- ).style(container=False)
-
- chatbot = grChatbot(
- elem_id="chatbot", label="Scroll down and start chatting", visible=False,
- ).style(height=550)
- with gr.Row():
- with gr.Column(scale=20):
- textbox = gr.Textbox(
- show_label=False,
- placeholder="Type your message...",
- visible=False,
- ).style(container=False)
- with gr.Column(scale=1, min_width=50):
- send_btn = gr.Button(value="Send", visible=False)
-
- with gr.Row(visible=False) as button_row:
- regenerate_btn = gr.Button(value="Regenerate", interactive=False)
- clear_btn = gr.Button(value="Clear history", interactive=False)
-
- with gr.Accordion("Parameters", open=False, visible=False) as parameter_row:
- temperature = gr.Slider(
- minimum=0.0,
- maximum=1.0,
- value=0.1,
- step=0.1,
- interactive=True,
- label="Temperature",
- )
- top_p = gr.Slider(
- minimum=0.0,
- maximum=1.0,
- value=1.0,
- step=0.1,
- interactive=True,
- label="Top P",
- )
- max_output_tokens = gr.Slider(
- minimum=0,
- maximum=1024,
- value=512,
- step=64,
- interactive=True,
- label="Max output tokens",
- )
-
- gr.Markdown(learn_more_md)
-
- btn_list = [regenerate_btn, clear_btn]
- regenerate_btn.click(regenerate, state, [state, chatbot, textbox] + btn_list).then(
- http_bot,
- [state, model_selector, temperature, top_p, max_output_tokens],
- [state, chatbot] + btn_list,
- )
- clear_btn.click(clear_history, None, [state, chatbot, textbox] + btn_list)
-
- model_selector.change(clear_history, None, [state, chatbot, textbox] + btn_list)
-
- textbox.submit(
- add_text, [state, textbox], [state, chatbot, textbox] + btn_list
- ).then(
- http_bot,
- [state, model_selector, temperature, top_p, max_output_tokens],
- [state, chatbot] + btn_list,
- )
- send_btn.click(
- add_text, [state, textbox], [state, chatbot, textbox] + btn_list
- ).then(
- http_bot,
- [state, model_selector, temperature, top_p, max_output_tokens],
- [state, chatbot] + btn_list,
- )
-
- return state, model_selector, chatbot, textbox, send_btn, button_row, parameter_row
-
-
-def build_demo(models):
- with gr.Blocks(
- title="Chat with Open Large Language Models",
- theme=gr.themes.Soft(),
- css=block_css,
- ) as demo:
- url_params = gr.JSON(visible=False)
-
- with gr.Row():
- gr.Column(scale=1, min_width=0)
- with gr.Column(scale=9):
- (
- state,
- model_selector,
- chatbot,
- textbox,
- send_btn,
- button_row,
- parameter_row,
- ) = build_single_model_ui(models)
- gr.Column(scale=1, min_width=0)
-
- demo.load(
- load_demo_reload_model,
- [url_params],
- [
- state,
- model_selector,
- chatbot,
- textbox,
- send_btn,
- button_row,
- parameter_row,
- ],
- _js=get_window_url_params_js,
- )
-
- return demo
-
-
-if __name__ == "__main__":
- models = get_model_list(controller_url)
-
- demo = build_demo(models)
- demo.queue(
- concurrency_count=concurrency_count, status_update_rate=10, api_open=False
- ).launch()
diff --git a/spaces/JDWebProgrammer/space-weather/app.py b/spaces/JDWebProgrammer/space-weather/app.py
deleted file mode 100644
index 75f57803526be37bdb7c6e2d25c6588494914e90..0000000000000000000000000000000000000000
--- a/spaces/JDWebProgrammer/space-weather/app.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import gradio as gr
-from gradio.components import Markdown, Textbox, Button
-import matplotlib.pyplot as plt
-import numpy as np
-from sklearn.model_selection import train_test_split
-from sklearn.linear_model import LinearRegression
-from sklearn.preprocessing import PolynomialFeatures
-from sklearn.svm import SVR
-from sklearn.pipeline import make_pipeline
-from sunpy.net import Fido
-from sunpy.net import attrs as a
-from sunpy.timeseries import TimeSeries
-
-
-
-
-def process_data():
- # Define the time range for data retrieval
- tstart = "2015-06-21 01:00"
- tend = "2015-06-21 23:00"
-
- # Query and fetch GOES XRS data
- result_goes15 = Fido.search(a.Time(tstart, tend), a.Instrument("XRS"), a.goes.SatelliteNumber(15), a.Resolution("flx1s"))
- files = Fido.fetch(result_goes15)
-
- # Load the data into a TimeSeries
- goes_15 = TimeSeries(files, concatenate=True)
-
- # Extract X-ray flux and time data
- flux_data = goes_15.quantity("xrsb").value
- time_data = goes_15.time.datetime
-
- # Create a feature matrix with time data (as numerical values)
- X = np.array([(t - time_data[0]).total_seconds() for t in time_data]).reshape(-1, 1)
-
- # Split the data into training and testing sets
- X_train, X_test, y_train, y_test = train_test_split(X, flux_data, test_size=0.2, random_state=42)
-
- # Train a linear regression model
- linear_model = LinearRegression()
- linear_model.fit(X_train, y_train)
-
- # Train a quadratic regression model
- quadratic_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
- quadratic_model.fit(X_train, y_train)
-
- # Train a cubic regression model
- cubic_model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
- cubic_model.fit(X_train, y_train)
-
- # Train a support vector regression (SVR) model
- svr_model = SVR(kernel='linear')
- svr_model.fit(X_train, y_train)
-
- # Make predictions using all models
- y_pred_linear = linear_model.predict(X_test)
- y_pred_quadratic = quadratic_model.predict(X_test)
- y_pred_cubic = cubic_model.predict(X_test)
- y_pred_svr = svr_model.predict(X_test)
-
- # Plot the actual and predicted data from all models
- plt.figure(figsize=(12, 6))
- plt.scatter(X_test, y_test, color='blue', label='Actual Data')
- plt.plot(X_test, y_pred_linear, color='red', linewidth=2, label='Linear Prediction')
- plt.plot(X_test, y_pred_quadratic, color='green', linewidth=2, label='Quadratic Prediction')
- plt.plot(X_test, y_pred_cubic, color='orange', linewidth=2, label='Cubic Prediction')
- plt.plot(X_test, y_pred_svr, color='purple', linewidth=2, label='SVR Prediction')
-
- # Include solar flux data as an additional line in the plot
- plt.plot(X, flux_data, color='cyan', linestyle='dashed', label='Solar Flux')
-
- plt.title('GOES XRS Space Weather Forecast')
- plt.xlabel('Time (seconds since start)')
- plt.ylabel('X-ray Flux / Solar Flux')
- plt.legend()
-
- # Save the image
- plt.savefig('space_weather_forecast.png')
-
- # Display the plot
- #plt.show()
- fig = plt.figure()
-
-
-process_data()
-
-with gr.Blocks(title="Space Weather Forecast", analytics_enabled=False) as spaceml:
- gr.Markdown("# Space Weather Forecast")
- gr.Markdown("Welcome to the Space Weather Forecast!")
- with gr.Row():
- with gr.Column(scale=1):
- gradio_plot = gr.Image('space_weather_forecast.png')
-
-spaceml.queue().launch(show_api=True, share=True)
-
-
diff --git a/spaces/Joabutt/waifugeneration/app.py b/spaces/Joabutt/waifugeneration/app.py
deleted file mode 100644
index 512213c44a4691c9c56a5df800b540d4a81304eb..0000000000000000000000000000000000000000
--- a/spaces/Joabutt/waifugeneration/app.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import gradio as gr
-import torch
-from torch import autocast
-from diffusers import StableDiffusionPipeline
-
-model_id = "hakurei/waifu-diffusion"
-device = "cuda"
-
-
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision='fp16')
-pipe = pipe.to(device)
-
-
-block = gr.Blocks(css=".container { max-width: 800px; margin: auto; }")
-
-num_samples = 2
-
-def infer(prompt):
- with autocast("cuda"):
- images = pipe([prompt] * num_samples, guidance_scale=7.5)["sample"]
-
- return images
-
-
-with block as demo:
- gr.Markdown("
Waifu Diffusion
")
- gr.Markdown(
- "waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning."
- )
- with gr.Group():
- with gr.Box():
- with gr.Row().style(mobile_collapse=False, equal_height=True):
-
- text = gr.Textbox(
- label="Enter your prompt", show_label=False, max_lines=1
- ).style(
- border=(True, False, True, True),
- rounded=(True, False, False, True),
- container=False,
- )
- btn = gr.Button("Run").style(
- margin=False,
- rounded=(False, True, True, False),
- )
-
- gallery = gr.Gallery(label="Generated images", show_label=False).style(
- grid=[2], height="auto"
- )
- text.submit(infer, inputs=[text], outputs=gallery)
- btn.click(infer, inputs=[text], outputs=gallery)
-
- gr.Markdown(
- """___
-
- Created by https://huggingface.co/hakurei
-
-
"""
- )
-
-
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/shared.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/shared.py
deleted file mode 100644
index 89f0779459225957c13865ef7f7448efae6d1998..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/shared.py
+++ /dev/null
@@ -1,65 +0,0 @@
-from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST
-import os
-import queue
-import openai
-
-class State:
- interrupted = False
- multi_api_key = False
- completion_url = COMPLETION_URL
- balance_api_url = BALANCE_API_URL
- usage_api_url = USAGE_API_URL
-
- def interrupt(self):
- self.interrupted = True
-
- def recover(self):
- self.interrupted = False
-
- def set_api_host(self, api_host: str):
- api_host = api_host.rstrip("/")
- if not api_host.startswith("http"):
- api_host = f"https://{api_host}"
- if api_host.endswith("/v1"):
- api_host = api_host[:-3]
- self.completion_url = f"{api_host}/v1/chat/completions"
- self.balance_api_url = f"{api_host}/dashboard/billing/credit_grants"
- self.usage_api_url = f"{api_host}/dashboard/billing/usage"
- os.environ["OPENAI_API_BASE"] = api_host
-
- def reset_api_host(self):
- self.completion_url = COMPLETION_URL
- self.balance_api_url = BALANCE_API_URL
- self.usage_api_url = USAGE_API_URL
- os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}"
- return API_HOST
-
- def reset_all(self):
- self.interrupted = False
- self.completion_url = COMPLETION_URL
-
- def set_api_key_queue(self, api_key_list):
- self.multi_api_key = True
- self.api_key_queue = queue.Queue()
- for api_key in api_key_list:
- self.api_key_queue.put(api_key)
-
- def switching_api_key(self, func):
- if not hasattr(self, "api_key_queue"):
- return func
-
- def wrapped(*args, **kwargs):
- api_key = self.api_key_queue.get()
- args[0].api_key = api_key
- ret = func(*args, **kwargs)
- self.api_key_queue.put(api_key)
- return ret
-
- return wrapped
-
-
-state = State()
-
-modules_path = os.path.dirname(os.path.realpath(__file__))
-chuanhu_path = os.path.dirname(modules_path)
-assets_path = os.path.join(chuanhu_path, "web_assets")
\ No newline at end of file
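For reference, a small sketch of how the `switching_api_key` decorator above could be exercised. The `DummyClient` class, the key strings, and the import path are made up for illustration; the real callers live elsewhere in this repository.

```python
# Hypothetical usage sketch of State.switching_api_key: each call pops a key from
# the queue, attaches it to the calling object, runs the function, then returns
# the key to the queue, giving simple round-robin rotation over multiple keys.
from modules.shared import State   # assumed import path within this repo

class DummyClient:
    api_key = None

    def ask(self, prompt):
        return f"answered {prompt!r} with key {self.api_key}"

state = State()
state.set_api_key_queue(["sk-key-one", "sk-key-two"])

ask = state.switching_api_key(DummyClient.ask)   # wrap the plain function
client = DummyClient()
print(ask(client, "hello"))   # uses sk-key-one, then puts it back at the end of the queue
print(ask(client, "again"))   # uses sk-key-two next (FIFO rotation)
```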
diff --git a/spaces/Joom/Front-end-code-generation-from-images/classes/model/Config.py b/spaces/Joom/Front-end-code-generation-from-images/classes/model/Config.py
deleted file mode 100644
index 9610d2db7d3396ab9ce463b5b14dfaff19448ac1..0000000000000000000000000000000000000000
--- a/spaces/Joom/Front-end-code-generation-from-images/classes/model/Config.py
+++ /dev/null
@@ -1,7 +0,0 @@
-__author__ = 'Taneem Jan, taneemishere.github.io'
-
-CONTEXT_LENGTH = 48
-IMAGE_SIZE = 256
-BATCH_SIZE = 64
-EPOCHS = 10
-STEPS_PER_EPOCH = 72000
diff --git a/spaces/JustinLin610/ImageBind_zeroshot_demo/README.md b/spaces/JustinLin610/ImageBind_zeroshot_demo/README.md
deleted file mode 100644
index e5b11875a77d508855472a0bea7aed6592c81d14..0000000000000000000000000000000000000000
--- a/spaces/JustinLin610/ImageBind_zeroshot_demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ImageBind
-emoji: 🔥
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.30.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Kaori1707/Image-enhancement/app.py b/spaces/Kaori1707/Image-enhancement/app.py
deleted file mode 100644
index 73a292a41b68d477bb3a712c036f4be6f257b25f..0000000000000000000000000000000000000000
--- a/spaces/Kaori1707/Image-enhancement/app.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import gradio as gr
-import os
-import numpy as np
-import torch
-from models.network_swinir import SwinIR
-
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-print("device: %s" % device)
-default_models = {
- "sr": "weights/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth",
- "denoise": "weights/005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth"
- }
-torch.backends.cudnn.enabled = True
-torch.backends.cudnn.benchmark = True
-
-
-denoise_model = SwinIR(upscale=1, in_chans=3, img_size=128, window_size=8,
- img_range=1., depths=[6, 6, 6, 6, 6, 6], embed_dim=180, num_heads=[6, 6, 6, 6, 6, 6],
- mlp_ratio=2, upsampler='', resi_connection='1conv').to(device)
-param_key_g = 'params'
-try:
- pretrained_model = torch.load(default_models["denoise"])
- denoise_model.load_state_dict(pretrained_model[param_key_g] if param_key_g in pretrained_model.keys() else pretrained_model, strict=True)
-except: print("Loading model failed")
-denoise_model.eval()
-
-sr_model = SwinIR(upscale=4, in_chans=3, img_size=64, window_size=8,
- img_range=1., depths=[6, 6, 6, 6, 6, 6], embed_dim=180, num_heads=[6, 6, 6, 6, 6, 6],
- mlp_ratio=2, upsampler='nearest+conv', resi_connection='1conv').to(device)
-param_key_g = 'params_ema'
-try:
- pretrained_model = torch.load(default_models["sr"])
- sr_model.load_state_dict(pretrained_model[param_key_g] if param_key_g in pretrained_model.keys() else pretrained_model, strict=True)
-except: print("Loading model failed")
-sr_model.eval()
-
-
-def sr(input_img):
-
- window_size = 8
- # read image
- img_lq = input_img.astype(np.float32) / 255.
- img_lq = np.transpose(img_lq if img_lq.shape[2] == 1 else img_lq[:, :, [2, 1, 0]], (2, 0, 1)) # HCW-BGR to CHW-RGB
- img_lq = torch.from_numpy(img_lq).float().unsqueeze(0).to(device) # CHW-RGB to NCHW-RGB
-
- # inference
- with torch.no_grad():
- # pad input image to be a multiple of window_size
- _, _, h_old, w_old = img_lq.size()
- h_pad = (h_old // window_size + 1) * window_size - h_old
- w_pad = (w_old // window_size + 1) * window_size - w_old
- img_lq = torch.cat([img_lq, torch.flip(img_lq, [2])], 2)[:, :, :h_old + h_pad, :]
- img_lq = torch.cat([img_lq, torch.flip(img_lq, [3])], 3)[:, :, :, :w_old + w_pad]
- output = sr_model(img_lq)
- output = output[..., :h_old * 4, :w_old * 4]
-
- # save image
- output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- if output.ndim == 3:
- output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0)) # CHW-RGB to HCW-BGR
- output = (output * 255.0).round().astype(np.uint8) # float32 to uint8
-
- return output
-
-def denoise(input_img):
-
- window_size = 8
- # read image
- img_lq = input_img.astype(np.float32) / 255.
- img_lq = np.transpose(img_lq if img_lq.shape[2] == 1 else img_lq[:, :, [2, 1, 0]], (2, 0, 1)) # HCW-BGR to CHW-RGB
- img_lq = torch.from_numpy(img_lq).float().unsqueeze(0).to(device) # CHW-RGB to NCHW-RGB
-
- # inference
- with torch.no_grad():
- # pad input image to be a multiple of window_size
- _, _, h_old, w_old = img_lq.size()
- h_pad = (h_old // window_size + 1) * window_size - h_old
- w_pad = (w_old // window_size + 1) * window_size - w_old
- img_lq = torch.cat([img_lq, torch.flip(img_lq, [2])], 2)[:, :, :h_old + h_pad, :]
- img_lq = torch.cat([img_lq, torch.flip(img_lq, [3])], 3)[:, :, :, :w_old + w_pad]
- output = denoise_model(img_lq)
- output = output[..., :h_old, :w_old] # denoising is 1x; crop only the reflection padding
-
- # save image
- output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- if output.ndim == 3:
- output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0)) # CHW-RGB to HCW-BGR
- output = (output * 255.0).round().astype(np.uint8) # float32 to uint8
-
- return output
-
-title = " AISeed AI Application Demo "
-description = "# A Demo of Deep Learning for Image Restoration"
-example_list = [["examples/" + example] for example in os.listdir("examples")]
-
-with gr.Blocks() as demo:
- demo.title = title
- gr.Markdown(description)
- with gr.Row():
- with gr.Column():
- im = gr.Image(label="Input Image")
- im_2 = gr.Image(label="Enhanced Image")
-
- with gr.Column():
-
- btn1 = gr.Button(value="Enhance Resolution")
- btn1.click(sr, inputs=[im], outputs=[im_2])
- btn2 = gr.Button(value="Denoise")
- btn2.click(denoise, inputs=[im], outputs=[im_2])
- gr.Examples(examples=example_list,
- inputs=[im],
- outputs=[im_2])
-
-if __name__ == "__main__":
- demo.launch()
\ No newline at end of file
diff --git a/spaces/Kevin676/Clone-Your-Voice/encoder/params_model.py b/spaces/Kevin676/Clone-Your-Voice/encoder/params_model.py
deleted file mode 100644
index 3e356472fb5a27f370cb3920976a11d12a76c1b7..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Clone-Your-Voice/encoder/params_model.py
+++ /dev/null
@@ -1,11 +0,0 @@
-
-## Model parameters
-model_hidden_size = 256
-model_embedding_size = 256
-model_num_layers = 3
-
-
-## Training parameters
-learning_rate_init = 1e-4
-speakers_per_batch = 64
-utterances_per_speaker = 10
diff --git a/spaces/LZRi/LZR-Bert-VITS2/monotonic_align/__init__.py b/spaces/LZRi/LZR-Bert-VITS2/monotonic_align/__init__.py
deleted file mode 100644
index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000
--- a/spaces/LZRi/LZR-Bert-VITS2/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import numpy as np
-import torch
-from .monotonic_align.core import maximum_path_c
-
-
-def maximum_path(neg_cent, mask):
- """ Cython optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
- path = np.zeros(neg_cent.shape, dtype=np.int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
- return torch.from_numpy(path).to(device=device, dtype=dtype)
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/model_fetcher.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/model_fetcher.py
deleted file mode 100644
index 23fdb04d4924a393fcc9d4549c5bd1b87ae9d7f6..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/model_fetcher.py
+++ /dev/null
@@ -1,103 +0,0 @@
-import os
-import requests
-from tqdm import tqdm
-import subprocess
-import shutil
-import platform
-import logging
-logger = logging.getLogger(__name__)
-
-URL_BASE = "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main"
-models_download = [
- ("pretrained/", [
- "D32k.pth", "D40k.pth", "D48k.pth",
- "G32k.pth", "G40k.pth", "G48k.pth",
- "f0D32k.pth", "f0D40k.pth", "f0D48k.pth",
- "f0G32k.pth", "f0G40k.pth", "f0G48k.pth",
- ]),
- ("pretrained_v2/", [
- "D32k.pth", "D40k.pth", "D48k.pth",
- "G32k.pth", "G40k.pth", "G48k.pth",
- "f0D32k.pth", "f0D40k.pth", "f0D48k.pth",
- "f0G32k.pth", "f0G40k.pth", "f0G48k.pth",
- ]),
- ("uvr5_weights/", [
- "HP2_all_vocals.pth", "HP3_all_vocals.pth",
- "HP5_only_main_vocal.pth", "VR-DeEchoAggressive.pth",
- "VR-DeEchoDeReverb.pth", "VR-DeEchoNormal.pth",
- ]),
- ("", ["ffmpeg.exe", "ffprobe.exe"]), # ffmpeg and ffprobe go to the main folder
-]
-
-# List of individual files with their respective local and remote paths
-individual_files = [
- ("hubert_base.pt", "assets/hubert/"),
- ("rmvpe.pt", "assets/rmvpe/"),
- ("rmvpe.onnx", "assets/rmvpe/"),
-]
-
-# Create a dictionary to map remote folders to local folders
-folder_mapping = {
- "pretrained/": "assets/pretrained/",
- "pretrained_v2/": "assets/pretrained_v2/",
- "uvr5_weights/": "assets/uvr5_weights/",
- "": "", # Default folder for files without a remote folder
-}
-
-# Function to download a file with tqdm progress bar
-def download_file_with_progress(url, destination_path):
- response = requests.get(url, stream=True)
- total_size = int(response.headers.get("content-length", 0))
- block_size = 1024 # 1 KB blocks
-
- with open(destination_path, 'wb') as file, tqdm(
- desc=os.path.basename(destination_path),
- total=total_size,
- unit='B',
- unit_scale=True,
- unit_divisor=1024,
- ) as bar:
- for data in response.iter_content(block_size):
- file.write(data)
- bar.update(len(data))
-
-# Download torchcrepe if it does not already exist
-if not os.path.exists("torchcrepe"):
- os_name = platform.system()
- # Cloning the GitHub repository into the temporary directory
- print("Cloning the GitHub repository into the temporary directory...")
- subprocess.run(["git", "clone", "https://github.com/maxrmorrison/torchcrepe.git", "temp_torchcrepe"])
-
- # Copying the torchcrepe folder to a different location
- print("Copying the torchcrepe folder...")
- shutil.copytree("temp_torchcrepe/torchcrepe", "./torchcrepe")
-
- # Removing the temporary directory
- print("Removing the temporary directory...")
- print(os_name)
- if os_name == "Windows":
- subprocess.run("rmdir /s /q temp_torchcrepe", shell=True)
- if os_name == "Linux":
- shutil.rmtree("temp_torchcrepe")
-
-# Download files that do not exist
-for remote_folder, file_list in models_download:
- local_folder = folder_mapping.get(remote_folder, "")
- for file in file_list:
- destination_path = os.path.join(local_folder, file)
- url = f"{URL_BASE}/{remote_folder}{file}"
- if not os.path.exists(destination_path):
- print(f"Downloading {url} to {destination_path}...")
- download_file_with_progress(url, destination_path) # download with a tqdm progress bar
-
-# Download individual files
-for file_name, local_folder in individual_files:
- destination_path = os.path.join(local_folder, file_name)
- url = f"{URL_BASE}/{file_name}"
- if not os.path.exists(destination_path):
- print(f"Downloading {url} to {destination_path}...")
- download_file_with_progress(url, destination_path) # download with a tqdm progress bar
-
-os.system('cls' if os.name == 'nt' else 'clear')
-logger.info("Applio download suscessfully continuing...")
-
diff --git a/spaces/LaynzKunz/Model-RCV/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/LaynzKunz/Model-RCV/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Model-RCV/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
- Interpolate the F0 sequence (fill in unvoiced, zero-valued frames).
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i] # this may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
diff --git a/spaces/Liu-LAB/GPT-academic/request_llm/bridge_chatgpt.py b/spaces/Liu-LAB/GPT-academic/request_llm/bridge_chatgpt.py
deleted file mode 100644
index 929a7546c56cff1a305ced54df819bb992f6b8a5..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/request_llm/bridge_chatgpt.py
+++ /dev/null
@@ -1,300 +0,0 @@
-# Adapted from the https://github.com/GaiZhenbiao/ChuanhuChatGPT project
-
-"""
- This file mainly contains three functions
-
- Function without multi-threading capability:
- 1. predict: used for normal conversation; fully interactive, but cannot be multi-threaded
-
- Functions that can be called from multiple threads
- 2. predict_no_ui: called by advanced experimental feature modules; output is not shown in the UI in real time; the parameters are simple and it can run in parallel threads, which makes complex feature logic easy to implement
- 3. predict_no_ui_long_connection: during experiments we found that when predict_no_ui handles long documents the connection to OpenAI drops easily; this function works around that with streaming, and it also supports multi-threading
-"""
-
-import json
-import time
-import gradio as gr
-import logging
-import traceback
-import requests
-import importlib
-
-# config_private.py holds your own secrets, such as API keys and proxy URLs
-# On load, a private config_private file (not tracked by git) is checked first; if it exists, it overrides the original config file
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
-proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG = \
- get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG')
-
-timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
- '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
-
-def get_full_error(chunk, stream_response):
- """
- Retrieve the complete error message returned by OpenAI
- """
- while True:
- try:
- chunk += next(stream_response)
- except:
- break
- return chunk
-
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
- """
- Send to ChatGPT and wait for the reply, completing in a single call without showing intermediate output. Internally it uses streaming so the connection is not cut off midway.
- inputs:
- the input of this query
- sys_prompt:
- the silent system prompt
- llm_kwargs:
- internal tuning parameters for ChatGPT
- history:
- the list of previous conversation turns
- observe_window = None:
- used to pass the partially generated output across threads; most of the time it only serves a fancy visual effect and can be left empty. observe_window[0]: observation window. observe_window[1]: watchdog
- """
- watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
- headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
- retry = 0
- while True:
- try:
- # make a POST request to the API endpoint, stream=False
- from .bridge_all import model_info
- endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
- response = requests.post(endpoint, headers=headers, proxies=proxies,
- json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
- except requests.exceptions.ReadTimeout as e:
- retry += 1
- traceback.print_exc()
- if retry > MAX_RETRY: raise TimeoutError
- if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
-
- stream_response = response.iter_lines()
- result = ''
- while True:
- try: chunk = next(stream_response).decode()
- except StopIteration:
- break
- except requests.exceptions.ConnectionError:
- chunk = next(stream_response).decode() # failed; retry once? if it fails again there is nothing more we can do.
- if len(chunk)==0: continue
- if not chunk.startswith('data:'):
- error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
- if "reduce the length" in error_msg:
- raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
- else:
- raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
- if ('data: [DONE]' in chunk): break # api2d finished normally
- json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
- delta = json_data["delta"]
- if len(delta) == 0: break
- if "role" in delta: continue
- if "content" in delta:
- result += delta["content"]
- if not console_slience: print(delta["content"], end='')
- if observe_window is not None:
- # observation window: push the data received so far to the observer
- if len(observe_window) >= 1: observe_window[0] += delta["content"]
- # watchdog: terminate if it has not been fed before the deadline
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience:
- raise RuntimeError("用户取消了程序。")
- else: raise RuntimeError("意外Json结构:"+delta)
- if json_data['finish_reason'] == 'content_filter':
- raise RuntimeError("由于提问含不合规内容被Azure过滤。")
- if json_data['finish_reason'] == 'length':
- raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
- return result
-
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
- Send to ChatGPT and fetch the output as a stream.
- Used for the basic conversation feature.
- inputs is the input of this query
- top_p, temperature are internal tuning parameters for ChatGPT
- history is the list of previous conversation turns (note that if either inputs or history is too long, a token-overflow error will be triggered)
- chatbot is the conversation list shown in the WebUI; modify it and then yield it out to update the chat interface directly
- additional_fn indicates which button was clicked; see functional.py for the buttons
- """
- if is_any_api_key(inputs):
- chatbot._cookies['api_key'] = inputs
- chatbot.append(("输入已识别为openai的api_key", what_keys(inputs)))
- yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面
- return
- elif not is_any_api_key(chatbot._cookies['api_key']):
- chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。"))
-        yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # refresh the UI
- return
-
- if additional_fn is not None:
- from core_functional import handle_core_functionality
- inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
-
- raw_input = inputs
- logging.info(f'[raw_input] {raw_input}')
- chatbot.append((inputs, ""))
-    yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # refresh the UI
-
-    # check for a common user mis-operation
- if raw_input.startswith('private_upload/') and len(raw_input) == 34:
- chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需要点击“函数插件区”按钮进行处理,而不是点击“提交”按钮。")
-        yield from update_ui(chatbot=chatbot, history=history, msg="正常") # refresh the UI
- time.sleep(2)
-
- try:
- headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
- except RuntimeError as e:
- chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
-        yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # refresh the UI
- return
-
- history.append(inputs); history.append("")
-
- retry = 0
- while True:
- try:
- # make a POST request to the API endpoint, stream=True
- from .bridge_all import model_info
- endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
- response = requests.post(endpoint, headers=headers, proxies=proxies,
- json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
- except:
- retry += 1
- chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
- retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
-            yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # refresh the UI
- if retry > MAX_RETRY: raise TimeoutError
-
- gpt_replying_buffer = ""
-
- is_head_of_the_stream = True
- if stream:
- stream_response = response.iter_lines()
- while True:
- try:
- chunk = next(stream_response)
- except StopIteration:
-                # non-official OpenAI endpoints can raise this error; OpenAI and API2D never reach this branch
- chunk_decoded = chunk.decode()
- error_msg = chunk_decoded
- chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
- yield from update_ui(chatbot=chatbot, history=history, msg="非Openai官方接口返回了错误:" + chunk.decode()) # 刷新界面
- return
-
- chunk_decoded = chunk.decode()
- if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"content" not in chunk_decoded):
-                # the first frame of the stream carries no content
- is_head_of_the_stream = False; continue
-
- if chunk:
- try:
-                    # the former is API2D's termination condition, the latter is OpenAI's
- if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0):
-                        # the stream has ended and gpt_replying_buffer is complete
- logging.info(f'[response] {gpt_replying_buffer}')
- break
-                    # process the body of the data stream
- chunkjson = json.loads(chunk_decoded[6:])
- status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
-                    # if an exception is raised here, the text is usually too long; see the output of get_full_error for details
-                    gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
- history[-1] = gpt_replying_buffer
- chatbot[-1] = (history[-2], history[-1])
-                    yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # refresh the UI
- except Exception as e:
-                    yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # refresh the UI
- chunk = get_full_error(chunk, stream_response)
- chunk_decoded = chunk.decode()
- error_msg = chunk_decoded
- chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
-                    yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # refresh the UI
- print(error_msg)
- return
-
-def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
- from .bridge_all import model_info
- openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
- if "reduce the length" in error_msg:
-        if len(history) >= 2: history[-1] = ""; history[-2] = "" # clear the overflowing input: history[-2] is this turn's input, history[-1] is this turn's output
- history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
-                               max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # release at least half of the history
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
- elif "does not exist" in error_msg:
- chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
- elif "Incorrect API key" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务. " + openai_website)
- elif "exceeded your current quota" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website)
- elif "account is not active" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website)
- elif "associated with a deactivated account" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website)
- elif "bad forward key" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
- elif "Not enough point" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
- else:
- from toolbox import regular_txt_to_markdown
- tb_str = '```\n' + trimmed_format_exc() + '```'
- chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
- return chatbot, history
-
-def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
- """
-    Gather all the information, select the LLM model and build the HTTP request, preparing it to be sent.
- """
- if not is_any_api_key(llm_kwargs['api_key']):
- raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。")
-
- api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {api_key}"
- }
- if API_ORG.startswith('org-'): headers.update({"OpenAI-Organization": API_ORG})
- if llm_kwargs['llm_model'].startswith('azure-'): headers.update({"api-key": api_key})
-
- conversation_cnt = len(history) // 2
-
- messages = [{"role": "system", "content": system_prompt}]
- if conversation_cnt:
- for index in range(0, 2*conversation_cnt, 2):
- what_i_have_asked = {}
- what_i_have_asked["role"] = "user"
- what_i_have_asked["content"] = history[index]
- what_gpt_answer = {}
- what_gpt_answer["role"] = "assistant"
- what_gpt_answer["content"] = history[index+1]
- if what_i_have_asked["content"] != "":
- if what_gpt_answer["content"] == "": continue
- if what_gpt_answer["content"] == timeout_bot_msg: continue
- messages.append(what_i_have_asked)
- messages.append(what_gpt_answer)
- else:
- messages[-1]['content'] = what_gpt_answer['content']
-
- what_i_ask_now = {}
- what_i_ask_now["role"] = "user"
- what_i_ask_now["content"] = inputs
- messages.append(what_i_ask_now)
-
- payload = {
-        "model": llm_kwargs['llm_model'].replace('api2d-', '', 1),  # remove the "api2d-" routing prefix; str.strip('api2d-') would also clip unrelated leading/trailing characters
- "messages": messages,
- "temperature": llm_kwargs['temperature'], # 1.0,
- "top_p": llm_kwargs['top_p'], # 1.0,
- "n": 1,
- "stream": stream,
- "presence_penalty": 0,
- "frequency_penalty": 0,
- }
- try:
- print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
-    except Exception:
- print('输入中可能存在乱码。')
- return headers,payload
-
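-# For reference, a sketch of the payload built above, assuming one prior exchange in `history`
-# (the literal values are placeholders, not defaults of this module):
-#
-#   {
-#       "model": "gpt-3.5-turbo",
-#       "messages": [
-#           {"role": "system",    "content": system_prompt},
-#           {"role": "user",      "content": history[0]},
-#           {"role": "assistant", "content": history[1]},
-#           {"role": "user",      "content": inputs},
-#       ],
-#       "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": True,
-#       "presence_penalty": 0, "frequency_penalty": 0,
-#   }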
-
diff --git a/spaces/MarcusSu1216/XingTong/inference/slicer.py b/spaces/MarcusSu1216/XingTong/inference/slicer.py
deleted file mode 100644
index b05840bcf6bdced0b6e2adbecb1a1dd5b3dee462..0000000000000000000000000000000000000000
--- a/spaces/MarcusSu1216/XingTong/inference/slicer.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import librosa
-import torch
-import torchaudio
-
-
-class Slicer:
- def __init__(self,
- sr: int,
- threshold: float = -40.,
- min_length: int = 5000,
- min_interval: int = 300,
- hop_size: int = 20,
- max_sil_kept: int = 5000):
- if not min_length >= min_interval >= hop_size:
- raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size')
- if not max_sil_kept >= hop_size:
- raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size')
- min_interval = sr * min_interval / 1000
- self.threshold = 10 ** (threshold / 20.)
- self.hop_size = round(sr * hop_size / 1000)
- self.win_size = min(round(min_interval), 4 * self.hop_size)
- self.min_length = round(sr * min_length / 1000 / self.hop_size)
- self.min_interval = round(min_interval / self.hop_size)
- self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size)
-
- def _apply_slice(self, waveform, begin, end):
- if len(waveform.shape) > 1:
- return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)]
- else:
- return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)]
-
- # @timeit
- def slice(self, waveform):
- if len(waveform.shape) > 1:
- samples = librosa.to_mono(waveform)
- else:
- samples = waveform
- if samples.shape[0] <= self.min_length:
- return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}}
- rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0)
- sil_tags = []
- silence_start = None
- clip_start = 0
- for i, rms in enumerate(rms_list):
- # Keep looping while frame is silent.
- if rms < self.threshold:
- # Record start of silent frames.
- if silence_start is None:
- silence_start = i
- continue
- # Keep looping while frame is not silent and silence start has not been recorded.
- if silence_start is None:
- continue
- # Clear recorded silence start if interval is not enough or clip is too short
- is_leading_silence = silence_start == 0 and i > self.max_sil_kept
- need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length
- if not is_leading_silence and not need_slice_middle:
- silence_start = None
- continue
- # Need slicing. Record the range of silent frames to be removed.
- if i - silence_start <= self.max_sil_kept:
- pos = rms_list[silence_start: i + 1].argmin() + silence_start
- if silence_start == 0:
- sil_tags.append((0, pos))
- else:
- sil_tags.append((pos, pos))
- clip_start = pos
- elif i - silence_start <= self.max_sil_kept * 2:
- pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin()
- pos += i - self.max_sil_kept
- pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start
- pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept
- if silence_start == 0:
- sil_tags.append((0, pos_r))
- clip_start = pos_r
- else:
- sil_tags.append((min(pos_l, pos), max(pos_r, pos)))
- clip_start = max(pos_r, pos)
- else:
- pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start
- pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept
- if silence_start == 0:
- sil_tags.append((0, pos_r))
- else:
- sil_tags.append((pos_l, pos_r))
- clip_start = pos_r
- silence_start = None
- # Deal with trailing silence.
- total_frames = rms_list.shape[0]
- if silence_start is not None and total_frames - silence_start >= self.min_interval:
- silence_end = min(total_frames, silence_start + self.max_sil_kept)
- pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start
- sil_tags.append((pos, total_frames + 1))
- # Apply and return slices.
- if len(sil_tags) == 0:
- return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}}
- else:
- chunks = []
-            # the first silence does not start at the beginning; prepend the leading voiced segment
- if sil_tags[0][0]:
- chunks.append(
- {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"})
- for i in range(0, len(sil_tags)):
-                # mark the voiced segments (skipping the first one)
- if i:
- chunks.append({"slice": False,
- "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"})
-                # mark every silent segment
- chunks.append({"slice": True,
- "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"})
-            # the last silence does not reach the end; append the trailing voiced segment
- if sil_tags[-1][1] * self.hop_size < len(waveform):
- chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"})
- chunk_dict = {}
- for i in range(len(chunks)):
- chunk_dict[str(i)] = chunks[i]
- return chunk_dict
-
-
-def cut(audio_path, db_thresh=-30, min_len=5000):
- audio, sr = librosa.load(audio_path, sr=None)
- slicer = Slicer(
- sr=sr,
- threshold=db_thresh,
- min_length=min_len
- )
- chunks = slicer.slice(audio)
- return chunks
-
-
-def chunks2audio(audio_path, chunks):
- chunks = dict(chunks)
- audio, sr = torchaudio.load(audio_path)
- if len(audio.shape) == 2 and audio.shape[1] >= 2:
- audio = torch.mean(audio, dim=0).unsqueeze(0)
- audio = audio.cpu().numpy()[0]
- result = []
- for k, v in chunks.items():
- tag = v["split_time"].split(",")
- if tag[0] != tag[1]:
- result.append((v["slice"], audio[int(tag[0]):int(tag[1])]))
- return result, sr
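-
-
-if __name__ == '__main__':
-    # Minimal usage sketch: slice a recording and keep only the voiced chunks.
-    # 'example.wav' is an illustrative placeholder path.
-    demo_chunks = cut('example.wav', db_thresh=-30, min_len=5000)
-    segments, sample_rate = chunks2audio('example.wav', demo_chunks)
-    for is_silence, samples in segments:
-        if not is_silence:
-            print(f'voiced segment: {len(samples) / sample_rate:.2f} s')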
diff --git a/spaces/Mashir0/pximg/scripts/pixivLogin.js b/spaces/Mashir0/pximg/scripts/pixivLogin.js
deleted file mode 100644
index cc092937f64ea737f59b856acde6088190edff1c..0000000000000000000000000000000000000000
--- a/spaces/Mashir0/pximg/scripts/pixivLogin.js
+++ /dev/null
@@ -1,46 +0,0 @@
-const Crypto = require('crypto');
-const { Base64 } = require('js-base64');
-const { stringify } = require('qs');
-const readline = require('readline-sync');
-const PixivApi = require('pixiv-api-client');
-
-const LOGIN_URL = 'https://app-api.pixiv.net/web/v1/login';
-
-const randToken = (len = 32) => Crypto.randomBytes(len);
-const sha256 = data => Crypto.createHash('sha256').update(data).digest();
-
-const generateOauthCode = () => {
- const code_verifier = Base64.fromUint8Array(randToken(), true);
- const code_challenge = Base64.encodeURI(sha256(code_verifier));
- return { code_verifier, code_challenge };
-};
-
-const getLoginInfo = () => {
- const { code_verifier, code_challenge } = generateOauthCode();
- const params = {
- code_challenge,
- code_challenge_method: 'S256',
- client: 'pixiv-android',
- };
- return {
- loginUrl: `${LOGIN_URL}?${stringify(params)}`,
- codeVerifier: code_verifier,
- };
-};
-
-const login = async () => {
- const { loginUrl, codeVerifier } = getLoginInfo();
- console.log('Login URL:', loginUrl);
- const code = (() => {
- while (true) {
- const input = readline.question('Code: ');
- if (input) return input;
- }
- })();
-
- const pixiv = new PixivApi();
- await pixiv.tokenRequest(code, codeVerifier);
- console.log('\nYour refresh token:', pixiv.authInfo().refresh_token);
-};
-
-login();
diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/bid_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/bid_converter.py
deleted file mode 100644
index a16a3439e5cf1802e24505d97b1e94a790010698..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/bid_converter.py
+++ /dev/null
@@ -1,185 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import argparse
-import os
-import os.path as osp
-
-import mmcv
-import mmengine
-
-from mmocr.utils import dump_ocr_data
-
-
-def collect_files(img_dir, gt_dir):
- """Collect all images and their corresponding groundtruth files.
-
- Args:
- img_dir (str): The image directory
- gt_dir (str): The groundtruth directory
-
- Returns:
- files (list): The list of tuples (img_file, groundtruth_file)
- """
- assert isinstance(img_dir, str)
- assert img_dir
- assert isinstance(gt_dir, str)
- assert gt_dir
-
- ann_list, imgs_list = [], []
- for img_file in os.listdir(img_dir):
- ann_file = img_file.split('_')[0] + '_gt_ocr.txt'
- ann_list.append(osp.join(gt_dir, ann_file))
- imgs_list.append(osp.join(img_dir, img_file))
-
- files = list(zip(imgs_list, ann_list))
- assert len(files), f'No images found in {img_dir}'
- print(f'Loaded {len(files)} images from {img_dir}')
-
- return files
-
-
-def collect_annotations(files, nproc=1):
- """Collect the annotation information.
-
- Args:
- files (list): The list of tuples (image_file, groundtruth_file)
- nproc (int): The number of process to collect annotations
-
- Returns:
- images (list): The list of image information dicts
- """
- assert isinstance(files, list)
- assert isinstance(nproc, int)
-
- if nproc > 1:
- images = mmengine.track_parallel_progress(
- load_img_info, files, nproc=nproc)
- else:
- images = mmengine.track_progress(load_img_info, files)
-
- return images
-
-
-def load_img_info(files):
- """Load the information of one image.
-
- Args:
- files (tuple): The tuple of (img_file, groundtruth_file)
-
- Returns:
- img_info (dict): The dict of the img and annotation information
- """
- assert isinstance(files, tuple)
-
- img_file, gt_file = files
-    # the annotation file and the image must share the same '<id>_' prefix
-    assert osp.basename(gt_file).split('_')[0] == osp.basename(
-        img_file).split('_')[0]
- # read imgs while ignoring orientations
- img = mmcv.imread(img_file, 'unchanged')
-
- img_info = dict(
- file_name=osp.basename(img_file),
- height=img.shape[0],
- width=img.shape[1],
- segm_file=osp.basename(gt_file))
-
- if osp.splitext(gt_file)[1] == '.txt':
- img_info = load_txt_info(gt_file, img_info)
- else:
- raise NotImplementedError
-
- return img_info
-
-
-def load_txt_info(gt_file, img_info):
- """Collect the annotation information.
-
- The annotation format is as the following:
- x, y, w, h, text
- 977, 152, 16, 49, NOME
- 962, 143, 12, 323, APPINHANESI BLAZEK PASSOTTO
- 906, 446, 12, 94, 206940361
- 905, 641, 12, 44, SPTC
-
- Args:
- gt_file (str): The path to ground-truth
- img_info (dict): The dict of the img and annotation information
-
- Returns:
- img_info (dict): The dict of the img and annotation information
- """
- with open(gt_file, encoding='latin1') as f:
- anno_info = []
- for line in f:
- line = line.strip('\n')
- if line[0] == '[' or line[0] == 'x':
- continue
- ann = line.split(',')
- bbox = ann[0:4]
- bbox = [int(coord) for coord in bbox]
- x, y, w, h = bbox
- segmentation = [x, y, x + w, y, x + w, y + h, x, y + h]
- anno = dict(
- iscrowd=0,
- category_id=1,
- bbox=bbox,
- area=w * h,
- segmentation=[segmentation])
- anno_info.append(anno)
-
- img_info.update(anno_info=anno_info)
-
- return img_info
-
-
-def split_train_val_list(full_list, val_ratio):
- """Split list by val_ratio.
-
- Args:
- full_list (list): list to be split
- val_ratio (float): split ratio for val set
-
- return:
- list(list, list): train_list and val_list
- """
- n_total = len(full_list)
- offset = int(n_total * val_ratio)
- if n_total == 0 or offset < 1:
- return [], full_list
- val_list = full_list[:offset]
- train_list = full_list[offset:]
- return [train_list, val_list]
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Generate training and val set of BID ')
- parser.add_argument('root_path', help='Root dir path of BID')
- parser.add_argument(
- '--nproc', default=1, type=int, help='Number of processes')
- parser.add_argument(
- '--val-ratio', help='Split ratio for val set', default=0., type=float)
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
- root_path = args.root_path
- with mmengine.Timer(print_tmpl='It takes {}s to convert BID annotation'):
- files = collect_files(
- osp.join(root_path, 'imgs'), osp.join(root_path, 'annotations'))
- image_infos = collect_annotations(files, nproc=args.nproc)
- if args.val_ratio:
- image_infos = split_train_val_list(image_infos, args.val_ratio)
- splits = ['training', 'val']
- else:
- image_infos = [image_infos]
- splits = ['training']
- for i, split in enumerate(splits):
- dump_ocr_data(image_infos[i],
- osp.join(root_path, 'instances_' + split + '.json'),
- 'textdet')
-
-
-if __name__ == '__main__':
- main()
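-
-
-# Usage sketch (the data root below is an illustrative path): the converter expects the BID
-# images under <root>/imgs and the *_gt_ocr.txt files under <root>/annotations, and writes
-# instances_training.json (plus instances_val.json when --val-ratio > 0):
-#
-#   python tools/dataset_converters/textdet/bid_converter.py data/bid --nproc 4 --val-ratio 0.1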
diff --git a/spaces/MrBodean/VoiceClone/synthesizer/preprocess.py b/spaces/MrBodean/VoiceClone/synthesizer/preprocess.py
deleted file mode 100644
index cde325c4163d6800404de214202d773addfff296..0000000000000000000000000000000000000000
--- a/spaces/MrBodean/VoiceClone/synthesizer/preprocess.py
+++ /dev/null
@@ -1,259 +0,0 @@
-from multiprocessing.pool import Pool
-from synthesizer import audio
-from functools import partial
-from itertools import chain
-from encoder import inference as encoder
-from pathlib import Path
-from utils import logmmse
-from tqdm import tqdm
-import numpy as np
-import librosa
-
-
-def preprocess_dataset(datasets_root: Path, out_dir: Path, n_processes: int,
- skip_existing: bool, hparams, no_alignments: bool,
- datasets_name: str, subfolders: str):
- # Gather the input directories
- dataset_root = datasets_root.joinpath(datasets_name)
- input_dirs = [dataset_root.joinpath(subfolder.strip()) for subfolder in subfolders.split(",")]
- print("\n ".join(map(str, ["Using data from:"] + input_dirs)))
- assert all(input_dir.exists() for input_dir in input_dirs)
-
- # Create the output directories for each output file type
- out_dir.joinpath("mels").mkdir(exist_ok=True)
- out_dir.joinpath("audio").mkdir(exist_ok=True)
-
- # Create a metadata file
- metadata_fpath = out_dir.joinpath("train.txt")
- metadata_file = metadata_fpath.open("a" if skip_existing else "w", encoding="utf-8")
-
- # Preprocess the dataset
- speaker_dirs = list(chain.from_iterable(input_dir.glob("*") for input_dir in input_dirs))
- func = partial(preprocess_speaker, out_dir=out_dir, skip_existing=skip_existing,
- hparams=hparams, no_alignments=no_alignments)
- job = Pool(n_processes).imap(func, speaker_dirs)
- for speaker_metadata in tqdm(job, datasets_name, len(speaker_dirs), unit="speakers"):
- for metadatum in speaker_metadata:
- metadata_file.write("|".join(str(x) for x in metadatum) + "\n")
- metadata_file.close()
-
- # Verify the contents of the metadata file
- with metadata_fpath.open("r", encoding="utf-8") as metadata_file:
- metadata = [line.split("|") for line in metadata_file]
- mel_frames = sum([int(m[4]) for m in metadata])
- timesteps = sum([int(m[3]) for m in metadata])
- sample_rate = hparams.sample_rate
- hours = (timesteps / sample_rate) / 3600
- print("The dataset consists of %d utterances, %d mel frames, %d audio timesteps (%.2f hours)." %
- (len(metadata), mel_frames, timesteps, hours))
- print("Max input length (text chars): %d" % max(len(m[5]) for m in metadata))
- print("Max mel frames length: %d" % max(int(m[4]) for m in metadata))
- print("Max audio timesteps length: %d" % max(int(m[3]) for m in metadata))
-
-
-def preprocess_speaker(speaker_dir, out_dir: Path, skip_existing: bool, hparams, no_alignments: bool):
- metadata = []
- for book_dir in speaker_dir.glob("*"):
- if no_alignments:
- # Gather the utterance audios and texts
- # LibriTTS uses .wav but we will include extensions for compatibility with other datasets
- extensions = ["*.wav", "*.flac", "*.mp3"]
- for extension in extensions:
- wav_fpaths = book_dir.glob(extension)
-
- for wav_fpath in wav_fpaths:
- # Load the audio waveform
- wav, _ = librosa.load(str(wav_fpath), hparams.sample_rate)
- if hparams.rescale:
- wav = wav / np.abs(wav).max() * hparams.rescaling_max
-
- # Get the corresponding text
- # Check for .txt (for compatibility with other datasets)
- text_fpath = wav_fpath.with_suffix(".txt")
- if not text_fpath.exists():
- # Check for .normalized.txt (LibriTTS)
- text_fpath = wav_fpath.with_suffix(".normalized.txt")
- assert text_fpath.exists()
- with text_fpath.open("r") as text_file:
- text = "".join([line for line in text_file])
- text = text.replace("\"", "")
- text = text.strip()
-
- # Process the utterance
- metadata.append(process_utterance(wav, text, out_dir, str(wav_fpath.with_suffix("").name),
- skip_existing, hparams))
- else:
- # Process alignment file (LibriSpeech support)
- # Gather the utterance audios and texts
- try:
- alignments_fpath = next(book_dir.glob("*.alignment.txt"))
- with alignments_fpath.open("r") as alignments_file:
- alignments = [line.rstrip().split(" ") for line in alignments_file]
- except StopIteration:
- # A few alignment files will be missing
- continue
-
- # Iterate over each entry in the alignments file
- for wav_fname, words, end_times in alignments:
- wav_fpath = book_dir.joinpath(wav_fname + ".flac")
- assert wav_fpath.exists()
- words = words.replace("\"", "").split(",")
- end_times = list(map(float, end_times.replace("\"", "").split(",")))
-
- # Process each sub-utterance
- wavs, texts = split_on_silences(wav_fpath, words, end_times, hparams)
- for i, (wav, text) in enumerate(zip(wavs, texts)):
- sub_basename = "%s_%02d" % (wav_fname, i)
- metadata.append(process_utterance(wav, text, out_dir, sub_basename,
- skip_existing, hparams))
-
- return [m for m in metadata if m is not None]
-
-
-def split_on_silences(wav_fpath, words, end_times, hparams):
- # Load the audio waveform
- wav, _ = librosa.load(str(wav_fpath), hparams.sample_rate)
- if hparams.rescale:
- wav = wav / np.abs(wav).max() * hparams.rescaling_max
-
- words = np.array(words)
- start_times = np.array([0.0] + end_times[:-1])
- end_times = np.array(end_times)
- assert len(words) == len(end_times) == len(start_times)
- assert words[0] == "" and words[-1] == ""
-
- # Find pauses that are too long
- mask = (words == "") & (end_times - start_times >= hparams.silence_min_duration_split)
- mask[0] = mask[-1] = True
- breaks = np.where(mask)[0]
-
- # Profile the noise from the silences and perform noise reduction on the waveform
- silence_times = [[start_times[i], end_times[i]] for i in breaks]
-    silence_times = (np.array(silence_times) * hparams.sample_rate).astype(int)  # np.int was removed in recent NumPy; the builtin int is equivalent
- noisy_wav = np.concatenate([wav[stime[0]:stime[1]] for stime in silence_times])
- if len(noisy_wav) > hparams.sample_rate * 0.02:
- profile = logmmse.profile_noise(noisy_wav, hparams.sample_rate)
- wav = logmmse.denoise(wav, profile, eta=0)
-
- # Re-attach segments that are too short
- segments = list(zip(breaks[:-1], breaks[1:]))
- segment_durations = [start_times[end] - end_times[start] for start, end in segments]
- i = 0
- while i < len(segments) and len(segments) > 1:
- if segment_durations[i] < hparams.utterance_min_duration:
- # See if the segment can be re-attached with the right or the left segment
- left_duration = float("inf") if i == 0 else segment_durations[i - 1]
- right_duration = float("inf") if i == len(segments) - 1 else segment_durations[i + 1]
- joined_duration = segment_durations[i] + min(left_duration, right_duration)
-
- # Do not re-attach if it causes the joined utterance to be too long
- if joined_duration > hparams.hop_size * hparams.max_mel_frames / hparams.sample_rate:
- i += 1
- continue
-
- # Re-attach the segment with the neighbour of shortest duration
- j = i - 1 if left_duration <= right_duration else i
- segments[j] = (segments[j][0], segments[j + 1][1])
- segment_durations[j] = joined_duration
- del segments[j + 1], segment_durations[j + 1]
- else:
- i += 1
-
- # Split the utterance
- segment_times = [[end_times[start], start_times[end]] for start, end in segments]
-    segment_times = (np.array(segment_times) * hparams.sample_rate).astype(int)
- wavs = [wav[segment_time[0]:segment_time[1]] for segment_time in segment_times]
-    texts = [" ".join(words[start + 1:end]).replace("  ", " ") for start, end in segments]  # collapse double spaces left by empty alignment tokens
-
- # # DEBUG: play the audio segments (run with -n=1)
- # import sounddevice as sd
- # if len(wavs) > 1:
- # print("This sentence was split in %d segments:" % len(wavs))
- # else:
- # print("There are no silences long enough for this sentence to be split:")
- # for wav, text in zip(wavs, texts):
- # # Pad the waveform with 1 second of silence because sounddevice tends to cut them early
- # # when playing them. You shouldn't need to do that in your parsers.
- # wav = np.concatenate((wav, [0] * 16000))
- # print("\t%s" % text)
- # sd.play(wav, 16000, blocking=True)
- # print("")
-
- return wavs, texts
-
-
-def process_utterance(wav: np.ndarray, text: str, out_dir: Path, basename: str,
- skip_existing: bool, hparams):
- ## FOR REFERENCE:
- # For you not to lose your head if you ever wish to change things here or implement your own
- # synthesizer.
- # - Both the audios and the mel spectrograms are saved as numpy arrays
- # - There is no processing done to the audios that will be saved to disk beyond volume
- # normalization (in split_on_silences)
- # - However, pre-emphasis is applied to the audios before computing the mel spectrogram. This
- # is why we re-apply it on the audio on the side of the vocoder.
- # - Librosa pads the waveform before computing the mel spectrogram. Here, the waveform is saved
- # without extra padding. This means that you won't have an exact relation between the length
- # of the wav and of the mel spectrogram. See the vocoder data loader.
-
-
- # Skip existing utterances if needed
- mel_fpath = out_dir.joinpath("mels", "mel-%s.npy" % basename)
- wav_fpath = out_dir.joinpath("audio", "audio-%s.npy" % basename)
- if skip_existing and mel_fpath.exists() and wav_fpath.exists():
- return None
-
- # Trim silence
- if hparams.trim_silence:
- wav = encoder.preprocess_wav(wav, normalize=False, trim_silence=True)
-
- # Skip utterances that are too short
- if len(wav) < hparams.utterance_min_duration * hparams.sample_rate:
- return None
-
- # Compute the mel spectrogram
- mel_spectrogram = audio.melspectrogram(wav, hparams).astype(np.float32)
- mel_frames = mel_spectrogram.shape[1]
-
- # Skip utterances that are too long
- if mel_frames > hparams.max_mel_frames and hparams.clip_mels_length:
- return None
-
- # Write the spectrogram, embed and audio to disk
- np.save(mel_fpath, mel_spectrogram.T, allow_pickle=False)
- np.save(wav_fpath, wav, allow_pickle=False)
-
- # Return a tuple describing this training example
- return wav_fpath.name, mel_fpath.name, "embed-%s.npy" % basename, len(wav), mel_frames, text
-
-
-def embed_utterance(fpaths, encoder_model_fpath):
- if not encoder.is_loaded():
- encoder.load_model(encoder_model_fpath)
-
- # Compute the speaker embedding of the utterance
- wav_fpath, embed_fpath = fpaths
- wav = np.load(wav_fpath)
- wav = encoder.preprocess_wav(wav)
- embed = encoder.embed_utterance(wav)
- np.save(embed_fpath, embed, allow_pickle=False)
-
-
-def create_embeddings(synthesizer_root: Path, encoder_model_fpath: Path, n_processes: int):
- wav_dir = synthesizer_root.joinpath("audio")
- metadata_fpath = synthesizer_root.joinpath("train.txt")
- assert wav_dir.exists() and metadata_fpath.exists()
- embed_dir = synthesizer_root.joinpath("embeds")
- embed_dir.mkdir(exist_ok=True)
-
- # Gather the input wave filepath and the target output embed filepath
- with metadata_fpath.open("r") as metadata_file:
- metadata = [line.split("|") for line in metadata_file]
- fpaths = [(wav_dir.joinpath(m[0]), embed_dir.joinpath(m[2])) for m in metadata]
-
- # TODO: improve on the multiprocessing, it's terrible. Disk I/O is the bottleneck here.
- # Embed the utterances in separate threads
- func = partial(embed_utterance, encoder_model_fpath=encoder_model_fpath)
- job = Pool(n_processes).imap(func, fpaths)
- list(tqdm(job, "Embedding", len(fpaths), unit="utterances"))
-
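-
-# Usage sketch (paths and the hparams import are assumptions about how the training scripts
-# wire this module together):
-#
-#   from pathlib import Path
-#   from synthesizer.hparams import hparams
-#   preprocess_dataset(datasets_root=Path("datasets"), out_dir=Path("datasets/SV2TTS/synthesizer"),
-#                      n_processes=4, skip_existing=True, hparams=hparams, no_alignments=False,
-#                      datasets_name="LibriSpeech", subfolders="train-clean-100,train-clean-360")
-#   create_embeddings(synthesizer_root=Path("datasets/SV2TTS/synthesizer"),
-#                     encoder_model_fpath=Path("encoder/saved_models/pretrained.pt"), n_processes=4)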
diff --git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/train_mfa_align.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/train_mfa_align.py
deleted file mode 100644
index daaeebe57690a8032be3d15c05d71701211604a7..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/train_mfa_align.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import utils.commons.single_thread_env # NOQA
-import glob
-import subprocess
-from textgrid import TextGrid
-import os
-from utils.commons.hparams import hparams, set_hparams
-
-
-def train_mfa_align(mfa_outputs="mfa_outputs",
- mfa_inputs="mfa_inputs",
- model_name=None, pretrain_model_name=None,
- mfa_cmd='train'):
- CORPUS = hparams['processed_data_dir'].split("/")[-1]
- NUM_JOB = int(os.getenv('N_PROC', os.cpu_count()))
- env_vars = [f'CORPUS={CORPUS}', f'NUM_JOB={NUM_JOB}']
- if mfa_outputs is not None:
- env_vars.append(f'MFA_OUTPUTS={mfa_outputs}')
- if mfa_inputs is not None:
- env_vars.append(f'MFA_INPUTS={mfa_inputs}')
- if model_name is not None:
- env_vars.append(f'MODEL_NAME={model_name}')
- if pretrain_model_name is not None:
- env_vars.append(f'PRETRAIN_MODEL_NAME={pretrain_model_name}')
- if mfa_cmd is not None:
- env_vars.append(f'MFA_CMD={mfa_cmd}')
- env_str = ' '.join(env_vars)
- print(f"| Run MFA for {CORPUS}. Env vars: {env_str}")
- subprocess.check_call(f'{env_str} bash mfa_usr/run_mfa_train_align.sh', shell=True)
- mfa_offset = hparams['preprocess_args']['mfa_offset']
- if mfa_offset > 0:
- for tg_fn in glob.glob(f'{hparams["processed_data_dir"]}/{mfa_outputs}/*.TextGrid'):
- tg = TextGrid.fromFile(tg_fn)
- max_time = tg.maxTime
- for tier in tg.tiers:
- for interval in tier.intervals:
- interval.maxTime = min(interval.maxTime + mfa_offset, max_time)
- interval.minTime = min(interval.minTime + mfa_offset, max_time)
- tier.intervals[0].minTime = 0
- tier.maxTime = min(tier.maxTime + mfa_offset, max_time)
- tg.write(tg_fn)
- TextGrid.fromFile(tg_fn)
-
-
-if __name__ == '__main__':
- set_hparams(print_hparams=False)
- train_mfa_align()
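-
-# Usage sketch: everything here is driven by hparams ('processed_data_dir' and
-# preprocess_args['mfa_offset'] must be set), and the script shells out to
-# mfa_usr/run_mfa_train_align.sh. The --config flag and the yaml path below are assumptions
-# about how set_hparams is normally fed:
-#
-#   python data_gen/tts/runs/train_mfa_align.py --config egs/datasets/audio/lj/base_text2mel.yaml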
diff --git a/spaces/NN520/AI/src/pages/api/kblob.ts b/spaces/NN520/AI/src/pages/api/kblob.ts
deleted file mode 100644
index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/pages/api/kblob.ts
+++ /dev/null
@@ -1,56 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import FormData from 'form-data'
-import { fetch } from '@/lib/isomorphic'
-import { KBlobRequest } from '@/lib/bots/bing/types'
-
-const API_DOMAIN = 'https://bing.vcanbb.top'
-
-export const config = {
- api: {
- bodyParser: {
- sizeLimit: '10mb' // Set desired value here
- }
- }
-}
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest
-
- const formData = new FormData()
- formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))
- if (imageBase64) {
- formData.append('imageBase64', imageBase64)
- }
-
- const response = await fetch(`${API_DOMAIN}/images/kblob`,
- {
- method: 'POST',
- body: formData.getBuffer(),
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referer": `${API_DOMAIN}/web/index.html`,
- "Referrer-Policy": "origin-when-cross-origin",
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- ...formData.getHeaders()
- }
- }
- ).then(res => res.text())
-
- res.writeHead(200, {
- 'Content-Type': 'application/json',
- })
- res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: '请更换 IP 或代理后重试' } }))
- } catch (e) {
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/NimaBoscarino/climategan/climategan/deeplab/resnetmulti_v2.py b/spaces/NimaBoscarino/climategan/climategan/deeplab/resnetmulti_v2.py
deleted file mode 100644
index fe36361f3ea41d182e348ffb98fb9160e718bf88..0000000000000000000000000000000000000000
--- a/spaces/NimaBoscarino/climategan/climategan/deeplab/resnetmulti_v2.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import torch.nn as nn
-from climategan.blocks import ResBlocks
-
-affine_par = True
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None):
- super(Bottleneck, self).__init__()
- # change
- self.conv1 = nn.Conv2d(
- inplanes, planes, kernel_size=1, stride=stride, bias=False
- )
- self.bn1 = nn.BatchNorm2d(planes, affine=affine_par)
- for i in self.bn1.parameters():
- i.requires_grad = False
- padding = dilation
- # change
- self.conv2 = nn.Conv2d(
- planes,
- planes,
- kernel_size=3,
- stride=1,
- padding=padding,
- bias=False,
- dilation=dilation,
- )
- self.bn2 = nn.BatchNorm2d(planes, affine=affine_par)
- for i in self.bn2.parameters():
- i.requires_grad = False
- self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
- self.bn3 = nn.BatchNorm2d(planes * 4, affine=affine_par)
- for i in self.bn3.parameters():
- i.requires_grad = False
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
- out = self.conv3(out)
- out = self.bn3(out)
- if self.downsample is not None:
- residual = self.downsample(x)
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNetMulti(nn.Module):
- def __init__(
- self,
- layers,
- n_res=4,
- res_norm="instance",
- activ="lrelu",
- pad_type="reflect",
- ):
- self.inplanes = 64
- block = Bottleneck
- super(ResNetMulti, self).__init__()
- self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
- self.bn1 = nn.BatchNorm2d(64, affine=affine_par)
- for i in self.bn1.parameters():
- i.requires_grad = False
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(
- kernel_size=3, stride=2, padding=0, ceil_mode=True
- ) # changed padding from 1 to 0
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
- self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2)
- self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- m.weight.data.normal_(0, 0.01)
- elif isinstance(m, nn.BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
- self.layer_res = ResBlocks(
- n_res, 2048, norm=res_norm, activation=activ, pad_type=pad_type
- )
-
- def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
- downsample = None
- if (
- stride != 1
- or self.inplanes != planes * block.expansion
- or dilation == 2
- or dilation == 4
- ):
- downsample = nn.Sequential(
- nn.Conv2d(
- self.inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False,
- ),
- nn.BatchNorm2d(planes * block.expansion, affine=affine_par),
- )
- for i in downsample._modules["1"].parameters():
- i.requires_grad = False
- layers = []
- layers.append(
- block(
- self.inplanes, planes, stride, dilation=dilation, downsample=downsample
- )
- )
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes, dilation=dilation))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.maxpool(x)
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
- x = self.layer_res(x)
- return x
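-
-
-if __name__ == "__main__":
-    # Minimal smoke test. The layer counts follow the usual ResNet-101 layout
-    # (an assumption for illustration, not a value fixed by this file).
-    import torch
-
-    backbone = ResNetMulti(layers=[3, 4, 23, 3])
-    dummy = torch.randn(1, 3, 256, 256)
-    with torch.no_grad():
-        features = backbone(dummy)
-    # conv1 and maxpool each halve the resolution and layer2 halves it again,
-    # so the output has 2048 channels at 1/8 resolution, e.g. torch.Size([1, 2048, 32, 32])
-    print(features.shape)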
diff --git a/spaces/NohTow/Llama2_watermarking/watermark.py b/spaces/NohTow/Llama2_watermarking/watermark.py
deleted file mode 100644
index 7e395dce0a50121e78915b9542d6b32f6472e1a6..0000000000000000000000000000000000000000
--- a/spaces/NohTow/Llama2_watermarking/watermark.py
+++ /dev/null
@@ -1,284 +0,0 @@
-import transformers
-from transformers import AutoTokenizer
-
-from transformers import pipeline, set_seed, LogitsProcessor
-from transformers.generation.logits_process import TopPLogitsWarper, TopKLogitsWarper
-import torch
-from scipy.special import gamma, gammainc, gammaincc, betainc
-from scipy.optimize import fminbound
-import numpy as np
-
-device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
-
-def hash_tokens(input_ids: torch.LongTensor, key: int):
- seed = key
- salt = 35317
- for i in input_ids:
- seed = (seed * salt + i.item()) % (2 ** 64 - 1)
- return seed
-
-class WatermarkingLogitsProcessor(LogitsProcessor):
- def __init__(self, n, key, messages, window_size, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.batch_size = len(messages)
- self.generators = [ torch.Generator(device=device) for _ in range(self.batch_size) ]
-
- self.n = n
- self.key = key
- self.window_size = window_size
- if not self.window_size:
- for b in range(self.batch_size):
- self.generators[b].manual_seed(self.key)
-
- self.messages = messages
-
-class WatermarkingAaronsonLogitsProcessor( WatermarkingLogitsProcessor):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
- # get random uniform variables
- B, V = scores.shape
-
- r = torch.zeros_like(scores)
- for b in range(B):
- if self.window_size:
- window = input_ids[b, -self.window_size:]
- seed = hash_tokens(window, self.key)
- self.generators[b].manual_seed(seed)
- r[b] = torch.rand(self.n, generator=self.generators[b], device=self.generators[b].device).log().roll(-self.messages[b])
- # generate n but keep only V, as we want to keep the pseudo-random sequences in sync with the decoder
- r = r[:,:V]
-
- # modify law as r^(1/p)
- # Since we want to return logits (logits processor takes and outputs logits),
- # we return log(q), hence torch.log(r) * torch.log(torch.exp(1/p)) = torch.log(r) / p
- return r / scores.exp()
-
-class WatermarkingKirchenbauerLogitsProcessor(WatermarkingLogitsProcessor):
- def __init__(self, *args,
- gamma = 0.25,
- delta = 15.0,
- **kwargs):
- super().__init__(*args, **kwargs)
- self.gamma = gamma
- self.delta = delta
-
- def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
- B, V = scores.shape
-
- for b in range(B):
- if self.window_size:
- window = input_ids[b, -self.window_size:]
- seed = hash_tokens(window, self.key)
- self.generators[b].manual_seed(seed)
- vocab_permutation = torch.randperm(self.n, generator=self.generators[b], device=self.generators[b].device)
- greenlist = vocab_permutation[:int(self.gamma * self.n)] # gamma * n
- bias = torch.zeros(self.n).to(scores.device)
- bias[greenlist] = self.delta
- bias = bias.roll(-self.messages[b])[:V]
- scores[b] += bias # add bias to greenlist words
-
- return scores
-
-class Watermarker(object):
- def __init__(self, tokenizer=None, model=None, window_size = 0, payload_bits = 0, logits_processor = None, *args, **kwargs):
- self.tokenizer = tokenizer
- self.model = model
- self.model.eval()
- self.window_size = window_size
-
- # preprocessing wrappers
- self.logits_processor = logits_processor or []
-
- self.payload_bits = payload_bits
- self.V = max(2**payload_bits, self.model.config.vocab_size)
- self.generator = torch.Generator(device=device)
-
-
- def embed(self, key=42, messages=[1234], prompt="", max_length=30, method='aaronson'):
-
- B = len(messages) # batch size
- length = max_length
-
- # compute capacity
- if self.payload_bits:
- assert min([message >= 0 and message < 2**self.payload_bits for message in messages])
-
- # tokenize prompt
- inputs = self.tokenizer([ prompt ] * B, return_tensors="pt")
-
- if method == 'aaronson':
- # generate with greedy search
- generated_ids = self.model.generate(inputs.input_ids.to(device), max_length=max_length, do_sample=False,
- logits_processor = self.logits_processor + [
- WatermarkingAaronsonLogitsProcessor(n=self.V,
- key=key,
- messages=messages,
- window_size = self.window_size)])
- elif method == 'kirchenbauer':
- # use sampling
- generated_ids = self.model.generate(inputs.input_ids.to(device), max_length=max_length, do_sample=True,
- logits_processor = self.logits_processor + [
- WatermarkingKirchenbauerLogitsProcessor(n=self.V,
- key=key,
- messages=messages,
- window_size = self.window_size)])
- elif method == 'greedy':
- # generate with greedy search
- generated_ids = self.model.generate(inputs.input_ids.to(device), max_length=max_length, do_sample=False,
- logits_processor = self.logits_processor)
- elif method == 'sampling':
- # generate with greedy search
- generated_ids = self.model.generate(inputs.input_ids.to(device), max_length=max_length, do_sample=True,
- logits_processor = self.logits_processor)
- else:
- raise Exception('Unknown method %s' % method)
- decoded_texts = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
-
- return decoded_texts
-
-
- def detect(self, attacked_texts, key=42, method='aaronson', gamma=0.5, prompts=None):
- if(prompts==None):
-        if prompts is None:
- generator = self.generator
-
- print("attacked_texts = ", attacked_texts)
- print("prompts = ", prompts)
-
- cdfs = []
- ms = []
-
- MAX = 2**self.payload_bits
-
- # tokenize input
- inputs = self.tokenizer(attacked_texts, return_tensors="pt", padding=True, return_attention_mask=True)
-
- input_ids = inputs["input_ids"].to(self.model.device)
- attention_masks = inputs["attention_mask"].to(self.model.device)
-
- B,T = input_ids.shape
-
- if method == 'aaronson_neyman_pearson':
- # compute logits
- outputs = self.model.forward(input_ids, return_dict=True)
- logits = outputs['logits']
- # TODO
- # reapply logits processors to get same distribution
- #for i in range(T):
- # for processor in self.logits_processor:
- # logits[:,i] = processor(input_ids[:, :i], logits[:, i])
-
- probs = logits.softmax(dim=-1)
- ps = torch.gather(probs, 2, input_ids[:,1:,None]).squeeze_(-1)
-
-
- seq_len = input_ids.shape[1]
- length = seq_len
-
- V = self.V
-
- Z = torch.zeros(size=(B, V), dtype=torch.float32, device=device)
-
-
- # keep a history of contexts we have already seen,
- # to exclude them from score aggregation and allow
- # correct p-value computation under H0
- history = [set() for _ in range(B)]
-
- attention_masks_prompts = self.tokenizer(prompts, return_tensors="pt", padding=True, return_attention_mask=True)["attention_mask"]
- prompts_length = torch.sum(attention_masks_prompts, dim=1)
- for b in range(B):
- attention_masks[b, :prompts_length[b]] = 0
- if not self.window_size:
- generator.manual_seed(key)
-            # NOTE: this loop could iterate over only the generated part (seq_len - prompt_len tokens) by starting after the prompt instead of at 0
- for i in range(seq_len-1):
-
- if self.window_size:
- window = input_ids[b, max(0, i-self.window_size+1):i+1]
- #print("window = ", window)
- seed = hash_tokens(window, key)
- if seed not in history[b]:
- generator.manual_seed(seed)
- history[b].add(seed)
- else:
- # ignore the token
- attention_masks[b, i+1] = 0
-
- if not attention_masks[b,i+1]:
- continue
-
- token = int(input_ids[b,i+1])
-
- if method in {'aaronson', 'aaronson_simplified', 'aaronson_neyman_pearson'}:
- R = torch.rand(V, generator = generator, device = generator.device)
-
- if method == 'aaronson':
- r = -(1-R).log()
- elif method in {'aaronson_simplified', 'aaronson_neyman_pearson'}:
- r = -R.log()
- elif method == 'kirchenbauer':
- r = torch.zeros(V, device=device)
- vocab_permutation = torch.randperm(V, generator = generator, device=generator.device)
- greenlist = vocab_permutation[:int(gamma * V)]
- r[greenlist] = 1
- else:
- raise Exception('Unknown method %s' % method)
-
- if method in {'aaronson', 'aaronson_simplified', 'kirchenbauer'}:
- # independent of probs
- Z[b] += r.roll(-token)
- elif method == 'aaronson_neyman_pearson':
- # Neyman-Pearson
- Z[b] += r.roll(-token) * (1/ps[b,i] - 1)
-
- for b in range(B):
- if method in {'aaronson', 'kirchenbauer'}:
- m = torch.argmax(Z[b,:MAX])
- elif method in {'aaronson_simplified', 'aaronson_neyman_pearson'}:
- m = torch.argmin(Z[b,:MAX])
-
- i = int(m)
- S = Z[b, i].item()
- m = i
-
- # actual sequence length
- k = torch.sum(attention_masks[b]).item() - 1
-
- if method == 'aaronson':
- cdf = gammaincc(k, S)
- elif method == 'aaronson_simplified':
- cdf = gammainc(k, S)
- elif method == 'aaronson_neyman_pearson':
- # Chernoff bound
- ratio = ps[b,:k] / (1 - ps[b,:k])
- E = (1/ratio).sum()
-
- if S > E:
- cdf = 1.0
- else:
- # to compute p-value we must solve for c*:
- # (1/(c* + ps/(1-ps))).sum() = S
- func = lambda c : (((1 / (c + ratio)).sum() - S)**2).item()
- c1 = (k / S - torch.min(ratio)).item()
- print("max = ", c1)
- c = fminbound(func, 0, c1)
- print("solved c = ", c)
- print("solved s = ", ((1/(c + ratio)).sum()).item())
- # upper bound
- cdf = torch.exp(torch.sum(-torch.log(1 + c / ratio)) + c * S)
- elif method == 'kirchenbauer':
- cdf = betainc(S, k - S + 1, gamma)
-
- if cdf > min(1 / MAX, 1e-5):
- cdf = 1 - (1 - cdf)**MAX # true value
- else:
- cdf = cdf * MAX # numerically stable upper bound
- cdfs.append(float(cdf))
- ms.append(m)
-
- return cdfs, ms
-
-
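-# Usage sketch (the checkpoint name, key and payload value are illustrative assumptions):
-#
-#   from transformers import AutoModelForCausalLM
-#   tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
-#   model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf").to(device)
-#   wm = Watermarker(tokenizer=tokenizer, model=model, window_size=0, payload_bits=8)
-#   texts = wm.embed(key=42, messages=[123], prompt="Once upon a time", max_length=64, method='aaronson')
-#   p_values, decoded = wm.detect(texts, key=42, method='aaronson', prompts=["Once upon a time"])
-#   print(p_values, decoded)  # a low p-value indicates the watermark was detected; decoded should recover 123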
diff --git a/spaces/OAOA/DifFace/basicsr/data/realesrgan_paired_dataset.py b/spaces/OAOA/DifFace/basicsr/data/realesrgan_paired_dataset.py
deleted file mode 100644
index 604b026d590273aedd3a1b59465cbd5426962bc2..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/data/realesrgan_paired_dataset.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import os
-from torch.utils import data as data
-from torchvision.transforms.functional import normalize
-
-from basicsr.data.data_util import paired_paths_from_folder, paired_paths_from_lmdb
-from basicsr.data.transforms import augment, paired_random_crop
-from basicsr.utils import FileClient, imfrombytes, img2tensor
-from basicsr.utils.registry import DATASET_REGISTRY
-
-
-@DATASET_REGISTRY.register(suffix='basicsr')
-class RealESRGANPairedDataset(data.Dataset):
- """Paired image dataset for image restoration.
-
- Read LQ (Low Quality, e.g. LR (Low Resolution), blurry, noisy, etc) and GT image pairs.
-
- There are three modes:
-
- 1. **lmdb**: Use lmdb files. If opt['io_backend'] == lmdb.
- 2. **meta_info_file**: Use meta information file to generate paths. \
- If opt['io_backend'] != lmdb and opt['meta_info_file'] is not None.
- 3. **folder**: Scan folders to generate paths. The rest.
-
- Args:
- opt (dict): Config for train datasets. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- dataroot_lq (str): Data root path for lq.
- meta_info (str): Path for meta information file.
- io_backend (dict): IO backend type and other kwarg.
- filename_tmpl (str): Template for each filename. Note that the template excludes the file extension.
- Default: '{}'.
- gt_size (int): Cropped patched size for gt patches.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation).
- scale (bool): Scale, which will be added automatically.
- phase (str): 'train' or 'val'.
- """
-
- def __init__(self, opt):
- super(RealESRGANPairedDataset, self).__init__()
- self.opt = opt
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- # mean and std for normalizing the input images
- self.mean = opt['mean'] if 'mean' in opt else None
- self.std = opt['std'] if 'std' in opt else None
-
- self.gt_folder, self.lq_folder = opt['dataroot_gt'], opt['dataroot_lq']
- self.filename_tmpl = opt['filename_tmpl'] if 'filename_tmpl' in opt else '{}'
-
- # file client (lmdb io backend)
- if self.io_backend_opt['type'] == 'lmdb':
- self.io_backend_opt['db_paths'] = [self.lq_folder, self.gt_folder]
- self.io_backend_opt['client_keys'] = ['lq', 'gt']
- self.paths = paired_paths_from_lmdb([self.lq_folder, self.gt_folder], ['lq', 'gt'])
- elif 'meta_info' in self.opt and self.opt['meta_info'] is not None:
- # disk backend with meta_info
- # Each line in the meta_info describes the relative path to an image
- with open(self.opt['meta_info']) as fin:
- paths = [line.strip() for line in fin]
- self.paths = []
- for path in paths:
- gt_path, lq_path = path.split(', ')
- gt_path = os.path.join(self.gt_folder, gt_path)
- lq_path = os.path.join(self.lq_folder, lq_path)
- self.paths.append(dict([('gt_path', gt_path), ('lq_path', lq_path)]))
- else:
- # disk backend
- # it will scan the whole folder to get meta info
- # it will be time-consuming for folders with too many files. It is recommended using an extra meta txt file
- self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl)
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- scale = self.opt['scale']
-
- # Load gt and lq images. Dimension order: HWC; channel order: BGR;
- # image range: [0, 1], float32.
- gt_path = self.paths[index]['gt_path']
- img_bytes = self.file_client.get(gt_path, 'gt')
- img_gt = imfrombytes(img_bytes, float32=True)
- lq_path = self.paths[index]['lq_path']
- img_bytes = self.file_client.get(lq_path, 'lq')
- img_lq = imfrombytes(img_bytes, float32=True)
-
- # augmentation for training
- if self.opt['phase'] == 'train':
- gt_size = self.opt['gt_size']
- # random crop
- img_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_size, scale, gt_path)
- # flip, rotation
- img_gt, img_lq = augment([img_gt, img_lq], self.opt['use_hflip'], self.opt['use_rot'])
-
- # BGR to RGB, HWC to CHW, numpy to tensor
- img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True)
- # normalize
- if self.mean is not None or self.std is not None:
- normalize(img_lq, self.mean, self.std, inplace=True)
- normalize(img_gt, self.mean, self.std, inplace=True)
-
- return {'lq': img_lq, 'gt': img_gt, 'lq_path': lq_path, 'gt_path': gt_path}
-
- def __len__(self):
- return len(self.paths)
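-
-
-# Usage sketch (folder layout, crop size and flags are illustrative; in practice the opt dict
-# comes from the YAML training config):
-#
-#   from torch.utils.data import DataLoader
-#   opt = {
-#       'phase': 'train', 'scale': 4, 'gt_size': 256,
-#       'use_hflip': True, 'use_rot': True,
-#       'dataroot_gt': 'datasets/DF2K/GT', 'dataroot_lq': 'datasets/DF2K/LQ',
-#       'io_backend': {'type': 'disk'},
-#   }
-#   dataset = RealESRGANPairedDataset(opt)
-#   loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
-#   batch = next(iter(loader))  # keys: 'lq', 'gt', 'lq_path', 'gt_path'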
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/rxf/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/rxf/__init__.py
deleted file mode 100644
index b24cb6b797b4159c9862bab1f882ee6ae95614ab..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/rxf/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import rxf_src # noqa
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/README.md
deleted file mode 100644
index e86a0d13b883af0c37fdc2c1fee9b0b9dff4d18c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/README.md
+++ /dev/null
@@ -1,126 +0,0 @@
-# Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020)
-
-This page includes instructions for reproducing results from the paper [Unsupervised Quality Estimation for Neural
-Machine Translation (Fomicheva et al., 2020)](https://arxiv.org/abs/2005.10608)
-
-## Requirements:
-
-* mosesdecoder: https://github.com/moses-smt/mosesdecoder
-* subword-nmt: https://github.com/rsennrich/subword-nmt
-* flores: https://github.com/facebookresearch/flores
-
-## Download Models and Test Data
-
-Download translation models and test data from [MLQE dataset repository](https://github.com/facebookresearch/mlqe).
-
-## Set up:
-
-Given a testset consisting of source sentences and reference translations:
-
-* `SRC_LANG`: source language
-* `TGT_LANG`: target language
-* `INPUT`: input prefix, such that the file `$INPUT.$SRC_LANG` contains source sentences and `$INPUT.$TGT_LANG`
-contains the reference sentences
-* `OUTPUT_DIR`: output path to store results
-* `MOSES_DECODER`: path to mosesdecoder installation
-* `BPE_ROOT`: path to subword-nmt installation
-* `BPE`: path to BPE model
-* `MODEL_DIR`: directory containing the NMT model `.pt` file as well as the source and target vocabularies.
-* `TMP`: directory for intermediate temporary files
-* `GPU`: if translating with GPU, id of the GPU to use for inference
-* `DROPOUT_N`: number of stochastic forward passes
-
-`$DROPOUT_N` is set to 30 in the experiments reported in the paper. However, we observed that increasing it beyond 10
-does not bring substantial improvements.
-
-## Translate the data using standard decoding
-
-Preprocess the input data:
-```
-for LANG in $SRC_LANG $TGT_LANG; do
- perl $MOSES_DECODER/scripts/tokenizer/tokenizer.perl -threads 80 -a -l $LANG < $INPUT.$LANG > $TMP/preprocessed.tok.$LANG
- python $BPE_ROOT/apply_bpe.py -c ${BPE} < $TMP/preprocessed.tok.$LANG > $TMP/preprocessed.tok.bpe.$LANG
-done
-```
-
-Binarize the data for faster translation:
-
-```
-fairseq-preprocess --srcdict $MODEL_DIR/dict.$SRC_LANG.txt --tgtdict $MODEL_DIR/dict.$TGT_LANG.txt
---source-lang ${SRC_LANG} --target-lang ${TGT_LANG} --testpref $TMP/preprocessed.tok.bpe --destdir $TMP/bin --workers 4
-```
-
-Translate
-
-```
-CUDA_VISIBLE_DEVICES=$GPU fairseq-generate $TMP/bin --path ${MODEL_DIR}/${SRC_LANG}-${TGT_LANG}.pt --beam 5
---source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 > $TMP/fairseq.out
-grep ^H $TMP/fairseq.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/mt.out
-```
-
-Post-process
-
-```
-sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/mt.out | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl \
--l $TGT_LANG > $OUTPUT_DIR/mt.out
-```
-
-## Produce uncertainty estimates
-
-### Scoring
-
-Make temporary files to store the translations repeated N times.
-
-```
-python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/preprocessed.tok.bpe.$SRC_LANG -n $DROPOUT_N \
--o $TMP/repeated.$SRC_LANG
-python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/mt.out -n $DROPOUT_N -o $TMP/repeated.$TGT_LANG
-
-fairseq-preprocess --srcdict ${MODEL_DIR}/dict.${SRC_LANG}.txt --tgtdict ${MODEL_DIR}/dict.${TGT_LANG}.txt --source-lang ${SRC_LANG} \
---target-lang ${TGT_LANG} --testpref ${TMP}/repeated --destdir ${TMP}/bin-repeated
-```
-
-Produce model scores for the generated translations using `--retain-dropout` option to apply dropout at inference time:
-
-```
-CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${SRC_LANG}-${TGT_LANG}.pt --beam 5 \
- --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 --score-reference --retain-dropout \
- --retain-dropout-modules TransformerModel TransformerEncoder TransformerDecoder TransformerEncoderLayer \
- TransformerDecoderLayer --seed 46 > $TMP/dropout.scoring.out
-
-grep ^H $TMP/dropout.scoring.out | cut -d- -f2- | sort -n | cut -f2 > $TMP/dropout.scores
-
-```
-
-Use `--retain-dropout-modules` to specify the modules. By default, dropout is applied in the same places
-as for training.
-
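-For intuition, the effect of `--retain-dropout` can be approximated in plain PyTorch by switching only the
-dropout modules back to training mode at inference time. The sketch below is illustrative only and assumes a
-model built from standard `nn.Dropout` layers; it is not fairseq's implementation of the flag.
-
-```
-import torch.nn as nn
-
-def enable_mc_dropout(model: nn.Module) -> None:
-    # keep dropout active at inference while the rest of the model stays in eval mode
-    model.eval()
-    for module in model.modules():
-        if isinstance(module, nn.Dropout):
-            module.train()
-```
-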
-Compute the mean of the resulting output distribution:
-
-```
-python $SCRIPTS/scripts/uncertainty/aggregate_scores.py -i $TMP/dropout.scores -o $OUTPUT_DIR/dropout.scores.mean \
--n $DROPOUT_N
-```
-
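-The aggregation simply averages the `$DROPOUT_N` stochastic scores obtained for each sentence. A minimal sketch
-of the same computation, assuming the repeated-line layout produced above (`aggregate_scores.py` belongs to the
-paper's scripts and is not shown here):
-
-```
-import numpy as np
-
-def mean_per_sentence(scores_path, n):
-    # scores are written with each sentence repeated n times in a row, so the
-    # sentence-level estimate is the mean over consecutive blocks of n scores
-    scores = np.loadtxt(scores_path)
-    assert len(scores) % n == 0
-    return scores.reshape(-1, n).mean(axis=1)
-```
-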
-### Generation
-
-Produce multiple translation hypotheses for the same source using `--retain-dropout` option:
-
-```
-CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${SRC_LANG}-${TGT_LANG}.pt \
- --beam 5 --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --retain-dropout \
- --unkpen 5 --retain-dropout-modules TransformerModel TransformerEncoder TransformerDecoder \
- TransformerEncoderLayer TransformerDecoderLayer --seed 46 > $TMP/dropout.generation.out
-
-grep ^H $TMP/dropout.generation.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/dropout.hypotheses_
-
-sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/dropout.hypotheses_ | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl \
--l $TGT_LANG > $TMP/dropout.hypotheses
-```
-
-Compute similarity between multiple hypotheses corresponding to the same source sentence using Meteor
-evaluation metric:
-```
-python meteor.py -i $TMP/dropout.hypotheses -m -n $DROPOUT_N -o $OUTPUT_DIR/dropout.gen.sim.meteor
-```
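-
-This step measures how much the sampled hypotheses for a source sentence agree with each other: for every source
-it averages a sentence-level similarity metric over all pairs of the `$DROPOUT_N` hypotheses. A hedged sketch of
-that aggregation, with a placeholder `sim` function standing in for Meteor (`meteor.py` belongs to the paper's
-scripts and is not shown here):
-
-```
-from itertools import combinations
-
-def mean_pairwise_similarity(hypotheses, sim):
-    # average similarity over all unordered pairs of hypotheses for one source
-    pairs = list(combinations(hypotheses, 2))
-    return sum(sim(a, b) for a, b in pairs) / len(pairs)
-```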
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/sentence_prediction.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/sentence_prediction.py
deleted file mode 100644
index 482b97985a36aca07146772f52dde41df76bf643..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/sentence_prediction.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-
-import torch
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-
-
-@dataclass
-class SentencePredictionConfig(FairseqDataclass):
- classification_head_name: str = field(
- default="sentence_classification_head",
- metadata={"help": "name of the classification head to use"},
- )
- regression_target: bool = field(
- default=False,
- )
-
-
-@register_criterion("sentence_prediction", dataclass=SentencePredictionConfig)
-class SentencePredictionCriterion(FairseqCriterion):
- def __init__(self, cfg: SentencePredictionConfig, task):
- super().__init__(task)
- self.classification_head_name = cfg.classification_head_name
- self.regression_target = cfg.regression_target
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- assert (
- hasattr(model, "classification_heads")
- and self.classification_head_name in model.classification_heads
- ), "model must provide sentence classification head for --criterion=sentence_prediction"
-
- logits, _ = model(
- **sample["net_input"],
- features_only=True,
- classification_head_name=self.classification_head_name,
- )
- targets = model.get_targets(sample, [logits]).view(-1)
- sample_size = targets.numel()
-
- if not self.regression_target:
- lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
- loss = F.nll_loss(lprobs, targets, reduction="sum")
- else:
- logits = logits.view(-1).float()
- targets = targets.float()
- loss = F.mse_loss(logits, targets, reduction="sum")
-
- logging_output = {
- "loss": loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample_size,
- "sample_size": sample_size,
- }
- if not self.regression_target:
- preds = logits.argmax(dim=1)
- logging_output["ncorrect"] = (preds == targets).sum()
-
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- nsentences = sum(log.get("nsentences", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- if sample_size != ntokens:
- metrics.log_scalar(
- "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
- )
-
- if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]:
- ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs)
- metrics.log_scalar(
- "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
- """
- return True
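-
-
-# Illustrative sketch (not part of fairseq): the two loss branches used in
-# forward() above, applied to dummy tensors. All shapes and values here are
-# hypothetical; torch and F are the imports already present at the top of this file.
-def _example_loss_branches():
-    # classification head (regression_target=False): log-softmax + summed NLL
-    logits = torch.randn(4, 3)                       # (batch, num_classes)
-    targets = torch.tensor([0, 2, 1, 2])
-    lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
-    cls_loss = F.nll_loss(lprobs, targets, reduction="sum")
-
-    # regression head (regression_target=True): flatten logits and use summed MSE
-    reg_logits = torch.randn(4, 1).view(-1).float()
-    reg_targets = torch.tensor([0.5, 1.0, 0.0, 0.75])
-    reg_loss = F.mse_loss(reg_logits, reg_targets, reduction="sum")
-    return cls_loss, reg_loss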
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/file_chunker_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/file_chunker_utils.py
deleted file mode 100644
index 443100c61ab26808d820b7ea2b1307df6475007c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/file_chunker_utils.py
+++ /dev/null
@@ -1,84 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import typing as tp
-
-
-def _safe_readline(fd) -> str:
- pos = fd.tell()
- while True:
- try:
- return fd.readline()
- except UnicodeDecodeError:
- pos -= 1
- fd.seek(pos) # search where this character begins
-
-
-def find_offsets(filename: str, num_chunks: int) -> tp.List[int]:
- """
-    Given a file and a number of chunks, find the offsets in the file
-    that allow chunking around full lines.
- """
- with open(filename, "r", encoding="utf-8") as f:
- size = os.fstat(f.fileno()).st_size
- chunk_size = size // num_chunks
- offsets = [0 for _ in range(num_chunks + 1)]
- for i in range(1, num_chunks):
- f.seek(chunk_size * i)
- _safe_readline(f)
- offsets[i] = f.tell()
- offsets[-1] = size
- return offsets
-
-
-class ChunkLineIterator:
- """
-    Iterator to properly iterate over the lines of a file chunk.
- """
-
- def __init__(self, fd, start_offset: int, end_offset: int):
- self._fd = fd
- self._start_offset = start_offset
- self._end_offset = end_offset
-
- def __iter__(self) -> tp.Iterable[str]:
- self._fd.seek(self._start_offset)
- # next(f) breaks f.tell(), hence readline() must be used
- line = _safe_readline(self._fd)
- while line:
- pos = self._fd.tell()
-            # f.tell() does not always give the byte position in the file:
-            # sometimes it jumps to a very large number. It is unlikely that a
-            # normal read moves from the end offset to end + 2**32 bytes (4 GB),
-            # so this check makes it unlikely that the procedure breaks because
-            # of the nondeterministic behavior of f.tell()
- if (
- self._end_offset > 0
- and pos > self._end_offset
- and pos < self._end_offset + 2 ** 32
- ):
- break
- yield line
- line = self._fd.readline()
-
-
-class Chunker:
- """
-    Context manager to read a chunk of a file line by line.
- """
-
- def __init__(self, path: str, start_offset: int, end_offset: int):
- self.path = path
- self.start_offset = start_offset
- self.end_offset = end_offset
-
- def __enter__(self) -> ChunkLineIterator:
- self.fd = open(self.path, "r", encoding="utf-8")
- return ChunkLineIterator(self.fd, self.start_offset, self.end_offset)
-
- def __exit__(self, exc_type, exc_val, exc_tb) -> None:
- self.fd.close()
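-
-
-# Illustrative usage (not part of fairseq): split a file into chunks on line
-# boundaries and count the lines in each chunk. "corpus.txt" is a hypothetical path.
-def _example_count_lines_per_chunk(path: str = "corpus.txt", num_chunks: int = 4):
-    offsets = find_offsets(path, num_chunks)          # num_chunks + 1 offsets
-    counts = []
-    for start, end in zip(offsets, offsets[1:]):      # consecutive offset pairs
-        with Chunker(path, start, end) as lines:
-            counts.append(sum(1 for _ in lines))
-    return counts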
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/model_parallel/modules/multihead_attention.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/model_parallel/modules/multihead_attention.py
deleted file mode 100644
index 8eb9d09dad37ab132295166d691873beec63eaf1..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/model_parallel/modules/multihead_attention.py
+++ /dev/null
@@ -1,349 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Dict, Optional, Tuple
-
-import torch
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.incremental_decoding_utils import with_incremental_state
-from fairseq.modules.fairseq_dropout import FairseqDropout
-from torch import Tensor, nn
-
-
-try:
- from fairseq.model_parallel.megatron.mpu import (
- get_cuda_rng_tracker,
- get_model_parallel_world_size,
- ColumnParallelLinear,
- RowParallelLinear,
- )
-
- has_megatron_submodule = True
-except (ImportError, ModuleNotFoundError):
- has_megatron_submodule = False
-
-
-@with_incremental_state
-class ModelParallelMultiheadAttention(nn.Module):
- """Model parallel Multi-headed attention.
- This performs the Multi-headed attention over multiple gpus.
-
- See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details.
- """
-
- def __init__(
- self,
- embed_dim,
- num_heads,
- kdim=None,
- vdim=None,
- dropout=0.0,
- bias=True,
- self_attention=False,
- encoder_decoder_attention=False,
- ):
- super().__init__()
- if not has_megatron_submodule:
- raise ImportError(
- "\n\nPlease install the megatron submodule:"
- "\n\n git submodule update --init "
- "fairseq/model_parallel/megatron"
- )
- self.embed_dim = embed_dim
- self.kdim = kdim if kdim is not None else embed_dim
- self.vdim = vdim if vdim is not None else embed_dim
- self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim
-
- self.model_parallel_size = get_model_parallel_world_size()
-
- self.num_heads_partition = num_heads // self.model_parallel_size
- assert (
- self.num_heads_partition * self.model_parallel_size == num_heads
- ), "Number of heads must be divisible by model parallel size"
-
- self.dropout_module = FairseqDropout(
- dropout, module_name=self.__class__.__name__
- )
- self.head_dim = embed_dim // num_heads
- assert (
- self.head_dim * num_heads == self.embed_dim
- ), "embed_dim must be divisible by num_heads"
- self.scaling = self.head_dim ** -0.5
-
- self.self_attention = self_attention
- self.encoder_decoder_attention = encoder_decoder_attention
-
- assert (
- not self.self_attention or self.qkv_same_dim
- ), "Self-attention requires query, key and value to be of the same size"
-
- self.k_proj = ColumnParallelLinear(
- self.kdim, embed_dim, bias=bias, gather_output=False
- )
- self.v_proj = ColumnParallelLinear(
- self.vdim, embed_dim, bias=bias, gather_output=False
- )
- self.q_proj = ColumnParallelLinear(
- embed_dim, embed_dim, bias=bias, gather_output=False
- )
- self.out_proj = RowParallelLinear(
- embed_dim, embed_dim, bias=bias, input_is_parallel=True
- )
-
- def forward(
- self,
- query,
- key: Optional[Tensor],
- value: Optional[Tensor],
- key_padding_mask: Optional[Tensor] = None,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
- static_kv: bool = False,
- attn_mask: Optional[Tensor] = None,
- **unused_kwargs,
- ) -> Tuple[Tensor, Optional[Tensor]]:
- """Input shape: Time x Batch x Channel
-
- Args:
- key_padding_mask (ByteTensor, optional): mask to exclude
- keys that are pads, of shape `(batch, src_len)`, where
- padding elements are indicated by 1s.
- attn_mask (ByteTensor, optional): typically used to
- implement causal attention, where the mask prevents the
- attention from looking forward in time (default: None).
- """
- tgt_len, bsz, embed_dim = query.size()
- assert embed_dim == self.embed_dim
- assert list(query.size()) == [tgt_len, bsz, embed_dim]
-
- is_tpu = query.device.type == "xla"
-
- if incremental_state is not None:
- saved_state = self._get_input_buffer(incremental_state)
- if saved_state is not None and "prev_key" in saved_state:
- # previous time steps are cached - no need to recompute
- # key and value if they are static
- if static_kv:
- assert self.encoder_decoder_attention and not self.self_attention
- key = value = None
- else:
- saved_state = None
-
- if self.self_attention:
- q = self.q_proj(query)
- k = self.k_proj(query)
- v = self.v_proj(query)
- elif self.encoder_decoder_attention:
- # encoder-decoder attention
- q = self.q_proj(query)
- if key is None:
- assert value is None
- k = v = None
- else:
- k = self.k_proj(key)
- v = self.v_proj(key)
-
- else:
- assert key is not None and value is not None
- q = self.q_proj(query)
- k = self.k_proj(key)
- v = self.v_proj(value)
- q *= self.scaling
-
- q = (
- q.contiguous()
- .view(tgt_len, bsz * self.num_heads_partition, self.head_dim)
- .transpose(0, 1)
- )
- if k is not None:
- k = (
- k.contiguous()
- .view(-1, bsz * self.num_heads_partition, self.head_dim)
- .transpose(0, 1)
- )
- if v is not None:
- v = (
- v.contiguous()
- .view(-1, bsz * self.num_heads_partition, self.head_dim)
- .transpose(0, 1)
- )
-
- if saved_state is not None:
- # saved states are stored with shape (bsz, num_heads_partition, seq_len, head_dim)
- if "prev_key" in saved_state:
- _prev_key = saved_state["prev_key"]
- assert _prev_key is not None
- prev_key = _prev_key.view(
- bsz * self.num_heads_partition, -1, self.head_dim
- )
- if static_kv:
- k = prev_key
- else:
- assert k is not None
- k = torch.cat([prev_key, k], dim=1)
- if "prev_value" in saved_state:
- _prev_value = saved_state["prev_value"]
- assert _prev_value is not None
- prev_value = _prev_value.view(
- bsz * self.num_heads_partition, -1, self.head_dim
- )
- if static_kv:
- v = prev_value
- else:
- assert v is not None
- v = torch.cat([prev_value, v], dim=1)
- prev_key_padding_mask: Optional[Tensor] = None
- if "prev_key_padding_mask" in saved_state:
- prev_key_padding_mask = saved_state["prev_key_padding_mask"]
- assert k is not None and v is not None
- key_padding_mask = (
- ModelParallelMultiheadAttention._append_prev_key_padding_mask(
- key_padding_mask=key_padding_mask,
- prev_key_padding_mask=prev_key_padding_mask,
- batch_size=bsz,
- src_len=k.size(1),
- static_kv=static_kv,
- )
- )
-
- saved_state["prev_key"] = k.view(
- bsz, self.num_heads_partition, -1, self.head_dim
- )
- saved_state["prev_value"] = v.view(
- bsz, self.num_heads_partition, -1, self.head_dim
- )
- saved_state["prev_key_padding_mask"] = key_padding_mask
- # In this branch incremental_state is never None
- assert incremental_state is not None
- incremental_state = self._set_input_buffer(incremental_state, saved_state)
- assert k is not None
- src_len = k.size(1)
-
- # This is part of a workaround to get around fork/join parallelism
- # not supporting Optional types.
- if key_padding_mask is not None and key_padding_mask.dim() == 0:
- key_padding_mask = None
-
- if key_padding_mask is not None:
- assert key_padding_mask.size(0) == bsz
- assert key_padding_mask.size(1) == src_len
-
- attn_weights = torch.bmm(q, k.transpose(1, 2))
-
- assert list(attn_weights.size()) == [
- bsz * self.num_heads_partition,
- tgt_len,
- src_len,
- ]
-
- if attn_mask is not None:
- attn_mask = attn_mask.unsqueeze(0)
- attn_weights += attn_mask
-
- if key_padding_mask is not None:
- # don't attend to padding symbols
- attn_weights = attn_weights.view(
- bsz, self.num_heads_partition, tgt_len, src_len
- )
- if not is_tpu:
- attn_weights = attn_weights.masked_fill(
- key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool),
- float("-inf"),
- )
- else:
- attn_weights = attn_weights.transpose(0, 2)
- attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf"))
- attn_weights = attn_weights.transpose(0, 2)
- attn_weights = attn_weights.view(
- bsz * self.num_heads_partition, tgt_len, src_len
- )
-
- attn_weights_float = utils.softmax(attn_weights, dim=-1)
- attn_weights = attn_weights_float.type_as(attn_weights)
-
- with get_cuda_rng_tracker().fork():
- attn_probs = self.dropout_module(attn_weights)
-
- assert v is not None
- attn = torch.bmm(attn_probs, v)
- assert list(attn.size()) == [
- bsz * self.num_heads_partition,
- tgt_len,
- self.head_dim,
- ]
- embed_dim_partition = embed_dim // self.model_parallel_size
- attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim_partition)
- attn = self.out_proj(attn)
-        # return attn_weights as None to keep the return type the same as the
-        # single-GPU multihead attention. This will be deprecated.
- attn_weights: Optional[Tensor] = None
-
- return attn, attn_weights
-
- @staticmethod
- def _append_prev_key_padding_mask(
- key_padding_mask: Optional[Tensor],
- prev_key_padding_mask: Optional[Tensor],
- batch_size: int,
- src_len: int,
- static_kv: bool,
- ) -> Optional[Tensor]:
- # saved key padding masks have shape (bsz, seq_len)
- if prev_key_padding_mask is not None and static_kv:
- new_key_padding_mask = prev_key_padding_mask
- elif prev_key_padding_mask is not None and key_padding_mask is not None:
- new_key_padding_mask = torch.cat(
- [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1
- )
- # During incremental decoding, as the padding token enters and
- # leaves the frame, there will be a time when prev or current
- # is None
- elif prev_key_padding_mask is not None:
-
- filler = torch.zeros(batch_size, src_len - prev_key_padding_mask.size(1))
- if prev_key_padding_mask.is_cuda:
- filler = filler.cuda()
- new_key_padding_mask = torch.cat(
- [prev_key_padding_mask.float(), filler.float()], dim=1
- )
- elif key_padding_mask is not None:
- filler = torch.zeros(batch_size, src_len - key_padding_mask.size(1))
- if key_padding_mask.is_cuda:
- filler = filler.cuda()
- new_key_padding_mask = torch.cat(
- [filler.float(), key_padding_mask.float()], dim=1
- )
- else:
- new_key_padding_mask = prev_key_padding_mask
- return new_key_padding_mask
-
- def reorder_incremental_state(
- self, incremental_state: Dict[str, Dict[str, Optional[Tensor]]], new_order
- ):
- """Reorder buffered internal state (for incremental generation)."""
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is not None:
- for k in input_buffer.keys():
- if input_buffer[k] is not None:
- input_buffer[k] = input_buffer[k].index_select(0, new_order)
- incremental_state = self._set_input_buffer(incremental_state, input_buffer)
- return incremental_state
-
- def _get_input_buffer(
- self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]]
- ) -> Dict[str, Optional[Tensor]]:
- result = self.get_incremental_state(incremental_state, "attn_state")
- if result is not None:
- return result
- else:
- empty_result: Dict[str, Optional[Tensor]] = {}
- return empty_result
-
- def _set_input_buffer(
- self,
- incremental_state: Dict[str, Dict[str, Optional[Tensor]]],
- buffer: Dict[str, Optional[Tensor]],
- ):
- return self.set_incremental_state(incremental_state, "attn_state", buffer)
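-
-
-# Illustrative sketch (not fairseq code): how the attention heads above are
-# partitioned across model-parallel ranks. The sizes below are hypothetical.
-def _example_head_partitioning(embed_dim=1024, num_heads=16, mp_world_size=4):
-    head_dim = embed_dim // num_heads                  # 64, identical on every rank
-    num_heads_partition = num_heads // mp_world_size   # 4 heads handled per rank
-    # columns produced per rank by ColumnParallelLinear with gather_output=False
-    per_rank_proj_dim = num_heads_partition * head_dim  # 256
-    assert per_rank_proj_dim * mp_world_size == embed_dim
-    return head_dim, num_heads_partition, per_rank_proj_dim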
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/fairseq_encoder.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/fairseq_encoder.py
deleted file mode 100644
index 08cbde15a46e9b6d58e11c2f6052e7cf2d0cc8b2..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/fairseq_encoder.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Dict, List, NamedTuple, Optional
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-
-
-EncoderOut = NamedTuple(
- "EncoderOut",
- [
- ("encoder_out", Tensor), # T x B x C
- ("encoder_padding_mask", Optional[Tensor]), # B x T
- ("encoder_embedding", Optional[Tensor]), # B x T x C
- ("encoder_states", Optional[List[Tensor]]), # List[T x B x C]
- ("src_tokens", Optional[Tensor]), # B x T
- ("src_lengths", Optional[Tensor]), # B x 1
- ],
-)
-
-
-class FairseqEncoder(nn.Module):
- """Base class for encoders."""
-
- def __init__(self, dictionary):
- super().__init__()
- self.dictionary = dictionary
-
- def forward(self, src_tokens, src_lengths=None, **kwargs):
- """
- Args:
- src_tokens (LongTensor): tokens in the source language of shape
- `(batch, src_len)`
- src_lengths (LongTensor): lengths of each source sentence of shape
- `(batch)`
- """
- raise NotImplementedError
-
- def forward_torchscript(self, net_input: Dict[str, Tensor]):
- """A TorchScript-compatible version of forward.
-
- Encoders which use additional arguments may want to override
- this method for TorchScript compatibility.
- """
- if torch.jit.is_scripting():
- return self.forward(
- src_tokens=net_input["src_tokens"],
- src_lengths=net_input["src_lengths"],
- )
- else:
- return self.forward_non_torchscript(net_input)
-
- @torch.jit.unused
- def forward_non_torchscript(self, net_input: Dict[str, Tensor]):
- encoder_input = {
- k: v for k, v in net_input.items() if k != "prev_output_tokens"
- }
- return self.forward(**encoder_input)
-
- def reorder_encoder_out(self, encoder_out, new_order):
- """
- Reorder encoder output according to `new_order`.
-
- Args:
- encoder_out: output from the ``forward()`` method
- new_order (LongTensor): desired order
-
- Returns:
- `encoder_out` rearranged according to `new_order`
- """
- raise NotImplementedError
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
-        return 1e6  # an arbitrarily large number
-
- def upgrade_state_dict_named(self, state_dict, name):
- """Upgrade old state dicts to work with newer code."""
- return state_dict
-
- def set_num_updates(self, num_updates):
- """State from trainer to pass along to model at every update."""
-
- def _apply(m):
- if hasattr(m, "set_num_updates") and m != self:
- m.set_num_updates(num_updates)
-
- self.apply(_apply)
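-
-
-# Illustrative sketch (not part of fairseq): a minimal subclass that mean-pools
-# token embeddings. Names and shapes are hypothetical; it only shows how forward()
-# and reorder_encoder_out() are expected to fit together.
-class _MeanPoolEncoder(FairseqEncoder):
-    def __init__(self, dictionary, embed_dim=8):
-        super().__init__(dictionary)
-        self.embed = nn.Embedding(len(dictionary), embed_dim)
-
-    def forward(self, src_tokens, src_lengths=None, **kwargs):
-        x = self.embed(src_tokens).mean(dim=1, keepdim=True)  # B x 1 x C
-        return {"encoder_out": x.transpose(0, 1)}             # T(=1) x B x C
-
-    def reorder_encoder_out(self, encoder_out, new_order):
-        encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(1, new_order)
-        return encoder_out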
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/lightconv.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/lightconv.py
deleted file mode 100644
index 4edfe359379bc2445c1ae1ada04bd34ca4a32798..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/lightconv.py
+++ /dev/null
@@ -1,1019 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- FairseqIncrementalDecoder,
- register_model,
- register_model_architecture,
-)
-from fairseq.modules import (
- AdaptiveSoftmax,
- DynamicConv,
- FairseqDropout,
- LayerNorm,
- LightweightConv,
- MultiheadAttention,
- PositionalEmbedding,
-)
-from fairseq.utils import safe_hasattr
-
-
-@register_model("lightconv")
-class LightConvModel(FairseqEncoderDecoderModel):
- """
-    LightConv and DynamicConv model from "Pay Less Attention with Lightweight
-    and Dynamic Convolutions" (Wu et al., 2019).
-    To use LightConv please set ``--encoder-conv-type lightweight --decoder-conv-type lightweight``.
-    To use DynamicConv please set ``--encoder-conv-type dynamic --decoder-conv-type dynamic``.
-
- Args:
- encoder (LightConvEncoder): the encoder
- decoder (LightConvDecoder): the decoder
-
- The LightConv model provides the following named architectures and
- command-line arguments:
-
- .. argparse::
- :ref: fairseq.models.lightconv_parser
- :prog:
- """
-
- @classmethod
- def hub_models(cls):
- # fmt: off
-
- def moses_subword(path):
- return {
- 'path': path,
- 'tokenizer': 'moses',
- 'bpe': 'subword_nmt',
- }
-
- return {
- 'lightconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz'),
- 'dynamicconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz'),
- 'lightconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz'),
- 'dynamicconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz'),
- 'lightconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'),
- 'dynamicconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'),
- 'lightconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'),
- 'dynamicconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'),
- 'lightconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz'),
- 'dynamicconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz'),
- 'lightconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz'),
- 'dynamicconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz'),
- }
- # fmt: on
-
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--relu-dropout",
- type=float,
- metavar="D",
- help="dropout probability after ReLU in FFN",
- )
- parser.add_argument(
- "--input-dropout",
- type=float,
- metavar="D",
- help="dropout probability of the inputs",
- )
- parser.add_argument(
- "--encoder-embed-path",
- type=str,
- metavar="STR",
- help="path to pre-trained encoder embedding",
- )
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-conv-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-layers", type=int, metavar="N", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="N",
- help="num encoder attention heads or LightConv/DynamicConv heads",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
- parser.add_argument(
- "--encoder-learned-pos",
- action="store_true",
- help="use learned positional embeddings in the encoder",
- )
- parser.add_argument(
- "--decoder-embed-path",
- type=str,
- metavar="STR",
- help="path to pre-trained decoder embedding",
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-conv-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="num decoder layers"
- )
- parser.add_argument(
- "--decoder-attention-heads",
- type=int,
- metavar="N",
- help="num decoder attention heads or LightConv/DynamicConv heads",
- )
- parser.add_argument(
- "--decoder-learned-pos",
- action="store_true",
- help="use learned positional embeddings in the decoder",
- )
- parser.add_argument(
- "--decoder-normalize-before",
- action="store_true",
- help="apply layernorm before each decoder block",
- )
- parser.add_argument(
- "--share-decoder-input-output-embed",
- action="store_true",
- help="share decoder input and output embeddings",
- )
- parser.add_argument(
- "--share-all-embeddings",
- action="store_true",
- help="share encoder, decoder and output embeddings"
- " (requires shared dictionary and embed dim)",
- )
- parser.add_argument(
- "--adaptive-softmax-cutoff",
- metavar="EXPR",
- help="comma separated list of adaptive softmax cutoff points. "
- "Must be used with adaptive_loss criterion",
-        )
- parser.add_argument(
- "--adaptive-softmax-dropout",
- type=float,
- metavar="D",
- help="sets adaptive softmax dropout for the tail projections",
- )
-
- """LightConv and DynamicConv arguments"""
- parser.add_argument(
- "--encoder-kernel-size-list",
- type=lambda x: utils.eval_str_list(x, int),
- help='list of kernel size (default: "[3,7,15,31,31,31,31]")',
- )
- parser.add_argument(
- "--decoder-kernel-size-list",
- type=lambda x: utils.eval_str_list(x, int),
- help='list of kernel size (default: "[3,7,15,31,31,31]")',
- )
- parser.add_argument(
- "--encoder-glu", type=utils.eval_bool, help="glu after in proj"
- )
- parser.add_argument(
- "--decoder-glu", type=utils.eval_bool, help="glu after in proj"
- )
- parser.add_argument(
- "--encoder-conv-type",
- default="dynamic",
- type=str,
- choices=["dynamic", "lightweight"],
- help="type of convolution",
- )
- parser.add_argument(
- "--decoder-conv-type",
- default="dynamic",
- type=str,
- choices=["dynamic", "lightweight"],
- help="type of convolution",
- )
- parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool)
- parser.add_argument(
- "--weight-dropout",
- type=float,
- metavar="D",
- help="dropout probability for conv weights",
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present in older models
- base_architecture(args)
-
- if not safe_hasattr(args, "max_source_positions"):
- args.max_source_positions = 1024
- if not safe_hasattr(args, "max_target_positions"):
- args.max_target_positions = 1024
-
- src_dict, tgt_dict = task.source_dictionary, task.target_dictionary
-
- def build_embedding(dictionary, embed_dim, path=None):
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- emb = Embedding(num_embeddings, embed_dim, padding_idx)
- # if provided, load from preloaded dictionaries
- if path:
- embed_dict = utils.parse_embedding(path)
- utils.load_embedding(embed_dict, dictionary, emb)
- return emb
-
- if args.share_all_embeddings:
- if src_dict != tgt_dict:
- raise RuntimeError(
- "--share-all-embeddings requires a joined dictionary"
- )
- if args.encoder_embed_dim != args.decoder_embed_dim:
- raise RuntimeError(
- "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim"
- )
- if args.decoder_embed_path and (
- args.decoder_embed_path != args.encoder_embed_path
- ):
- raise RuntimeError(
- "--share-all-embeddings not compatible with --decoder-embed-path"
- )
- encoder_embed_tokens = build_embedding(
- src_dict, args.encoder_embed_dim, args.encoder_embed_path
- )
- decoder_embed_tokens = encoder_embed_tokens
- args.share_decoder_input_output_embed = True
- else:
- encoder_embed_tokens = build_embedding(
- src_dict, args.encoder_embed_dim, args.encoder_embed_path
- )
- decoder_embed_tokens = build_embedding(
- tgt_dict, args.decoder_embed_dim, args.decoder_embed_path
- )
-
- encoder = LightConvEncoder(args, src_dict, encoder_embed_tokens)
- decoder = LightConvDecoder(args, tgt_dict, decoder_embed_tokens)
- return LightConvModel(encoder, decoder)
-
-
-class LightConvEncoder(FairseqEncoder):
- """
- LightConv encoder consisting of *args.encoder_layers* layers. Each layer
- is a :class:`LightConvEncoderLayer`.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- dictionary (~fairseq.data.Dictionary): encoding dictionary
- embed_tokens (torch.nn.Embedding): input embedding
- """
-
- def __init__(self, args, dictionary, embed_tokens):
- super().__init__(dictionary)
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
-
- embed_dim = embed_tokens.embedding_dim
- self.padding_idx = embed_tokens.padding_idx
- self.max_source_positions = args.max_source_positions
-
- self.embed_tokens = embed_tokens
- self.embed_scale = math.sqrt(embed_dim)
- self.embed_positions = (
- PositionalEmbedding(
- args.max_source_positions,
- embed_dim,
- self.padding_idx,
- learned=args.encoder_learned_pos,
- )
- if not args.no_token_positional_embeddings
- else None
- )
-
- self.layers = nn.ModuleList([])
- self.layers.extend(
- [
- LightConvEncoderLayer(
- args, kernel_size=args.encoder_kernel_size_list[i]
- )
- for i in range(args.encoder_layers)
- ]
- )
- self.register_buffer("version", torch.Tensor([2]))
- self.normalize = args.encoder_normalize_before
- if self.normalize:
- self.layer_norm = LayerNorm(embed_dim)
-
- def forward(self, src_tokens, **unused):
- """
- Args:
- src_tokens (LongTensor): tokens in the source language of shape
- `(batch, src_len)`
-
- Returns:
- dict:
- - **encoder_out** (Tensor): the last encoder layer's output of
- shape `(src_len, batch, embed_dim)`
- - **encoder_padding_mask** (ByteTensor): the positions of
- padding elements of shape `(batch, src_len)`
- """
- # embed tokens and positions
- x = self.embed_scale * self.embed_tokens(src_tokens)
- if self.embed_positions is not None:
- x += self.embed_positions(src_tokens)
- x = self.dropout_module(x)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # compute padding mask
- encoder_padding_mask = src_tokens.eq(self.padding_idx)
- if not encoder_padding_mask.any():
- encoder_padding_mask = None
-
- # encoder layers
- for layer in self.layers:
- x = layer(x, encoder_padding_mask)
-
- if self.normalize:
- x = self.layer_norm(x)
-
- return {
- "encoder_out": x, # T x B x C
- "encoder_padding_mask": encoder_padding_mask, # B x T
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- """
- Reorder encoder output according to *new_order*.
-
- Args:
- encoder_out: output from the ``forward()`` method
- new_order (LongTensor): desired order
-
- Returns:
- *encoder_out* rearranged according to *new_order*
- """
- if encoder_out["encoder_out"] is not None:
- encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
- 1, new_order
- )
- if encoder_out["encoder_padding_mask"] is not None:
- encoder_out["encoder_padding_mask"] = encoder_out[
- "encoder_padding_mask"
- ].index_select(0, new_order)
- return encoder_out
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
- if self.embed_positions is None:
- return self.max_source_positions
- return min(self.max_source_positions, self.embed_positions.max_positions)
-
-
-class LightConvDecoder(FairseqIncrementalDecoder):
- """
- LightConv decoder consisting of *args.decoder_layers* layers. Each layer
- is a :class:`LightConvDecoderLayer`.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- dictionary (~fairseq.data.Dictionary): decoding dictionary
- embed_tokens (torch.nn.Embedding): output embedding
- no_encoder_attn (bool, optional): whether to attend to encoder outputs.
- Default: ``False``
- """
-
- def __init__(
- self, args, dictionary, embed_tokens, no_encoder_attn=False, final_norm=True
- ):
- super().__init__(dictionary)
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
- self.share_input_output_embed = args.share_decoder_input_output_embed
-
- input_embed_dim = embed_tokens.embedding_dim
- embed_dim = args.decoder_embed_dim
- output_embed_dim = args.decoder_output_dim
-
- padding_idx = embed_tokens.padding_idx
- self.max_target_positions = args.max_target_positions
-
- self.embed_tokens = embed_tokens
- self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim
-
- self.project_in_dim = (
- Linear(input_embed_dim, embed_dim, bias=False)
- if embed_dim != input_embed_dim
- else None
- )
-
- self.embed_positions = (
- PositionalEmbedding(
- args.max_target_positions,
- embed_dim,
- padding_idx,
- learned=args.decoder_learned_pos,
- )
- if not args.no_token_positional_embeddings
- else None
- )
-
- self.layers = nn.ModuleList([])
- self.layers.extend(
- [
- LightConvDecoderLayer(
- args, no_encoder_attn, kernel_size=args.decoder_kernel_size_list[i]
- )
- for i in range(args.decoder_layers)
- ]
- )
-
- self.adaptive_softmax = None
-
- self.project_out_dim = (
- Linear(embed_dim, output_embed_dim, bias=False)
- if embed_dim != output_embed_dim and not args.tie_adaptive_weights
- else None
- )
-
- if args.adaptive_softmax_cutoff is not None:
- self.adaptive_softmax = AdaptiveSoftmax(
- len(dictionary),
- output_embed_dim,
- utils.eval_str_list(args.adaptive_softmax_cutoff, type=int),
- dropout=args.adaptive_softmax_dropout,
- adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None,
- factor=args.adaptive_softmax_factor,
- tie_proj=args.tie_adaptive_proj,
- )
- elif not self.share_input_output_embed:
- self.embed_out = nn.Parameter(
- torch.Tensor(len(dictionary), output_embed_dim)
- )
- nn.init.normal_(self.embed_out, mean=0, std=output_embed_dim ** -0.5)
- self.register_buffer("version", torch.Tensor([2]))
- self.normalize = args.decoder_normalize_before and final_norm
- if self.normalize:
- self.layer_norm = LayerNorm(embed_dim)
-
- def forward(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
- ):
- """
- Args:
- prev_output_tokens (LongTensor): previous decoder outputs of shape
- `(batch, tgt_len)`, for teacher forcing
- encoder_out (Tensor, optional): output from the encoder, used for
- encoder-side attention
- incremental_state (dict): dictionary used for storing state during
- :ref:`Incremental decoding`
-
- Returns:
- tuple:
- - the last decoder layer's output of shape `(batch, tgt_len,
- vocab)`
- - the last decoder layer's attention weights of shape `(batch,
- tgt_len, src_len)`
- """
- # embed positions
- positions = (
- self.embed_positions(
- prev_output_tokens,
- incremental_state=incremental_state,
- )
- if self.embed_positions is not None
- else None
- )
-
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- if positions is not None:
- positions = positions[:, -1:]
-
- # embed tokens and positions
- x = self.embed_scale * self.embed_tokens(prev_output_tokens)
-
- if self.project_in_dim is not None:
- x = self.project_in_dim(x)
-
- if positions is not None:
- x += positions
- x = self.dropout_module(x)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
- attn = None
-
- inner_states = [x]
-
- # decoder layers
- for layer in self.layers:
- x, attn = layer(
- x,
- encoder_out["encoder_out"] if encoder_out is not None else None,
- encoder_out["encoder_padding_mask"]
- if encoder_out is not None
- else None,
- incremental_state,
- )
- inner_states.append(x)
-
- if self.normalize:
- x = self.layer_norm(x)
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- if self.project_out_dim is not None:
- x = self.project_out_dim(x)
-
- if self.adaptive_softmax is None:
- # project back to size of vocabulary
- if self.share_input_output_embed:
- x = F.linear(x, self.embed_tokens.weight)
- else:
- x = F.linear(x, self.embed_out)
-
- return x, {"attn": attn, "inner_states": inner_states}
-
- def max_positions(self):
- """Maximum output length supported by the decoder."""
- if self.embed_positions is None:
- return self.max_target_positions
- return min(self.max_target_positions, self.embed_positions.max_positions)
-
- def buffered_future_mask(self, tensor):
- dim = tensor.size(0)
- if (
- not hasattr(self, "_future_mask")
- or self._future_mask is None
- or self._future_mask.device != tensor.device
- ):
- self._future_mask = torch.triu(
- utils.fill_with_neg_inf(tensor.new(dim, dim)), 1
- )
- if self._future_mask.size(0) < dim:
- self._future_mask = torch.triu(
- utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1
- )
- return self._future_mask[:dim, :dim]
-
-
-class LightConvEncoderLayer(nn.Module):
- """Encoder layer block.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- kernel_size: kernel size of the convolution
- """
-
- def __init__(self, args, kernel_size=0):
- super().__init__()
- self.embed_dim = args.encoder_embed_dim
- self.conv_dim = args.encoder_conv_dim
- padding_l = (
- kernel_size // 2
- if kernel_size % 2 == 1
- else ((kernel_size - 1) // 2, kernel_size // 2)
- )
-
- if args.encoder_glu:
- self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim)
- self.act = nn.GLU()
- else:
- self.linear1 = Linear(self.embed_dim, self.conv_dim)
- self.act = None
- if args.encoder_conv_type == "lightweight":
- self.conv = LightweightConv(
- self.conv_dim,
- kernel_size,
- padding_l=padding_l,
- weight_softmax=args.weight_softmax,
- num_heads=args.encoder_attention_heads,
- weight_dropout=args.weight_dropout,
- )
- elif args.encoder_conv_type == "dynamic":
- self.conv = DynamicConv(
- self.conv_dim,
- kernel_size,
- padding_l=padding_l,
- weight_softmax=args.weight_softmax,
- num_heads=args.encoder_attention_heads,
- weight_dropout=args.weight_dropout,
- )
- else:
- raise NotImplementedError
- self.linear2 = Linear(self.conv_dim, self.embed_dim)
-
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
- self.relu_dropout_module = FairseqDropout(
- args.relu_dropout, module_name=self.__class__.__name__
- )
- self.input_dropout_module = FairseqDropout(
- args.input_dropout, module_name=self.__class__.__name__
- )
- self.normalize_before = args.encoder_normalize_before
- self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim)
- self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim)
- self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for _ in range(2)])
-
- def forward(self, x, encoder_padding_mask):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor): binary ByteTensor of shape
- `(batch, src_len)` where padding elements are indicated by ``1``.
-
- Returns:
-            encoded output of shape `(seq_len, batch, embed_dim)`
- """
- residual = x
- x = self.maybe_layer_norm(0, x, before=True)
- x = self.input_dropout_module(x)
- x = self.linear1(x)
- if self.act is not None:
- x = self.act(x)
- if encoder_padding_mask is not None:
- x = x.masked_fill(encoder_padding_mask.transpose(0, 1).unsqueeze(2), 0)
- x = self.conv(x)
- x = self.linear2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(0, x, after=True)
-
- residual = x
- x = self.maybe_layer_norm(1, x, before=True)
- x = F.relu(self.fc1(x))
- x = self.relu_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(1, x, after=True)
- return x
-
- def maybe_layer_norm(self, i, x, before=False, after=False):
- assert before ^ after
- if after ^ self.normalize_before:
- return self.layer_norms[i](x)
- else:
- return x
-
- def extra_repr(self):
- return (
- "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format(
- self.dropout_module.p,
- self.relu_dropout_module.p,
- self.input_dropout_module.p,
- self.normalize_before,
- )
- )
-
-
-class LightConvDecoderLayer(nn.Module):
- """Decoder layer block.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- no_encoder_attn (bool, optional): whether to attend to encoder outputs.
- Default: ``False``
- kernel_size: kernel size of the convolution
- """
-
- def __init__(self, args, no_encoder_attn=False, kernel_size=0):
- super().__init__()
- self.embed_dim = args.decoder_embed_dim
- self.conv_dim = args.decoder_conv_dim
- if args.decoder_glu:
- self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim)
- self.act = nn.GLU()
- else:
- self.linear1 = Linear(self.embed_dim, self.conv_dim)
- self.act = None
- if args.decoder_conv_type == "lightweight":
- self.conv = LightweightConv(
- self.conv_dim,
- kernel_size,
- padding_l=kernel_size - 1,
- weight_softmax=args.weight_softmax,
- num_heads=args.decoder_attention_heads,
- weight_dropout=args.weight_dropout,
- )
- elif args.decoder_conv_type == "dynamic":
- self.conv = DynamicConv(
- self.conv_dim,
- kernel_size,
- padding_l=kernel_size - 1,
- weight_softmax=args.weight_softmax,
- num_heads=args.decoder_attention_heads,
- weight_dropout=args.weight_dropout,
- )
- else:
- raise NotImplementedError
- self.linear2 = Linear(self.conv_dim, self.embed_dim)
-
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
- self.relu_dropout_module = FairseqDropout(
- args.relu_dropout, module_name=self.__class__.__name__
- )
- self.input_dropout_module = FairseqDropout(
- args.input_dropout, module_name=self.__class__.__name__
- )
- self.normalize_before = args.decoder_normalize_before
-
- self.conv_layer_norm = LayerNorm(self.embed_dim)
-
- if no_encoder_attn:
- self.encoder_attn = None
- self.encoder_attn_layer_norm = None
- else:
- self.encoder_attn = MultiheadAttention(
- self.embed_dim,
- args.decoder_attention_heads,
- dropout=args.attention_dropout,
- encoder_decoder_attention=True,
- )
- self.encoder_attn_layer_norm = LayerNorm(self.embed_dim)
-
- self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim)
- self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim)
-
- self.final_layer_norm = LayerNorm(self.embed_dim)
- self.need_attn = True
-
- def forward(
- self,
- x,
- encoder_out,
- encoder_padding_mask,
- incremental_state,
- prev_conv_state=None,
- prev_attn_state=None,
- conv_mask=None,
- conv_padding_mask=None,
- ):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor): binary ByteTensor of shape
- `(batch, src_len)` where padding elements are indicated by ``1``.
-
- Returns:
-            encoded output of shape `(seq_len, batch, embed_dim)`
- """
- residual = x
- x = self.maybe_layer_norm(self.conv_layer_norm, x, before=True)
- if prev_conv_state is not None:
- if incremental_state is None:
- incremental_state = {}
- self.conv._set_input_buffer(incremental_state, prev_conv_state)
- x = self.input_dropout_module(x)
- x = self.linear1(x)
- if self.act is not None:
- x = self.act(x)
- x = self.conv(x, incremental_state=incremental_state)
- x = self.linear2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(self.conv_layer_norm, x, after=True)
-
- attn = None
- if self.encoder_attn is not None:
- residual = x
- x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True)
- if prev_attn_state is not None:
- if incremental_state is None:
- incremental_state = {}
- prev_key, prev_value = prev_attn_state
- saved_state = {"prev_key": prev_key, "prev_value": prev_value}
- self.encoder_attn._set_input_buffer(incremental_state, saved_state)
- x, attn = self.encoder_attn(
- query=x,
- key=encoder_out,
- value=encoder_out,
- key_padding_mask=encoder_padding_mask,
- incremental_state=incremental_state,
- static_kv=True,
- need_weights=(not self.training and self.need_attn),
- )
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True)
-
- residual = x
- x = self.maybe_layer_norm(self.final_layer_norm, x, before=True)
- x = F.relu(self.fc1(x))
- x = self.relu_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(self.final_layer_norm, x, after=True)
- return x, attn
-
- def maybe_layer_norm(self, layer_norm, x, before=False, after=False):
- assert before ^ after
- if after ^ self.normalize_before:
- return layer_norm(x)
- else:
- return x
-
- def make_generation_fast_(self, need_attn=False, **kwargs):
- self.need_attn = need_attn
-
- def extra_repr(self):
- return (
- "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format(
- self.dropout_module.p,
- self.relu_dropout_module.p,
- self.input_dropout_module.p,
- self.normalize_before,
- )
- )
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(m.weight[padding_idx], 0)
- return m
-
-
-def Linear(in_features, out_features, bias=True):
- m = nn.Linear(in_features, out_features, bias)
- nn.init.xavier_uniform_(m.weight)
- if bias:
- nn.init.constant_(m.bias, 0.0)
- return m
-
-
-@register_model_architecture("lightconv", "lightconv")
-def base_architecture(args):
- args.encoder_embed_path = getattr(args, "encoder_embed_path", None)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048)
- args.encoder_layers = getattr(args, "encoder_layers", 7)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False)
- args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
- args.decoder_ffn_embed_dim = getattr(
- args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim
- )
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
- args.attention_dropout = getattr(args, "attention_dropout", 0.0)
- args.relu_dropout = getattr(args, "relu_dropout", 0.0)
- args.dropout = getattr(args, "dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", False
- )
- args.share_all_embeddings = getattr(args, "share_all_embeddings", False)
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
-
- args.encoder_conv_dim = getattr(args, "encoder_conv_dim", args.encoder_embed_dim)
- args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim)
-
- args.encoder_kernel_size_list = getattr(
- args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31]
- )
- args.decoder_kernel_size_list = getattr(
- args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31]
- )
- if len(args.encoder_kernel_size_list) == 1:
- args.encoder_kernel_size_list = (
- args.encoder_kernel_size_list * args.encoder_layers
- )
- if len(args.decoder_kernel_size_list) == 1:
- args.decoder_kernel_size_list = (
- args.decoder_kernel_size_list * args.decoder_layers
- )
- assert (
- len(args.encoder_kernel_size_list) == args.encoder_layers
- ), "encoder_kernel_size_list doesn't match encoder_layers"
- assert (
- len(args.decoder_kernel_size_list) == args.decoder_layers
- ), "decoder_kernel_size_list doesn't match decoder_layers"
- args.encoder_glu = getattr(args, "encoder_glu", True)
- args.decoder_glu = getattr(args, "decoder_glu", True)
- args.input_dropout = getattr(args, "input_dropout", 0.1)
- args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout)
-
-
-@register_model_architecture("lightconv", "lightconv_iwslt_de_en")
-def lightconv_iwslt_de_en(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
- args.encoder_layers = getattr(args, "encoder_layers", 7)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- args.weight_dropout = getattr(args, "weight_dropout", 0.1)
- args.encoder_glu = getattr(args, "encoder_glu", False)
- args.decoder_glu = getattr(args, "decoder_glu", False)
- args.input_dropout = getattr(args, "input_dropout", 0.0)
- base_architecture(args)
-
-
-@register_model_architecture("lightconv", "lightconv_wmt_en_de")
-def lightconv_wmt_en_de(args):
- base_architecture(args)
-
-
-@register_model_architecture("lightconv", "lightconv_wmt_en_de_big")
-def lightconv_wmt_en_de_big(args):
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
- args.dropout = getattr(args, "dropout", 0.3)
- base_architecture(args)
-
-
-@register_model_architecture("lightconv", "lightconv_wmt_en_fr_big")
-def lightconv_wmt_en_fr_big(args):
- args.dropout = getattr(args, "dropout", 0.1)
- lightconv_wmt_en_de_big(args)
-
-
-@register_model_architecture("lightconv", "lightconv_wmt_zh_en_big")
-def lightconv_wmt_zh_en_big(args):
- args.dropout = getattr(args, "dropout", 0.2)
- args.attention_dropout = getattr(args, "attention_dropout", 0.2)
- args.weight_dropout = getattr(args, "weight_dropout", 0.2)
- lightconv_wmt_en_de_big(args)
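-
-
-# Illustrative sketch (not part of fairseq): the data flow of one encoder layer
-# above in the post-norm setting (normalize_before=False), with an ordinary
-# depthwise nn.Conv1d standing in for LightweightConv/DynamicConv. All names and
-# sizes here are hypothetical.
-class _ToyConvEncoderLayer(nn.Module):
-    def __init__(self, embed_dim=512, conv_dim=512, kernel_size=7, ffn_dim=2048, glu=True):
-        super().__init__()
-        self.glu = glu
-        self.linear1 = nn.Linear(embed_dim, 2 * conv_dim if glu else conv_dim)
-        self.conv = nn.Conv1d(
-            conv_dim, conv_dim, kernel_size, padding=kernel_size // 2, groups=conv_dim
-        )  # depthwise stand-in; assumes an odd kernel size
-        self.linear2 = nn.Linear(conv_dim, embed_dim)
-        self.fc1 = nn.Linear(embed_dim, ffn_dim)
-        self.fc2 = nn.Linear(ffn_dim, embed_dim)
-        self.norms = nn.ModuleList([nn.LayerNorm(embed_dim) for _ in range(2)])
-
-    def forward(self, x):  # x: (seq_len, batch, embed_dim)
-        residual = x
-        y = self.linear1(x)
-        if self.glu:
-            y = F.glu(y, dim=-1)                             # halves the channels again
-        y = self.conv(y.permute(1, 2, 0)).permute(2, 0, 1)   # Conv1d expects (B, C, T)
-        x = self.norms[0](residual + self.linear2(y))        # conv block + residual + norm
-        residual = x
-        x = self.norms[1](residual + self.fc2(F.relu(self.fc1(x))))  # FFN block
-        return x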
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/adam.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/adam.py
deleted file mode 100644
index d3ae9e64a74774310adcd9968d2eae23368890f9..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/adam.py
+++ /dev/null
@@ -1,239 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import math
-from collections.abc import Collection
-from dataclasses import dataclass, field
-from typing import Any, List
-
-import torch
-import torch.distributed as dist
-import torch.optim
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim import FairseqOptimizer, register_optimizer
-from fairseq.optim.fused_adam import get_fused_adam_class
-from omegaconf import II, OmegaConf
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class FairseqAdamConfig(FairseqDataclass):
- adam_betas: Any = field(
- default=(0.9, 0.999), metadata={"help": "betas for Adam optimizer"}
- )
- adam_eps: float = field(
- default=1e-8, metadata={"help": "epsilon for Adam optimizer"}
- )
- weight_decay: float = field(default=0.0, metadata={"help": "weight decay"})
- use_old_adam: bool = field(
- default=False, metadata={"help": "Use fairseq.optim.adam.Adam"}
- )
- fp16_adam_stats: bool = field(
- default=False, metadata={"help": "use FP16 stats (with automatic scaling)"}
- )
- # TODO common vars below in parent
- tpu: bool = II("common.tpu")
- lr: List[float] = II("optimization.lr")
-
-
-@register_optimizer("adam", dataclass=FairseqAdamConfig)
-class FairseqAdam(FairseqOptimizer):
- """Adam optimizer for fairseq.
-
- Important note: this optimizer corresponds to the "AdamW" variant of
- Adam in its weight decay behavior. As such, it is most closely
- analogous to torch.optim.AdamW from PyTorch.
- """
-
- def __init__(self, cfg: FairseqAdamConfig, params):
- super().__init__(cfg)
- fused_adam_cls = get_fused_adam_class()
- use_fused_adam = (
- not getattr(cfg, "use_old_adam", False)
- and fused_adam_cls is not None
- and torch.cuda.is_available()
- )
- if getattr(cfg, "tpu", False):
- if self.cfg.fp16_adam_stats:
- raise NotImplementedError("--fp16-adam-stats is only supported on GPU")
- # on TPUs we use the Adam defined here, since it
- # automatically casts gradients to FP32
- self._optimizer = Adam(params, **self.optimizer_config)
- elif use_fused_adam:
- logger.info("using FusedAdam")
- self._optimizer = fused_adam_cls(
- params,
- use_fp16_stats=self.cfg.fp16_adam_stats,
- **self.optimizer_config
- )
- else:
- if self.cfg.fp16_adam_stats:
- raise NotImplementedError("--fp16-adam-stats is only supported with FusedAdamV1")
- self._optimizer = Adam(params, **self.optimizer_config)
-
- @property
- def optimizer_config(self):
- """
- Return a kwarg dictionary that will be used to override optimizer
- args stored in checkpoints. This allows us to load a checkpoint and
- resume training using a different set of optimizer args, e.g., with a
- different learning rate.
- """
- return {
- "lr": self.cfg.lr[0]
- if isinstance(self.cfg.lr, Collection)
- else self.cfg.lr,
- "betas": eval(self.cfg.adam_betas)
- if isinstance(self.cfg.adam_betas, str)
- else OmegaConf.to_container(self.cfg.adam_betas),
- "eps": self.cfg.adam_eps,
- "weight_decay": self.cfg.weight_decay,
- }
-
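For illustration (these values are assumed, not taken from any config in this diff): with `lr=[5e-4]`, `adam_betas="(0.9, 0.98)"`, `adam_eps=1e-6` and `weight_decay=0.01`, the kwarg dictionary returned by `optimizer_config` above evaluates to:

```python
{"lr": 5e-4, "betas": (0.9, 0.98), "eps": 1e-6, "weight_decay": 0.01}
```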
- def average_params(self):
- """Reduce Params is only used during BMUF distributed training."""
- state_dict = self.optimizer.state_dict()
- total_gpus = float(dist.get_world_size())
-
- for _, value in state_dict["state"].items():
- value["exp_avg"] /= total_gpus
- value["exp_avg_sq"] /= total_gpus
- dist.all_reduce(value["exp_avg"], op=dist.ReduceOp.SUM)
- dist.all_reduce(value["exp_avg_sq"], op=dist.ReduceOp.SUM)
-
-
-class Adam(torch.optim.Optimizer):
- r"""Implements Adam algorithm.
-
- This implementation is modified from torch.optim.Adam based on:
- `Fixed Weight Decay Regularization in Adam`
- (see https://arxiv.org/abs/1711.05101)
-
- It has been proposed in `Adam: A Method for Stochastic Optimization`_.
-
- Args:
- params (iterable): iterable of parameters to optimize or dicts defining
- parameter groups
- lr (float, optional): learning rate (default: 1e-3)
- betas (Tuple[float, float], optional): coefficients used for computing
- running averages of gradient and its square (default: (0.9, 0.999))
- eps (float, optional): term added to the denominator to improve
- numerical stability (default: 1e-8)
- weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
- amsgrad (boolean, optional): whether to use the AMSGrad variant of this
- algorithm from the paper `On the Convergence of Adam and Beyond`_
-
- .. _Adam\: A Method for Stochastic Optimization:
- https://arxiv.org/abs/1412.6980
- .. _On the Convergence of Adam and Beyond:
- https://openreview.net/forum?id=ryQu7f-RZ
- """
-
- def __init__(
- self,
- params,
- lr=1e-3,
- betas=(0.9, 0.999),
- eps=1e-8,
- weight_decay=0,
- amsgrad=False,
- ):
- defaults = dict(
- lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, amsgrad=amsgrad
- )
- super(Adam, self).__init__(params, defaults)
-
- @property
- def supports_memory_efficient_fp16(self):
- return True
-
- @property
- def supports_flat_params(self):
- return True
-
- def step(self, closure=None):
- """Performs a single optimization step.
-
- Args:
- closure (callable, optional): A closure that reevaluates the model
- and returns the loss.
- """
- loss = None
- if closure is not None:
- loss = closure()
-
- for group in self.param_groups:
- for p in group["params"]:
- if p.grad is None:
- continue
- grad = p.grad.data
- if grad.dtype in {torch.float16, torch.bfloat16}:
- grad = grad.float()
- if grad.is_sparse:
- raise RuntimeError(
- "Adam does not support sparse gradients, please consider SparseAdam instead"
- )
- amsgrad = group.get("amsgrad", False)
-
- p_data_fp32 = p.data
- if p.data.dtype in {torch.float16, torch.bfloat16}:
- p_data_fp32 = p_data_fp32.float()
-
- state = self.state[p]
-
- # State initialization
- if len(state) == 0:
- state["step"] = 0
- # Exponential moving average of gradient values
- state["exp_avg"] = torch.zeros_like(p_data_fp32)
- # Exponential moving average of squared gradient values
- state["exp_avg_sq"] = torch.zeros_like(p_data_fp32)
- if amsgrad:
- # Maintains max of all exp. moving avg. of sq. grad. values
- state["max_exp_avg_sq"] = torch.zeros_like(p_data_fp32)
- else:
- state["exp_avg"] = state["exp_avg"].to(p_data_fp32)
- state["exp_avg_sq"] = state["exp_avg_sq"].to(p_data_fp32)
- if amsgrad:
- state["max_exp_avg_sq"] = state["max_exp_avg_sq"].to(
- p_data_fp32
- )
-
- exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
- if amsgrad:
- max_exp_avg_sq = state["max_exp_avg_sq"]
- beta1, beta2 = group["betas"]
-
- state["step"] += 1
-
- # Decay the first and second moment running average coefficient
- exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
- exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
- if amsgrad:
- # Maintains the maximum of all 2nd moment running avg. till now
- torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
- # Use the max. for normalizing running avg. of gradient
- denom = max_exp_avg_sq.sqrt().add_(group["eps"])
- else:
- denom = exp_avg_sq.sqrt().add_(group["eps"])
-
- bias_correction1 = 1 - beta1 ** state["step"]
- bias_correction2 = 1 - beta2 ** state["step"]
- step_size = group["lr"] * math.sqrt(bias_correction2) / bias_correction1
-
- if group["weight_decay"] != 0:
- p_data_fp32.add_(
- p_data_fp32, alpha=-group["weight_decay"] * group["lr"]
- )
-
- p_data_fp32.addcdiv_(exp_avg, denom, value=-step_size)
-
- if p.data.dtype in {torch.float16, torch.bfloat16}:
- p.data.copy_(p_data_fp32)
-
- return loss
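A minimal sketch in plain PyTorch (not fairseq code; all values are illustrative) of the decoupled, AdamW-style weight decay applied in `step()` above, contrasted with classic L2 regularization:

```python
import torch

p = torch.randn(4)     # a parameter
grad = torch.randn(4)  # its gradient
lr, wd = 1e-3, 0.01

# Classic L2 regularization folds the decay into the gradient, so it is rescaled
# by Adam's adaptive denominator together with the rest of the gradient:
grad_l2 = grad + wd * p

# The decoupled variant used in step() above shrinks the parameter directly,
# independently of the adaptive update (cf. the `weight_decay` branch in step()):
p.add_(p, alpha=-wd * lr)
```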
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/fast_noisy_channel/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/fast_noisy_channel/__init__.py
deleted file mode 100644
index 9b248c3a24e12ad3da885a7f328c714942de2e6b..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/fast_noisy_channel/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import noisy_channel_translation # noqa
-from . import noisy_channel_sequence_generator # noqa
-from . import noisy_channel_beam_search # noqa
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_options.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_options.py
deleted file mode 100644
index de91939e6635bdf33c9dc330116be07d9e8be6a2..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/noisychannel/rerank_options.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import options
-
-
-def get_reranking_parser(default_task="translation"):
- parser = options.get_parser("Generation and reranking", default_task)
- add_reranking_args(parser)
- return parser
-
-
-def get_tuning_parser(default_task="translation"):
- parser = options.get_parser("Reranking tuning", default_task)
- add_reranking_args(parser)
- add_tuning_args(parser)
- return parser
-
-
-def add_reranking_args(parser):
- group = parser.add_argument_group("Reranking")
- # fmt: off
- group.add_argument('--score-model1', '-s1', type=str, metavar='FILE', required=True,
- help='path to first model or ensemble of models for rescoring')
- group.add_argument('--score-model2', '-s2', type=str, metavar='FILE', required=False,
- help='path to second model or ensemble of models for rescoring')
- group.add_argument('--num-rescore', '-n', type=int, metavar='N', default=10,
- help='the number of candidate hypothesis to rescore')
- group.add_argument('-bz', '--batch-size', type=int, metavar='N', default=128,
- help='batch size for generating the nbest list')
- group.add_argument('--gen-subset', default='test', metavar='SET', choices=['test', 'train', 'valid'],
- help='data subset to generate (train, valid, test)')
- group.add_argument('--gen-model', default=None, metavar='FILE',
- help='the model to generate translations')
- group.add_argument('-b1', '--backwards1', action='store_true',
- help='whether or not the first model group is backwards')
- group.add_argument('-b2', '--backwards2', action='store_true',
- help='whether or not the second model group is backwards')
- group.add_argument('-a', '--weight1', default=1, nargs='+', type=float,
- help='the weight(s) of the first model')
- group.add_argument('-b', '--weight2', default=1, nargs='+', type=float,
- help='the weight(s) of the second model, or the gen model if using nbest from interactive.py')
- group.add_argument('-c', '--weight3', default=1, nargs='+', type=float,
- help='the weight(s) of the third model')
-
- # lm arguments
- group.add_argument('-lm', '--language-model', default=None, metavar='FILE',
- help='language model for target language to rescore translations')
- group.add_argument('--lm-dict', default=None, metavar='FILE',
- help='the dict of the language model for the target language')
- group.add_argument('--lm-name', default=None,
- help='the name of the language model for the target language')
- group.add_argument('--lm-bpe-code', default=None, metavar='FILE',
- help='the bpe code for the language model for the target language')
- group.add_argument('--data-dir-name', default=None,
- help='name of data directory')
- group.add_argument('--lenpen', default=1, nargs='+', type=float,
- help='length penalty: <1.0 favors shorter, >1.0 favors longer sentences')
- group.add_argument('--score-dict-dir', default=None,
- help='the directory with dictionaries for the scoring models')
- group.add_argument('--right-to-left1', action='store_true',
- help='whether the first model group is a right to left model')
- group.add_argument('--right-to-left2', action='store_true',
- help='whether the second model group is a right to left model')
- group.add_argument('--post-process', '--remove-bpe', default='@@ ',
- help='the bpe symbol, used for the bitext and LM')
- group.add_argument('--prefix-len', default=None, type=int,
- help='the length of the target prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--sampling', action='store_true',
- help='use sampling instead of beam search for generating n best list')
- group.add_argument('--diff-bpe', action='store_true',
- help='bpe for rescoring and nbest list not the same')
- group.add_argument('--rescore-bpe-code', default=None,
- help='bpe code for rescoring models')
- group.add_argument('--nbest-list', default=None,
- help='use predefined nbest list in interactive.py format')
- group.add_argument('--write-hypos', default=None,
- help='filename prefix to write hypos to')
- group.add_argument('--ref-translation', default=None,
- help='reference translation to use with nbest list from interactive.py')
- group.add_argument('--backwards-score-dict-dir', default=None,
- help='the directory with dictionaries for the backwards model,'
- 'if None then it is assumed the fw and backwards models share dictionaries')
-
- # extra scaling args
- group.add_argument('--gen-model-name', default=None,
- help='the name of the models that generated the nbest list')
- group.add_argument('--model1-name', default=None,
- help='the name of the set for model1 group ')
- group.add_argument('--model2-name', default=None,
- help='the name of the set for model2 group')
- group.add_argument('--shard-id', default=0, type=int,
- help='the id of the shard to generate')
- group.add_argument('--num-shards', default=1, type=int,
- help='the number of shards to generate across')
- group.add_argument('--all-shards', action='store_true',
- help='use all shards')
- group.add_argument('--target-prefix-frac', default=None, type=float,
- help='the fraction of the target prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--source-prefix-frac', default=None, type=float,
- help='the fraction of the source prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--normalize', action='store_true',
- help='whether to normalize by src and target len')
- # fmt: on
- return group
-
-
-def add_tuning_args(parser):
- group = parser.add_argument_group("Tuning")
-
- group.add_argument(
- "--lower-bound",
- default=[-0.7],
- nargs="+",
- type=float,
- help="lower bound of search space",
- )
- group.add_argument(
- "--upper-bound",
- default=[3],
- nargs="+",
- type=float,
- help="upper bound of search space",
- )
- group.add_argument(
- "--tune-param",
- default=["lenpen"],
- nargs="+",
- choices=["lenpen", "weight1", "weight2", "weight3"],
- help="the parameter(s) to tune",
- )
- group.add_argument(
- "--tune-subset",
- default="valid",
- choices=["valid", "test", "train"],
- help="the subset to tune on ",
- )
- group.add_argument(
- "--num-trials",
- default=1000,
- type=int,
- help="number of trials to do for random search",
- )
- group.add_argument(
- "--share-weights", action="store_true", help="share weight2 and weight 3"
- )
- return group
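A hypothetical usage sketch (paths and values are placeholders, not from this repo) showing how the parsers above are typically consumed together with fairseq's argument parsing:

```python
from fairseq import options
from examples.noisychannel import rerank_options

# Build the tuning parser defined above and parse a small, illustrative argument set.
parser = rerank_options.get_tuning_parser(default_task="translation")
args = options.parse_args_and_arch(parser, input_args=[
    "data-bin/wmt19.de-en",                      # placeholder data directory
    "--score-model1", "checkpoints/forward.pt",  # placeholder rescoring model
    "--tune-param", "lenpen", "weight1",
    "--num-trials", "100",
])
print(args.tune_param, args.num_trials)
```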
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/rxf/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/rxf/README.md
deleted file mode 100644
index 22a1cc47df23c7e0ebbf0ad805031478d1b4a95e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/rxf/README.md
+++ /dev/null
@@ -1,52 +0,0 @@
-[Better Fine-Tuning by Reducing Representational Collapse](https://arxiv.org/abs/2008.03156)
-=====================
-This repo contains the code to replicate all experiments from the _Better Fine-Tuning by Reducing Representational Collapse_ paper excluding the probing results.
-
-The R3F sentence prediction criterion is registered as `sentence_prediction_r3f` while the label smoothing version of it is implemented as `label_smoothed_cross_entropy_r3f`. The R4F version of the sentence prediction criterion can be achieved by applying spectral norm to the classification head via the `--spectral-norm-classification-head` parameter.
-
-## Hyper-parameters
-Our methods introduce three new hyper-parameters: `--eps`, which sets the standard deviation or range of the noise distribution we sample from; `--r3f-lambda`, which controls how the original task loss and the noisy KL loss are combined; and `--noise-type`, which selects the parametric noise distribution ('normal' or 'uniform'). A minimal sketch of the resulting objective is shown below.
-
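The sketch below is not the fairseq criterion itself (names, shapes and the exact KL formulation are illustrative); it only shows how `--r3f-lambda` combines the task loss with a symmetric KL between predictions on clean and noise-perturbed inputs:

```
import torch.nn.functional as F

def r3f_loss(logits_clean, logits_noised, targets, r3f_lambda=0.7):
    # Task loss on the clean forward pass.
    task_loss = F.cross_entropy(logits_clean, targets)
    # Symmetric KL between clean and noise-perturbed predictions.
    p = F.log_softmax(logits_clean, dim=-1)
    q = F.log_softmax(logits_noised, dim=-1)
    sym_kl = (F.kl_div(p, q, reduction="batchmean", log_target=True)
              + F.kl_div(q, p, reduction="batchmean", log_target=True))
    return task_loss + r3f_lambda * sym_kl
```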
-For example, to run R3F on RTE from GLUE:
-
-```
-TOTAL_NUM_UPDATES=3120
-WARMUP_UPDATES=187
-LR=1e-05
-NUM_CLASSES=2
-MAX_SENTENCES=8 # Batch size.
-ROBERTA_PATH=/path/to/roberta/model.pt
-
-CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin \
- --restore-file $ROBERTA_PATH \
- --max-positions 512 \
- --max-sentences $MAX_SENTENCES \
- --max-tokens 4400 \
- --task sentence_prediction \
- --reset-optimizer --reset-dataloader --reset-meters \
- --required-batch-size-multiple 1 \
- --init-token 0 --separator-token 2 \
- --arch roberta_large \
- --criterion sentence_prediction_r3f \
- --num-classes $NUM_CLASSES \
- --dropout 0.1 --attention-dropout 0.1 \
- --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
- --clip-norm 0.0 \
- --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
- --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
- --max-epoch 10 \
- --find-unused-parameters \
- --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
- --noise-type uniform --r3f-lambda 0.7 \
- --user-dir examples/rxf/rxf_src
-```
-
-## Citation
-```bibtex
-@article{aghajanyan2020better,
- title={Better Fine-Tuning by Reducing Representational Collapse},
- author={Aghajanyan, Armen and Shrivastava, Akshat and Gupta, Anchit and Goyal, Naman and Zettlemoyer, Luke and Gupta, Sonal},
- journal={arXiv preprint arXiv:2008.03156},
- year={2020}
-}
-```
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py
deleted file mode 100644
index a30254604311a488a1d4959f941051890ed32b2e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-from pathlib import Path
-from collections import defaultdict
-from typing import List, Dict, Tuple
-
-import pandas as pd
-import numpy as np
-import torchaudio
-from tqdm import tqdm
-
-from examples.speech_to_text.data_utils import load_df_from_tsv, save_df_to_tsv
-
-
-log = logging.getLogger(__name__)
-
-SPLITS = ["train", "dev", "test"]
-
-
-def get_top_n(
- root: Path, n_speakers: int = 10, min_n_tokens: int = 5
-) -> pd.DataFrame:
- df = load_df_from_tsv(root / "validated.tsv")
- df["n_tokens"] = [len(s.split()) for s in df["sentence"]]
- df = df[df["n_tokens"] >= min_n_tokens]
- df["n_frames"] = [
- torchaudio.info((root / "clips" / p).as_posix()).num_frames
- for p in tqdm(df["path"])
- ]
- df["id"] = [Path(p).stem for p in df["path"]]
- total_duration_ms = df.groupby("client_id")["n_frames"].agg(["sum"])
- total_duration_ms = total_duration_ms.sort_values("sum", ascending=False)
-
- top_n_total_duration_ms = total_duration_ms.head(n_speakers)
- top_n_client_ids = set(top_n_total_duration_ms.index.tolist())
- df_top_n = df[df["client_id"].isin(top_n_client_ids)]
- return df_top_n
-
-
-def get_splits(
- df, train_split_ratio=0.99, speaker_in_all_splits=False, rand_seed=0
-) -> Tuple[Dict[str, str], List[str]]:
- np.random.seed(rand_seed)
- dev_split_ratio = (1. - train_split_ratio) / 3
- grouped = list(df.groupby("client_id"))
- id_to_split = {}
- for _, cur_df in tqdm(grouped):
- cur_n_examples = len(cur_df)
- if speaker_in_all_splits and cur_n_examples < 3:
- continue
- cur_n_train = int(cur_n_examples * train_split_ratio)
- cur_n_dev = int(cur_n_examples * dev_split_ratio)
- cur_n_test = cur_n_examples - cur_n_dev - cur_n_train
- if speaker_in_all_splits and cur_n_dev * cur_n_test == 0:
- cur_n_dev, cur_n_test = 1, 1
- cur_n_train = cur_n_examples - cur_n_dev - cur_n_test
- cur_indices = cur_df.index.tolist()
- cur_shuffled_indices = np.random.permutation(cur_n_examples)
- cur_shuffled_indices = [cur_indices[i] for i in cur_shuffled_indices]
- cur_indices_by_split = {
- "train": cur_shuffled_indices[:cur_n_train],
- "dev": cur_shuffled_indices[cur_n_train: cur_n_train + cur_n_dev],
- "test": cur_shuffled_indices[cur_n_train + cur_n_dev:]
- }
- for split in SPLITS:
- for i in cur_indices_by_split[split]:
- id_ = df["id"].loc[i]
- id_to_split[id_] = split
- return id_to_split, sorted(df["client_id"].unique())
-
-
-def convert_to_wav(root: Path, filenames: List[str], target_sr=16_000):
- out_root = root / "wav"
- out_root.mkdir(exist_ok=True, parents=True)
- print("Converting to WAV...")
- for n in tqdm(filenames):
- in_path = (root / "clips" / n).as_posix()
- waveform, sr = torchaudio.load(in_path)
- converted, converted_sr = torchaudio.sox_effects.apply_effects_tensor(
- waveform, sr, [["rate", str(target_sr)], ["channels", "1"]]
- )
- out_path = (out_root / Path(n).with_suffix(".wav").name).as_posix()
- torchaudio.save(out_path, converted, converted_sr, encoding="PCM_S",
- bits_per_sample=16)
-
-
-def process(args):
- data_root = Path(args.data_root).absolute() / args.lang
-
- # Generate TSV manifest
- print("Generating manifest...")
-
- df_top_n = get_top_n(data_root)
- id_to_split, speakers = get_splits(df_top_n)
-
- if args.convert_to_wav:
- convert_to_wav(data_root, df_top_n["path"].tolist())
-
- manifest_by_split = {split: defaultdict(list) for split in SPLITS}
- for sample in tqdm(df_top_n.to_dict(orient="index").values()):
- sample_id = sample["id"]
- split = id_to_split[sample_id]
- manifest_by_split[split]["id"].append(sample_id)
- if args.convert_to_wav:
- audio_path = data_root / "wav" / f"{sample_id}.wav"
- else:
- audio_path = data_root / "clips" / f"{sample_id}.mp3"
- manifest_by_split[split]["audio"].append(audio_path.as_posix())
- manifest_by_split[split]["n_frames"].append(sample["n_frames"])
- manifest_by_split[split]["tgt_text"].append(sample["sentence"])
- manifest_by_split[split]["speaker"].append(sample["client_id"])
- manifest_by_split[split]["src_text"].append(sample["sentence"])
-
- output_root = Path(args.output_manifest_root).absolute()
- output_root.mkdir(parents=True, exist_ok=True)
- for split in SPLITS:
- save_df_to_tsv(
- pd.DataFrame.from_dict(manifest_by_split[split]),
- output_root / f"{split}.audio.tsv"
- )
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("--data-root", "-d", required=True, type=str)
- parser.add_argument("--output-manifest-root", "-m", required=True, type=str)
- parser.add_argument("--lang", "-l", required=True, type=str)
- parser.add_argument("--convert-to-wav", action="store_true")
- args = parser.parse_args()
-
- process(args)
-
-
-if __name__ == "__main__":
- main()
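For reference, each per-split TSV written by `process()` above has the columns shown below; this is an illustrative sketch with placeholder values, not output from a real run:

```python
import pandas as pd

manifest = pd.DataFrame({
    "id":       ["common_voice_en_000001"],                      # clip stem
    "audio":    ["/data/cv/en/wav/common_voice_en_000001.wav"],  # or .mp3 without --convert-to-wav
    "n_frames": [123456],
    "tgt_text": ["hello world"],
    "speaker":  ["client_0a1b2c"],
    "src_text": ["hello world"],
})
print(manifest.to_csv(sep="\t", index=False))
```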
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/wav2vec_featurize.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/wav2vec_featurize.py
deleted file mode 100644
index 588268b7080cbd3400ac144604b2d75cef2876dd..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/wav2vec_featurize.py
+++ /dev/null
@@ -1,249 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Helper script to pre-compute embeddings for a flashlight (previously called wav2letter++) dataset
-"""
-
-import argparse
-import glob
-import os
-from shutil import copy
-
-import h5py
-import numpy as np
-import soundfile as sf
-import torch
-import tqdm
-import fairseq
-from torch import nn
-
-
-def read_audio(fname):
- """ Load an audio file and return PCM along with the sample rate """
-
- wav, sr = sf.read(fname)
- assert sr == 16e3
-
- return wav, 16e3
-
-
-class PretrainedWav2VecModel(nn.Module):
- def __init__(self, fname):
- super().__init__()
-
- model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([fname])
- model = model[0]
- model.eval()
-
- self.model = model
-
- def forward(self, x):
- with torch.no_grad():
- z = self.model.feature_extractor(x)
- if isinstance(z, tuple):
- z = z[0]
- c = self.model.feature_aggregator(z)
- return z, c
-
-
-class EmbeddingWriterConfig(argparse.ArgumentParser):
- def __init__(self):
- super().__init__("Pre-compute embeddings for flashlight datasets")
-
- kwargs = {"action": "store", "type": str, "required": True}
-
- self.add_argument("--input", "-i", help="Input Directory", **kwargs)
- self.add_argument("--output", "-o", help="Output Directory", **kwargs)
- self.add_argument("--model", help="Path to model checkpoint", **kwargs)
- self.add_argument("--split", help="Dataset Splits", nargs="+", **kwargs)
- self.add_argument(
- "--ext", default="wav", required=False, help="Audio file extension"
- )
-
- self.add_argument(
- "--no-copy-labels",
- action="store_true",
- help="Do not copy label files. Useful for large datasets, use --targetdir in flashlight then.",
- )
- self.add_argument(
- "--use-feat",
- action="store_true",
- help="Use the feature vector ('z') instead of context vector ('c') for features",
- )
- self.add_argument("--gpu", help="GPU to use", default=0, type=int)
-
-
-class Prediction:
- """ Lightweight wrapper around a fairspeech embedding model """
-
- def __init__(self, fname, gpu=0):
- self.gpu = gpu
- self.model = PretrainedWav2VecModel(fname).cuda(gpu)
-
- def __call__(self, x):
- x = torch.from_numpy(x).float().cuda(self.gpu)
- with torch.no_grad():
- z, c = self.model(x.unsqueeze(0))
-
- return z.squeeze(0).cpu().numpy(), c.squeeze(0).cpu().numpy()
-
-
-class H5Writer:
- """ Write features as hdf5 file in flashlight compatible format """
-
- def __init__(self, fname):
- self.fname = fname
- os.makedirs(os.path.dirname(self.fname), exist_ok=True)
-
- def write(self, data):
- channel, T = data.shape
-
- with h5py.File(self.fname, "w") as out_ds:
- data = data.T.flatten()
- out_ds["features"] = data
- out_ds["info"] = np.array([16e3 // 160, T, channel])
-
-
-class EmbeddingDatasetWriter(object):
- """Given a model and a flashlight dataset, pre-compute and store embeddings
-
- Args:
- input_root, str :
- Path to the flashlight dataset
- output_root, str :
- Desired output directory. Will be created if non-existent
- split, str :
- Dataset split
- """
-
- def __init__(
- self,
- input_root,
- output_root,
- split,
- model_fname,
- extension="wav",
- gpu=0,
- verbose=False,
- use_feat=False,
- ):
-
- assert os.path.exists(model_fname)
-
- self.model_fname = model_fname
- self.model = Prediction(self.model_fname, gpu)
-
- self.input_root = input_root
- self.output_root = output_root
- self.split = split
- self.verbose = verbose
- self.extension = extension
- self.use_feat = use_feat
-
- assert os.path.exists(self.input_path), "Input path '{}' does not exist".format(
- self.input_path
- )
-
- def _progress(self, iterable, **kwargs):
- if self.verbose:
- return tqdm.tqdm(iterable, **kwargs)
- return iterable
-
- def require_output_path(self, fname=None):
- path = self.get_output_path(fname)
- os.makedirs(path, exist_ok=True)
-
- @property
- def input_path(self):
- return self.get_input_path()
-
- @property
- def output_path(self):
- return self.get_output_path()
-
- def get_input_path(self, fname=None):
- if fname is None:
- return os.path.join(self.input_root, self.split)
- return os.path.join(self.get_input_path(), fname)
-
- def get_output_path(self, fname=None):
- if fname is None:
- return os.path.join(self.output_root, self.split)
- return os.path.join(self.get_output_path(), fname)
-
- def copy_labels(self):
- self.require_output_path()
-
- labels = list(
- filter(
- lambda x: self.extension not in x, glob.glob(self.get_input_path("*"))
- )
- )
- for fname in tqdm.tqdm(labels):
- copy(fname, self.output_path)
-
- @property
- def input_fnames(self):
- return sorted(glob.glob(self.get_input_path("*.{}".format(self.extension))))
-
- def __len__(self):
- return len(self.input_fnames)
-
- def write_features(self):
-
- paths = self.input_fnames
-
- fnames_context = map(
- lambda x: os.path.join(
- self.output_path, x.replace("." + self.extension, ".h5context")
- ),
- map(os.path.basename, paths),
- )
-
- for name, target_fname in self._progress(
- zip(paths, fnames_context), total=len(self)
- ):
- wav, sr = read_audio(name)
- z, c = self.model(wav)
- feat = z if self.use_feat else c
- writer = H5Writer(target_fname)
- writer.write(feat)
-
- def __repr__(self):
-
- return "EmbeddingDatasetWriter ({n_files} files)\n\tinput:\t{input_root}\n\toutput:\t{output_root}\n\tsplit:\t{split})".format(
- n_files=len(self), **self.__dict__
- )
-
-
-if __name__ == "__main__":
-
- args = EmbeddingWriterConfig().parse_args()
-
- for split in args.split:
-
- writer = EmbeddingDatasetWriter(
- input_root=args.input,
- output_root=args.output,
- split=split,
- model_fname=args.model,
- gpu=args.gpu,
- extension=args.ext,
- use_feat=args.use_feat,
- )
-
- print(writer)
- writer.require_output_path()
-
- print("Writing Features...")
- writer.write_features()
- print("Done.")
-
- if not args.no_copy_labels:
- print("Copying label data...")
- writer.copy_labels()
- print("Done.")
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_sparse_multihead_attention.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_sparse_multihead_attention.py
deleted file mode 100644
index 3e32b25a7fb1e12295b84d0c65064f8e42b7bdd3..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_sparse_multihead_attention.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-
-import torch
-from fairseq.modules.sparse_multihead_attention import SparseMultiheadAttention
-
-
-class TestSparseMultiheadAttention(unittest.TestCase):
- def test_sparse_multihead_attention(self):
- attn_weights = torch.randn(1, 8, 8)
- bidirectional_sparse_mask = torch.tensor(
- [
- [0, 0, 0, 0, 0, float("-inf"), float("-inf"), 0],
- [0, 0, 0, 0, 0, float("-inf"), float("-inf"), 0],
- [0, 0, 0, 0, 0, float("-inf"), float("-inf"), 0],
- [0, 0, 0, 0, 0, float("-inf"), float("-inf"), 0],
- [float("-inf"), float("-inf"), float("-inf"), 0, 0, 0, 0, 0],
- [float("-inf"), float("-inf"), float("-inf"), 0, 0, 0, 0, 0],
- [float("-inf"), float("-inf"), float("-inf"), 0, 0, 0, 0, 0],
- [float("-inf"), float("-inf"), float("-inf"), 0, 0, 0, 0, 0],
- ]
- )
-
- bidirectional_attention = SparseMultiheadAttention(
- 16, 1, stride=4, expressivity=1, is_bidirectional=True
- )
- bidirectional_attention_sparse_mask = (
- bidirectional_attention.buffered_sparse_mask(attn_weights, 8, 8)
- )
- torch.all(
- torch.eq(bidirectional_attention_sparse_mask, bidirectional_sparse_mask)
- )
-
- sparse_mask = torch.tensor(
- [
- [
- 0,
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- ],
- [
- 0,
- 0,
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- ],
- [
- 0,
- 0,
- 0,
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- ],
- [
- 0,
- 0,
- 0,
- 0,
- float("-inf"),
- float("-inf"),
- float("-inf"),
- float("-inf"),
- ],
- [0, 0, 0, 0, 0, float("-inf"), float("-inf"), float("-inf")],
- [
- float("-inf"),
- float("-inf"),
- float("-inf"),
- 0,
- 0,
- 0,
- float("-inf"),
- float("-inf"),
- ],
- [
- float("-inf"),
- float("-inf"),
- float("-inf"),
- 0,
- 0,
- 0,
- 0,
- float("-inf"),
- ],
- [float("-inf"), float("-inf"), float("-inf"), 0, 0, 0, 0, 0],
- ]
- )
-
- attention = SparseMultiheadAttention(
- 16, 1, stride=4, expressivity=1, is_bidirectional=False
- )
- attention_sparse_mask = attention.buffered_sparse_mask(attn_weights, 8, 8)
-
- torch.all(torch.eq(attention_sparse_mask, sparse_mask))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/README.md b/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/README.md
deleted file mode 100644
index c6fe5f5d7af0b2e0c08c896ef5964ff2932305e8..0000000000000000000000000000000000000000
--- a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/README.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-title: LLMRiddles-ChatGPT-CN
-emoji: 🚀
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 4.1.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-python_version: 3.8
----
-
-# LLM Riddles
-
-
-
-English | [简体中文](https://github.com/opendilab/LLMRiddles/blob/main/README_zh.md)
-
-## :thinking: What's This
-Welcome to LLM Riddles! This is a game of wits and courage played against language models. In each level, you need to construct prompts that get the language model to produce an answer meeting that level's requirements. Use your ingenuity and any approach you can think of to make the model output exactly what the answer requires.
-
-## :space_invader: How to Play
-We provide an online version for players to directly access and try out.
-- [Hugging Face][ChatGPT + English(w/o key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN)
-- [Hugging Face][ChatGPT + Chinese(w/o key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN)
-- [Hugging Face][ChatGLM + Chinese(w/ key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGLMCN)
-- [OpenXLab][ChatGPT + Chinese(w/o key)](https://openxlab.org.cn/apps/detail/OpenDILab/LLMRiddlesChatGPTCN)
-- [OpenXLab][ChatGLM + Chinese(w/ key)](https://openxlab.org.cn/apps/detail/OpenDILab/LLMRiddlesChatGLMCN)
-- [OpenXLab][ChatGLM + English(w/ key)](https://openxlab.org.cn/apps/detail/OpenDILab/LLMRiddlesChatGLMEN)
-- [Private Server][Mistral + English(w/ key)](https://d9b451a97791dd8ef3.gradio.live)
-- [Private Server][ChatGPT + Chinese(w/ key)](http://llmriddles.opendilab.net/)
-
-Local deployment can be done in the following ways:
-## Installation
-### Use ChatGPT / ChatGLM API
-```shell
-pip3 install -r requirements.txt
-```
-### Deploy Mistral-7B-Instruct-v0.1 for local inference
-```shell
-pip3 install -r requirements-dev.txt
-```
-## Launch
-### ChatGPT + Chinese
-```shell
-QUESTION_LANG=cn QUESTION_LLM='chatgpt' QUESTION_LLM_KEY= python3 -u app.py
-```
-### ChatGPT + English
-```shell
-QUESTION_LANG=en QUESTION_LLM='chatgpt' QUESTION_LLM_KEY= python3 -u app.py
-```
-### ChatGLM + Chinese
-```shell
-QUESTION_LANG=cn QUESTION_LLM='chatglm' QUESTION_LLM_KEY= python3 -u app.py
-```
-### ChatGLM + English
-```shell
-QUESTION_LANG=en QUESTION_LLM='chatglm' QUESTION_LLM_KEY= python3 -u app.py
-```
-### Mistral-7B-Instruct-v0.1 + English
-```shell
-QUESTION_LANG=en QUESTION_LLM='mistral-7b' python3 -u app.py
-```
-## :technologist: Why Doing This
-
-Our goal is to use this game to give participants a deeper understanding of the fascinating aspects of prompt engineering and natural language processing. Playing it shows how to construct prompts cleverly and how to use them to trigger surprising responses from AI systems, while also helping players better appreciate the power of deep learning and natural language processing technologies.
-
-## :raising_hand: How to Submit a Custom Level
-If you have interesting questions or ideas, you are welcome to submit your own levels. You can [Initiate a Pull Request](https://github.com/opendilab/LLMRiddles/compare) to send them to us, and we will include them as levels after review.
-The submission should cover the following points:
- Pull Request title, for example: feature(username): Chapter X Level Design
- The ID you would like to be credited as
- The modified chapter question files
- The corresponding modification of \__init__.py
-
-For a complete example, please refer to: [Submit your own level design](https://github.com/opendilab/LLMRiddles/pull/6)
-
-## :writing_hand: Roadmap
-
-- [x] Support custom levels
-- [x] Online trial link
-- [x] Hugging Face Space link
-- [x] Support Mistral-7B(English version)
-- [x] Support ChatGLM(Chinese and English version)
-- [ ] Support Baichuan2-7B(Chinese version)
-- [ ] Support LLaMA2-7B(English version)
-- [ ] LLM inference speed optimization
-- [ ] More question levels and solution blogs
-
-## :speech_balloon: Feedback and Contribution
-- [Start an Issue](https://github.com/opendilab/CodeMorpheus/issues/new/choose) on GitHub
-- Contact us by email (opendilab@pjlab.org.cn)
-- Discuss on OpenDILab's WeChat group (i.e. add us on WeChat: ding314assist)
-
-
-## :star2: Special Thanks
-- Thanks to [Haoqiang Fan](https://www.zhihu.com/people/haoqiang-fan) for his original idea and title, which provided inspiration and motivation for the development and expansion of this project.
-- Thanks to [HuggingFace](https://huggingface.co) for supporting and assisting the game.
-- Thanks to [ChatGLM](https://chatglm.cn) for supporting and assisting the game, especially sufficient inference token support.
-- Thanks to [LLM Riddles contributors](https://github.com/opendilab/LLMRiddles/graphs/contributors) for their implementation and support.
-
-## :label: License
-All code within this repository is under [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
-
-
- We built Text2Video-Zero, the first zero-shot text-to-video synthesis diffusion framework, which enables low-cost yet high-quality and consistent video generation using only pre-trained text-to-image diffusion models, without any training on videos or optimization!
- Text2Video-Zero also naturally supports cool extensions of pre-trained text-to-image models such as Instruct Pix2Pix, ControlNet and DreamBooth, on top of which we present Video Instruct Pix2Pix, Pose Conditional, Edge Conditional, and Edge Conditional + DreamBooth Specialized applications.
- We hope our Text2Video-Zero will further democratize AI and empower everyone's creativity by unleashing the zero-shot video generation and editing capacity of these amazing text-to-image models, and encourage future research!
-
-
- """)
diff --git a/spaces/bioriAsaeru/text-to-voice/Baby Day Out Movie In Punjabi Download TOP.md b/spaces/bioriAsaeru/text-to-voice/Baby Day Out Movie In Punjabi Download TOP.md
deleted file mode 100644
index d6c271c9ad19e437c4531f4529ea74e8be23594f..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Baby Day Out Movie In Punjabi Download TOP.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
We have big surprise for you guys. SONY PICTURES, RELOADED does comes again with an HD version of Baby Day Out. And this time its in SUBTITLES. This baby day out 720p is having a new storyline and you can watch the movie in here. Its being a funny and entertaining movie. Please share this download links with your friends in advance because thats the only way we can provide you the best and safe files. New story, Review, Trailer, And trailer at the same time. For SONY PICTURES, RELOADED we are very happy to announce this special offer to our visitors. We are going to release all our movies in HD in subtitles for free in one week and we are expecting you guys share this in your circles. So please download the movie asap from our server. Enjoy the HD!!!!!!!!!
-
Cloud music services are designed to support new generation of music lovers. Internet users no longer need to buy CDs, rip them to their computers and use plug-ins to enjoy their music. They can now stream and download music in a format that can easily be moved between computer and mobile devices. Cloud music services are versatile as well, meaning they are able to work with users' existing plug-ins, such as iTunes and Winamp.
The Baby Day Out comic is a sentimental buddy cop tale that follows a nanny named Nadia and her awkward charge, Baby Bink, who is placed in her care during a tryst at a party thrown by a couple who've wanted a child for years. Nadia and Baby Bink soon form a bond as Nadia helps Baby Bink adjust to a world of material excess and luxurious opulence, going so far as to create for him a hideaway in the woods which Baby Bink gladly shares with Nadia. The Baby Day Out film culminates in a thrilling climax involving loads of high-flying adventure and Baby Bink's discovery of his true purpose in life.
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Download the movie Horror Night online for free no registration no ads no hassle.md b/spaces/bioriAsaeru/text-to-voice/Download the movie Horror Night online for free no registration no ads no hassle.md
deleted file mode 100644
index e04159c22124624252fbe7316a99cf2dabf31a0c..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Download the movie Horror Night online for free no registration no ads no hassle.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
Seltzer was commissioned by the producer, Harvey Bernhard, to write a movie about the Antichrist after Bernhard was given the idea by a friend, Bob Munger. It took Seltzer exactly one year to write the screenplay and it would go on to be one of the most iconic horror movies of all time.
This script was born purely out of budgetary restrictions as writers Whannell and Wan deliberately wanted to write a horror film as cheaply as possible. One that they could finance themselves. Inspired by low-budget movies such as Pi and The Blair Witch Project, they decided on the concept of two actors, one room, and one dead body. Easily one of the best screenplays to read for horror writers.
-
BONUS SCREENPLAYS TO READ: You can download five more of the best screenplays to read in each genre in this post. Read as many movie scripts as you can and watch your screenwriting ability soar.
-
I read the screenplay to one of my favourite horror movies and one of my recent favourite mvoies, IT (2017). I really would like to read the full screenplay to IT Chapter Two (2019). any chance you guys could find and upload it for me, please?
-
But those are just some of the options we recommend for your virtual movie nights. Which ones are your favorites. Share them with us in the comments below. Because even if the theaters are closed we still want to hang out with friends and watch our favorite films.
-
-
Go Into the Story is the official blog for The Blacklist, the screenwriting community famous for its annual top ten list of unproduced scripts. One useful feature of Go Into the Story is its bank of downloadable movie scripts.
-
Play the best horror games for free. We have collected 64 popular horror games for you to play on LittleGames. They include new and top horror games such as YORG.io 3, Swat vs Zombies, Stupid Zombies, Masked Forces Zombie Survival and Scary Granny House. Choose a horror game from the list and you can play online on your mobile or computer for free.
-
Showcase Cinema Warwick you'll want to make sure you're one of the first people to see it! So mark your calendars and get ready for a Violent Night movie experience like never before. of our other Marvel movies available to watch online. We're sure you'll find something to your liking. Thanks for reading, and we'll see you soon! Violent Night is available on our website for free streaming. Details on how you can watch Violent Night for free throughout the year are described
-
If you're a fan of the comics, you won't want to miss this one! The storyline follows Violent Night as he tries to find his way home after being stranded on an alien planet. Violent Night is definitely a Violent Night movie you don't want to miss with stunning visuals and an action-packed plot! Plus, Violent Night online streaming is available on our website. Violent Night online is free, which includes streaming options such as 123movies, Reddit, or TV shows from HBO Max or Netflix!
-
Violent Night hits theaters on September 23, 2022. Tickets to see the film at your local movie theater are available online here. The film is being released in a wide release so you can watch it in person.
-
Most Viewed, Most Favorite, Top Rating, Top IMDb movies online. Here we can download and watch 123movies movies offline. 123Movies website is the best alternative to Violent Night's (2021) free online. We will recommend 123Movies as the best Solarmovie alternative There are a
-
few ways to watch Violent Night online in the US You can use a streaming service such as Netflix, Hulu, or Amazon Prime Video. You can also rent or buy the movie on iTunes or Google Play. watch it on-demand or on a streaming app available on your TV or streaming device if you have cable.
-
A creative new addition for 2022 is the Jupiter's Claim set, seen in Jordan Peele's extraterrestrial horror movie Nope. Fans of the film will recognize the adorable Jupiter cowboy balloon and the partially destroyed Star Lasso Experience. The Tethered doppelgangers from Peele's film Us stalk the set, armed with scissors and moving in uncanny bursts. But those who've seen Nope know that when the lights go out and the waving tube men go flat, the real threat has arrived overhead.
-
This is not movie based. It's a story that i submitted for scary nightmares contest. This story is based on a nightmare that haunted me a while back when i watched the movie the boy in striped pajamas. I dozed off before i saw how the movie ended and this is what i dreamt later that night.
-
Heads Up! is a mobile game that combines trivia with charades. One player holds the device to their head, while the other participants shout out clues to help the first player guess what the word on the device is. With a variety of topics to choose from, including celebrities, movies, and accents, Heads Up! is sure to get your team hyped at the next game night.
-
Once you situate all teammates on the call, the game can begin. As the leader, you will play host and read off the trivia questions. Players can submit answers through the private chat feature, or you can also utilize the polling option to collect responses. Spreadsheets make great score-keeping tools. You can also use programs like Google Slides or PowerPoint to organize and display questions. Also, consider incorporating audio and video components for an additional layer to your online trivia night. Features such as screen-sharing make it easy to play video clips as part of a question.
-
Through this online exercise, team members practice skills like listening, reasoning and critical thinking, and discussion. Plus, the activity is a lot of fun. Online trivia nights allow digital nomads the chance to unwind and socialize with coworkers, which in turn leads to increased employee engagement, job satisfaction and improved interpersonal communication.
-
As a massively multiplayer online RPG game, Dead Frontier 2 puts you amid the experience of a zombie outbreak with an open-world experience. You can work with other players to survive the in-game world, trading and scavenging for materials you need. Or, you can play by yourself and confront the zombie invasion on your own terms. Overall, this is an excellent option for an immersive survival horror game.
-
Now that streaming services like Netflix and Hulu are so popular, it's hard to find movies available for download. Although some services let you save movies for offline viewing, you can't actually store their files on your USB drive. This wikiHow teaches you how to download movies (legally) from the internet and save them to your removable flash drive.
-
The very first time my wife Flora (Mommy Frog to many of you) and I attended Halloween Horror Nights, we did it all wrong. We did no advance research, had a standard ticket, did not utilize Early Entry time (in fact, we arrived late), spent the first hour in a show and spent a lot of time in long lines. We did not hit all the mazes that year. It was our own night of horror as we are not patient frogs and prefer to skip the lines. The next year, we did our research and accomplished so much more, hitting every house, scare zone and Terror Tram. Now, we have it all down and can share what we've learned with you.
-
Ultimately, check the wait times and follow the shortest lines. Sometimes a house is popular due to the fans of the movie or show it is based on, but a long line does not necessarily mean it's the best house. Many times we are hoppily surprised by a seemingly less popular theme and a little underwhelmed by the house we were most looking forward to. So keep an open mind and go for enjoying the most houses rather than spending 300 minutes and your whole night in one line.
-
Injustices is a horror map in which you play as Michael who is a resident of Alaska and decides to go to a friend's place and on the way his car breaks down and he decides to spend the night in a nearby motel during which Micheal's live changes beyond recognition.
-
Trip to Brennenburg: Remake is a survival horror map. This is a map about exploring dark corners of the castle, solving puzzles,and getting involved in the creation of a nightmare. This is something that will definitely give you goosebumps.
-
Registration is not required to get started with the platform. It allows users to watch movies or videos, listen to music, and shop online. Users can search and select video sources from Vimeo, YouTube, DailyMotion, or audio from SoundCloud.
-
Kast allows users to watch movies together, play games, and share other content. You can either download a desktop app for Windows or macOS or use the web version supported by Google Chrome and Microsoft Edge. The next step is to create your watch party or join an already available one.
-
Having problems viewing this movie? Need it in another format? Going on a flight? Or just want to keep it? Here it is for download. Right-click the button below and go to Save Target As to download it. Or you can left click it, and on the next page go to File > Save Page As to download it.
-
Screambox's selection isn't as vast as some of the competition, but it is highly specialized. No thrillers are masquerading as horror films here. There's even a category called Only On Screambox that highlights scary movies that aren't available on major streaming platforms. An English Haunting, It's Here, Mortuary Massacre, and There is No Door are some of the flicks you'll find there.
-
Screambox has a lot of HD content, but some movies are capped at 720p. You can download videos for offline viewing, but there's a catch: you have to watch the content within 30 days of downloading, and you have 48 hours to finish watching a movie you've started before you have to sign back into the website for a new DRM license.
-
Shout Factory TV is a free streaming service that covers many genres, from classic late-night talk shows to 1970s Japanese superhero dramas. But horror is its biggest genre by far, especially the gloriously low-budget, cheesy, schlock horror that has filled drive-in movie theaters throughout history. You can watch Shout Factory TV for free using a web browser, an Apple TV, or an Android device.
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Gta San Andreas Bucuresti Download Torentl Explore the City of Bucharest in This Mod.md b/spaces/bioriAsaeru/text-to-voice/Gta San Andreas Bucuresti Download Torentl Explore the City of Bucharest in This Mod.md
deleted file mode 100644
index 13e31b9d8aaa21bef59477345ca5aaec2aea981b..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Gta San Andreas Bucuresti Download Torentl Explore the City of Bucharest in This Mod.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
Analiza de, subiectele (gta san andreas bucuresti v2.0 download, gta bucuresti v2.0 download, download gta bucuresti gratis) i. Grand Theft Auto San Andreas download torrent. How Download GTA San Andreas Bucuresti. ZICETIMI CE TUTORIALE SA MAI FAC PLEASE How Download GTA Bucuresti:. Download TORRENT- GTA San Andreas GTA San Andreas (GTA TOTAL ROMANESC V2) 2012.rar. Fisierul TORRENT- GTA San Andreas GTA San Andreas (GTA TOTAL ROMANESC V2) 2012. Picktorrent: gta bucuresti - Free Search and Download Torrents at search engine.
Download Music, TV Shows, Movies, Anime, Software and more. Gta bucuresti downloads torent tpb. Real GTA III is a Total Conversion for Download GTA San Andreas Bucuresti Gameplay 1 by Alexsso MP3 or HD MP4 video for free. ThePirateBay.TO, Download torrents, music, movies, games, apps, software and much more.
-
Gta vice bucuresti v1 gta san andreas bucuresti v1. 1.1 gta san andreas bucuresti gameplay 1 download torent jocul gta bucuresti gratis; 7 grand theft auto san andreas v1 grand theft auto 1 mac. V.1.1 jocuri gta 3.
-
Download H1Z1; Serious Sam 1; Deer Hunter 5;. Cum imi bag muzica in gta san andreas bucuresti spcensoredimi va rog din suflet. Quote +1 ademaro 2016-12-02. Free Download GTA San Andreas Bucuresti Mod Patch 1.1 - A lightweight patch for the Bucuresti MOD which fixes several issues and adds new content. Download gta san andreas bucuresti free Categorie. Download gta san andreas rar (no torrent). San Andreas Free Modificat;.
-
San Andreas: Multiplayer, free and safe download. San Andreas: Multiplayer latest version: Play GTA San Andreas multiplayer. If you own GTA: San Andreas on PC, you. Gta san andreas bucuresti download utorrent. How to export pdf from word 2007 ## ### Gta iv zombie mod.
-
-
Forum despre gta san andreas bucuresti download torrent. GTA BUCURESTI NOU TORENT. 2 Download GTA San Andreas Total Romanesc V2 free, Descarca GTA San Andreas Total Romanesc V2 326717, SC TIMPURI NOI SA BUCURESTI,. GTA San Andreas Free Download Full Version PC Game.
-
Gta san andreas bucuresti v1 0 2010 download torent. Adobe is a building material made from earth and often organic material. Gta san andreas bucuresti v1 0 2010 download torent. Gta san andreas bucuresti v1 0 2010 download torent. Cum sa descarci GTA Bucuresti.
-
Like si Subscribe plz Link download Gta:. La instalarea MOD-ului GTA San Andreas in Limba Romana. Analiza de, subiectele (gta san andreas zima download torent,. Download GTA Bucuresti v1 0 Setup exe via BitTorrent or choose other Games.
-
-
\ No newline at end of file
diff --git a/spaces/breadlicker45/gpt-youtuben-gen/app.py b/spaces/breadlicker45/gpt-youtuben-gen/app.py
deleted file mode 100644
index f01162213e161235e4b9c2c265252ffeb6c9282d..0000000000000000000000000000000000000000
--- a/spaces/breadlicker45/gpt-youtuben-gen/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import streamlit as st
-import time
-from transformers import pipeline
-import torch
-st.markdown('## Text-generation gpt-youtube from Breadlicker45')
-
-@st.cache(allow_output_mutation=True, suppress_st_warning=True, show_spinner=False)
-def get_model():
- # `model` is the checkpoint id selected via the radio buttons below; the remote-code
- # and auth flags are passed to the pipeline instead of being left as unused assignments.
- return pipeline('text-generation', model=model, do_sample=False,
- trust_remote_code=True, use_auth_token=True)
-
-col1, col2 = st.columns([2,1])
-
-with st.sidebar:
- st.markdown('## Model Parameters')
-
- max_length = st.slider('Max text length', 0, 500, 80)
-
- num_beams = st.slider('N° tree beams search', 1, 15, 1)
-
- early_stopping = st.selectbox(
- 'Early stopping text generation',
- (True, False), index=0) # pass real booleans; `key` must be a string, not a dict
-
- no_ngram_repeat = st.slider('Max repetition limit', 1, 5, 2)
-
-with col1:
- prompt= st.text_area('Your prompt here',
- '''What is the meaning of life?''')
-
-with col2:
- select_model = st.radio(
- "Select the model to use:",
- ('gpt-youtube','null'), index = 0)
-
-    if select_model == 'gpt-youtube':
-        model = 'BreadAi/gpt-Youtube'
-    else:
-        # the 'null' placeholder option currently falls back to the same checkpoint
-        model = 'BreadAi/gpt-Youtube'
-
- with st.spinner('Loading Model... (This may take a while)'):
-        generator = get_model(model)
- st.success('Model loaded correctly!')
-
-gen = st.info('Generating text...')
-answer = generator(prompt, max_length=max_length, no_repeat_ngram_size=no_ngram_repeat,
- early_stopping=early_stopping, num_beams=num_beams, do_sample=False)
-gen.empty()
-
-lst = answer[0]['generated_text']
-
-t = st.empty()
-for i in range(len(lst)):
-    t.markdown("#### %s" % lst[:i + 1])
- time.sleep(0.04)
\ No newline at end of file
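The sidebar controls in the app above map one-to-one onto `transformers` generation keyword arguments. As a hedged reference, here is a minimal sketch of the same call outside Streamlit (the checkpoint id is taken from the app; any text-generation model id would work the same way):

```python
from transformers import pipeline

# Build the text-generation pipeline once, as the cached helper above does.
generator = pipeline("text-generation", model="BreadAi/gpt-Youtube", trust_remote_code=True)

# The sidebar sliders/selectbox correspond to these generation kwargs.
outputs = generator(
    "What is the meaning of life?",
    max_length=80,            # 'Max text length'
    num_beams=1,              # 'N° tree beams search'
    no_repeat_ngram_size=2,   # 'Max repetition limit'
    early_stopping=True,      # 'Early stopping text generation'
    do_sample=False,          # greedy / beam search, matching the app
)
print(outputs[0]["generated_text"])
```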
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/proposal_generator/rrpn.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/proposal_generator/rrpn.py
deleted file mode 100644
index 1a3cd282c2d1ede5c60a7c2c84846cbeed7808f0..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/proposal_generator/rrpn.py
+++ /dev/null
@@ -1,209 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-import logging
-from typing import Dict, List
-import torch
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, batched_nms_rotated, cat
-from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated
-from detectron2.utils.memory import retry_if_cuda_oom
-
-from ..box_regression import Box2BoxTransformRotated
-from .build import PROPOSAL_GENERATOR_REGISTRY
-from .proposal_utils import _is_tracing
-from .rpn import RPN
-
-logger = logging.getLogger(__name__)
-
-
-def find_top_rrpn_proposals(
- proposals,
- pred_objectness_logits,
- image_sizes,
- nms_thresh,
- pre_nms_topk,
- post_nms_topk,
- min_box_size,
- training,
-):
- """
- For each feature map, select the `pre_nms_topk` highest scoring proposals,
-    apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk`
-    highest scoring proposals among all the feature maps for each image.
-
- Args:
- proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 5).
- All proposal predictions on the feature maps.
- pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A).
- image_sizes (list[tuple]): sizes (h, w) for each image
- nms_thresh (float): IoU threshold to use for NMS
- pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS.
- When RRPN is run on multiple feature maps (as in FPN) this number is per
- feature map.
- post_nms_topk (int): number of top k scoring proposals to keep after applying NMS.
- When RRPN is run on multiple feature maps (as in FPN) this number is total,
- over all feature maps.
- min_box_size(float): minimum proposal box side length in pixels (absolute units wrt
- input images).
- training (bool): True if proposals are to be used in training, otherwise False.
- This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..."
- comment.
-
- Returns:
- proposals (list[Instances]): list of N Instances. The i-th Instances
- stores post_nms_topk object proposals for image i.
- """
- num_images = len(image_sizes)
- device = proposals[0].device
-
- # 1. Select top-k anchor for every level and every image
- topk_scores = [] # #lvl Tensor, each of shape N x topk
- topk_proposals = []
- level_ids = [] # #lvl Tensor, each of shape (topk,)
- batch_idx = torch.arange(num_images, device=device)
- for level_id, proposals_i, logits_i in zip(
- itertools.count(), proposals, pred_objectness_logits
- ):
- Hi_Wi_A = logits_i.shape[1]
- if isinstance(Hi_Wi_A, torch.Tensor): # it's a tensor in tracing
- num_proposals_i = torch.clamp(Hi_Wi_A, max=pre_nms_topk)
- else:
- num_proposals_i = min(Hi_Wi_A, pre_nms_topk)
-
- topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1)
-
- # each is N x topk
- topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 5
-
- topk_proposals.append(topk_proposals_i)
- topk_scores.append(topk_scores_i)
- level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device))
-
- # 2. Concat all levels together
- topk_scores = cat(topk_scores, dim=1)
- topk_proposals = cat(topk_proposals, dim=1)
- level_ids = cat(level_ids, dim=0)
-
- # 3. For each image, run a per-level NMS, and choose topk results.
- results = []
- for n, image_size in enumerate(image_sizes):
- boxes = RotatedBoxes(topk_proposals[n])
- scores_per_img = topk_scores[n]
- lvl = level_ids
-
- valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img)
- if not valid_mask.all():
- if training:
- raise FloatingPointError(
- "Predicted boxes or scores contain Inf/NaN. Training has diverged."
- )
- boxes = boxes[valid_mask]
- scores_per_img = scores_per_img[valid_mask]
- lvl = lvl[valid_mask]
- boxes.clip(image_size)
-
- # filter empty boxes
- keep = boxes.nonempty(threshold=min_box_size)
- if _is_tracing() or keep.sum().item() != len(boxes):
- boxes, scores_per_img, lvl = (boxes[keep], scores_per_img[keep], lvl[keep])
-
- keep = batched_nms_rotated(boxes.tensor, scores_per_img, lvl, nms_thresh)
- # In Detectron1, there was different behavior during training vs. testing.
- # (https://github.com/facebookresearch/Detectron/issues/459)
- # During training, topk is over the proposals from *all* images in the training batch.
- # During testing, it is over the proposals for each image separately.
- # As a result, the training behavior becomes batch-dependent,
-    # and the configuration "POST_NMS_TOPK_TRAIN" ends up relying on the batch size.
- # This bug is addressed in Detectron2 to make the behavior independent of batch size.
- keep = keep[:post_nms_topk]
-
- res = Instances(image_size)
- res.proposal_boxes = boxes[keep]
- res.objectness_logits = scores_per_img[keep]
- results.append(res)
- return results
-
-
-@PROPOSAL_GENERATOR_REGISTRY.register()
-class RRPN(RPN):
- """
- Rotated Region Proposal Network described in :paper:`RRPN`.
- """
-
- @configurable
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- if self.anchor_boundary_thresh >= 0:
- raise NotImplementedError(
- "anchor_boundary_thresh is a legacy option not implemented for RRPN."
- )
-
- @classmethod
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
- ret = super().from_config(cfg, input_shape)
- ret["box2box_transform"] = Box2BoxTransformRotated(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS)
- return ret
-
- @torch.no_grad()
- def label_and_sample_anchors(self, anchors: List[RotatedBoxes], gt_instances: List[Instances]):
- """
- Args:
- anchors (list[RotatedBoxes]): anchors for each feature map.
- gt_instances: the ground-truth instances for each image.
-
- Returns:
- list[Tensor]:
- List of #img tensors. i-th element is a vector of labels whose length is
- the total number of anchors across feature maps. Label values are in {-1, 0, 1},
- with meanings: -1 = ignore; 0 = negative class; 1 = positive class.
- list[Tensor]:
- i-th element is a Nx5 tensor, where N is the total number of anchors across
- feature maps. The values are the matched gt boxes for each anchor.
- Values are undefined for those anchors not labeled as 1.
- """
- anchors = RotatedBoxes.cat(anchors)
-
- gt_boxes = [x.gt_boxes for x in gt_instances]
- del gt_instances
-
- gt_labels = []
- matched_gt_boxes = []
- for gt_boxes_i in gt_boxes:
- """
- gt_boxes_i: ground-truth boxes for i-th image
- """
- match_quality_matrix = retry_if_cuda_oom(pairwise_iou_rotated)(gt_boxes_i, anchors)
- matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix)
- # Matching is memory-expensive and may result in CPU tensors. But the result is small
- gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device)
-
- # A vector of labels (-1, 0, 1) for each anchor
- gt_labels_i = self._subsample_labels(gt_labels_i)
-
- if len(gt_boxes_i) == 0:
- # These values won't be used anyway since the anchor is labeled as background
- matched_gt_boxes_i = torch.zeros_like(anchors.tensor)
- else:
- # TODO wasted indexing computation for ignored boxes
- matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor
-
- gt_labels.append(gt_labels_i) # N,AHW
- matched_gt_boxes.append(matched_gt_boxes_i)
- return gt_labels, matched_gt_boxes
-
- @torch.no_grad()
- def predict_proposals(self, anchors, pred_objectness_logits, pred_anchor_deltas, image_sizes):
- pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas)
- return find_top_rrpn_proposals(
- pred_proposals,
- pred_objectness_logits,
- image_sizes,
- self.nms_thresh,
- self.pre_nms_topk[self.training],
- self.post_nms_topk[self.training],
- self.min_box_size,
- self.training,
- )
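The docstring of `find_top_rrpn_proposals` above describes a two-stage selection: keep the `pre_nms_topk` highest-scoring anchors per feature level, then concatenate levels and run per-image rotated NMS followed by the `post_nms_topk` cut. A small plain-PyTorch sketch of the first stage with toy shapes (no detectron2 dependency; all values are made up) makes the batched indexing concrete:

```python
import torch

N, pre_nms_topk = 2, 3                                       # 2 images, keep 3 proposals per level
proposals = [torch.rand(N, 100, 5), torch.rand(N, 25, 5)]    # two levels, 5 = (cx, cy, w, h, angle)
logits = [torch.rand(N, 100), torch.rand(N, 25)]

batch_idx = torch.arange(N)
topk_scores, topk_proposals, level_ids = [], [], []
for level_id, (props_i, logits_i) in enumerate(zip(proposals, logits)):
    k = min(logits_i.shape[1], pre_nms_topk)
    scores_i, idx = logits_i.topk(k, dim=1)                  # N x k
    topk_scores.append(scores_i)
    topk_proposals.append(props_i[batch_idx[:, None], idx])  # N x k x 5
    level_ids.append(torch.full((k,), level_id, dtype=torch.int64))

# concatenated across levels; per-image NMS with level ids would follow
topk_scores = torch.cat(topk_scores, dim=1)        # N x 6
topk_proposals = torch.cat(topk_proposals, dim=1)  # N x 6 x 5
level_ids = torch.cat(level_ids, dim=0)            # 6
print(topk_scores.shape, topk_proposals.shape, level_ids.shape)
```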
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/common/coco_loader_lsj.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/common/coco_loader_lsj.py
deleted file mode 100644
index e6c2f1e913a9f629290ce345fc4ffd4db4037e14..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/common/coco_loader_lsj.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import detectron2.data.transforms as T
-from detectron2 import model_zoo
-from detectron2.config import LazyCall as L
-
-# Data using LSJ
-image_size = 1024
-dataloader = model_zoo.get_config("common/data/coco.py").dataloader
-dataloader.train.mapper.augmentations = [
- L(T.RandomFlip)(horizontal=True), # flip first
- L(T.ResizeScale)(
- min_scale=0.1, max_scale=2.0, target_height=image_size, target_width=image_size
- ),
- L(T.FixedSizeCrop)(crop_size=(image_size, image_size), pad=False),
-]
-dataloader.train.mapper.image_format = "RGB"
-dataloader.train.total_batch_size = 64
-# recompute boxes due to cropping
-dataloader.train.mapper.recompute_boxes = True
-
-dataloader.test.mapper.augmentations = [
- L(T.ResizeShortestEdge)(short_edge_length=image_size, max_size=image_size),
-]
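The config above only declares the large-scale jitter (LSJ) recipe; the dataset mapper applies it when images are loaded. As a sketch of what those three augmentations do to a single image, using detectron2's generic `AugInput`/`AugmentationList` API (the random image and its shape are assumptions for illustration):

```python
import numpy as np
import detectron2.data.transforms as T

image_size = 1024
augs = T.AugmentationList([
    T.RandomFlip(horizontal=True),
    T.ResizeScale(min_scale=0.1, max_scale=2.0,
                  target_height=image_size, target_width=image_size),
    T.FixedSizeCrop(crop_size=(image_size, image_size), pad=False),
])

image = (np.random.rand(480, 640, 3) * 255).astype("uint8")
aug_input = T.AugInput(image)
tfms = augs(aug_input)   # applies the transforms in place and returns a TransformList
# With pad=False the result can be smaller than 1024x1024 when the jittered image is small.
print(aug_input.image.shape)
```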
diff --git a/spaces/brooksjordan/pet-classifier-tutorial-fastai/README.md b/spaces/brooksjordan/pet-classifier-tutorial-fastai/README.md
deleted file mode 100644
index 35d95ae4666ea25ea376d967d717487698d22581..0000000000000000000000000000000000000000
--- a/spaces/brooksjordan/pet-classifier-tutorial-fastai/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Barnaby
-emoji: 🦀
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.8.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bsenst/flask_inference_api/app.py b/spaces/bsenst/flask_inference_api/app.py
deleted file mode 100644
index 933e580a7b674740148a059378faa46858c9f4e4..0000000000000000000000000000000000000000
--- a/spaces/bsenst/flask_inference_api/app.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import flask
-from flask import request
-import os
-from dotenv import load_dotenv
-load_dotenv()
-
-app = flask.Flask(__name__, template_folder="./")
-
-from transformers import pipeline
-
-classifier = pipeline('text-classification', model="bsenst/classify_services_model")
-
-@app.route('/')
-def index():
- return flask.render_template('index.html')
-
-@app.route("/", methods=["POST"])
-def predict():
- incoming = request.get_json()
- print(incoming)
- prediction = classifier(incoming["text"])[0]
- print(prediction)
- return prediction
-
-if __name__ == '__main__':
- app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 7860)))
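For completeness, this is how the endpoint above can be exercised once the app is running locally on port 7860; the JSON shape matches what `predict()` reads via `request.get_json()` (the text and result values are illustrative only):

```python
import requests

resp = requests.post(
    "http://localhost:7860/",
    json={"text": "I need help renewing my passport"},
)
# The route returns the first pipeline result, e.g. {"label": "...", "score": 0.97}
print(resp.json())
```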
diff --git a/spaces/camenduru-com/vscode/README.md b/spaces/camenduru-com/vscode/README.md
deleted file mode 100644
index 7881035d10eeab8d95769c430e36766ecf4c54a4..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/vscode/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Visual Studio Code
-emoji: 💻
-colorFrom: blue
-colorTo: blue
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/proposal_utils.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/proposal_utils.py
deleted file mode 100644
index 0fdf5dc15c125163c124ab3d04c13bd5b4261588..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/proposal_utils.py
+++ /dev/null
@@ -1,205 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import math
-from typing import List, Tuple, Union
-import torch
-
-from detectron2.layers import batched_nms, cat, move_device_like
-from detectron2.structures import Boxes, Instances
-
-logger = logging.getLogger(__name__)
-
-
-def _is_tracing():
- # (fixed in TORCH_VERSION >= 1.9)
- if torch.jit.is_scripting():
- # https://github.com/pytorch/pytorch/issues/47379
- return False
- else:
- return torch.jit.is_tracing()
-
-
-def find_top_rpn_proposals(
- proposals: List[torch.Tensor],
- pred_objectness_logits: List[torch.Tensor],
- image_sizes: List[Tuple[int, int]],
- nms_thresh: float,
- pre_nms_topk: int,
- post_nms_topk: int,
- min_box_size: float,
- training: bool,
-):
- """
- For each feature map, select the `pre_nms_topk` highest scoring proposals,
- apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk`
- highest scoring proposals among all the feature maps for each image.
-
- Args:
- proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 4).
- All proposal predictions on the feature maps.
- pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A).
- image_sizes (list[tuple]): sizes (h, w) for each image
- nms_thresh (float): IoU threshold to use for NMS
- pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS.
- When RPN is run on multiple feature maps (as in FPN) this number is per
- feature map.
- post_nms_topk (int): number of top k scoring proposals to keep after applying NMS.
- When RPN is run on multiple feature maps (as in FPN) this number is total,
- over all feature maps.
- min_box_size (float): minimum proposal box side length in pixels (absolute units
- wrt input images).
- training (bool): True if proposals are to be used in training, otherwise False.
- This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..."
- comment.
-
- Returns:
- list[Instances]: list of N Instances. The i-th Instances
- stores post_nms_topk object proposals for image i, sorted by their
- objectness score in descending order.
- """
- num_images = len(image_sizes)
- device = (
- proposals[0].device
- if torch.jit.is_scripting()
- else ("cpu" if torch.jit.is_tracing() else proposals[0].device)
- )
-
- # 1. Select top-k anchor for every level and every image
- topk_scores = [] # #lvl Tensor, each of shape N x topk
- topk_proposals = []
- level_ids = [] # #lvl Tensor, each of shape (topk,)
- batch_idx = move_device_like(torch.arange(num_images, device=device), proposals[0])
- for level_id, (proposals_i, logits_i) in enumerate(zip(proposals, pred_objectness_logits)):
- Hi_Wi_A = logits_i.shape[1]
- if isinstance(Hi_Wi_A, torch.Tensor): # it's a tensor in tracing
- num_proposals_i = torch.clamp(Hi_Wi_A, max=pre_nms_topk)
- else:
- num_proposals_i = min(Hi_Wi_A, pre_nms_topk)
-
- topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1)
-
- # each is N x topk
- topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 4
-
- topk_proposals.append(topk_proposals_i)
- topk_scores.append(topk_scores_i)
- level_ids.append(
- move_device_like(
- torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device),
- proposals[0],
- )
- )
-
- # 2. Concat all levels together
- topk_scores = cat(topk_scores, dim=1)
- topk_proposals = cat(topk_proposals, dim=1)
- level_ids = cat(level_ids, dim=0)
-
- # 3. For each image, run a per-level NMS, and choose topk results.
- results: List[Instances] = []
- for n, image_size in enumerate(image_sizes):
- boxes = Boxes(topk_proposals[n])
- scores_per_img = topk_scores[n]
- lvl = level_ids
-
- valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img)
- if not valid_mask.all():
- if training:
- raise FloatingPointError(
- "Predicted boxes or scores contain Inf/NaN. Training has diverged."
- )
- boxes = boxes[valid_mask]
- scores_per_img = scores_per_img[valid_mask]
- lvl = lvl[valid_mask]
- boxes.clip(image_size)
-
- # filter empty boxes
- keep = boxes.nonempty(threshold=min_box_size)
- if _is_tracing() or keep.sum().item() != len(boxes):
- boxes, scores_per_img, lvl = boxes[keep], scores_per_img[keep], lvl[keep]
-
- keep = batched_nms(boxes.tensor, scores_per_img, lvl, nms_thresh)
- # In Detectron1, there was different behavior during training vs. testing.
- # (https://github.com/facebookresearch/Detectron/issues/459)
- # During training, topk is over the proposals from *all* images in the training batch.
- # During testing, it is over the proposals for each image separately.
- # As a result, the training behavior becomes batch-dependent,
- # and the configuration "POST_NMS_TOPK_TRAIN" end up relying on the batch size.
- # This bug is addressed in Detectron2 to make the behavior independent of batch size.
- keep = keep[:post_nms_topk] # keep is already sorted
-
- res = Instances(image_size)
- res.proposal_boxes = boxes[keep]
- res.objectness_logits = scores_per_img[keep]
- results.append(res)
- return results
-
-
-def add_ground_truth_to_proposals(
- gt: Union[List[Instances], List[Boxes]], proposals: List[Instances]
-) -> List[Instances]:
- """
- Call `add_ground_truth_to_proposals_single_image` for all images.
-
- Args:
-        gt (Union[List[Instances], List[Boxes]]): list of N elements. Element i is an Instances
-            representing the ground-truth for image i.
-        proposals (list[Instances]): list of N elements. Element i is an Instances
-            representing the proposals for image i.
-
- Returns:
- list[Instances]: list of N Instances. Each is the proposals for the image,
- with field "proposal_boxes" and "objectness_logits".
- """
- assert gt is not None
-
- if len(proposals) != len(gt):
- raise ValueError("proposals and gt should have the same length as the number of images!")
- if len(proposals) == 0:
- return proposals
-
- return [
- add_ground_truth_to_proposals_single_image(gt_i, proposals_i)
- for gt_i, proposals_i in zip(gt, proposals)
- ]
-
-
-def add_ground_truth_to_proposals_single_image(
- gt: Union[Instances, Boxes], proposals: Instances
-) -> Instances:
- """
- Augment `proposals` with `gt`.
-
- Args:
- Same as `add_ground_truth_to_proposals`, but with gt and proposals
- per image.
-
- Returns:
- Same as `add_ground_truth_to_proposals`, but for only one image.
- """
- if isinstance(gt, Boxes):
- # convert Boxes to Instances
- gt = Instances(proposals.image_size, gt_boxes=gt)
-
- gt_boxes = gt.gt_boxes
- device = proposals.objectness_logits.device
- # Assign all ground-truth boxes an objectness logit corresponding to
- # P(object) = sigmoid(logit) =~ 1.
- gt_logit_value = math.log((1.0 - 1e-10) / (1 - (1.0 - 1e-10)))
- gt_logits = gt_logit_value * torch.ones(len(gt_boxes), device=device)
-
- # Concatenating gt_boxes with proposals requires them to have the same fields
- gt_proposal = Instances(proposals.image_size, **gt.get_fields())
- gt_proposal.proposal_boxes = gt_boxes
- gt_proposal.objectness_logits = gt_logits
-
- for key in proposals.get_fields().keys():
- assert gt_proposal.has(
- key
- ), "The attribute '{}' in `proposals` does not exist in `gt`".format(key)
-
-    # NOTE: Instances.cat only uses fields from the first item. Extra fields in later items
- # will be thrown away.
- new_proposals = Instances.cat([proposals, gt_proposal])
-
- return new_proposals
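The objectness logit assigned to ground-truth boxes above is simply the inverse sigmoid of a probability very close to 1. A two-line numeric check of that relation (pure PyTorch, just to make the comment concrete):

```python
import math
import torch

p = 1.0 - 1e-10
gt_logit_value = math.log(p / (1.0 - p))            # inverse sigmoid of p, ~23.026
print(gt_logit_value)
print(torch.sigmoid(torch.tensor(gt_logit_value)))  # tensor(1.0000), i.e. P(object) ~= 1
```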
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docker/deploy.Dockerfile b/spaces/carlosalonso/Detection-video/carpeta_deteccion/docker/deploy.Dockerfile
deleted file mode 100644
index 30b4ed774368af89d654c9f01850d769e6cf9f52..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docker/deploy.Dockerfile
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# This file defines a container that compiles the C++ examples of detectron2.
-# See docker/README.md for usage.
-
-# Depends on the image produced by "./Dockerfile"
-FROM detectron2:v0
-
-USER appuser
-ENV HOME=/home/appuser
-WORKDIR $HOME
-
-# Let torchvision find libtorch
-ENV CMAKE_PREFIX_PATH=$HOME/.local/lib/python3.6/site-packages/torch/
-
-RUN sudo apt-get update && sudo apt-get install libopencv-dev --yes
-
-# install libtorchvision
-RUN git clone --branch v0.11.1 https://github.com/pytorch/vision/
-RUN mkdir vision/build && cd vision/build && \
- cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/.local -DCMAKE_BUILD_TYPE=Release -DWITH_CUDA=on -DTORCH_CUDA_ARCH_LIST=$TORCH_CUDA_ARCH_LIST && \
- make -j && make install
-
-# make our installation take effect
-ENV CPATH=$HOME/.local/include \
- LIBRARY_PATH=$HOME/.local/lib \
- LD_LIBRARY_PATH=$HOME/.local/lib
-
-
-# build C++ examples of detectron2
-RUN cd detectron2_repo/tools/deploy && mkdir build && cd build && \
- cmake -DTORCH_CUDA_ARCH_LIST=$TORCH_CUDA_ARCH_LIST .. && make
-# binaries will be available under tools/deploy/build
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/test_engine.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/test_engine.py
deleted file mode 100644
index 164a7f87a6f21731e9525b93f1e01cb174a59779..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/test_engine.py
+++ /dev/null
@@ -1,195 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import json
-import math
-import os
-import tempfile
-import time
-import unittest
-from unittest import mock
-import torch
-from fvcore.common.checkpoint import Checkpointer
-from torch import nn
-
-from detectron2 import model_zoo
-from detectron2.config import configurable, get_cfg
-from detectron2.engine import DefaultTrainer, SimpleTrainer, default_setup, hooks
-from detectron2.modeling.meta_arch import META_ARCH_REGISTRY
-from detectron2.utils.events import CommonMetricPrinter, JSONWriter
-
-
-@META_ARCH_REGISTRY.register()
-class _SimpleModel(nn.Module):
- @configurable
- def __init__(self, sleep_sec=0):
- super().__init__()
- self.mod = nn.Linear(10, 20)
- self.sleep_sec = sleep_sec
-
- @classmethod
- def from_config(cls, cfg):
- return {}
-
- def forward(self, x):
- if self.sleep_sec > 0:
- time.sleep(self.sleep_sec)
- return {"loss": x.sum() + sum([x.mean() for x in self.parameters()])}
-
-
-class TestTrainer(unittest.TestCase):
- def _data_loader(self, device):
- device = torch.device(device)
- while True:
- yield torch.rand(3, 3).to(device)
-
- def test_simple_trainer(self, device="cpu"):
- model = _SimpleModel().to(device=device)
- trainer = SimpleTrainer(
- model, self._data_loader(device), torch.optim.SGD(model.parameters(), 0.1)
- )
- trainer.train(0, 10)
-
- def test_simple_trainer_reset_dataloader(self, device="cpu"):
- model = _SimpleModel().to(device=device)
- trainer = SimpleTrainer(
- model, self._data_loader(device), torch.optim.SGD(model.parameters(), 0.1)
- )
- trainer.train(0, 10)
- trainer.reset_data_loader(lambda: self._data_loader(device))
- trainer.train(0, 10)
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_simple_trainer_cuda(self):
- self.test_simple_trainer(device="cuda")
-
- def test_writer_hooks(self):
- model = _SimpleModel(sleep_sec=0.1)
- trainer = SimpleTrainer(
- model, self._data_loader("cpu"), torch.optim.SGD(model.parameters(), 0.1)
- )
-
- max_iter = 50
-
- with tempfile.TemporaryDirectory(prefix="detectron2_test") as d:
- json_file = os.path.join(d, "metrics.json")
- writers = [CommonMetricPrinter(max_iter), JSONWriter(json_file)]
-
- trainer.register_hooks(
- [hooks.EvalHook(0, lambda: {"metric": 100}), hooks.PeriodicWriter(writers)]
- )
- with self.assertLogs(writers[0].logger) as logs:
- trainer.train(0, max_iter)
-
- with open(json_file, "r") as f:
- data = [json.loads(line.strip()) for line in f]
- self.assertEqual([x["iteration"] for x in data], [19, 39, 49, 50])
- # the eval metric is in the last line with iter 50
- self.assertIn("metric", data[-1], "Eval metric must be in last line of JSON!")
-
- # test logged messages from CommonMetricPrinter
- self.assertEqual(len(logs.output), 3)
- for log, iter in zip(logs.output, [19, 39, 49]):
- self.assertIn(f"iter: {iter}", log)
-
- self.assertIn("eta: 0:00:00", logs.output[-1], "Last ETA must be 0!")
-
- def test_default_trainer(self):
- # TODO: this test requires manifold access, so changed device to CPU. see: T88318502
- cfg = get_cfg()
- cfg.MODEL.DEVICE = "cpu"
- cfg.MODEL.META_ARCHITECTURE = "_SimpleModel"
- cfg.DATASETS.TRAIN = ("coco_2017_val_100",)
- with tempfile.TemporaryDirectory(prefix="detectron2_test") as d:
- cfg.OUTPUT_DIR = d
- trainer = DefaultTrainer(cfg)
-
- # test property
- self.assertIs(trainer.model, trainer._trainer.model)
- trainer.model = _SimpleModel()
- self.assertIs(trainer.model, trainer._trainer.model)
-
- def test_checkpoint_resume(self):
- model = _SimpleModel()
- dataloader = self._data_loader("cpu")
- opt = torch.optim.SGD(model.parameters(), 0.1)
- scheduler = torch.optim.lr_scheduler.StepLR(opt, 3)
-
- with tempfile.TemporaryDirectory(prefix="detectron2_test") as d:
- trainer = SimpleTrainer(model, dataloader, opt)
- checkpointer = Checkpointer(model, d, opt=opt, trainer=trainer)
-
- trainer.register_hooks(
- [
- hooks.LRScheduler(scheduler=scheduler),
- # checkpoint after scheduler to properly save the state of scheduler
- hooks.PeriodicCheckpointer(checkpointer, 10),
- ]
- )
-
- trainer.train(0, 12)
- self.assertAlmostEqual(opt.param_groups[0]["lr"], 1e-5)
- self.assertEqual(scheduler.last_epoch, 12)
- del trainer
-
- opt = torch.optim.SGD(model.parameters(), 999) # lr will be loaded
- trainer = SimpleTrainer(model, dataloader, opt)
- scheduler = torch.optim.lr_scheduler.StepLR(opt, 3)
- trainer.register_hooks(
- [
- hooks.LRScheduler(scheduler=scheduler),
- ]
- )
- checkpointer = Checkpointer(model, d, opt=opt, trainer=trainer)
- checkpointer.resume_or_load("non_exist.pth")
- self.assertEqual(trainer.iter, 11) # last finished iter number (0-based in Trainer)
- # number of times `scheduler.step()` was called (1-based)
- self.assertEqual(scheduler.last_epoch, 12)
- self.assertAlmostEqual(opt.param_groups[0]["lr"], 1e-5)
-
- def test_eval_hook(self):
- model = _SimpleModel()
- dataloader = self._data_loader("cpu")
- opt = torch.optim.SGD(model.parameters(), 0.1)
-
- for total_iter, period, eval_count in [(30, 15, 2), (31, 15, 3), (20, 0, 1)]:
- test_func = mock.Mock(return_value={"metric": 3.0})
- trainer = SimpleTrainer(model, dataloader, opt)
- trainer.register_hooks([hooks.EvalHook(period, test_func)])
- trainer.train(0, total_iter)
- self.assertEqual(test_func.call_count, eval_count)
-
- def test_best_checkpointer(self):
- model = _SimpleModel()
- dataloader = self._data_loader("cpu")
- opt = torch.optim.SGD(model.parameters(), 0.1)
- metric_name = "metric"
- total_iter = 40
- test_period = 10
- test_cases = [
- ("max", iter([0.3, 0.4, 0.35, 0.5]), 3),
- ("min", iter([1.0, 0.8, 0.9, 0.9]), 2),
- ("min", iter([math.nan, 0.8, 0.9, 0.9]), 1),
- ]
- for mode, metrics, call_count in test_cases:
- trainer = SimpleTrainer(model, dataloader, opt)
- with tempfile.TemporaryDirectory(prefix="detectron2_test") as d:
- checkpointer = Checkpointer(model, d, opt=opt, trainer=trainer)
- trainer.register_hooks(
- [
- hooks.EvalHook(test_period, lambda: {metric_name: next(metrics)}),
- hooks.BestCheckpointer(test_period, checkpointer, metric_name, mode=mode),
- ]
- )
- with mock.patch.object(checkpointer, "save") as mock_save_method:
- trainer.train(0, total_iter)
- self.assertEqual(mock_save_method.call_count, call_count)
-
- def test_setup_config(self):
- with tempfile.TemporaryDirectory(prefix="detectron2_test") as d:
- cfg = get_cfg()
- cfg.OUTPUT_DIR = os.path.join(d, "yacs")
- default_setup(cfg, {})
-
- cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py")
- cfg.train.output_dir = os.path.join(d, "omegaconf")
- default_setup(cfg, {})
diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/short_audio_transcribe.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/short_audio_transcribe.py
deleted file mode 100644
index 04b23ef09b0f7fe9fb3b430d31a0b4c877baaf55..0000000000000000000000000000000000000000
--- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/short_audio_transcribe.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import whisper
-import os
-import torchaudio
-import argparse
-import torch
-
-lang2token = {
- 'zh': "[ZH]",
- 'ja': "[JA]",
- "en": "[EN]",
- }
-def transcribe_one(audio_path):
- # load audio and pad/trim it to fit 30 seconds
- audio = whisper.load_audio(audio_path)
- audio = whisper.pad_or_trim(audio)
-
- # make log-Mel spectrogram and move to the same device as the model
- mel = whisper.log_mel_spectrogram(audio).to(model.device)
-
- # detect the spoken language
- _, probs = model.detect_language(mel)
- print(f"Detected language: {max(probs, key=probs.get)}")
- lang = max(probs, key=probs.get)
- # decode the audio
- options = whisper.DecodingOptions()
- result = whisper.decode(model, mel, options)
-
- # print the recognized text
- print(result.text)
- return lang, result.text
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--languages", default="CJE")
- parser.add_argument("--whisper_size", default="medium")
- args = parser.parse_args()
- if args.languages == "CJE":
- lang2token = {
- 'zh': "[ZH]",
- 'ja': "[JA]",
- "en": "[EN]",
- }
- elif args.languages == "CJ":
- lang2token = {
- 'zh': "[ZH]",
- 'ja': "[JA]",
- }
- elif args.languages == "C":
- lang2token = {
- 'zh': "[ZH]",
- }
- assert (torch.cuda.is_available()), "Please enable GPU in order to run Whisper!"
- model = whisper.load_model(args.whisper_size)
- parent_dir = "./custom_character_voice/"
- speaker_names = list(os.walk(parent_dir))[0][1]
- speaker_annos = []
- # resample audios
- for speaker in speaker_names:
- for i, wavfile in enumerate(list(os.walk(parent_dir + speaker))[0][2]):
- # try to load file as audio
- if wavfile.startswith("processed_"):
- continue
- try:
- wav, sr = torchaudio.load(parent_dir + speaker + "/" + wavfile, frame_offset=0, num_frames=-1, normalize=True,
- channels_first=True)
- wav = wav.mean(dim=0).unsqueeze(0)
-                if sr != 22050:
-                    wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=22050)(wav)
-                # the duration check must use the resampled rate, and overly long clips are skipped
-                if wav.shape[1] / 22050 > 20:
-                    print(f"{wavfile} too long, ignoring\n")
-                    continue
-                save_path = parent_dir + speaker + "/" + f"processed_{i}.wav"
- torchaudio.save(save_path, wav, 22050, channels_first=True)
- # transcribe text
- lang, text = transcribe_one(save_path)
- if lang not in list(lang2token.keys()):
- print(f"{lang} not supported, ignoring\n")
- continue
- text = lang2token[lang] + text + lang2token[lang] + "\n"
- speaker_annos.append(save_path + "|" + speaker + "|" + text)
-            except Exception:
-                # skip files that cannot be loaded or transcribed
-                continue
-
- # # clean annotation
- # import argparse
- # import text
- # from utils import load_filepaths_and_text
- # for i, line in enumerate(speaker_annos):
- # path, sid, txt = line.split("|")
- # cleaned_text = text._clean_text(txt, ["cjke_cleaners2"])
- # cleaned_text += "\n" if not cleaned_text.endswith("\n") else ""
- # speaker_annos[i] = path + "|" + sid + "|" + cleaned_text
- # write into annotation
- if len(speaker_annos) == 0:
- print("Warning: no short audios found, this IS expected if you have only uploaded long audios, videos or video links.")
- print("this IS NOT expected if you have uploaded a zip file of short audios. Please check your file structure or make sure your audio language is supported.")
- with open("short_character_anno.txt", 'w', encoding='utf-8') as f:
- for line in speaker_annos:
- f.write(line)
-
- # import json
- # # generate new config
- # with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f:
- # hps = json.load(f)
- # # modify n_speakers
- # hps['data']["n_speakers"] = 1000 + len(speaker2id)
- # # add speaker names
- # for speaker in speaker_names:
- # hps['speakers'][speaker] = speaker2id[speaker]
- # # save modified config
- # with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f:
- # json.dump(hps, f, indent=2)
- # print("finished")
diff --git a/spaces/ccolas/TastyPiano/src/music/utilities/handcoded_rep_utilities/tht/tactus_hypothesis_tracker.py b/spaces/ccolas/TastyPiano/src/music/utilities/handcoded_rep_utilities/tht/tactus_hypothesis_tracker.py
deleted file mode 100644
index 7070a398380174913b71ee17c864b986ff282947..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/music/utilities/handcoded_rep_utilities/tht/tactus_hypothesis_tracker.py
+++ /dev/null
@@ -1,211 +0,0 @@
-"""This module contains the TactusTrackersGenerator class that can be configured
-to generate complete hypothesis trackers for the playback of a case."""
-
-from . import hypothesis, playback, defaults, confidence
-from .correction import HypothesisCorrection, windowed_corr
-import collections
-import logging
-from typing import *
-
-Rho = NewType('Rho', float)
-Delta = NewType('Delta', float)
-OnsetIdx = NewType('OnsetIdx', int)
-Conf = NewType('Conf', float)
-
-class HypothesisTracker(hypothesis.HypothesisFromIndex):
- """Class that holds information of the hypothesis evolution.
-
- A hypothesis is defined as a rho and a delta values, where all tactus
- predictions are described as: rho + delta * k, for some integer k.
-
- The 'name' of the hypothesis is given by the two onset indexes used
- to originate the hypothesis. The 'beta' value is the first hypothesis.
- 'corr' contains a list of Correction objects with information about
- each correction performed over the hypothesis. 'cur' is the current value
- of the hypothesis. 'confs' contains the evolution of the confidence for
- the hypothesis.
-
- The tracker also contains some convenience methods to work with the
- current hypothesis. The 'r' property gives as the current rho value,
- the 'd' property the current 'delta'. The 'proj' generates all
- tactus predictions by the hypothesis within range of a playback.
-
- The 'update' method allows us to correct the current hypothesis with
-    a correction function and to update the confidence status with a
- confidence function.
- """
- beta: Tuple[Rho, Delta]
-    onset_times: List[float]
- corr: List[Tuple[OnsetIdx, HypothesisCorrection]]
- confs: List[Tuple[OnsetIdx, float]]
-
- def __init__(self, start_idx, end_idx, onset_times):
- super(self.__class__, self).__init__(start_idx, end_idx, onset_times)
- self.beta = self.htuple
- self.onset_times = onset_times
- self.corr = [] # [(onset_idx, hypothesis_correction)]
- self.confs = [] # [(onset_idx, conf_value)]
-
- def update(self, ongoing_play, eval_f, corr_f):
- "Updates a hypothesis with new conf and applying corrections."
- correction = corr_f(self, ongoing_play)
- self.corr.append((ongoing_play.discovered_index, correction))
- self.htuple = correction.new_hypothesis()
- n_conf = eval_f(self, ongoing_play)
- self.confs.append((ongoing_play.discovered_index, n_conf))
-
- @property
- def cur(self):
- return self.htuple
-
- @property
- def conf(self):
- return self.confs[-1][1]
-
- def origin_onsets(self):
- return (self.beta[0], sum(self.beta))
-
-
-class TactusHypothesisTracker():
- """Configurable class to generate hypothesis trackers for a case.
-
- Configuration includes:
- * an eval function that defines how to evaluate a hypothesis over
- certain Playback
- * a correction functions that produces a HypothesisCorrection for a
- hypothesis over a Playback
- * a similarity function that defines how similar are two hypothesis
- * a similarity_epsilon that defines the threshold for trimming
-        * a maximum number of hypothesis trackers to be kept. Only the
-          hypotheses with the best confidence are kept.
-
- When called on a set of onset_times it will return the hypothesis trackers
- generated by the model.
- """
-
- logger = logging.getLogger('TactusHypothesisTracker')
-
- def __init__(self, eval_f, corr_f, sim_f, similarity_epsilon,
- min_delta, max_delta, max_hypotheses,
- archive_hypotheses=False):
- self.eval_f = eval_f
- self.corr_f = corr_f
- self.sim_f = sim_f
- self.similarity_epsilon = similarity_epsilon
- self.min_delta = min_delta
- self.max_delta = max_delta
- self.max_hypotheses = max_hypotheses
- self.archive_hypotheses = archive_hypotheses
-
- def __call__(self, onset_times):
- """
- Performs the tracking of tactus hypothesis as defined by the model from
- the song represented by the received onset_times.
-
- Args:
- onset_times: a sorted list of ms where the musical events occur.
-
- Returns:
- A dict :: hypothesis_name -> HypothesisTracker
- """
- self.logger.debug('Started tracking for onsets (%d) : %s',
- len(onset_times), onset_times)
- ongoing_play = playback.OngoingPlayback(onset_times)
- hypothesis_trackers = []
- archived_hypotheses = []
- while ongoing_play.advance():
- n_hts = list(self._generate_new_hypothesis(ongoing_play))
- self.logger.debug('New step. %d hypothesis created', len(n_hts))
-
- hypothesis_trackers.extend(n_hts)
-
- for h in hypothesis_trackers:
- h.update(ongoing_play, self.eval_f, self.corr_f)
-
- kept_hs, trimmed_hs = self._trim_similar_hypotheses(
- hypothesis_trackers, ongoing_play)
- self.logger.debug('Trimmed by similarity (%d): %s',
- ongoing_play.discovered_index,
- str([str(h) for h in trimmed_hs]))
-
- k_best_hs, other_hs = self._split_k_best_hypotheses(kept_hs)
- self.logger.debug('Trimmed by score (%d): %s',
- ongoing_play.discovered_index,
- str([str(h) for h in other_hs]))
- hypothesis_trackers = k_best_hs
- if (self.archive_hypotheses):
- archived_hypotheses.extend(other_hs)
- self.logger.debug('End of step. %d trackers remaining',
- len(hypothesis_trackers))
-
- return dict([(ht.name, ht)
- for ht in archived_hypotheses + hypothesis_trackers])
-
- def _generate_new_hypothesis(self, ongoing_play):
- "Generates new hypothesis trackers given discovered onset in playback."
- end_index = ongoing_play.discovered_index
- for k in range(end_index):
- delta = (ongoing_play.onset_times[end_index] -
- ongoing_play.onset_times[k])
- if self.min_delta <= delta and delta <= self.max_delta:
- yield HypothesisTracker(k, end_index,
- ongoing_play.onset_times)
-
- def _trim_similar_hypotheses(self, hts, ongoing_play):
- """Partitions new hypothesis into those that should be trimmed given
- a set of comparsion hypothesis.
-
- Assumes hypothesis trackers are sorted by when they were generated in
- hts.
- """
- trimmed_hs_data = []
- kept_hs = []
- remaining_hts = collections.deque(hts)
- while remaining_hts:
- ht = remaining_hts.popleft()
- n_remaining_hts = collections.deque()
- kept_hs.append(ht)
- while remaining_hts:
- n_ht = remaining_hts.popleft()
- s = self.sim_f(ht, n_ht, ongoing_play)
- if s > (1 - self.similarity_epsilon):
- trimmed_hs_data.append((n_ht, ht))
- else:
- n_remaining_hts.append(n_ht)
-
- remaining_hts = n_remaining_hts
-
- return (kept_hs, trimmed_hs_data)
-
- def _split_k_best_hypotheses(self, hts):
- """Splits hypotheses into the self.max_hypotheses best
- (according to confidence) and the rest.
-
-        Both result lists will be sorted in order of generation."""
- hts_info = [(-1 * ht.conf, idx) for idx, ht in enumerate(hts)]
- sorted_hts_info = sorted(hts_info)
- best_hts_idx = set([
- i for _, i in sorted_hts_info[:self.max_hypotheses]])
- best_k_hts = [ht for idx, ht in enumerate(hts)
- if idx in best_hts_idx]
- other_hts = [ht for idx, ht in enumerate(hts)
- if idx not in best_hts_idx]
- return best_k_hts, other_hts
-
-
-def default_tht(**kwargs):
- '''Returns a TactusHypothesisTracker with the default configuration.
-
-    Default config may be overridden with kwargs. See defaults.config
- '''
- config = defaults.config.copy()
- config.update(kwargs)
- return TactusHypothesisTracker(**config)
-
-
-jnmr_tht = default_tht(
- **{
- 'eval_f': confidence.WindowedExpEval(6000),
- 'corr_f': windowed_corr
- }
-)
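The class docstring above defines a tactus hypothesis by a pair (rho, delta), with beat predictions rho + delta * k. A tiny standalone illustration of projecting such a hypothesis over a playback and measuring how far each onset falls from the nearest predicted beat (plain Python with made-up numbers; this is not the tracker's own confidence or correction function):

```python
rho, delta = 120.0, 500.0                       # first beat at 120 ms, one beat every 500 ms
onset_times = [120, 630, 1110, 1620, 2130]      # onset times in ms

# project the hypothesis over the playback range: rho + delta * k
n_beats = int((onset_times[-1] - rho) // delta) + 1
predictions = [rho + delta * k for k in range(n_beats)]
print(predictions)                              # [120.0, 620.0, 1120.0, 1620.0, 2120.0]

# distance from each onset to the nearest predicted tactus beat
errors = [min(abs(t - p) for p in predictions) for t in onset_times]
print(errors)                                   # [0.0, 10.0, 10.0, 0.0, 10.0]
```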
diff --git a/spaces/ccolas/TastyPiano/src/music2cocktailrep/training/latent_translation/run.py b/spaces/ccolas/TastyPiano/src/music2cocktailrep/training/latent_translation/run.py
deleted file mode 100644
index 12779b5cdb790fa2523ea5d8f14b9d49a589606e..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/music2cocktailrep/training/latent_translation/run.py
+++ /dev/null
@@ -1,506 +0,0 @@
-import os
-
-import torch; torch.manual_seed(0)
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils
-import torch.distributions
-import numpy as np
-import matplotlib.pyplot as plt; plt.rcParams['figure.dpi'] = 200
-from vae_model import get_gml_vae_models
-from utils import get_dataloaders, compute_swd_loss
-import matplotlib.pyplot as plt
-from src.music.config import MUSIC_REP_PATH
-from src.cocktails.config import FULL_COCKTAIL_REP_PATH
-import json
-import argparse
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
-if torch.cuda.is_available():
- print('Using GPUs')
-else:
- print('Using CPUs')
-
-music_rep_path = "/home/cedric/Documents/pianocktail/data/music/represented_small/"
-music_rep_path = MUSIC_REP_PATH + "music_reps_normalized_meanstd.pickle"
-# music_rep_path = "/home/cedric/Documents/pianocktail/data/music/32_represented/reps.pickle"
-LOSS = nn.CrossEntropyLoss()
-def run_epoch(epoch, model, data, params, opt, train):
- if epoch == params['n_epochs_music_pretrain']:
- print(f'Switching to bs: {params["batch_size"]}')
- for k in data.keys():
- prefix = 'train' if train else 'test'
- data[k].batch_sampler.update_epoch_size_and_batch(params[prefix + '_epoch_size'], params['batch_size'])
- if train:
- model.train()
- else:
- model.eval()
- keys_to_track = params['keys_to_track']
- losses = dict(zip(keys_to_track, [[] for _ in range(len(keys_to_track))]))
- step = 0
- cf_matrices_music = []
- cf_matrices_cocktail = []
- for i_batch, data_music, data_cocktail, data_music_lab, data_cocktail_lab, data_reg_grounding \
- in zip(range(len(data['music'])), data['music'], data['cocktail'], data['music_labeled'], data['cocktail_labeled'], data['reg_grounding']):
- x_music, _ = data_music
- x_cocktail, _, contains_egg, contains_bubbles = data_cocktail
- x_music_lab, labels_music = data_music_lab
- x_cocktail_lab, labels_cocktail = data_cocktail_lab
- x_reg_music, x_reg_cocktail = data_reg_grounding
- step += x_music.shape[0]
- if train: opt.zero_grad()
-
- # weight more examples that have bubbles or egg in the mse computation
- bubbles_egg_weights = torch.ones([contains_bubbles.shape[0]])
- bubbles_egg_weights[contains_bubbles] += 1
- bubbles_egg_weights[contains_egg] += 3
-
- # vae
- x_hat_cocktail, z_cocktail, mu_cocktail, log_var_cocktail = model(x_cocktail, modality_in='cocktail', modality_out='cocktail')
- mse_loss_cocktail = torch.sum(((x_cocktail - x_hat_cocktail)**2).mean(axis=1) * bubbles_egg_weights) / bubbles_egg_weights.sum()
- if contains_bubbles.sum() > 0:
- bubble_mse = float(((x_cocktail - x_hat_cocktail)**2)[contains_bubbles, -3].mean())
- else:
- bubble_mse = np.nan
- if contains_egg.sum() > 0:
- egg_mse = float(((x_cocktail - x_hat_cocktail)**2)[contains_egg, -1].mean())
- else:
- egg_mse = np.nan
-
- kld_loss_cocktail = torch.mean(-0.5 * torch.sum(1 + log_var_cocktail - mu_cocktail ** 2 - log_var_cocktail.exp(), dim=1))
-
- x_hat_music, z_music, mu_music, log_var_music = model(x_music, modality_in='music', modality_out='music')
- mse_loss_music = ((x_music - x_hat_music)**2).mean()
- kld_loss_music = torch.mean(-0.5 * torch.sum(1 + log_var_music - mu_music ** 2 - log_var_music.exp(), dim=1))
-
- music_vae_loss = mse_loss_music + params['beta_vae'] * kld_loss_music
- cocktail_vae_loss = mse_loss_cocktail + params['beta_vae'] * kld_loss_cocktail
- vae_loss = cocktail_vae_loss + params['beta_music'] * music_vae_loss
- # music_vae_loss = mse_loss_music + params['beta_vae'] * kld_loss_music
- brb_kld_loss_cocktail, brb_kld_loss_music, brb_mse_loss_music, brb_mse_loss_cocktail, brb_mse_latent_loss, brb_music_vae_loss, brb_vae_loss = [0] * 7
-
- if params['use_brb_vae']:
- # vae back to back
- out = model.forward_b2b(x_cocktail, modality_in_out='cocktail', modality_intermediate='music')
- x_hat_cocktail, x_intermediate_music, mu_cocktail, log_var_cocktail, z_cocktail, mu_music, log_var_music, z_music = out
- brb_mse_loss_cocktail = ((x_cocktail - x_hat_cocktail) ** 2).mean()
- brb_mse_latent_loss_1 = ((z_music - z_cocktail) ** 2).mean()
- brb_kld_loss_cocktail_1 = torch.mean(-0.5 * torch.sum(1 + log_var_cocktail - mu_cocktail ** 2 - log_var_cocktail.exp(), dim=1))
- brb_kld_loss_music_1 = torch.mean(-0.5 * torch.sum(1 + log_var_music - mu_music ** 2 - log_var_music.exp(), dim=1))
- # brb_cocktail_in_loss = mse_loss_cocktail + mse_latents_1 + params['beta_vae'] * (kld_loss_cocktail + kld_loss_music)
-
- out = model.forward_b2b(x_music, modality_in_out='music', modality_intermediate='cocktail')
- x_hat_music, x_intermediate_cocktail, mu_music, log_var_music, z_music, mu_cocktail, log_var_cocktail, z_cocktail = out
- brb_mse_loss_music = ((x_music - x_hat_music) ** 2).mean()
- brb_mse_latent_loss_2 = ((z_music - z_cocktail) ** 2).mean()
- brb_kld_loss_cocktail_2 = torch.mean(-0.5 * torch.sum(1 + log_var_cocktail - mu_cocktail ** 2 - log_var_cocktail.exp(), dim=1))
- brb_kld_loss_music_2 = torch.mean(-0.5 * torch.sum(1 + log_var_music - mu_music ** 2 - log_var_music.exp(), dim=1))
- # brb_music_in_loss = mse_loss_music + mse_latents_2 + params['beta_vae'] * (kld_loss_cocktail + kld_loss_music)
- brb_mse_latent_loss = (brb_mse_latent_loss_1 + brb_mse_latent_loss_2) / 2
- brb_kld_loss_music = (brb_kld_loss_music_1 + brb_kld_loss_music_2) / 2
- brb_kld_loss_cocktail = (brb_kld_loss_cocktail_1 + brb_kld_loss_cocktail_2) / 2
- brb_vae_loss = brb_mse_latent_loss + brb_mse_loss_cocktail + brb_mse_loss_music + params['beta_vae'] * (brb_kld_loss_music + brb_kld_loss_cocktail)
- brb_music_vae_loss = brb_mse_loss_music + params['beta_vae'] * brb_kld_loss_music + brb_mse_latent_loss
-
- # swd
- if params['beta_swd'] > 0:
- swd_loss = compute_swd_loss(z_music, z_cocktail, params['latent_dim'])
- else:
- swd_loss = 0
-
- # classif losses
- if params['beta_classif'] > 0:
- pred_music = model.classify(x_music_lab, modality_in='music')
- classif_loss_music = LOSS(pred_music, labels_music)
- accuracy_music = torch.mean((torch.argmax(pred_music, dim=1) == labels_music).float())
- cf_matrices_music.append(get_cf_matrix(pred_music, labels_music))
- pred_cocktail = model.classify(x_cocktail_lab, modality_in='cocktail')
- classif_loss_cocktail = LOSS(pred_cocktail, labels_cocktail)
- accuracy_cocktail = torch.mean((torch.argmax(pred_cocktail, dim=1) == labels_cocktail).float())
- cf_matrices_cocktail.append(get_cf_matrix(pred_cocktail, labels_cocktail))
-
- else:
- classif_loss_cocktail, classif_loss_music = 0, 0
- accuracy_music, accuracy_cocktail = 0, 0
- cf_matrices_cocktail.append(np.zeros((2, 2)))
- cf_matrices_music.append(np.zeros((2, 2)))
-
- if params['beta_reg_grounding'] > 0:
- x_hat_cocktail, _, _, _ = model(x_reg_music, modality_in='music', modality_out='cocktail', freeze_decoder=True)
- mse_reg_grounding = ((x_reg_cocktail - x_hat_cocktail) ** 2).mean()
- else:
- mse_reg_grounding = 0
-
- if params['use_brb_vae']:
- global_minus_classif = params['beta_vae_loss'] * (vae_loss + brb_music_vae_loss) + params['beta_swd'] * swd_loss
- global_loss = params['beta_vae_loss'] * (vae_loss + brb_music_vae_loss) + params['beta_swd'] * swd_loss + \
- params['beta_classif'] * (classif_loss_cocktail + params['beta_music_classif'] * classif_loss_music)
- else:
- global_minus_classif = params['beta_vae_loss'] * vae_loss + params['beta_swd'] * swd_loss
- global_loss = params['beta_vae_loss'] * vae_loss + params['beta_swd'] * swd_loss + params['beta_classif'] * (classif_loss_cocktail + classif_loss_music) + \
- params['beta_reg_grounding'] * mse_reg_grounding
- # global_loss = params['beta_vae_loss'] * cocktail_vae_loss + params['beta_classif'] * (classif_loss_cocktail + classif_loss_music) + \
- # params['beta_reg_grounding'] * mse_reg_grounding
-
- losses['brb_vae_loss'].append(float(brb_vae_loss))
- losses['brb_mse_latent_loss'].append(float(brb_mse_latent_loss))
- losses['brb_kld_loss_cocktail'].append(float(brb_kld_loss_cocktail))
- losses['brb_kld_loss_music'].append(float(brb_kld_loss_music))
- losses['brb_mse_loss_music'].append(float(brb_mse_loss_music))
- losses['brb_mse_loss_cocktail'].append(float(brb_mse_loss_cocktail))
- losses['swd_losses'].append(float(swd_loss))
- losses['vae_losses'].append(float(vae_loss))
- losses['kld_losses_music'].append(float(kld_loss_music))
- losses['kld_losses_cocktail'].append(float(kld_loss_cocktail))
- losses['mse_losses_music'].append(float(mse_loss_music))
- losses['mse_losses_cocktail'].append(float(mse_loss_cocktail))
- losses['global_losses'].append(float(global_loss))
- losses['classif_losses_music'].append(float(classif_loss_music))
- losses['classif_losses_cocktail'].append(float(classif_loss_cocktail))
- losses['classif_acc_cocktail'].append(float(accuracy_cocktail))
- losses['classif_acc_music'].append(float(accuracy_music))
- losses['beta_reg_grounding'].append(float(mse_reg_grounding))
- losses['bubble_mse'].append(bubble_mse)
- losses['egg_mse'].append(egg_mse)
-
- if train:
- # if epoch < params['n_epochs_music_pretrain']:
- # music_vae_loss.backward()
- # elif epoch >= params['n_epochs_music_pretrain'] and epoch < (params['n_epochs_music_pretrain'] + params['n_epochs_train']):
- # global_minus_classif.backward()
- # elif epoch >= (params['n_epochs_music_pretrain'] + params['n_epochs_train']):
- global_loss.backward()
- opt.step()
-
- if params['log_every'] != 0:
- if step != 0 and step % params['log_every'] == 0:
- print(f'\tBatch #{i_batch}')
- for k in params['keys_to_print']:
- if k != 'steps':
- print(f'\t {k}: Train: {np.nanmean(losses[k][-params["log_every"]:]):.3f}')
- # print(f'\t {k}: Train: {torch.mean(torch.cat(losses[k][-params["log_every"]:])):.3f}')
- return losses, [np.mean(cf_matrices_music, axis=0), np.mean(cf_matrices_cocktail, axis=0)]
-
-def get_cf_matrix(pred, labels):
- bs, dim = pred.shape
- labels = labels.detach().numpy()
- pred_labels = np.argmax(pred.detach().numpy(), axis=1)
- confusion_matrix = np.zeros((dim, dim))
- for i in range(bs):
- confusion_matrix[labels[i], pred_labels[i]] += 1
- for i in range(dim):
- if np.sum(confusion_matrix[i]) != 0:
- confusion_matrix[i] /= np.sum(confusion_matrix[i])
- return confusion_matrix
-
-def train(model, dataloaders, params):
- keys_to_track = params['keys_to_track']
- opt = torch.optim.AdamW(list(model.parameters()), lr=params['lr'])
- if params['decay_step'] > 0: scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=params['decay_step'], gamma=0.5)
- all_train_losses = dict(zip(keys_to_track, [[] for _ in range(len(keys_to_track))]))
- all_eval_losses = dict(zip(keys_to_track, [[] for _ in range(len(keys_to_track))]))
- best_eval_loss = np.inf
-
- data_train = dict()
- data_test = dict()
- for k in dataloaders.keys():
- if '_train' in k:
- data_train[k[:-6]] = dataloaders[k]
- elif '_test' in k:
- data_test[k[:-5]] = dataloaders[k]
- else:
- raise ValueError
- # run first eval
- eval_losses, _ = run_epoch(0, model, data_test, params, opt, train=False)
- for k in params['keys_to_track']:
- if k == 'steps':
- all_train_losses[k].append(0)
- all_eval_losses[k].append(0)
- else:
- all_train_losses[k].append(np.nan)
- all_eval_losses[k].append(np.mean(eval_losses[k]))
- # all_train_losses[k].append(torch.Tensor([np.nan]))
- # all_eval_losses[k].append(torch.atleast_1d(torch.mean(torch.cat(eval_losses[k]))))
- print(f'Initial evaluation')
- for k in params['keys_to_print']:
- to_print = all_eval_losses[k][-1] if k != 'steps' else all_eval_losses[k][-1]
- # to_print = all_eval_losses[k][-1][0] if k != 'steps' else all_eval_losses[k][-1]
- print(f' {k}: Eval: {to_print:.3f}')
- step = 0
- for epoch in range(params['epochs']):
- print(f'\n------------\nEpoch #{epoch}')
- # run training epoch
- train_losses, train_cf_matrices = run_epoch(epoch, model, data_train, params, opt, train=True)
- # run eval epoch
- eval_losses, eval_cf_matrices = run_epoch(epoch, model, data_test, params, opt, train=False)
-
- if epoch < params['n_epochs_music_pretrain']:
- epoch_size = params['pretrain_train_epoch_size']
- else:
- epoch_size = params['train_epoch_size']
- step += epoch_size
- for k in params['keys_to_track']:
- if k == 'steps':
- all_train_losses[k].append(epoch)
- all_eval_losses[k].append(epoch)
- else:
- all_train_losses[k].append(np.nanmean(train_losses[k]))
- all_eval_losses[k].append(np.nanmean(eval_losses[k]))
- # all_train_losses[k].append(torch.atleast_1d(torch.mean(torch.cat(train_losses[k]))))
- # all_eval_losses[k].append(torch.atleast_1d(torch.mean(torch.cat(eval_losses[k]))))
- if params['decay_step']: scheduler.step()
- # logging
- print(f'----\n\tEval epoch #{epoch}')
- for k in params['keys_to_print']:
- to_print_eval = all_eval_losses[k][-1] if k != 'steps' else all_eval_losses[k][-1]
- to_print_train = all_train_losses[k][-1] if k != 'steps' else all_train_losses[k][-1]
- # to_print_eval = all_eval_losses[k][-1][0] if k != 'steps' else all_eval_losses[k][-1]
- # to_print_train = all_train_losses[k][-1][0] if k != 'steps' else all_train_losses[k][-1]
- print(f'\t {k}: Eval: {to_print_eval:.3f} / Train: {to_print_train:.3f}')
-
- if epoch % params['plot_every'] == 0:
- plot_all_losses(all_train_losses.copy(), all_eval_losses.copy(), train_cf_matrices, eval_cf_matrices, params)
- # saving models
- save_losses(all_train_losses, all_eval_losses, params['save_path'] + 'results.txt')
- if params['save_every'] != 0:
- if epoch % params['save_every'] == 0:
- print('Saving model.')
- save_model(model, path=params['save_path'], name=f'epoch_{epoch}')
- if all_eval_losses['global_losses'][-1] < best_eval_loss:
- best_eval_loss = all_eval_losses['global_losses'][-1]
- print(f'New best eval loss: {best_eval_loss:.3f}, saving model.')
- # print(f'New best eval loss: {best_eval_loss[0]:.3f}, saving model.')
- save_model(model, path=params['save_path'], name='best_eval')
- print('Saving last model.')
- save_model(model, path=params['save_path'], name=f'last')
- return model, all_train_losses, all_eval_losses, train_cf_matrices, eval_cf_matrices
-
-def save_losses(train_losses, eval_losses, path):
- results = []
- keys = sorted(train_losses.keys())
- for k in keys:
- if k != 'steps':
- results.append(train_losses[k])#list(torch.cat(train_losses[k]).detach().cpu().numpy()))
- else:
- results.append(train_losses[k])
- for k in keys:
- if k != 'steps':
- results.append(eval_losses[k])#list(torch.cat(eval_losses[k]).detach().cpu().numpy()))
- else:
- results.append(eval_losses[k])
- np.savetxt(path, np.array(results))
-
-def save_model(model, path, name):
- torch.save(model.state_dict(), path + f'checkpoints_{name}.save')
-
-def run_training(params):
- params = compute_expe_name_and_save_path(params)
- dataloaders, n_labels, stats = get_dataloaders(cocktail_rep_path=params['cocktail_rep_path'],
- music_rep_path=params['music_rep_path'],
- batch_size=params['pretrain_batch_size'],
- train_epoch_size=params['pretrain_train_epoch_size'],
- test_epoch_size=params['pretrain_test_epoch_size'])
- params['nb_classes'] = n_labels
- params['stats'] = stats
- params['classif_classes'] = dataloaders['music_labeled_train'].dataset.classes
- vae_gml_model = get_gml_vae_models(layer_type=params['layer_type'],
- input_dim_music=dataloaders['music_train'].dataset.dim_music,
- input_dim_cocktail=dataloaders['cocktail_train'].dataset.dim_cocktail,
- hidden_dim=params['hidden_dim'],
- n_hidden=params['n_hidden'],
- latent_dim=params['latent_dim'],
- nb_classes=params['nb_classes'],
- dropout=params['dropout'])
- params['dim_music'] = dataloaders['music_train'].dataset.dim_music
- params['dim_cocktail'] = dataloaders['cocktail_train'].dataset.dim_cocktail
- with open(params['save_path'] + 'params.json', 'w') as f:
- json.dump(params, f)
- models, train_losses, eval_losses, train_cf_matrices, eval_cf_matrices = train(vae_gml_model, dataloaders, params)
- plot_all_losses(train_losses.copy(), eval_losses.copy(), train_cf_matrices, eval_cf_matrices, params)
- return models, train_losses, eval_losses
-
-def plot_all_losses(train_losses, eval_losses, train_cf_matrices, eval_cf_matrices, params):
- plot_losses(train_losses, train_cf_matrices, 'train', params)
- plot_losses(eval_losses, eval_cf_matrices, 'eval', params)
-
-def plot_losses(losses, cf_matrices, split, params):
- save_path = params['save_path'] + 'plots/'
- os.makedirs(save_path, exist_ok=True)
- steps = losses['steps']
- for k in losses.keys():
- # if k != 'steps':
- # losses[k] = losses[k]#torch.cat(losses[k]).detach().cpu().numpy()
- # else:
- losses[k] = np.array(losses[k])
- losses['sum_loss_classif'] = losses['classif_losses_music'] + losses['classif_losses_cocktail']
- losses['av_acc_classif'] = (losses['classif_acc_cocktail'] + losses['classif_acc_music'])/2
- losses['sum_mse_vae'] = losses['mse_losses_cocktail'] + losses['mse_losses_music']
- losses['sum_kld_vae'] = losses['kld_losses_cocktail'] + losses['kld_losses_music']
-
-
- plt.figure()
- for k in ['global_losses', 'vae_losses', 'swd_losses', 'sum_mse_vae', 'sum_kld_vae']:
- factor = 10 if k == 'swd_losses' else 1
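- # the swd term is plotted scaled by 10 so it stays visible on the same y-axis as the larger losses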
- plt.plot(steps, losses[k] * factor, label=k)
- plt.title(split)
- plt.legend()
- plt.ylim([0, 2.5])
- plt.savefig(save_path + f'plot_high_level_losses_{split}.png')
- plt.close(plt.gcf())
-
- plt.figure()
- for k in ['classif_acc_cocktail', 'classif_acc_music']:
- plt.plot(steps, losses[k], label=k)
- plt.title(split)
- plt.ylim([0, 1])
- plt.legend()
- plt.savefig(save_path + f'plot_classif_accuracies_{split}.png')
- plt.close(plt.gcf())
-
- plt.figure()
- for k in ['mse_losses_cocktail', 'mse_losses_music', 'kld_losses_cocktail',
- 'kld_losses_music', 'swd_losses', 'classif_losses_cocktail', 'classif_losses_music', 'beta_reg_grounding',
- 'bubble_mse', 'egg_mse']:
- factor = 10 if k == 'swd_losses' else 1
- plt.plot(steps, losses[k] * factor, label=k)
- plt.title(split)
- plt.ylim([0, 2.5])
- plt.legend()
- plt.savefig(save_path + f'plot_detailed_losses_{split}.png')
- plt.close(plt.gcf())
-
- for i_k, k in enumerate(['music', 'cocktail']):
- plt.figure()
- plt.imshow(cf_matrices[i_k], vmin=0, vmax=1)
- labx = plt.xticks(range(len(params['classif_classes'])), params['classif_classes'], rotation=45)
- laby = plt.yticks(range(len(params['classif_classes'])), params['classif_classes'])
- labxx = plt.xlabel('predicted')
- labyy = plt.ylabel('true')
- plt.title(split + ' ' + k)
- plt.colorbar()
- plt.savefig(save_path + f'cf_matrix_{split}_{k}.png', bbox_inches='tight')
- plt.close(plt.gcf())
-
- if params['use_brb_vae']:
- plt.figure()
- for k in ['brb_vae_loss', 'brb_kld_loss_cocktail', 'brb_kld_loss_music', 'brb_mse_loss_music', 'brb_mse_loss_cocktail', 'mse_losses_music', 'brb_mse_latent_loss']:
- factor = 10 if k == 'swd_losses' else 1
- plt.plot(steps, losses[k] * factor, label=k)
- plt.title(split)
- plt.ylim([0, 2.5])
- plt.legend()
- plt.savefig(save_path + f'plot_detailed_brb_losses_{split}.png')
- plt.close(plt.gcf())
-
-def parse_args():
- parser = argparse.ArgumentParser(description="")
- parser.add_argument("--save_path", type=str, default="/home/cedric/Documents/pianocktail/experiments/music/representation_learning/saved_models/latent_translation/")
- parser.add_argument("--trial_id", type=str, default="b256_r128_classif001_ld40_meanstd")
- parser.add_argument("--hidden_dim", type=int, default=256) #128
- parser.add_argument("--n_hidden", type=int, default=1)
- parser.add_argument("--latent_dim", type=int, default=40) #40
- parser.add_argument("--n_epochs_music_pretrain", type=int, default=0)
- parser.add_argument("--n_epochs_train", type=int, default=200)
- parser.add_argument("--n_epochs_classif_finetune", type=int, default=0)
- parser.add_argument("--beta_vae_loss", type=float, default=1.)
- parser.add_argument("--beta_vae", type=float, default=1.2) # keep this low~1 to allow music classification...
- parser.add_argument("--beta_swd", type=float, default=1)
- parser.add_argument("--beta_reg_grounding", type=float, default=2.5)
- parser.add_argument("--beta_classif", type=float, default=0.01)#0.01) #TODO: try 0.1, default 0.01
- parser.add_argument("--beta_music", type=float, default=100) # higher loss on the music that needs more to converge
- parser.add_argument("--beta_music_classif", type=float, default=300) # try300# higher loss on the music that needs more to converge
- parser.add_argument("--pretrain_batch_size", type=int, default=128)
- parser.add_argument("--batch_size", type=int, default=32)
- parser.add_argument("--lr", type=float, default=0.001)
- parser.add_argument("--decay_step", type=int, default=0)
- parser.add_argument("--cocktail_rep_path", type=str, default=FULL_COCKTAIL_REP_PATH)
- parser.add_argument("--music_rep_path", type=str, default=music_rep_path)
- parser.add_argument("--use_brb_vae", type=bool, default=False)
- parser.add_argument("--layer_type", type=str, default='gml')
- parser.add_argument("--dropout", type=float, default=0.2)
-
- # best parameters
- # parser = argparse.ArgumentParser(description="")
- # parser.add_argument("--save_path", type=str, default="/home/cedric/Documents/pianocktail/experiments/music/representation_learning/saved_models/latent_translation/")
- # parser.add_argument("--trial_id", type=str, default="b256_r128_classif001_ld40_meanstd")
- # parser.add_argument("--hidden_dim", type=int, default=256) #128
- # parser.add_argument("--n_hidden", type=int, default=1)
- # parser.add_argument("--latent_dim", type=int, default=40) #40
- # parser.add_argument("--n_epochs_music_pretrain", type=int, default=0)
- # parser.add_argument("--n_epochs_train", type=int, default=200)
- # parser.add_argument("--n_epochs_classif_finetune", type=int, default=0)
- # parser.add_argument("--beta_vae_loss", type=float, default=1.)
- # parser.add_argument("--beta_vae", type=float, default=1) # keep this low~1 to allow music classification...
- # parser.add_argument("--beta_swd", type=float, default=1)
- # parser.add_argument("--beta_reg_grounding", type=float, default=2.5)
- # parser.add_argument("--beta_classif", type=float, default=0.01)#0.01) #TODO: try 0.1, default 0.01
- # parser.add_argument("--beta_music", type=float, default=100) # higher loss on the music that needs more to converge
- # parser.add_argument("--beta_music_classif", type=float, default=300) # try300# higher loss on the music that needs more to converge
- # parser.add_argument("--pretrain_batch_size", type=int, default=128)
- # parser.add_argument("--batch_size", type=int, default=32)
- # parser.add_argument("--lr", type=float, default=0.001)
- # parser.add_argument("--decay_step", type=int, default=0)
- # parser.add_argument("--cocktail_rep_path", type=str, default=FULL_COCKTAIL_REP_PATH)
- # parser.add_argument("--music_rep_path", type=str, default=music_rep_path)
- # parser.add_argument("--use_brb_vae", type=bool, default=False)
- # parser.add_argument("--layer_type", type=str, default='gml')
- # parser.add_argument("--dropout", type=float, default=0.2)
- args = parser.parse_args()
- return args
-
-def compute_expe_name_and_save_path(params):
- save_path = params['save_path'] + params["trial_id"]
- if params["use_brb_vae"]:
- save_path += '_usebrb'
- save_path += f'_lr{params["lr"]}'
- save_path += f'_bs{params["batch_size"]}'
- save_path += f'_bmusic{params["beta_music"]}'
- save_path += f'_bswd{params["beta_swd"]}'
- save_path += f'_bclassif{params["beta_classif"]}'
- save_path += f'_bvae{params["beta_vae_loss"]}'
- save_path += f'_bvaekld{params["beta_vae"]}'
- save_path += f'_lat{params["latent_dim"]}'
- save_path += f'_hd{params["n_hidden"]}x{params["hidden_dim"]}'
- save_path += f'_drop{params["dropout"]}'
- save_path += f'_decay{params["decay_step"]}'
- save_path += f'_layertype{params["layer_type"]}'
- number_added = False
- counter = 1
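- # if this experiment directory already exists, append an increasing numeric suffix until the path is unique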
- while os.path.exists(save_path):
- if number_added:
- save_path = '_'.join(save_path.split('_')[:-1]) + f'_{counter}'
- else:
- save_path += f'_{counter}'
- number_added = True
- counter += 1
- params["save_path"] = save_path + '/'
- os.makedirs(save_path)
- print(f'logging to {save_path}')
- return params
-
-if __name__ == '__main__':
- keys_to_track = ['steps', 'global_losses', 'vae_losses', 'mse_losses_cocktail', 'mse_losses_music', 'kld_losses_cocktail',
- 'kld_losses_music', 'swd_losses', 'classif_losses_cocktail', 'classif_losses_music', 'classif_acc_cocktail', 'classif_acc_music',
- 'brb_kld_loss_cocktail', 'brb_kld_loss_music', 'brb_mse_loss_music', 'brb_mse_loss_cocktail', 'brb_mse_latent_loss', 'brb_vae_loss', 'beta_reg_grounding',
- 'bubble_mse', 'egg_mse']
-
- keys_to_print = ['steps', 'global_losses', 'vae_losses', 'mse_losses_cocktail', 'mse_losses_music', 'kld_losses_cocktail',
- 'kld_losses_music', 'swd_losses', 'classif_losses_cocktail', 'classif_losses_music', 'classif_acc_cocktail', 'classif_acc_music', 'beta_reg_grounding']
- #TODO: first phase vae pretraining for music
- # then in second phase: vae cocktail and music, brb vaes
- args = parse_args()
- params = dict(nb_classes=None,
- save_every=0, #epochs
- log_every=0, #32*500,
- plot_every=10, # in epochs
- keys_to_track=keys_to_track,
- keys_to_print=keys_to_print,)
- params.update(vars(args))
-
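- # epoch sizes are expressed in samples: 100 batches of training data and 10 batches of test data per epoch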
- params['train_epoch_size'] = params['batch_size'] * 100
- params['test_epoch_size'] = params['batch_size'] * 10
- params['pretrain_train_epoch_size'] = params['pretrain_batch_size'] * 100
- params['pretrain_test_epoch_size'] = params['pretrain_batch_size'] * 10
- params['epochs'] = params['n_epochs_music_pretrain'] + params['n_epochs_train'] + params['n_epochs_classif_finetune']
- run_training(params)
-
-
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/demo/megengine_cpp_readme.md b/spaces/chendl/compositional_test/multimodal/YOLOX/docs/demo/megengine_cpp_readme.md
deleted file mode 100644
index dbadb36bcdc06a7d62e99d7f2f0c59b40231e1b7..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/demo/megengine_cpp_readme.md
+++ /dev/null
@@ -1 +0,0 @@
-../../demo/MegEngine/cpp/README.md
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/layers/fast_coco_eval_api.py b/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/layers/fast_coco_eval_api.py
deleted file mode 100644
index 5f3aeb5517077718331074c3795ed2d10b4954bc..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/layers/fast_coco_eval_api.py
+++ /dev/null
@@ -1,151 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# This file comes from
-# https://github.com/facebookresearch/detectron2/blob/master/detectron2/evaluation/fast_eval_api.py
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Copyright (c) Megvii Inc. All rights reserved.
-
-import copy
-import time
-
-import numpy as np
-from pycocotools.cocoeval import COCOeval
-
-from .jit_ops import FastCOCOEvalOp
-
-
-class COCOeval_opt(COCOeval):
- """
- This is a slightly modified version of the original COCO API, where the functions evaluateImg()
- and accumulate() are implemented in C++ to speed up evaluation.
- """
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
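- # load the JIT-compiled C++ extension that provides the fast evaluate/accumulate kernels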
- self.module = FastCOCOEvalOp().load()
-
- def evaluate(self):
- """
- Run per image evaluation on given images and store results in self.evalImgs_cpp, a
- data structure that isn't readable from Python but is used by a C++ implementation of
- accumulate(). Unlike the original COCO PythonAPI, we don't populate the data structure
- self.evalImgs because this data structure is a computational bottleneck.
- :return: None
- """
- tic = time.time()
-
- print("Running per image evaluation...")
- p = self.params
- # add backward compatibility if useSegm is specified in params
- if p.useSegm is not None:
- p.iouType = "segm" if p.useSegm == 1 else "bbox"
- print(
- "useSegm (deprecated) is not None. Running {} evaluation".format(
- p.iouType
- )
- )
- print("Evaluate annotation type *{}*".format(p.iouType))
- p.imgIds = list(np.unique(p.imgIds))
- if p.useCats:
- p.catIds = list(np.unique(p.catIds))
- p.maxDets = sorted(p.maxDets)
- self.params = p
-
- self._prepare()
-
- # loop through images, area range, max detection number
- catIds = p.catIds if p.useCats else [-1]
-
- if p.iouType == "segm" or p.iouType == "bbox":
- computeIoU = self.computeIoU
- elif p.iouType == "keypoints":
- computeIoU = self.computeOks
- self.ious = {
- (imgId, catId): computeIoU(imgId, catId)
- for imgId in p.imgIds
- for catId in catIds
- }
-
- maxDet = p.maxDets[-1]
-
- # <<<< Beginning of code differences with original COCO API
- def convert_instances_to_cpp(instances, is_det=False):
- # Convert annotations for a list of instances in an image to a format that's fast
- # to access in C++
- instances_cpp = []
- for instance in instances:
- instance_cpp = self.module.InstanceAnnotation(
- int(instance["id"]),
- instance["score"] if is_det else instance.get("score", 0.0),
- instance["area"],
- bool(instance.get("iscrowd", 0)),
- bool(instance.get("ignore", 0)),
- )
- instances_cpp.append(instance_cpp)
- return instances_cpp
-
- # Convert GT annotations, detections, and IOUs to a format that's fast to access in C++
- ground_truth_instances = [
- [convert_instances_to_cpp(self._gts[imgId, catId]) for catId in p.catIds]
- for imgId in p.imgIds
- ]
- detected_instances = [
- [
- convert_instances_to_cpp(self._dts[imgId, catId], is_det=True)
- for catId in p.catIds
- ]
- for imgId in p.imgIds
- ]
- ious = [[self.ious[imgId, catId] for catId in catIds] for imgId in p.imgIds]
-
- if not p.useCats:
- # For each image, flatten per-category lists into a single list
- ground_truth_instances = [
- [[o for c in i for o in c]] for i in ground_truth_instances
- ]
- detected_instances = [
- [[o for c in i for o in c]] for i in detected_instances
- ]
-
- # Call C++ implementation of self.evaluateImgs()
- self._evalImgs_cpp = self.module.COCOevalEvaluateImages(
- p.areaRng,
- maxDet,
- p.iouThrs,
- ious,
- ground_truth_instances,
- detected_instances,
- )
- self._evalImgs = None
-
- self._paramsEval = copy.deepcopy(self.params)
- toc = time.time()
- print("COCOeval_opt.evaluate() finished in {:0.2f} seconds.".format(toc - tic))
- # >>>> End of code differences with original COCO API
-
- def accumulate(self):
- """
- Accumulate per image evaluation results and store the result in self.eval. Does not
- support changing parameter settings from those used by self.evaluate()
- """
- print("Accumulating evaluation results...")
- tic = time.time()
- if not hasattr(self, "_evalImgs_cpp"):
- print("Please run evaluate() first")
-
- self.eval = self.module.COCOevalAccumulate(self._paramsEval, self._evalImgs_cpp)
-
- # recall is num_iou_thresholds X num_categories X num_area_ranges X num_max_detections
- self.eval["recall"] = np.array(self.eval["recall"]).reshape(
- self.eval["counts"][:1] + self.eval["counts"][2:]
- )
-
- # precision and scores are num_iou_thresholds X num_recall_thresholds X num_categories X
- # num_area_ranges X num_max_detections
- self.eval["precision"] = np.array(self.eval["precision"]).reshape(
- self.eval["counts"]
- )
- self.eval["scores"] = np.array(self.eval["scores"]).reshape(self.eval["counts"])
- toc = time.time()
- print(
- "COCOeval_opt.accumulate() finished in {:0.2f} seconds.".format(toc - tic)
- )
diff --git a/spaces/chenmgtea/cn_tts/bert/prosody_tool.py b/spaces/chenmgtea/cn_tts/bert/prosody_tool.py
deleted file mode 100644
index 690a1bab4701b7912840c09447d8e0de8ebecb5a..0000000000000000000000000000000000000000
--- a/spaces/chenmgtea/cn_tts/bert/prosody_tool.py
+++ /dev/null
@@ -1,426 +0,0 @@
-def is_chinese(uchar):
- # True for characters in the CJK Unified Ideographs block (U+4E00 to U+9FA5)
- return u'\u4e00' <= uchar <= u'\u9fa5'
-
-
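- # Map each pinyin syllable to an (initial, final) pair; "^" marks a null initial and "v" stands for the vowel ü.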
-pinyin_dict = {
- "a": ("^", "a"),
- "ai": ("^", "ai"),
- "an": ("^", "an"),
- "ang": ("^", "ang"),
- "ao": ("^", "ao"),
- "ba": ("b", "a"),
- "bai": ("b", "ai"),
- "ban": ("b", "an"),
- "bang": ("b", "ang"),
- "bao": ("b", "ao"),
- "be": ("b", "e"),
- "bei": ("b", "ei"),
- "ben": ("b", "en"),
- "beng": ("b", "eng"),
- "bi": ("b", "i"),
- "bian": ("b", "ian"),
- "biao": ("b", "iao"),
- "bie": ("b", "ie"),
- "bin": ("b", "in"),
- "bing": ("b", "ing"),
- "bo": ("b", "o"),
- "bu": ("b", "u"),
- "ca": ("c", "a"),
- "cai": ("c", "ai"),
- "can": ("c", "an"),
- "cang": ("c", "ang"),
- "cao": ("c", "ao"),
- "ce": ("c", "e"),
- "cen": ("c", "en"),
- "ceng": ("c", "eng"),
- "cha": ("ch", "a"),
- "chai": ("ch", "ai"),
- "chan": ("ch", "an"),
- "chang": ("ch", "ang"),
- "chao": ("ch", "ao"),
- "che": ("ch", "e"),
- "chen": ("ch", "en"),
- "cheng": ("ch", "eng"),
- "chi": ("ch", "iii"),
- "chong": ("ch", "ong"),
- "chou": ("ch", "ou"),
- "chu": ("ch", "u"),
- "chua": ("ch", "ua"),
- "chuai": ("ch", "uai"),
- "chuan": ("ch", "uan"),
- "chuang": ("ch", "uang"),
- "chui": ("ch", "uei"),
- "chun": ("ch", "uen"),
- "chuo": ("ch", "uo"),
- "ci": ("c", "ii"),
- "cong": ("c", "ong"),
- "cou": ("c", "ou"),
- "cu": ("c", "u"),
- "cuan": ("c", "uan"),
- "cui": ("c", "uei"),
- "cun": ("c", "uen"),
- "cuo": ("c", "uo"),
- "da": ("d", "a"),
- "dai": ("d", "ai"),
- "dan": ("d", "an"),
- "dang": ("d", "ang"),
- "dao": ("d", "ao"),
- "de": ("d", "e"),
- "dei": ("d", "ei"),
- "den": ("d", "en"),
- "deng": ("d", "eng"),
- "di": ("d", "i"),
- "dia": ("d", "ia"),
- "dian": ("d", "ian"),
- "diao": ("d", "iao"),
- "die": ("d", "ie"),
- "ding": ("d", "ing"),
- "diu": ("d", "iou"),
- "dong": ("d", "ong"),
- "dou": ("d", "ou"),
- "du": ("d", "u"),
- "duan": ("d", "uan"),
- "dui": ("d", "uei"),
- "dun": ("d", "uen"),
- "duo": ("d", "uo"),
- "e": ("^", "e"),
- "ei": ("^", "ei"),
- "en": ("^", "en"),
- "ng": ("^", "en"),
- "eng": ("^", "eng"),
- "er": ("^", "er"),
- "fa": ("f", "a"),
- "fan": ("f", "an"),
- "fang": ("f", "ang"),
- "fei": ("f", "ei"),
- "fen": ("f", "en"),
- "feng": ("f", "eng"),
- "fo": ("f", "o"),
- "fou": ("f", "ou"),
- "fu": ("f", "u"),
- "ga": ("g", "a"),
- "gai": ("g", "ai"),
- "gan": ("g", "an"),
- "gang": ("g", "ang"),
- "gao": ("g", "ao"),
- "ge": ("g", "e"),
- "gei": ("g", "ei"),
- "gen": ("g", "en"),
- "geng": ("g", "eng"),
- "gong": ("g", "ong"),
- "gou": ("g", "ou"),
- "gu": ("g", "u"),
- "gua": ("g", "ua"),
- "guai": ("g", "uai"),
- "guan": ("g", "uan"),
- "guang": ("g", "uang"),
- "gui": ("g", "uei"),
- "gun": ("g", "uen"),
- "guo": ("g", "uo"),
- "ha": ("h", "a"),
- "hai": ("h", "ai"),
- "han": ("h", "an"),
- "hang": ("h", "ang"),
- "hao": ("h", "ao"),
- "he": ("h", "e"),
- "hei": ("h", "ei"),
- "hen": ("h", "en"),
- "heng": ("h", "eng"),
- "hong": ("h", "ong"),
- "hou": ("h", "ou"),
- "hu": ("h", "u"),
- "hua": ("h", "ua"),
- "huai": ("h", "uai"),
- "huan": ("h", "uan"),
- "huang": ("h", "uang"),
- "hui": ("h", "uei"),
- "hun": ("h", "uen"),
- "huo": ("h", "uo"),
- "ji": ("j", "i"),
- "jia": ("j", "ia"),
- "jian": ("j", "ian"),
- "jiang": ("j", "iang"),
- "jiao": ("j", "iao"),
- "jie": ("j", "ie"),
- "jin": ("j", "in"),
- "jing": ("j", "ing"),
- "jiong": ("j", "iong"),
- "jiu": ("j", "iou"),
- "ju": ("j", "v"),
- "juan": ("j", "van"),
- "jue": ("j", "ve"),
- "jun": ("j", "vn"),
- "ka": ("k", "a"),
- "kai": ("k", "ai"),
- "kan": ("k", "an"),
- "kang": ("k", "ang"),
- "kao": ("k", "ao"),
- "ke": ("k", "e"),
- "kei": ("k", "ei"),
- "ken": ("k", "en"),
- "keng": ("k", "eng"),
- "kong": ("k", "ong"),
- "kou": ("k", "ou"),
- "ku": ("k", "u"),
- "kua": ("k", "ua"),
- "kuai": ("k", "uai"),
- "kuan": ("k", "uan"),
- "kuang": ("k", "uang"),
- "kui": ("k", "uei"),
- "kun": ("k", "uen"),
- "kuo": ("k", "uo"),
- "la": ("l", "a"),
- "lai": ("l", "ai"),
- "lan": ("l", "an"),
- "lang": ("l", "ang"),
- "lao": ("l", "ao"),
- "le": ("l", "e"),
- "lei": ("l", "ei"),
- "leng": ("l", "eng"),
- "li": ("l", "i"),
- "lia": ("l", "ia"),
- "lian": ("l", "ian"),
- "liang": ("l", "iang"),
- "liao": ("l", "iao"),
- "lie": ("l", "ie"),
- "lin": ("l", "in"),
- "ling": ("l", "ing"),
- "liu": ("l", "iou"),
- "lo": ("l", "o"),
- "long": ("l", "ong"),
- "lou": ("l", "ou"),
- "lu": ("l", "u"),
- "lv": ("l", "v"),
- "luan": ("l", "uan"),
- "lve": ("l", "ve"),
- "lue": ("l", "ve"),
- "lun": ("l", "uen"),
- "luo": ("l", "uo"),
- "ma": ("m", "a"),
- "mai": ("m", "ai"),
- "man": ("m", "an"),
- "mang": ("m", "ang"),
- "mao": ("m", "ao"),
- "me": ("m", "e"),
- "mei": ("m", "ei"),
- "men": ("m", "en"),
- "meng": ("m", "eng"),
- "mi": ("m", "i"),
- "mian": ("m", "ian"),
- "miao": ("m", "iao"),
- "mie": ("m", "ie"),
- "min": ("m", "in"),
- "ming": ("m", "ing"),
- "miu": ("m", "iou"),
- "mo": ("m", "o"),
- "mou": ("m", "ou"),
- "mu": ("m", "u"),
- "na": ("n", "a"),
- "nai": ("n", "ai"),
- "nan": ("n", "an"),
- "nang": ("n", "ang"),
- "nao": ("n", "ao"),
- "ne": ("n", "e"),
- "nei": ("n", "ei"),
- "nen": ("n", "en"),
- "neng": ("n", "eng"),
- "ni": ("n", "i"),
- "nia": ("n", "ia"),
- "nian": ("n", "ian"),
- "niang": ("n", "iang"),
- "niao": ("n", "iao"),
- "nie": ("n", "ie"),
- "nin": ("n", "in"),
- "ning": ("n", "ing"),
- "niu": ("n", "iou"),
- "nong": ("n", "ong"),
- "nou": ("n", "ou"),
- "nu": ("n", "u"),
- "nv": ("n", "v"),
- "nuan": ("n", "uan"),
- "nve": ("n", "ve"),
- "nue": ("n", "ve"),
- "nuo": ("n", "uo"),
- "o": ("^", "o"),
- "ou": ("^", "ou"),
- "pa": ("p", "a"),
- "pai": ("p", "ai"),
- "pan": ("p", "an"),
- "pang": ("p", "ang"),
- "pao": ("p", "ao"),
- "pe": ("p", "e"),
- "pei": ("p", "ei"),
- "pen": ("p", "en"),
- "peng": ("p", "eng"),
- "pi": ("p", "i"),
- "pian": ("p", "ian"),
- "piao": ("p", "iao"),
- "pie": ("p", "ie"),
- "pin": ("p", "in"),
- "ping": ("p", "ing"),
- "po": ("p", "o"),
- "pou": ("p", "ou"),
- "pu": ("p", "u"),
- "qi": ("q", "i"),
- "qia": ("q", "ia"),
- "qian": ("q", "ian"),
- "qiang": ("q", "iang"),
- "qiao": ("q", "iao"),
- "qie": ("q", "ie"),
- "qin": ("q", "in"),
- "qing": ("q", "ing"),
- "qiong": ("q", "iong"),
- "qiu": ("q", "iou"),
- "qu": ("q", "v"),
- "quan": ("q", "van"),
- "que": ("q", "ve"),
- "qun": ("q", "vn"),
- "ran": ("r", "an"),
- "rang": ("r", "ang"),
- "rao": ("r", "ao"),
- "re": ("r", "e"),
- "ren": ("r", "en"),
- "reng": ("r", "eng"),
- "ri": ("r", "iii"),
- "rong": ("r", "ong"),
- "rou": ("r", "ou"),
- "ru": ("r", "u"),
- "rua": ("r", "ua"),
- "ruan": ("r", "uan"),
- "rui": ("r", "uei"),
- "run": ("r", "uen"),
- "ruo": ("r", "uo"),
- "sa": ("s", "a"),
- "sai": ("s", "ai"),
- "san": ("s", "an"),
- "sang": ("s", "ang"),
- "sao": ("s", "ao"),
- "se": ("s", "e"),
- "sen": ("s", "en"),
- "seng": ("s", "eng"),
- "sha": ("sh", "a"),
- "shai": ("sh", "ai"),
- "shan": ("sh", "an"),
- "shang": ("sh", "ang"),
- "shao": ("sh", "ao"),
- "she": ("sh", "e"),
- "shei": ("sh", "ei"),
- "shen": ("sh", "en"),
- "sheng": ("sh", "eng"),
- "shi": ("sh", "iii"),
- "shou": ("sh", "ou"),
- "shu": ("sh", "u"),
- "shua": ("sh", "ua"),
- "shuai": ("sh", "uai"),
- "shuan": ("sh", "uan"),
- "shuang": ("sh", "uang"),
- "shui": ("sh", "uei"),
- "shun": ("sh", "uen"),
- "shuo": ("sh", "uo"),
- "si": ("s", "ii"),
- "song": ("s", "ong"),
- "sou": ("s", "ou"),
- "su": ("s", "u"),
- "suan": ("s", "uan"),
- "sui": ("s", "uei"),
- "sun": ("s", "uen"),
- "suo": ("s", "uo"),
- "ta": ("t", "a"),
- "tai": ("t", "ai"),
- "tan": ("t", "an"),
- "tang": ("t", "ang"),
- "tao": ("t", "ao"),
- "te": ("t", "e"),
- "tei": ("t", "ei"),
- "teng": ("t", "eng"),
- "ti": ("t", "i"),
- "tian": ("t", "ian"),
- "tiao": ("t", "iao"),
- "tie": ("t", "ie"),
- "ting": ("t", "ing"),
- "tong": ("t", "ong"),
- "tou": ("t", "ou"),
- "tu": ("t", "u"),
- "tuan": ("t", "uan"),
- "tui": ("t", "uei"),
- "tun": ("t", "uen"),
- "tuo": ("t", "uo"),
- "wa": ("^", "ua"),
- "wai": ("^", "uai"),
- "wan": ("^", "uan"),
- "wang": ("^", "uang"),
- "wei": ("^", "uei"),
- "wen": ("^", "uen"),
- "weng": ("^", "ueng"),
- "wo": ("^", "uo"),
- "wu": ("^", "u"),
- "xi": ("x", "i"),
- "xia": ("x", "ia"),
- "xian": ("x", "ian"),
- "xiang": ("x", "iang"),
- "xiao": ("x", "iao"),
- "xie": ("x", "ie"),
- "xin": ("x", "in"),
- "xing": ("x", "ing"),
- "xiong": ("x", "iong"),
- "xiu": ("x", "iou"),
- "xu": ("x", "v"),
- "xuan": ("x", "van"),
- "xue": ("x", "ve"),
- "xun": ("x", "vn"),
- "ya": ("^", "ia"),
- "yan": ("^", "ian"),
- "yang": ("^", "iang"),
- "yao": ("^", "iao"),
- "ye": ("^", "ie"),
- "yi": ("^", "i"),
- "yin": ("^", "in"),
- "ying": ("^", "ing"),
- "yo": ("^", "iou"),
- "yong": ("^", "iong"),
- "you": ("^", "iou"),
- "yu": ("^", "v"),
- "yuan": ("^", "van"),
- "yue": ("^", "ve"),
- "yun": ("^", "vn"),
- "za": ("z", "a"),
- "zai": ("z", "ai"),
- "zan": ("z", "an"),
- "zang": ("z", "ang"),
- "zao": ("z", "ao"),
- "ze": ("z", "e"),
- "zei": ("z", "ei"),
- "zen": ("z", "en"),
- "zeng": ("z", "eng"),
- "zha": ("zh", "a"),
- "zhai": ("zh", "ai"),
- "zhan": ("zh", "an"),
- "zhang": ("zh", "ang"),
- "zhao": ("zh", "ao"),
- "zhe": ("zh", "e"),
- "zhei": ("zh", "ei"),
- "zhen": ("zh", "en"),
- "zheng": ("zh", "eng"),
- "zhi": ("zh", "iii"),
- "zhong": ("zh", "ong"),
- "zhou": ("zh", "ou"),
- "zhu": ("zh", "u"),
- "zhua": ("zh", "ua"),
- "zhuai": ("zh", "uai"),
- "zhuan": ("zh", "uan"),
- "zhuang": ("zh", "uang"),
- "zhui": ("zh", "uei"),
- "zhun": ("zh", "uen"),
- "zhuo": ("zh", "uo"),
- "zi": ("z", "ii"),
- "zong": ("z", "ong"),
- "zou": ("z", "ou"),
- "zu": ("z", "u"),
- "zuan": ("z", "uan"),
- "zui": ("z", "uei"),
- "zun": ("z", "uen"),
- "zuo": ("z", "uo"),
-}
diff --git a/spaces/chongjie/ZoeDepth_slim/geometry.py b/spaces/chongjie/ZoeDepth_slim/geometry.py
deleted file mode 100644
index 6cb738f60d68b6dd2e58fa61093367f748a31bce..0000000000000000000000000000000000000000
--- a/spaces/chongjie/ZoeDepth_slim/geometry.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import numpy as np
-
-def get_intrinsics(H,W):
- """
- Intrinsics for a pinhole camera model.
- Assume fov of 55 degrees and central principal point.
- """
- f = 0.5 * W / np.tan(0.5 * 55 * np.pi / 180.0)
- cx = 0.5 * W
- cy = 0.5 * H
- return np.array([[f, 0, cx],
- [0, f, cy],
- [0, 0, 1]])
-
-def depth_to_points(depth, R=None, t=None):
-
- K = get_intrinsics(depth.shape[1], depth.shape[2])
- Kinv = np.linalg.inv(K)
- if R is None:
- R = np.eye(3)
- if t is None:
- t = np.zeros(3)
-
- # M converts from your coordinate to PyTorch3D's coordinate system
- M = np.eye(3)
- M[0, 0] = -1.0
- M[1, 1] = -1.0
-
- height, width = depth.shape[1:3]
-
- x = np.arange(width)
- y = np.arange(height)
- coord = np.stack(np.meshgrid(x, y), -1)
- coord = np.concatenate((coord, np.ones_like(coord)[:, :, [0]]), -1) # z=1
- coord = coord.astype(np.float32)
- # coord = torch.as_tensor(coord, dtype=torch.float32, device=device)
- coord = coord[None] # bs, h, w, 3
-
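- # back-project every pixel: X_cam = depth * K^-1 * [u, v, 1]^T, broadcast over the batch and the image grid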
- D = depth[:, :, :, None, None]
- # print(D.shape, Kinv[None, None, None, ...].shape, coord[:, :, :, :, None].shape )
- pts3D_1 = D * Kinv[None, None, None, ...] @ coord[:, :, :, :, None]
- # pts3D_1 live in your coordinate system. Convert them to Py3D's
- pts3D_1 = M[None, None, None, ...] @ pts3D_1
- # from reference to target viewpoint
- pts3D_2 = R[None, None, None, ...] @ pts3D_1 + t[None, None, None, :, None]
- # pts3D_2 = pts3D_1
- # depth_2 = pts3D_2[:, :, :, 2, :] # b,1,h,w
- return pts3D_2[:, :, :, :3, 0][0]
-
-
-def create_triangles(h, w, mask=None):
- """Creates mesh triangle indices from a given pixel grid size.
- This function is not and need not be differentiable as triangle indices are
- fixed.
- Args:
- h: (int) denoting the height of the image.
- w: (int) denoting the width of the image.
- mask: (optional bool array of shape (H, W) or (H*W,)) only triangles whose three vertices are True in the mask are kept.
- Returns:
- triangles: 2D numpy array of indices (int) with shape (2(W-1)(H-1) x 3)
- """
- x, y = np.meshgrid(range(w - 1), range(h - 1))
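- # flat indices of the top-left, top-right, bottom-left and bottom-right corner of every grid cell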
- tl = y * w + x
- tr = y * w + x + 1
- bl = (y + 1) * w + x
- br = (y + 1) * w + x + 1
- triangles = np.array([tl, bl, tr, br, tr, bl])
- triangles = np.transpose(triangles, (1, 2, 0)).reshape(
- ((w - 1) * (h - 1) * 2, 3))
- if mask is not None:
- mask = mask.reshape(-1)
- triangles = triangles[mask[triangles].all(1)]
- return triangles
\ No newline at end of file
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/utils.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/utils.py
deleted file mode 100644
index 5b404defde33449e33da554a80aa28ac23230938..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/utils.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-from __future__ import annotations
-
-import typing
-
-from cryptography.hazmat.primitives import hashes
-from cryptography.hazmat.primitives.asymmetric.utils import Prehashed
-
-if typing.TYPE_CHECKING:
- from cryptography.hazmat.backends.openssl.backend import Backend
-
-
-def _evp_pkey_derive(backend: Backend, evp_pkey, peer_public_key) -> bytes:
- ctx = backend._lib.EVP_PKEY_CTX_new(evp_pkey, backend._ffi.NULL)
- backend.openssl_assert(ctx != backend._ffi.NULL)
- ctx = backend._ffi.gc(ctx, backend._lib.EVP_PKEY_CTX_free)
- res = backend._lib.EVP_PKEY_derive_init(ctx)
- backend.openssl_assert(res == 1)
-
- if backend._lib.Cryptography_HAS_EVP_PKEY_SET_PEER_EX:
- res = backend._lib.EVP_PKEY_derive_set_peer_ex(
- ctx, peer_public_key._evp_pkey, 0
- )
- else:
- res = backend._lib.EVP_PKEY_derive_set_peer(
- ctx, peer_public_key._evp_pkey
- )
- backend.openssl_assert(res == 1)
-
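- # EVP_PKEY_derive is called twice: first with a NULL buffer to query the required key length, then with a real buffer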
- keylen = backend._ffi.new("size_t *")
- res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)
- backend.openssl_assert(res == 1)
- backend.openssl_assert(keylen[0] > 0)
- buf = backend._ffi.new("unsigned char[]", keylen[0])
- res = backend._lib.EVP_PKEY_derive(ctx, buf, keylen)
- if res != 1:
- errors = backend._consume_errors()
- raise ValueError("Error computing shared key.", errors)
-
- return backend._ffi.buffer(buf, keylen[0])[:]
-
-
-def _calculate_digest_and_algorithm(
- data: bytes,
- algorithm: typing.Union[Prehashed, hashes.HashAlgorithm],
-) -> typing.Tuple[bytes, hashes.HashAlgorithm]:
- if not isinstance(algorithm, Prehashed):
- hash_ctx = hashes.Hash(algorithm)
- hash_ctx.update(data)
- data = hash_ctx.finalize()
- else:
- algorithm = algorithm._algorithm
-
- if len(data) != algorithm.digest_size:
- raise ValueError(
- "The provided data must be the same length as the hash "
- "algorithm's digest size."
- )
-
- return (data, algorithm)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/symfont.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/symfont.py
deleted file mode 100644
index 0bd69a386ec9f01c8951f0dfc8bc8c261718cf1f..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/symfont.py
+++ /dev/null
@@ -1,251 +0,0 @@
-from fontTools.pens.basePen import BasePen
-from functools import partial
-from itertools import count
-import sympy as sp
-import sys
-
-n = 3 # Max Bezier degree; 3 for cubic, 2 for quadratic
-
-t, x, y = sp.symbols("t x y", real=True)
-c = sp.symbols("c", real=False) # Complex representation instead of x/y
-
-X = tuple(sp.symbols("x:%d" % (n + 1), real=True))
-Y = tuple(sp.symbols("y:%d" % (n + 1), real=True))
-P = tuple(zip(*(sp.symbols("p:%d[%s]" % (n + 1, w), real=True) for w in "01")))
-C = tuple(sp.symbols("c:%d" % (n + 1), real=False))
-
-# Cubic Bernstein basis functions
-BinomialCoefficient = [(1, 0)]
-for i in range(1, n + 1):
- last = BinomialCoefficient[-1]
- this = tuple(last[j - 1] + last[j] for j in range(len(last))) + (0,)
- BinomialCoefficient.append(this)
-BinomialCoefficient = tuple(tuple(item[:-1]) for item in BinomialCoefficient)
-del last, this
-
-BernsteinPolynomial = tuple(
- tuple(c * t**i * (1 - t) ** (n - i) for i, c in enumerate(coeffs))
- for n, coeffs in enumerate(BinomialCoefficient)
-)
-
-BezierCurve = tuple(
- tuple(
- sum(P[i][j] * bernstein for i, bernstein in enumerate(bernsteins))
- for j in range(2)
- )
- for n, bernsteins in enumerate(BernsteinPolynomial)
-)
-BezierCurveC = tuple(
- sum(C[i] * bernstein for i, bernstein in enumerate(bernsteins))
- for n, bernsteins in enumerate(BernsteinPolynomial)
-)
-
-
-def green(f, curveXY):
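- # Green's theorem: rewrite the area integral of f over the region bounded by the curve as a line integral in t over [0, 1]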
- f = -sp.integrate(sp.sympify(f), y)
- f = f.subs({x: curveXY[0], y: curveXY[1]})
- f = sp.integrate(f * sp.diff(curveXY[0], t), (t, 0, 1))
- return f
-
-
-class _BezierFuncsLazy(dict):
- def __init__(self, symfunc):
- self._symfunc = symfunc
- self._bezfuncs = {}
-
- def __missing__(self, i):
- args = ["p%d" % d for d in range(i + 1)]
- f = green(self._symfunc, BezierCurve[i])
- f = sp.gcd_terms(f.collect(sum(P, ()))) # Optimize
- return sp.lambdify(args, f)
-
-
-class GreenPen(BasePen):
-
- _BezierFuncs = {}
-
- @classmethod
- def _getGreenBezierFuncs(celf, func):
- funcstr = str(func)
- if not funcstr in celf._BezierFuncs:
- celf._BezierFuncs[funcstr] = _BezierFuncsLazy(func)
- return celf._BezierFuncs[funcstr]
-
- def __init__(self, func, glyphset=None):
- BasePen.__init__(self, glyphset)
- self._funcs = self._getGreenBezierFuncs(func)
- self.value = 0
-
- def _moveTo(self, p0):
- self.__startPoint = p0
-
- def _closePath(self):
- p0 = self._getCurrentPoint()
- if p0 != self.__startPoint:
- self._lineTo(self.__startPoint)
-
- def _endPath(self):
- p0 = self._getCurrentPoint()
- if p0 != self.__startPoint:
- # Green theorem is not defined on open contours.
- raise NotImplementedError
-
- def _lineTo(self, p1):
- p0 = self._getCurrentPoint()
- self.value += self._funcs[1](p0, p1)
-
- def _qCurveToOne(self, p1, p2):
- p0 = self._getCurrentPoint()
- self.value += self._funcs[2](p0, p1, p2)
-
- def _curveToOne(self, p1, p2, p3):
- p0 = self._getCurrentPoint()
- self.value += self._funcs[3](p0, p1, p2, p3)
-
-
-# Sample pens.
-# Do not use this in real code.
-# Use fontTools.pens.momentsPen.MomentsPen instead.
-AreaPen = partial(GreenPen, func=1)
-MomentXPen = partial(GreenPen, func=x)
-MomentYPen = partial(GreenPen, func=y)
-MomentXXPen = partial(GreenPen, func=x * x)
-MomentYYPen = partial(GreenPen, func=y * y)
-MomentXYPen = partial(GreenPen, func=x * y)
-
-
-def printGreenPen(penName, funcs, file=sys.stdout, docstring=None):
-
- if docstring is not None:
- print('"""%s"""' % docstring)
-
- print(
- """from fontTools.pens.basePen import BasePen, OpenContourError
-try:
- import cython
-
- COMPILED = cython.compiled
-except (AttributeError, ImportError):
- # if cython not installed, use mock module with no-op decorators and types
- from fontTools.misc import cython
-
- COMPILED = False
-
-
-__all__ = ["%s"]
-
-class %s(BasePen):
-
- def __init__(self, glyphset=None):
- BasePen.__init__(self, glyphset)
-"""
- % (penName, penName),
- file=file,
- )
- for name, f in funcs:
- print(" self.%s = 0" % name, file=file)
- print(
- """
- def _moveTo(self, p0):
- self.__startPoint = p0
-
- def _closePath(self):
- p0 = self._getCurrentPoint()
- if p0 != self.__startPoint:
- self._lineTo(self.__startPoint)
-
- def _endPath(self):
- p0 = self._getCurrentPoint()
- if p0 != self.__startPoint:
- # Green theorem is not defined on open contours.
- raise OpenContourError(
- "Green theorem is not defined on open contours."
- )
-""",
- end="",
- file=file,
- )
-
- for n in (1, 2, 3):
-
- subs = {P[i][j]: [X, Y][j][i] for i in range(n + 1) for j in range(2)}
- greens = [green(f, BezierCurve[n]) for name, f in funcs]
- greens = [sp.gcd_terms(f.collect(sum(P, ()))) for f in greens] # Optimize
- greens = [f.subs(subs) for f in greens] # Convert to p to x/y
- defs, exprs = sp.cse(
- greens,
- optimizations="basic",
- symbols=(sp.Symbol("r%d" % i) for i in count()),
- )
-
- print()
- for name, value in defs:
- print(" @cython.locals(%s=cython.double)" % name, file=file)
- if n == 1:
- print(
- """\
- @cython.locals(x0=cython.double, y0=cython.double)
- @cython.locals(x1=cython.double, y1=cython.double)
- def _lineTo(self, p1):
- x0,y0 = self._getCurrentPoint()
- x1,y1 = p1
-""",
- file=file,
- )
- elif n == 2:
- print(
- """\
- @cython.locals(x0=cython.double, y0=cython.double)
- @cython.locals(x1=cython.double, y1=cython.double)
- @cython.locals(x2=cython.double, y2=cython.double)
- def _qCurveToOne(self, p1, p2):
- x0,y0 = self._getCurrentPoint()
- x1,y1 = p1
- x2,y2 = p2
-""",
- file=file,
- )
- elif n == 3:
- print(
- """\
- @cython.locals(x0=cython.double, y0=cython.double)
- @cython.locals(x1=cython.double, y1=cython.double)
- @cython.locals(x2=cython.double, y2=cython.double)
- @cython.locals(x3=cython.double, y3=cython.double)
- def _curveToOne(self, p1, p2, p3):
- x0,y0 = self._getCurrentPoint()
- x1,y1 = p1
- x2,y2 = p2
- x3,y3 = p3
-""",
- file=file,
- )
- for name, value in defs:
- print(" %s = %s" % (name, value), file=file)
-
- print(file=file)
- for name, value in zip([f[0] for f in funcs], exprs):
- print(" self.%s += %s" % (name, value), file=file)
-
- print(
- """
-if __name__ == '__main__':
- from fontTools.misc.symfont import x, y, printGreenPen
- printGreenPen('%s', ["""
- % penName,
- file=file,
- )
- for name, f in funcs:
- print(" ('%s', %s)," % (name, str(f)), file=file)
- print(" ])", file=file)
-
-
-if __name__ == "__main__":
- pen = AreaPen()
- pen.moveTo((100, 100))
- pen.lineTo((100, 200))
- pen.lineTo((200, 200))
- pen.curveTo((200, 250), (300, 300), (250, 350))
- pen.lineTo((200, 100))
- pen.closePath()
- print(pen.value)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload-87f877d6.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload-87f877d6.js
deleted file mode 100644
index e949f31777bb4ac559c79d640b90272066a8d943..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload-87f877d6.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as g,e as w,s as _,J as p,K as o,L as i,p as k,M as m,n as u,A as b,N as z,O as B,k as $,U as v,o as C,z as d,u as I,v as h,y as E,x as M,B as j}from"./index-f877dfd5.js";import"./Button-11a87b79.js";import{I as L}from"./IconButton-34da90d2.js";import"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";function S(a){let e,s,t,l;return{c(){e=p("svg"),s=p("g"),t=p("path"),l=p("path"),o(t,"d","M18,6L6.087,17.913"),i(t,"fill","none"),i(t,"fill-rule","nonzero"),i(t,"stroke-width","2px"),o(s,"transform","matrix(1.14096,-0.140958,-0.140958,1.14096,-0.0559523,0.0559523)"),o(l,"d","M4.364,4.364L19.636,19.636"),i(l,"fill","none"),i(l,"fill-rule","nonzero"),i(l,"stroke-width","2px"),o(e,"width","100%"),o(e,"height","100%"),o(e,"viewBox","0 0 24 24"),o(e,"version","1.1"),o(e,"xmlns","http://www.w3.org/2000/svg"),o(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),o(e,"xml:space","preserve"),o(e,"stroke","currentColor"),i(e,"fill-rule","evenodd"),i(e,"clip-rule","evenodd"),i(e,"stroke-linecap","round"),i(e,"stroke-linejoin","round")},m(n,r){k(n,e,r),m(e,s),m(s,t),m(e,l)},p:u,i:u,o:u,d(n){n&&b(e)}}}class U extends g{constructor(e){super(),w(this,e,null,S,_,{})}}function q(a){let e,s;return{c(){e=p("svg"),s=p("path"),o(s,"d","M17 3a2.828 2.828 0 1 1 4 4L7.5 20.5 2 22l1.5-5.5L17 3z"),o(e,"xmlns","http://www.w3.org/2000/svg"),o(e,"width","100%"),o(e,"height","100%"),o(e,"viewBox","0 0 24 24"),o(e,"fill","none"),o(e,"stroke","currentColor"),o(e,"stroke-width","1.5"),o(e,"stroke-linecap","round"),o(e,"stroke-linejoin","round"),o(e,"class","feather feather-edit-2")},m(t,l){k(t,e,l),m(e,s)},p:u,i:u,o:u,d(t){t&&b(e)}}}class y extends g{constructor(e){super(),w(this,e,null,q,_,{})}}function x(a){let e,s;return e=new L({props:{Icon:y,label:"Edit"}}),e.$on("click",a[3]),{c(){$(e.$$.fragment)},m(t,l){C(e,t,l),s=!0},p:u,i(t){s||(d(e.$$.fragment,t),s=!0)},o(t){h(e.$$.fragment,t),s=!1},d(t){M(e,t)}}}function A(a){let e,s,t,l,n=a[0]&&x(a);return t=new L({props:{Icon:U,label:"Clear"}}),t.$on("click",a[4]),{c(){e=z("div"),n&&n.c(),s=B(),$(t.$$.fragment),o(e,"class","svelte-19sk1im"),v(e,"not-absolute",!a[1]),i(e,"position",a[1]?"absolute":"static")},m(r,c){k(r,e,c),n&&n.m(e,null),m(e,s),C(t,e,null),l=!0},p(r,[c]){r[0]?n?(n.p(r,c),c&1&&d(n,1)):(n=x(r),n.c(),d(n,1),n.m(e,s)):n&&(I(),h(n,1,1,()=>{n=null}),E()),(!l||c&2)&&v(e,"not-absolute",!r[1]),c&2&&i(e,"position",r[1]?"absolute":"static")},i(r){l||(d(n),d(t.$$.fragment,r),l=!0)},o(r){h(n),h(t.$$.fragment,r),l=!1},d(r){r&&b(e),n&&n.d(),M(t)}}}function D(a,e,s){let{editable:t=!1}=e,{absolute:l=!0}=e;const n=j(),r=()=>n("edit"),c=f=>{n("clear"),f.stopPropagation()};return a.$$set=f=>{"editable"in f&&s(0,t=f.editable),"absolute"in f&&s(1,l=f.absolute)},[t,l,n,r,c]}class P extends g{constructor(e){super(),w(this,e,D,A,_,{editable:0,absolute:1})}}export{U as C,P as M};
-//# sourceMappingURL=ModifyUpload-87f877d6.js.map
diff --git a/spaces/cihyFjudo/fairness-paper-search/El Hobbit La Desolacion De Smaug Version Extendida 1080p Tv La mejor forma de disfrutar de la aventura pica.md b/spaces/cihyFjudo/fairness-paper-search/El Hobbit La Desolacion De Smaug Version Extendida 1080p Tv La mejor forma de disfrutar de la aventura pica.md
deleted file mode 100644
index 3a3ab3d91d227955459b184d6948893ddf73bd46..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/El Hobbit La Desolacion De Smaug Version Extendida 1080p Tv La mejor forma de disfrutar de la aventura pica.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Follow the adventure of Bilbo Baggins on his journey to reclaim the kingdom of Erebor. Along the way the company runs into countless dangers and faces the fearsome dragon Smaug. After the huge success of "The Lord of the Rings", director Peter Jackson returned to New Zealand to bring to the screen the project he had had in mind since 1995, J. R. R. Tolkien's book "The Hobbit", again splitting the story into three films. "The Desolation of Smaug" was the second of them. With the same team as the original, "An Unexpected Journey", the cast is headed by Ian McKellen, a veteran of the Middle-earth universe who has played Gandalf since 2001, and Martin Freeman ("Sherlock", "Fargo"). Its technical achievements made it a box-office hit and earned it three Oscar nominations: best sound, sound editing and visual effects.
-
El Hobbit La Desolacion De Smaug Version Extendida 1080p Tv
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Iggy Pop Discography Completa Download Free Listen to the Classics and the Rarities of the Rebel Musician.md b/spaces/cihyFjudo/fairness-paper-search/Iggy Pop Discography Completa Download Free Listen to the Classics and the Rarities of the Rebel Musician.md
deleted file mode 100644
index 4c48ea2deb1176b736b97b3526867b5190f2dce6..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Iggy Pop Discography Completa Download Free Listen to the Classics and the Rarities of the Rebel Musician.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
As with previous Nine Inch Nails releases Year Zero and Ghosts I-IV, the complete multi-track files to The Slip will be available free at launch, allowing anyone who wants to create his or her own remixes and reinterpretations of the songs. As always, the Remix.NIN.com community will provide a site and infrastructure for fans to upload, share, stream and download these various remixes as well as the original masters, all free of charge or restrictions.
-
There are many free MP3 music download sites that are unblocked and are alternatives to the popular music streaming services like Spotify, Apple Music and Google Play Music. By unblocked we mean that these sites can be accessed from anywhere in the world, without any restrictions.
Myfree MP3 is a free music archive site that offers a wide selection of songs, albums, and artists to choose from. It's easy to use - just enter the name of the song or artist you want to download and hit the search button. You can also browse by genre or album.
-
Whether you're looking for the latest hits or classic tracks, there's a free MP3 music download site that will have what you're looking for. Be sure to check out a few of them to see which one you like the best. And always be sure to scan any downloads for viruses before opening them.
-
Both the Xbox 360 and the PlayStation 3 version of Guitar Hero III feature the ability to download additional songs from the consoles' respective online stores. Most songs must be purchased in "track packs" of three and cannot be purchased individually while only some songs are available as "singles." There are a number of free songs available. The downloadable songs have been released on the same day on both the Xbox Live Marketplace and the PlayStation Store, with five exceptions. Besides the two console-exclusive songs, the three songs from the Companion Pack were not released for the PlayStation 3 until August 7, 2008.
-
Downloadable songs cost 160 Microsoft Points each for Xbox 360 users, $1.99 each (PlayStation Store) for PS3, or 200 Wii Points for Wii, with a limited number of songs available free of charge. Also, users can purchase track packs of three songs released together for 440 Microsoft Points, 550 Wii Points, or $5.49 for PS3. The Jimi Hendrix Track Pack was originally only available for download as a track pack and not as individual songs,[50] but Activision announced that in March the Wii would receive the Hendrix pack as downloadable singles. These songs have since been released with the second Jimi Hendrix Track Pack.[51]
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/VERIFIED Toshiba Satellite L300 Fn Key Driver Download Easy and Fast Installation.md b/spaces/cihyFjudo/fairness-paper-search/VERIFIED Toshiba Satellite L300 Fn Key Driver Download Easy and Fast Installation.md
deleted file mode 100644
index b58b851d96621c4ba1f5b8c2f2768a52ae8901e0..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/VERIFIED Toshiba Satellite L300 Fn Key Driver Download Easy and Fast Installation.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/F__e_a_t.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/F__e_a_t.py
deleted file mode 100644
index fbcd6ca6e7bc0640263ddab74e1e1c89ea61bbfb..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/F__e_a_t.py
+++ /dev/null
@@ -1,144 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.fixedTools import floatToFixedToStr
-from fontTools.misc.textTools import safeEval
-from . import DefaultTable
-from . import grUtils
-import struct
-
-Feat_hdr_format = """
- >
- version: 16.16F
-"""
-
-
-class table_F__e_a_t(DefaultTable.DefaultTable):
- """The ``Feat`` table is used exclusively by the Graphite shaping engine
- to store features and possible settings specified in GDL. Graphite features
- determine what rules are applied to transform a glyph stream.
-
- Not to be confused with ``feat``, or the OpenType Layout tables
- ``GSUB``/``GPOS``."""
-
- def __init__(self, tag=None):
- DefaultTable.DefaultTable.__init__(self, tag)
- self.features = {}
-
- def decompile(self, data, ttFont):
- (_, data) = sstruct.unpack2(Feat_hdr_format, data, self)
- self.version = float(floatToFixedToStr(self.version, precisionBits=16))
- (numFeats,) = struct.unpack(">H", data[:2])
- data = data[8:]
- allfeats = []
- maxsetting = 0
- for i in range(numFeats):
- if self.version >= 2.0:
- (fid, nums, _, offset, flags, lid) = struct.unpack(
- ">LHHLHH", data[16 * i : 16 * (i + 1)]
- )
- offset = int((offset - 12 - 16 * numFeats) / 4)
- else:
- (fid, nums, offset, flags, lid) = struct.unpack(
- ">HHLHH", data[12 * i : 12 * (i + 1)]
- )
- offset = int((offset - 12 - 12 * numFeats) / 4)
- allfeats.append((fid, nums, offset, flags, lid))
- maxsetting = max(maxsetting, offset + nums)
- data = data[16 * numFeats :]
- allsettings = []
- for i in range(maxsetting):
- if len(data) >= 4 * (i + 1):
- (val, lid) = struct.unpack(">HH", data[4 * i : 4 * (i + 1)])
- allsettings.append((val, lid))
- for i, f in enumerate(allfeats):
- (fid, nums, offset, flags, lid) = f
- fobj = Feature()
- fobj.flags = flags
- fobj.label = lid
- self.features[grUtils.num2tag(fid)] = fobj
- fobj.settings = {}
- fobj.default = None
- fobj.index = i
- for i in range(offset, offset + nums):
- if i >= len(allsettings):
- continue
- (vid, vlid) = allsettings[i]
- fobj.settings[vid] = vlid
- if fobj.default is None:
- fobj.default = vid
-
- def compile(self, ttFont):
- fdat = b""
- vdat = b""
- offset = 0
- for f, v in sorted(self.features.items(), key=lambda x: x[1].index):
- fnum = grUtils.tag2num(f)
- if self.version >= 2.0:
- fdat += struct.pack(
- ">LHHLHH",
- grUtils.tag2num(f),
- len(v.settings),
- 0,
- offset * 4 + 12 + 16 * len(self.features),
- v.flags,
- v.label,
- )
- elif fnum > 65535: # self healing for alphabetic ids
- self.version = 2.0
- return self.compile(ttFont)
- else:
- fdat += struct.pack(
- ">HHLHH",
- grUtils.tag2num(f),
- len(v.settings),
- offset * 4 + 12 + 12 * len(self.features),
- v.flags,
- v.label,
- )
- for s, l in sorted(
- v.settings.items(), key=lambda x: (-1, x[1]) if x[0] == v.default else x
- ):
- vdat += struct.pack(">HH", s, l)
- offset += len(v.settings)
- hdr = sstruct.pack(Feat_hdr_format, self)
- return hdr + struct.pack(">HHL", len(self.features), 0, 0) + fdat + vdat
-
- def toXML(self, writer, ttFont):
- writer.simpletag("version", version=self.version)
- writer.newline()
- for f, v in sorted(self.features.items(), key=lambda x: x[1].index):
- writer.begintag(
- "feature",
- fid=f,
- label=v.label,
- flags=v.flags,
- default=(v.default if v.default else 0),
- )
- writer.newline()
- for s, l in sorted(v.settings.items()):
- writer.simpletag("setting", value=s, label=l)
- writer.newline()
- writer.endtag("feature")
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "version":
- self.version = float(safeEval(attrs["version"]))
- elif name == "feature":
- fid = attrs["fid"]
- fobj = Feature()
- fobj.flags = int(safeEval(attrs["flags"]))
- fobj.label = int(safeEval(attrs["label"]))
- fobj.default = int(safeEval(attrs.get("default", "0")))
- fobj.index = len(self.features)
- self.features[fid] = fobj
- fobj.settings = {}
- for element in content:
- if not isinstance(element, tuple):
- continue
- tag, a, c = element
- if tag == "setting":
- fobj.settings[int(safeEval(a["value"]))] = int(safeEval(a["label"]))
-
-
-class Feature(object):
- pass
diff --git a/spaces/codeparrot/code-explainer/app.py b/spaces/codeparrot/code-explainer/app.py
deleted file mode 100644
index 62a2c208101ac615fe76e2b93ffaed5e6819bb90..0000000000000000000000000000000000000000
--- a/spaces/codeparrot/code-explainer/app.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import gradio as gr
-from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed, pipeline
-
-
-title = "Code Explainer"
-description = "This is a space to convert Python code into english text explaining what it does using [codeparrot-small-code-to-text](https://huggingface.co/codeparrot/codeparrot-small-code-to-text),\
- a code generation model for Python finetuned on [github-jupyter-code-to-text](https://huggingface.co/datasets/codeparrot/github-jupyter-code-to-text) a dataset of Python code followed by a docstring explaining it, the data was originally extracted from Jupyter notebooks."
-
-EXAMPLE_1 = "def sort_function(arr):\n n = len(arr)\n \n # Traverse through all array elements\n for i in range(n):\n \n # Last i elements are already in place\n for j in range(0, n-i-1):\n \n # traverse the array from 0 to n-i-1\n # Swap if the element found is greater\n # than the next element\n if arr[j] > arr[j+1]:\n arr[j], arr[j+1] = arr[j+1], arr[j]"
-EXAMPLE_2 = "from sklearn import model_selection\nX_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=0.2)"
-EXAMPLE_3 = "def load_text(file)\n with open(filename, 'r') as f:\n text = f.read()\n return text"
-example = [
- [EXAMPLE_1, 32, 0.6, 42],
- [EXAMPLE_2, 16, 0.6, 42],
- [EXAMPLE_3, 11, 0.2, 42],
- ]
-
-# change model to the finetuned one
-tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small-code-to-text")
-model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small-code-to-text")
-
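- # The model was finetuned on code followed by a docstring, so appending an opening docstring with "Explanation:" prompts it to describe the code.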
-def make_docstring(gen_prompt):
- return gen_prompt + "\n\n\"\"\"\nExplanation:"
-
-def code_generation(gen_prompt, max_tokens, temperature=0.6, seed=42):
- set_seed(seed)
- pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
- prompt = make_docstring(gen_prompt)
- generated_text = pipe(prompt, do_sample=True, top_p=0.95, temperature=temperature, max_new_tokens=max_tokens)[0]['generated_text']
- return generated_text
-
-
-iface = gr.Interface(
- fn=code_generation,
- inputs=[
- gr.Code(lines=10, label="Python code"),
- gr.inputs.Slider(
- minimum=8,
- maximum=256,
- step=1,
- default=8,
- label="Number of tokens to generate",
- ),
- gr.inputs.Slider(
- minimum=0,
- maximum=2.5,
- step=0.1,
- default=0.6,
- label="Temperature",
- ),
- gr.inputs.Slider(
- minimum=0,
- maximum=1000,
- step=1,
- default=42,
- label="Random seed to use for the generation"
- )
- ],
- outputs=gr.Code(label="Predicted explanation", lines=10),
- examples=example,
- layout="horizontal",
- theme="peach",
- description=description,
- title=title
-)
-iface.launch()
diff --git a/spaces/codertoro/gpt-academic/check_proxy.py b/spaces/codertoro/gpt-academic/check_proxy.py
deleted file mode 100644
index 7fdd2b0c5ade76d3a828fb7b5fa6b3dd09e8b04a..0000000000000000000000000000000000000000
--- a/spaces/codertoro/gpt-academic/check_proxy.py
+++ /dev/null
@@ -1,142 +0,0 @@
-
-def check_proxy(proxies):
- import requests
- proxies_https = proxies['https'] if proxies is not None else 'none'
- try:
- response = requests.get("https://ipapi.co/json/",
- proxies=proxies, timeout=4)
- data = response.json()
- print(f'Queried the geolocation of the proxy, the returned result is {data}')
- if 'country_name' in data:
- country = data['country_name']
- result = f"代理配置 {proxies_https}, 代理所在地:{country}"
- elif 'error' in data:
- result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
- print(result)
- return result
- except:
- result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
- print(result)
- return result
-
-
-def backup_and_download(current_version, remote_version):
- """
- One-click update protocol: back up and download
- """
- from toolbox import get_conf
- import shutil
- import os
- import requests
- import zipfile
- os.makedirs(f'./history', exist_ok=True)
- backup_dir = f'./history/backup-{current_version}/'
- new_version_dir = f'./history/new-version-{remote_version}/'
- if os.path.exists(new_version_dir):
- return new_version_dir
- os.makedirs(new_version_dir)
- shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
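- # back up the current working tree (excluding the history folder) before downloading and unpacking the new version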
- proxies, = get_conf('proxies')
- r = requests.get(
- 'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
- zip_file_path = backup_dir+'/master.zip'
- with open(zip_file_path, 'wb+') as f:
- f.write(r.content)
- dst_path = new_version_dir
- with zipfile.ZipFile(zip_file_path, "r") as zip_ref:
- for zip_info in zip_ref.infolist():
- dst_file_path = os.path.join(dst_path, zip_info.filename)
- if os.path.exists(dst_file_path):
- os.remove(dst_file_path)
- zip_ref.extract(zip_info, dst_path)
- return new_version_dir
-
-
-def patch_and_restart(path):
- """
- One-click update protocol: overwrite and restart
- """
- import distutils
- import shutil
- import os
- import sys
- import time
- from colorful import print亮黄, print亮绿, print亮红
- # if config_private.py does not exist yet, copy the original config.py to config_private.py so the configuration is preserved
- if not os.path.exists('config_private.py'):
- print亮黄('Since you have not set up a private config_private.py, your existing config is being copied to config_private.py so it is not lost.',
- 'You can also recover the old version of the program at any time from the history subfolder.')
- shutil.copyfile('config.py', 'config_private.py')
- distutils.dir_util.copy_tree(path+'/chatgpt_academic-master', './')
- import subprocess
- print亮绿('Code updated; about to update the pip package dependencies...')
- for i in reversed(range(5)): time.sleep(1); print(i)
- try:
- subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
- except:
- print亮红('Installing the pip dependencies failed; please install the new dependencies manually with `python -m pip install -r requirements.txt`, then start the program the usual way with `python main.py`.')
- print亮绿('Update complete. You can recover the old version at any time from the history subfolder. Restarting in 5 s.')
- print亮红('If the restart fails, you may need to install the new dependencies manually with `python -m pip install -r requirements.txt`, then start the program the usual way with `python main.py`.')
- print(' ------------------------------ -----------------------------------')
- for i in reversed(range(8)): time.sleep(1); print(i)
- os.execl(sys.executable, sys.executable, *sys.argv)
-
-
-def get_current_version():
- import json
- try:
- with open('./version', 'r', encoding='utf8') as f:
- current_version = json.loads(f.read())['version']
- except:
- current_version = ""
- return current_version
-
-
-def auto_update():
- """
-    One-click update protocol: check the remote version and ask for user confirmation
- """
- try:
- from toolbox import get_conf
- import requests
- import time
- import json
- proxies, = get_conf('proxies')
- response = requests.get(
- "https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=1)
- remote_json_data = json.loads(response.text)
- remote_version = remote_json_data['version']
- if remote_json_data["show_feature"]:
- new_feature = "新功能:" + remote_json_data["new_feature"]
- else:
- new_feature = ""
- with open('./version', 'r', encoding='utf8') as f:
- current_version = f.read()
- current_version = json.loads(current_version)['version']
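-        # Version numbers are plain floats read from JSON; offer an update once the remote is at least 0.01 ahead.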
- if (remote_version - current_version) >= 0.01:
- from colorful import print亮黄
- print亮黄(
- f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
- print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
- user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
- if user_instruction in ['Y', 'y']:
- path = backup_and_download(current_version, remote_version)
- try:
- patch_and_restart(path)
- except:
- print('更新失败。')
- else:
- print('自动更新程序:已禁用')
- return
- else:
- return
- except:
- print('自动更新程序:已禁用')
-
-
-if __name__ == '__main__':
- import os
-    os.environ['no_proxy'] = '*'  # keep the proxy settings from unexpectedly affecting other local requests
- from toolbox import get_conf
- proxies, = get_conf('proxies')
- check_proxy(proxies)
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flacdsp_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flacdsp_template.c
deleted file mode 100644
index 0a6fe59e2869c05e64cef1bf0b68d38f182915b9..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flacdsp_template.c
+++ /dev/null
@@ -1,103 +0,0 @@
-/*
- * Copyright (c) 2012 Mans Rullgard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>
-#include "libavutil/macros.h"
-
-#undef FUNC
-#undef FSUF
-#undef sample
-#undef sample_type
-#undef OUT
-#undef S
-
-#if SAMPLE_SIZE == 32
-# define sample_type int32_t
-#else
-# define sample_type int16_t
-#endif
-
-#if PLANAR
-# define FSUF AV_JOIN(SAMPLE_SIZE, p)
-# define sample sample_type *
-# define OUT(n) n
-# define S(s, c, i) (s[c][i])
-#else
-# define FSUF SAMPLE_SIZE
-# define sample sample_type
-# define OUT(n) n[0]
-# define S(s, c, i) (*s++)
-#endif
-
-#define FUNC(n) AV_JOIN(n ## _, FSUF)
-
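-/* Independent channels: no stereo decorrelation; each channel is only shifted into the output sample format. */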
-static void FUNC(flac_decorrelate_indep_c)(uint8_t **out, int32_t **in,
- int channels, int len, int shift)
-{
- sample *samples = (sample *) OUT(out);
- int i, j;
-
- for (j = 0; j < len; j++)
- for (i = 0; i < channels; i++)
- S(samples, i, j) = (int)((unsigned)in[i][j] << shift);
-}
-
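-/* Left/side stereo: in[0] holds the left channel and in[1] the side (left - right); the right channel is reconstructed as left - side. */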
-static void FUNC(flac_decorrelate_ls_c)(uint8_t **out, int32_t **in,
- int channels, int len, int shift)
-{
- sample *samples = (sample *) OUT(out);
- int i;
-
- for (i = 0; i < len; i++) {
- unsigned a = in[0][i];
- unsigned b = in[1][i];
- S(samples, 0, i) = a << shift;
- S(samples, 1, i) = (a - b) << shift;
- }
-}
-
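-/* Right/side stereo: in[0] holds the side (left - right) and in[1] the right channel; the left channel is reconstructed as side + right. */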
-static void FUNC(flac_decorrelate_rs_c)(uint8_t **out, int32_t **in,
- int channels, int len, int shift)
-{
- sample *samples = (sample *) OUT(out);
- int i;
-
- for (i = 0; i < len; i++) {
- unsigned a = in[0][i];
- unsigned b = in[1][i];
- S(samples, 0, i) = (a + b) << shift;
- S(samples, 1, i) = b << shift;
- }
-}
-
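-/* Mid/side stereo: in[0] holds the mid ((left + right) >> 1) and in[1] the side (left - right); right = mid - (side >> 1), left = right + side. */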
-static void FUNC(flac_decorrelate_ms_c)(uint8_t **out, int32_t **in,
- int channels, int len, int shift)
-{
- sample *samples = (sample *) OUT(out);
- int i;
-
- for (i = 0; i < len; i++) {
- unsigned a = in[0][i];
- int b = in[1][i];
- a -= b >> 1;
- S(samples, 0, i) = (a + b) << shift;
- S(samples, 1, i) = a << shift;
- }
-}
diff --git a/spaces/compasspathways/Sentiment2D/app.py b/spaces/compasspathways/Sentiment2D/app.py
deleted file mode 100644
index 79cbd23e046208fa352570f022a50183f24d2fcf..0000000000000000000000000000000000000000
--- a/spaces/compasspathways/Sentiment2D/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import gradio as gr
-import pandas as pd
-from sentiment2d import Sentiment2D
-
-s2d = Sentiment2D()
-
-TITLE = "COMPASS Pathways 2D Sentiment Model"
-EXAMPLES = [
- "This is so awesome!",
- "You're driving me up the wall!",
- "I'm so lonely I could cry.",
- "I'm not feeling very sad at all.",
- "You're slapping your father in the face, aren't you?",
- "Yes, that's how I feel [laughing].",
- "Yes, that's how I feel [sobbing].",
- "Now I hear what you're sayin' 😀",
- "Now I hear what you're sayin' 🙁",
-]
-
-
-def sentiment(text, state):
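-    # Score the text with the 2D sentiment model and append the result to the per-session history kept in state.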
- valence, arousal = s2d(text)
- res = dict(text=text, valence=valence, arousal=arousal, words=len(text.split()))
- #if clear_history:
- # state = []
-    if state is None:
- state = []
- state.append(res)
- df = pd.DataFrame(state)
- res_txt = [
- f"{r['text']}: valence={r['valence']:0.3f}, arousal={r['arousal']:0.3f}"
- for r in state
- ]
- return "\n".join(res_txt), df, state
-
-
-iface = gr.Interface(
- fn=sentiment,
- inputs=[gr.Textbox(lines=1, placeholder="Text for 2d sentiment..."), "state"],
- outputs=[
- gr.Textbox(lines=5, max_lines=5, label="Results"),
- gr.ScatterPlot(
- x="valence",
- y="arousal",
- tooltip="text",
- size="words",
- size_legend_position="none",
- interactive=False,
- x_lim=[-1.05, 1.05],
- y_lim=[-1.05, 1.05],
- ),
- "state",
- ],
-    title=TITLE,
- examples=EXAMPLES,
-)
-iface.launch()
diff --git a/spaces/congsaPfin/Manga-OCR/logs/APKPure Presents Dr. Driving 1 - The Ultimate Driving Challenge.md b/spaces/congsaPfin/Manga-OCR/logs/APKPure Presents Dr. Driving 1 - The Ultimate Driving Challenge.md
deleted file mode 100644
index 5b94935638d86232aea5f3c6afae5f8ce0b6725f..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/APKPure Presents Dr. Driving 1 - The Ultimate Driving Challenge.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Dr Driving 1 APKPure: A Fun and Realistic Driving Game for Android
-
Do you love driving games? Do you want to experience the thrill of driving in different scenarios and conditions? If yes, then you should try Dr Driving 1 APKPure, a popular driving game for Android devices. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, why you should play it, and some tips and tricks to help you master it.
Dr Driving 1 APKPure is a driving simulation game developed by SUD Inc., a Korean game studio. It was released in 2013 and has since gained millions of fans around the world. The game lets you drive various cars in different environments, such as city streets, highways, parking lots, and more. You can choose from different modes and missions, such as speed, fuel efficiency, drift, VIP escort, and more. You can also play online with other players and compete for the best scores and rankings.
-
Features of Dr Driving 1 APKPure
-
Different modes and missions
-
Dr Driving 1 APKPure offers you a variety of modes and missions to test your driving skills and have fun. You can choose from speed mode, where you have to reach the destination as fast as possible; fuel efficiency mode, where you have to save as much gas as possible; drift mode, where you have to perform drifts and earn points; VIP escort mode, where you have to drive a VIP safely to their destination; and more. Each mode has different levels of difficulty and rewards.
-
Realistic physics and graphics
-
Dr Driving 1 APKPure features realistic physics and graphics that make the game more immersive and enjoyable. You can feel the weight and handling of each car, as well as the impact of collisions and crashes. You can also see the details of the cars, the roads, the buildings, the traffic lights, and more. The game also has dynamic weather effects, such as rain, fog, snow, and night time.
-
Online multiplayer and leaderboards
-
Dr Driving 1 APKPure allows you to play online with other players from around the world. You can join or create a room and race against up to three other players in real time. You can also chat with them using emojis and stickers. The game also has leaderboards where you can see your rank and score among other players. You can earn coins and gold by playing online and use them to buy new cars or upgrade your existing ones.
-
How to download and install Dr Driving 1 APKPure
-
Download the APK file from APKPure.com
-
To download Dr Driving 1 APKPure, you need to visit APKPure.com, a website that provides free and safe APK files for Android apps and games. You can search for Dr Driving 1 on the website or use this link: https://apkpure.com/dr-driving/com.ansangha.drdriving. You will see a green button that says "Download APK". Click on it and the download will start automatically. You will need to wait for a few seconds or minutes depending on your internet speed and the size of the file.
-
Enable unknown sources on your device
-
Before you can install Dr Driving 1 APKPure, you need to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To do this, go to your device's settings and look for security or privacy options. You will see a toggle or checkbox that says "Unknown sources" or "Allow installation of apps from unknown sources". Turn it on and confirm your choice.
-
dr driving 1 apkpure download free
-dr driving 1 apkpure latest version
-dr driving 1 apkpure mod apk
-dr driving 1 apkpure offline
-dr driving 1 apkpure for android
-dr driving 1 apkpure for pc
-dr driving 1 apkpure for ios
-dr driving 1 apkpure for windows
-dr driving 1 apkpure for mac
-dr driving 1 apkpure for laptop
-dr driving 1 apkpure update
-dr driving 1 apkpure hack
-dr driving 1 apkpure cheats
-dr driving 1 apkpure unlimited money
-dr driving 1 apkpure unlocked cars
-dr driving 1 apkpure gameplay
-dr driving 1 apkpure review
-dr driving 1 apkpure tips and tricks
-dr driving 1 apkpure best car
-dr driving 1 apkpure online multiplayer
-dr driving 1 apkpure offline mode
-dr driving 1 apkpure simulator game
-dr driving 1 apkpure realistic graphics
-dr driving 1 apkpure traffic rules
-dr driving 1 apkpure parking challenge
-dr driving 1 apkpure fuel efficiency
-dr driving 1 apkpure speed test
-dr driving 1 apkpure highway mission
-dr driving 1 apkpure drift mode
-dr driving 1 apkpure vip mode
-dr driving 1 apkpure new features
-dr driving 1 apkpure bug fixes
-dr driving 1 apkpure performance improvement
-dr driving 1 apkpure size reduction
-dr driving 1 apkpure compatibility issues
-dr driving 1 apkpure alternative apps
-dr driving 1 apkpure similar games
-dr driving 1 apkpure vs other games
-dr driving 1 apkpure comparison chart
-dr driving 1 apkpure pros and cons
-dr driving 1 apkpure ratings and reviews
-dr driving 1 apkpure user feedbacks and comments
-dr driving 1 apkpure customer support and contact details
-dr driving 1 apkpure developer information and website link
-dr driving 1 apkpure social media accounts and pages
-
Install the APK file and enjoy the game
-
Once you have enabled unknown sources, you can install Dr Driving 1 APKPure. To do this, go to your device's file manager and look for the downloaded APK file. It should be in your downloads folder or wherever you saved it. Tap on it and follow the instructions on the screen. The installation will take a few seconds or minutes depending on your device's performance. When it is done, you will see an icon of Dr Driving 1 on your home screen or app drawer. Tap on it and enjoy the game!
-
Why play Dr Driving 1 APKPure?
-
Dr Driving 1 APKPure is not just another driving game. It is a fun and realistic driving game that will keep you entertained and challenged for hours. Here are some of the benefits of playing this game:
-
Benefits of playing Dr Driving 1 APKPure
-
Improve your driving skills and reflexes
-
Dr Driving 1 APKPure is a great way to improve your driving skills and reflexes. You will learn how to control different cars, how to maneuver in traffic, how to park, how to drift, and more. You will also have to react quickly to avoid collisions, follow traffic rules, and complete missions. The game will test your concentration, coordination, and decision-making skills.
-
Have fun and challenge yourself
-
Dr Driving 1 APKPure is also a lot of fun and challenge. You will enjoy driving in various scenarios and conditions, such as city streets, highways, parking lots, rain, fog, snow, night time, and more. You will also have to face different obstacles and hazards, such as pedestrians, cars, trucks, buses, police, traffic lights, speed cameras, and more. You will have to complete missions with different goals and difficulties, such as speed, fuel efficiency, drift, VIP escort, and more.
-
Compete with other players and earn rewards
-
Dr Driving 1 APKPure also lets you compete with other players and earn rewards. You can play online with up to three other players in real time and race against them in different modes and missions. You can also chat with them using emojis and stickers. You can also see your rank and score among other players on the leaderboards. You can earn coins and gold by playing online and use them to buy new cars or upgrade your existing ones.
-
Tips and tricks for playing Dr Driving 1 APKPure
-
Choose the right car and upgrade it
-
Dr Driving 1 APKPure offers you a variety of cars to choose from, such as sedans, hatchbacks, SUVs, sports cars, trucks, and more. Each car has different attributes, such as speed, acceleration, handling, braking, fuel efficiency, driftability, durability, and more. You should choose the car that suits your style and preference. You should also upgrade your car regularly to improve its performance and appearance. You can upgrade the engine, transmission, brakes, tires, suspension, body kit, paint job, and more.
-
Follow the traffic rules and avoid accidents
-
Dr Driving 1 APKPure is a realistic driving game that requires you to follow the traffic rules and avoid accidents. You should obey the speed limit, stop at the traffic lights, yield to the pedestrians, use the turn signals, and follow the directions. You should also avoid hitting other cars, objects, or people, as this will damage your car and reduce your score. You should also avoid getting caught by the police or the speed cameras, as this will result in fines and penalties. You should drive carefully and responsibly, as the game will reward you for your good driving behavior.
-
Use the brake and accelerator wisely
-
Dr Driving 1 APKPure is a game that requires you to use the brake and accelerator wisely. You should use the brake to slow down or stop your car, especially when you are approaching a turn, a traffic light, or an obstacle. You should also use the brake to perform drifts and earn points. You should use the accelerator to speed up or maintain your speed, especially when you are on a straight road, a highway, or a mission that requires speed. You should also use the accelerator to overtake other cars and gain an advantage. You should balance the use of the brake and accelerator to optimize your performance and efficiency.
-
Conclusion
-
Dr Driving 1 APKPure is a fun and realistic driving game for Android devices that will keep you entertained and challenged for hours. You can drive various cars in different environments, modes, and missions, and improve your driving skills and reflexes. You can also play online with other players and compete for the best scores and rankings. You can also customize and upgrade your cars to suit your style and preference. Dr Driving 1 APKPure is a game that you should not miss if you love driving games.
-
FAQs
-
Here are some of the frequently asked questions about Dr Driving 1 APKPure:
-
-
Q: Is Dr Driving 1 APKPure free?
A: Yes, Dr Driving 1 APKPure is free to download and play. However, it contains ads and in-app purchases that you can disable or buy if you want.
-
Q: Is Dr Driving 1 APKPure safe?
A: Yes, Dr Driving 1 APKPure is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source like APKPure.com.
-
Q: Is Dr Driving 1 APKPure compatible with my device?
A: Dr Driving 1 APKPure is compatible with most Android devices that have Android 4.0.3 or higher. However, some devices may have different specifications or performance issues that may affect the game's quality or functionality.
-
Q: How can I contact the developer of Dr Driving 1 APKPure?
A: You can contact the developer of Dr Driving 1 APKPure by sending an email to drdriving@ansangha.com or visiting their website at http://www.ansangha.com. You can also follow them on Facebook at https://www.facebook.com/ansangha or Twitter at https://twitter.com/ansangha.
-
Q: How can I rate and review Dr Driving 1 APKPure?
A: You can rate and review Dr Driving 1 APKPure by visiting its page on APKPure.com or Google Play Store. You can also share your feedback and suggestions with other players on the comment section or social media platforms.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download State of Survival Zombie War MOD APK with Premium Features Unlocked.md b/spaces/congsaPfin/Manga-OCR/logs/Download State of Survival Zombie War MOD APK with Premium Features Unlocked.md
deleted file mode 100644
index 9bef9fa7f7113adf48e5f681fa3e9b34f08b0a27..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download State of Survival Zombie War MOD APK with Premium Features Unlocked.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
State of Survival Zombie War Mod APK: A Guide for Beginners
-
If you are looking for a thrilling and immersive zombie-themed survival strategy game, then you should check out State of Survival Zombie War. This game will put you in a post-apocalyptic world where you have to fight against hordes of zombies and other survivors while building your own settlement and recruiting allies. In this article, we will give you an overview of what this game is all about, and how you can enhance your gaming experience with State of Survival Zombie War Mod APK.
A zombie war RPG game where you fight to survive in the apocalypse
-
State of Survival Zombie War is a mobile game developed by FunPlus International AG. It was released in April 2023 and has since gained millions of downloads and positive reviews from players. The game is set in a world that has been devastated by a zombie outbreak six months ago. The virus has infected the cities and turned most people into mindless monsters. You are one of the few survivors who have managed to escape the infection and find a safe place to live.
-
Your mission is to build, survive, and kill all the zombies. You will have to create alliances with other survivors, build your own city, train your troops, research new technologies, complete missions, and fight against zombies and other players on the battlefield. You will also have to deal with the challenges that come with living in a zombie-infested world, such as scarcity of resources, mutating virus, hostile factions, and natural disasters.
-
Features of State of Survival Zombie War
-
Build and upgrade your settlement in a post-apocalyptic world
-
One of the main aspects of the game is building your own settlement where you can produce resources, train troops, research technologies, heal wounded soldiers, and store supplies. You will have to upgrade your buildings to unlock new features and increase your production capacity. You will also have to expand your territory by clearing out the infected zones around your settlement.
-
Recruit and level up a team of survivors
-
You are not alone in this zombie war. You can recruit various heroes who have different skills and abilities that can help you in combat and survival. You can level up your heroes by completing missions, upgrading their gear, and enhancing their talents. You can also assign them as leaders of your settlement, or send them on expeditions to explore the world and gather resources.
-
state of survival zombie war mod apk unlimited skill
-state of survival zombie war mod apk high damage
-state of survival zombie war mod apk latest version
-state of survival zombie war mod apk happymod
-state of survival zombie war mod apk download free
-state of survival zombie war mod apk no root
-state of survival zombie war mod apk offline
-state of survival zombie war mod apk unlimited money
-state of survival zombie war mod apk android 1
-state of survival zombie war mod apk menu
-state of survival zombie war mod apk 1.19.10
-state of survival zombie war mod apk 1.9.90
-state of survival zombie war mod apk 1.18.80
-state of survival zombie war mod apk rexdl
-state of survival zombie war mod apk revdl
-state of survival zombie war mod apk unlimited biocaps
-state of survival zombie war mod apk unlimited resources
-state of survival zombie war mod apk god mode
-state of survival zombie war mod apk one hit kill
-state of survival zombie war mod apk online
-state of survival zombie war mod apk obb
-state of survival zombie war mod apk platinmods
-state of survival zombie war mod apk pure
-state of survival zombie war mod apk unlimited stamina
-state of survival zombie war mod apk unlimited troops
-state of survival zombie war mod apk 2023
-state of survival zombie war mod apk 2022
-state of survival zombie war mod apk 2021
-state of survival zombie war mod apk update
-state of survival zombie war mod apk hack
-state of survival zombie war mod apk cheat
-state of survival zombie war mod apk full version
-state of survival zombie war mod apk mega mod
-state of survival zombie war mod apk vip
-state of survival zombie war mod apk premium
-state of survival zombie war mod apk pro
-state of survival zombie war mod apk unlocked
-state of survival zombie war mod apk cracked
-state of survival zombie war mod apk patched
-state of survival zombie war mod apk original
-
Complete missions and upgrade survivor skills to increase their effectiveness
-
The game offers a variety of missions that you can complete to progress the story, earn rewards, and unlock new features. You can also upgrade your survivor skills by spending skill points that you earn by leveling up. These skills can improve your combat abilities, resource production, research speed, and more.
-
Find ways to unlock new technologies to keep ahead of challenges
-
As you advance in the game, you will face more difficult enemies and situations. You will need to find ways to unlock new technologies that can give you an edge in the zombie war. You can research different branches of science, such as military, economy, development, and support. You can also discover and use special items, such as plasma cores, that can enhance your weapons and buildings.
-
Join PVP fights and solve puzzles to unlock rewards
-
If you are looking for some action and competition, you can join PVP fights against other players. You can attack their settlements, defend your own, or participate in events such as the Capital Clash and the State vs State War. You can also solve puzzles that are hidden in the game map, such as the Intel Missions and the Explorer Trail. These puzzles will test your logic and strategy skills, and reward you with valuable items and resources.
-
Available social features such as in-game chat and alliances
-
You don't have to play this game alone. You can communicate with other players through the in-game chat system, where you can send messages, emojis, and voice notes. You can also join or create an alliance with other players who share your goals and interests. You can cooperate with your alliance members to help each other out, trade resources, share intel, and fight together.
-
What is State of Survival Zombie War Mod APK?
-
A modified version of the game that gives you access to unlimited resources and features
-
State of Survival Zombie War Mod APK is a modified version of the original game that has been altered by some developers to give you access to unlimited resources and features that are not available in the official version. By using this mod apk, you can enjoy the game without any limitations or restrictions.
-
Benefits of using State of Survival Zombie War Mod APK
-
Enjoy god mode, high damage, and one hit kill
-
With this mod apk, you can become invincible in the game. You can activate god mode, which will make you immune to any damage from zombies or other players. You can also increase your damage output, which will allow you to kill any enemy with one hit. This will make the game much easier and faster for you.
-
Get unlimited food, lumber, metal, gas, and biocaps
-
With this mod apk, you can get unlimited amounts of food, lumber, metal, gas, and biocaps. These are the main resources that you need to build and upgrade your settlement, train your troops, research new technologies, and buy items from the shop. You will never run out of these resources with this mod apk.
-
Unlock all heroes and weapons
-
With this mod apk, you can unlock all the heroes and weapons that are available in the game. You can recruit any hero that you want without spending any biocaps or waiting for their recruitment time. You can also equip any weapon that you want without spending any resources or unlocking their requirements. You can have the best team of survivors and the most powerful weapons with this mod apk.
-
Remove ads and bypass anti-cheat detection
-
With this mod apk, you can remove all the ads that may interrupt your gaming experience. You can also bypass the anti-cheat detection system that may ban your account for using a mod apk. You can play the game without any worries or annoyances with this mod apk.
-
How to download and install State of Survival Zombie War Mod APK?
-
Follow these simple steps to get the mod apk on your device
-
Download the mod apk file from a trusted source (e.g. [1](https://m.happymod.com/state-of-survival-app-mod/com.kingsgroup.sos/))
-
The first step is to download the mod apk file from a trusted source. You can use the link provided above or search for other sources online. Make sure that the source is reliable and safe before downloading anything from it.
-
Enable unknown sources in your device settings
-
The next step is to enable unknown sources in your device settings. This will allow you to install apps that are not from the official app store. To do this, go to your device settings, then security, then unknown sources, and enable it.
-
Locate and install the mod apk file
-
The third step is to locate and install the mod apk file. You can use a file manager app to find the file that you downloaded in the first step. Tap on the file and follow the instructions to install it on your device.
-
Launch the game and enjoy the mod features
-
The final step is to launch the game and enjoy the mod features. You can open the game from your app drawer or home screen. You will see a mod menu where you can activate or deactivate the mod features that you want. You can also access the mod settings from the game settings menu.
-
Conclusion
-
State of Survival Zombie War Mod APK is a great way to experience the game with more fun and ease. You can get unlimited resources, unlock all heroes and weapons, enjoy god mode, high damage, and one hit kill, remove ads, and bypass anti-cheat detection. You can download and install the mod apk on your device by following the simple steps that we have provided in this article. We hope that you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about State of Survival Zombie War Mod APK:
-
-
-
| Question | Answer |
| --- | --- |
| Is State of Survival Zombie War Mod APK safe to use? | Yes, State of Survival Zombie War Mod APK is safe to use as long as you download it from a trusted source and enable unknown sources in your device settings. However, we cannot guarantee that it will work on all devices or that it will not cause any issues with your game account. Use it at your own risk. |
| Do I need to root my device to use State of Survival Zombie War Mod APK? | No, you do not need to root your device to use State of Survival Zombie War Mod APK. You can install it on any Android device that meets the minimum requirements of the game. |
| Can I play online with State of Survival Zombie War Mod APK? | Yes, you can play online with State of Survival Zombie War Mod APK. However, you may encounter some problems or errors when playing with other players who are using the official version of the game. You may also get banned by the game developers if they detect that you are using a mod apk. We recommend that you play offline or with other players who are using the same mod apk as you. |
| Can I update State of Survival Zombie War Mod APK? | No, you cannot update State of Survival Zombie War Mod APK. If you want to get the latest version of the game, you will have to uninstall the mod apk and install the official version from the app store. You may also lose your progress and data if you do this. |
| Can I request for more features or mods for State of Survival Zombie War? | Yes, you can request for more features or mods for State of Survival Zombie War by contacting the developers of the mod apk. You can find their contact information on their website or social media pages. However, we cannot guarantee that they will fulfill your request or that they will respond to you. |
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Dvlt Qulluuna Qbul Testlr v Konstitusiya Suallar.md b/spaces/congsaPfin/Manga-OCR/logs/Dvlt Qulluuna Qbul Testlr v Konstitusiya Suallar.md
deleted file mode 100644
index e3004694017cfeff1c2e0def98e6f26fcd8d9e95..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Dvlt Qulluuna Qbul Testlr v Konstitusiya Suallar.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
-
-
-
-
-
-
Dövlət qulluğu test sualları: What are they and how to prepare for them?
-
If you want to work in the public sector in Azerbaijan, you need to pass the dövlət qulluğu test sualları (state service test questions). These are standardized exams that assess your knowledge and skills in various fields related to public administration. In this article, you will learn everything you need to know about these tests, including how to register, study, take, and check your results.
There are different types of tests depending on the category of state service you want to apply for. The main categories are:
-
-
BB - basic level (for positions that require a secondary education)
-
BA - first level (for positions that require a bachelor's degree)
-
BC - second level (for positions that require a master's degree or higher)
-
BD - third level (for senior positions that require a doctoral degree or equivalent)
-
-
The tests consist of multiple-choice questions that cover various subjects, such as:
-
-
Azerbaijani language
-
Constitutional law
-
Public administration
-
Logic
-
Information technology
-
Foreign language (English, Russian, or Turkish)
-
Professional knowledge (depending on the specific position)
-
-
How to register for dövlət qulluğu test sualları
-
To register for the tests, you need to visit the official website of the State Examination Center (https://www.dim.gov.az/) and fill out an online application form. You will need to provide your personal information, education background, desired category and position, and contact details. You will also need to upload a photo of yourself and a scanned copy of your ID card.
-
The registration fee varies depending on the category of the test. For example, for BB tests it is 10 AZN, for BA tests it is 15 AZN, for BC tests it is 20 AZN, and for BD tests it is 25 AZN. You can pay the fee online using a bank card or a mobile wallet. You will receive a confirmation email with your registration number and test date and time after you complete the payment.
-
The registration deadline is usually one month before the test date. The tests are held four times a year, in March, June, September, and December. You can check the exact dates and locations on the website of the State Examination Center.
-
How to study for dövlət qulluğu test sualları
-
Studying for the tests requires a lot of preparation and practice. Here are some tips and resources that can help you:
-
dovlet qullugu test suallari ve cavablari
-dovlet qullugu test suallari 2023
-dovlet qullugu test suallari konstitusiya
-dovlet qullugu test suallari mentiq
-dovlet qullugu test suallari informatika
-dovlet qullugu test suallari pdf
-dovlet qullugu test suallari online
-dovlet qullugu test suallari azerbaycan dili
-dovlet qullugu test suallari ingilis dili
-dovlet qullugu test suallari rus dili
-dovlet qullugu test suallari tarix
-dovlet qullugu test suallari coğrafiya
-dovlet qullugu test suallari iqtisadiyyat
-dovlet qullugu test suallari hüquq
-dovlet qullugu test suallari riyaziyyat
-dovlet qullugu test suallari fizika
-dovlet qullugu test suallari kimya
-dovlet qullugu test suallari biologiya
-dovlet qullugu test suallari edebiyyat
-dovlet qullugu test suallari musahibe
-dovlet qullugu test suallari tqdk
-dovlet qullugu test suallari banklar
-dovlet qullugu test suallari vergiler nazirliyi
-dovlet qullugu test suallari emek ve sosial muhafize nazirliyi
-dovlet qullugu test suallari korrupsiya ile mücadele nazirliyi
-dovlet qullugu test suallari daxili isler nazirliyi
-dovlet qullugu test suallari müdafiye nazirliyi
-dovlet qullugu test suallari xarici isler nazirliyi
-dovlet qullugu test suallari tehsil nazirliyi
-dovlet qullugu test suallari saglamliq nazirliyi
-dovlet qullugu test suallari kultura nazirliyi
-dovlet qullugu test suallari idman ve genclik nazirliyi
-dovlet qullugu test suallari turizm nazirliyi
-dovlet qullugu test suallari ekologiya ve tebii servetler nazirliyi
-dovlet qullugu test suallari meliorasiya ve su təserrüfatı nazirliyi
-dovlet qullugu test suallari kənd təsərrüfatı nazirliyi
-dovlet qullugu test suallari ictimai tv ve radio yayimlari şirketi
-dovlet qullugu test suallari azərbaycan respublikasi prezidentinin administrasiyasi
-dovlet qullugu test suallari azərbaycan respublikasi milli məclisi
-dovlet qullugu test suallari azərbaycan respublikasi konstitusiya məhkəməsi
-dovlet qullugu test suallari azərbaycan respublikasi ali məhkəməsi
-dovlet qullugu test suallari azərbaycan respublikasi prokurorluğu
-dovlet qullugu test suallari azərbaycan respublikasi hesabat palatası
-dovlet qullugu test suallari azərbaycan respublikasi dövlət statistika komitəsi
-dovlet qullugu test suallari azərbaycan respublikasi dövlət neft fondu
-dovlet qullugu test suallari azərbaycan respublikasi dövlət miqrasiya xidməti
-dovlet qullugu test suallari azərbaycan respublikasi dövlət sınagçılıq xidməti
-dovlet qullugu test suallari azərbaycan respublikasi dövlət gömrük komitəsi
-dovlet qullugu test suallari azərbaycan respublikasi dövl
-
-
Review the official syllabus and sample questions for each category and subject on the website of the State Examination Center. This will give you an idea of what to expect and how to answer the questions.
-
Read books and articles on the topics covered by the tests, especially constitutional law, public administration, and professional knowledge. You can find some recommended books on the website of the State Examination Center or in libraries and bookstores.
-
Use online platforms and apps that offer interactive exercises and mock tests for dövlət qulluğu test sualları. Some examples are https://www.testsuallari.com/, https://www.testler.az/, and https://www.testbook.com/. These platforms can help you improve your skills, track your progress, and identify your strengths and weaknesses.
-
Join online or offline study groups or courses that can provide you with guidance, feedback, and support. You can find some options on social media platforms, such as Facebook, Instagram, or Telegram, or on websites, such as https://www.kurslar.info/ or https://www.edu.gov.az/.
-
Study regularly and consistently. Make a study plan and stick to it. Review your notes and practice your questions every day. Avoid cramming or procrastinating.
-
-
How to take dövlət qulluğu test sualları
-
Taking the tests can be challenging and stressful, but with some strategies and best practices, you can increase your chances of success. Here are some suggestions:
-
-
Arrive at the test center at least 30 minutes before your scheduled time. Bring your ID card, registration number, and a pen. Do not bring any electronic devices, books, notes, or other materials that are not allowed.
-
Read the instructions carefully before you start the test. Make sure you understand the format, the scoring system, and the time limit. If you have any questions or problems, ask the invigilator for help.
-
Manage your time wisely. Allocate enough time for each question and section. Do not spend too much time on difficult or unclear questions. Skip them and come back to them later if you have time.
-
Use logic and elimination techniques to answer the questions. Try to eliminate the wrong or irrelevant options first. Use your common sense and background knowledge to infer the correct or best answer.
-
Check your answers before you submit your test. Make sure you have answered all the questions and marked them correctly on the answer sheet. Avoid making careless mistakes or changing your answers without a good reason.
-
-
How to check your results and what to do next
-
You can check your results online on the website of the State Examination Center within 15 days after the test date. You will need to enter your registration number and password to access your score report. Your score report will show your total score, your percentile rank, and your pass/fail status.
-
If you pass the test, you will be eligible to apply for vacant positions in the state service that match your category and qualifications. You will need to submit your CV, cover letter, and other documents to the relevant state agency or institution. You may also need to go through an interview or another selection process before you get hired.
-
If you fail the test, you can retake it in the next session after paying another registration fee. You can also review your mistakes and improve your weak areas by using the feedback and analysis tools on the website of the State Examination Center.
-
Conclusion
-
Dövlət qulluğu test sualları are important exams that can open up many opportunities for you in the public sector in Azerbaijan. However, they are not easy to pass and require a lot of preparation and practice. By following the tips and resources in this article, you can increase your chances of success and achieve your career goals. Good luck!
-
FAQs
-
Here are some common questions and answers about dövlət qulluğu test sualları:
-
-
How many questions are there in each test and how long is the time limit?
-
The number of questions and the time limit vary depending on the category and the subject of the test. For example, for BB tests, there are 100 questions and the time limit is 120 minutes. For BA tests, there are 120 questions and the time limit is 150 minutes. For BC tests, there are 140 questions and the time limit is 180 minutes. For BD tests, there are 160 questions and the time limit is 210 minutes.
-
What is the passing score for each test?
-
The passing score for each test is 50%. However, some subjects have a minimum score requirement that you need to meet in order to pass. For example, for foreign language tests, you need to score at least 40% in each section (listening, reading, writing, and speaking).
-
How can I get my certificate after passing the test?
-
You can download and print your certificate from the website of the State Examination Center after you check your results. You can also request a hard copy of your certificate from the State Examination Center by paying a fee of 5 AZN.
-
How long is my certificate valid for?
-
Your certificate is valid for three years from the date of issue. You can use it to apply for any vacant position in the state service that matches your category and qualifications within this period.
-
Can I appeal my results if I am not satisfied with them?
-
You can appeal your results within five days after they are announced on the website of the State Examination Center. You will need to fill out an online appeal form and pay a fee of 10 AZN. Your appeal will be reviewed by an independent commission within 15 days and you will be notified of the outcome.
-
-
-
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/NBA 2K Mobile A Realistic and Immersive Basketball Experience.md b/spaces/congsaPfin/Manga-OCR/logs/NBA 2K Mobile A Realistic and Immersive Basketball Experience.md
deleted file mode 100644
index 40cc7a4c1fc467c01107b94f7d1d42e8617e77ca..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/NBA 2K Mobile A Realistic and Immersive Basketball Experience.md
+++ /dev/null
@@ -1,233 +0,0 @@
-
-
NBA APKPure: What Is It and How to Use It
-
If you are a basketball fan, you probably want to keep up with the latest action from the NBA. Whether you want to watch live games, catch up on highlights, get breaking news, or follow your favorite teams and players, you need a reliable app that can deliver all that and more. But what if you don't have access to the official NBA app or you are not satisfied with its features? In that case, you might want to try NBA APKPure, a third-party app that offers a different way to enjoy the NBA on your Android device. In this article, we will explain what NBA APKPure is, how to use it, what are its pros and cons, and what are some alternatives to it.
-
What is NBA APKPure?
-
A brief introduction to APKPure and its features
-
APKPure is a website and an app store that allows users to download and install Android apps that are not available on Google Play Store. APKPure offers a variety of apps and games from different categories, such as entertainment, sports, education, social media, and more. Users can also update their existing apps with the latest versions from APKPure.
One of the main features of APKPure is that it does not require users to sign up or log in with any account. Users can simply browse the app store, search for their desired apps, and download them directly to their devices. APKPure also supports multiple languages, regions, and devices. Users can choose their preferred language and region settings, as well as download apps that are compatible with their device model and Android version.
-
How NBA APKPure differs from the official NBA app
-
NBA APKPure is one of the apps that users can find on APKPure. It is a modified version of the official NBA app that offers some additional features and benefits. For example:
-
nba apkpure download
-nba apkpure mod
-nba apkpure 2k21
-nba apkpure 2k20
-nba apkpure live
-nba apkpure jam
-nba apkpure 2k19
-nba apkpure 2k18
-nba apkpure 2k17
-nba apkpure 2k16
-nba apkpure 2k15
-nba apkpure 2k14
-nba apkpure 2k13
-nba apkpure 2k12
-nba apkpure 2k11
-nba apkpure apk
-nba apkpure obb
-nba apkpure data
-nba apkpure offline
-nba apkpure online
-nba apkpure update
-nba apkpure latest version
-nba apkpure old version
-nba apkpure free
-nba apkpure hack
-nba apkpure cheats
-nba apkpure unlimited money
-nba apkpure android
-nba apkpure ios
-nba apkpure pc
-nba apkpure mac
-nba apkpure windows
-nba apkpure emulator
-nba apkpure bluestacks
-nba apkpure nox player
-nba apkpure memu play
-nba apkpure ld player
-nba apkpure game loop
-nba apkpure review
-nba apkpure rating
-nba apkpure gameplay
-nba apkpure graphics
-nba apkpure sound
-nba apkpure controls
-nba apkpure features
-nba apkpure tips and tricks
-nba apkpure guide and walkthrough
-nba apkpure news and updates
-
-
NBA APKPure does not require users to pay for a subscription or sign up for an account. Users can access all the content for free without any limitations.
-
NBA APKPure does not have any ads or pop-ups that might interrupt the user experience.
-
NBA APKPure has a simpler and cleaner interface that makes it easier to navigate and use.
-
NBA APKPure has a smaller file size than the official NBA app, which means it takes up less storage space on your device.
-
-
However, NBA APKPure also has some drawbacks compared to the official NBA app. For instance:
-
-
NBA APKPure may not be as secure or reliable as the official NBA app. Since it is not verified by Google Play Store, it may contain malware or viruses that could harm your device or compromise your privacy.
-
NBA APKPure may not be as updated or accurate as the official NBA app. Since it is not maintained by the NBA or its partners, it may have outdated or incorrect information or content.
-
NBA APKPure may not be as compatible or stable as the official NBA app. Since it is not optimized for all devices and Android versions, it may crash or malfunction on some devices.
-
-
How to use NBA APKPure?
-
How to download and install NBA APKPure on your Android device
-
To download and install NBA APKPure on your Android device, you need to follow these steps:
-
-
Go to the APKPure website or app and search for NBA APKPure. You can also use this link:
-
Tap on the download button and wait for the APK file to be downloaded to your device.
-
Before installing the APK file, you need to enable the installation of apps from unknown sources on your device. To do this, go to your device settings, security, and toggle on the option that allows installing apps from unknown sources.
-
Once you have enabled the option, locate the APK file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen and grant the necessary permissions to the app.
-
After the installation is complete, you can launch the app and enjoy the NBA content.
-
-
How to access live games, highlights, news, and more with NBA APKPure
-
With NBA APKPure, you can access a variety of NBA content on your device. Here are some of the things you can do with the app:
-
-
Watch live games from the regular season, playoffs, finals, and All-Star events. You can also choose from different camera angles, audio options, and languages.
-
Catch up on highlights, replays, interviews, and analysis from the NBA TV channel and other sources.
-
Get breaking news, updates, stats, standings, schedules, and scores from the NBA and its teams and players.
-
Follow your favorite teams and players and get personalized notifications and recommendations based on your preferences.
-
Interact with other fans and join discussions, polls, quizzes, and contests on the app.
-
-
How to customize your NBA experience with NBA APKPure
-
NBA APKPure also allows you to customize your NBA experience with some features and settings. Here are some of the things you can do with the app:
-
-
Adjust the video quality and resolution of the live games and highlights according to your network speed and device performance.
-
Change the app theme and layout according to your liking. You can choose from dark mode, light mode, or auto mode.
-
Manage your data usage and storage space by deleting or moving the downloaded files to an external storage device.
-
Enable or disable notifications and sounds for different types of events and content.
-
Share your feedback and suggestions with the app developers and rate the app on APKPure.
-
-
What are the pros and cons of NBA APKPure?
-
The advantages of using NBA APKPure
-
NBA APKPure has some advantages over the official NBA app that might appeal to some users. Here are some of them:
-
-
NBA APKPure is free to use and does not require any subscription or account. You can access all the content without any restrictions or fees.
-
NBA APKPure does not have any ads or pop-ups that might annoy or distract you while using the app.
-
NBA APKPure has a simpler and cleaner interface that makes it easier to navigate and use. You can find what you are looking for faster and smoother.
-
NBA APKPure has a smaller file size than the official NBA app, which means it takes up less storage space on your device. You can save more space for other apps or files.
-
-
The disadvantages of using NBA APKPure
-
NBA APKPure also has some disadvantages compared to the official NBA app that might deter some users. Here are some of them:
-
-
NBA APKPure may not be as secure or reliable as the official NBA app. Since it is not verified by Google Play Store, it may contain malware or viruses that could harm your device or compromise your privacy.
-
NBA APKPure may not be as updated or accurate as the official NBA app. Since it is not maintained by the NBA or its partners, it may have outdated or incorrect information or content.
-
NBA APKPure may not be as compatible or stable as the official NBA app. Since it is not optimized for all devices and Android versions, it may crash or malfunction on some devices.
-
-
What are some alternatives to NBA APKPure?
-
A list of other apps and services that offer NBA content
-
If you are not satisfied with NBA APKPure or you want to try other options, there are some alternatives that you can consider. Here are some of them:
-
-
-
App Name
-
Features
-
Price
-
Availability
-
-
-
DAZN: Stream Live Sports
-
- Stream live and on-demand sports from various leagues and events, including NBA, NFL, MLB, UFC, and more. - Watch original shows and documentaries. - Download content for offline viewing. - Cast to your TV or other devices.
-
$19.99 per month or $99.99 per year
-
Available in over 200 countries and territories
-
-
-
ESPN: Live Sports & Scores
-
- Watch thousands of live and on-demand sports events from ESPN networks and partners, including NBA, NFL, MLB, NHL, UFC, and more. - Get scores, news, highlights, analysis, and fantasy tools. - Follow your favorite teams and players. - Listen to podcasts and radio shows.
-
Free with ads or $4.99 per month for ESPN+
-
Available in the US and some other regions
-
-
-
NBA official app
-
- Watch live and on-demand NBA games from the regular season, playoffs, finals, and All-Star events. - Get news, updates, stats, standings, schedules, and scores. - Follow your favorite teams and players. - Join discussions, polls, quizzes, and contests.
-
Free with ads or $29.99-$124.99 per year for NBA League Pass
-
Available worldwide
-
-
-
Reddit
-
- Join communities of NBA fans and discuss topics related to basketball. - Share your opinions, memes, videos, links, and more. - Find live streams, highlights, news, and analysis from various sources. - Vote and comment on posts and interact with other users.
-
Free with ads or $5.99 per month for Reddit Premium
-
Available worldwide
-
-
-
SofaScore
-
- Get live scores, stats, standings, schedules, and results for various sports, including NBA. - Watch highlights and replays of NBA games. - Follow your favorite teams and players. - Set alerts and notifications for events and updates.
-
Free with ads or $2.99 per year for SofaScore Premium
-
Available worldwide
-
-
-
-
A comparison of their features, prices, and availability
-
As you can see, there are many options to choose from when it comes to watching and following the NBA on your device. Each of them has its own strengths and weaknesses, and you need to consider your needs, preferences, and budget before deciding which one to use. Here is a brief comparison of the features, prices, and availability of the apps and services mentioned above:
-
-
-
App Name
-
Features
-
Price
-
Availability
-
-
-
NBA APKPure
-
- Free access to all NBA content - No ads or pop-ups - Simple and clean interface - Small file size
-
Free
-
Available on APKPure website or app
-
-
-
DAZN: Stream Live Sports
-
- Stream live and on-demand sports from various leagues and events - Watch original shows and documentaries - Download content for offline viewing - Cast to your TV or other devices
-
$19.99 per month or $99.99 per year
-
Available in over 200 countries and territories
-
-
-
ESPN: Live Sports & Scores
-
- Watch thousands of live and on-demand sports events from ESPN networks and partners - Get scores, news, highlights, analysis, and fantasy tools - Follow your favorite teams and players - Listen to podcasts and radio shows
-
Free with ads or $4.99 per month for ESPN+
-
Available in the US and some other regions
-
-
-
-
NBA official app
-
- Watch live and on-demand NBA games from the regular season, playoffs, finals, and All-Star events - Get news, updates, stats, standings, schedules, and scores - Follow your favorite teams and players - Join discussions, polls, quizzes, and contests
-
Free with ads or $29.99-$124.99 per year for NBA League Pass
-
Available worldwide
-
-
-
Reddit
-
- Join communities of NBA fans and discuss topics related to basketball - Share your opinions, memes, videos, links, and more - Find live streams, highlights, news, and analysis from various sources - Vote and comment on posts and interact with other users
-
Free with ads or $5.99 per month for Reddit Premium
-
Available worldwide
-
-
-
SofaScore
-
- Get live scores, stats, standings, schedules, and results for various sports, including NBA - Watch highlights and replays of NBA games - Follow your favorite teams and players - Set alerts and notifications for events and updates
-
Free with ads or $2.99 per year for SofaScore Premium
-
Available worldwide
-
-
-
As you can see, each app or service has its own features, prices, and availability that might suit different users. You need to weigh the pros and cons of each option and decide which one meets your needs and expectations the best.
-
Conclusion
-
A summary of the main points of the article
-
In conclusion, NBA APKPure is a third-party app that offers a different way to watch and follow the NBA on your Android device. It is a modified version of the official NBA app that offers some additional features and benefits, such as free access to all content, no ads or pop-ups, simple and clean interface, and small file size. However, it also has some drawbacks compared to the official NBA app, such as potential security and reliability issues, outdated or incorrect information or content, and compatibility or stability problems. Therefore, you need to be careful when using NBA APKPure and make sure you have a backup plan in case something goes wrong.
-
A call to action for the readers to try NBA APKPure or other options
-
If you are interested in trying NBA APKPure or other alternatives to the official NBA app, you can download them from their respective websites or app stores. However, before you do that, make sure you read the reviews and ratings of the apps or services and check their terms and conditions and privacy policies. You should also scan the apps or files for any malware or viruses before installing them on your device. And remember, always use a VPN or a proxy server when accessing content that might be restricted or blocked in your region.
-
Five unique FAQs after the conclusion
-
Here are some frequently asked questions that you might have after reading this article:
-
-
Is NBA APKPure legal? NBA APKPure is not an official app endorsed by the NBA or its partners. It is a third-party app that modifies the original app and offers it for free without any authorization. Therefore, it might violate some intellectual property rights or licensing agreements of the NBA or its partners. However, there is no clear legal status of NBA APKPure or other similar apps in different countries or regions. Therefore, you should check your local laws and regulations before using NBA APKPure or other alternatives.
-
Is NBA APKPure safe? NBA APKPure is not verified by Google Play Store or any other reputable source. It might contain malware or viruses that could harm your device or compromise your privacy. Therefore, you should be careful when downloading and installing NBA APKPure or other similar apps from unknown sources. You should also scan the apps or files for any malware or viruses before installing them on your device. And remember, always use a VPN or a proxy server when accessing content that might be restricted or blocked in your region.
-
How can I update NBA APKPure? NBA APKPure is not maintained by the NBA or its partners. It might not be updated regularly or accurately with the latest information or content from the NBA. Therefore, you should check the APKPure website or app for any updates or new versions of NBA APKPure. You can also enable the auto-update option on the APKPure app to get notified of any updates automatically.
-
How can I uninstall NBA APKPure? If you want to uninstall NBA APKPure from your device, you can follow these steps:
-
-
Go to your device settings and tap on Apps or Applications.
-
Find and tap on NBA APKPure from the list of apps.
-
Tap on uninstall and confirm your action.
-
-
What are some tips and tricks for using NBA APKPure? Here are some tips and tricks that might help you get the most out of NBA APKPure:
-
-
Use a VPN or a proxy server to access content that might be restricted or blocked in your region.
-
Check the video quality and resolution settings to optimize your viewing experience according to your network speed and device performance.
-
Enable the dark mode or light mode according to your preference and comfort.
-
Delete or move the downloaded files to an external storage device to save space on your device.
-
Share your feedback and suggestions with the app developers and rate the app on APKPure.
-
-
I hope this article has helped you understand what NBA APKPure is, how to use it, its pros and cons, and some alternatives to it. If you have any questions or comments, feel free to leave them below. And if you are ready to try NBA APKPure or other options, you can download them from their respective websites or app stores. Happy watching!
-
-
\ No newline at end of file
diff --git a/spaces/coqui/xtts/app.py b/spaces/coqui/xtts/app.py
deleted file mode 100644
index 2a737b347874aea2524827d74d52fe98a46b9c1e..0000000000000000000000000000000000000000
--- a/spaces/coqui/xtts/app.py
+++ /dev/null
@@ -1,707 +0,0 @@
-import sys
-import io, os, stat
-import subprocess
-import random
-from zipfile import ZipFile
-import uuid
-import time
-import torch
-import torchaudio
-
-# By using XTTS you agree to CPML license https://coqui.ai/cpml
-os.environ["COQUI_TOS_AGREED"] = "1"
-
-# langid is used to detect language for longer text
-# Most users expect text to be their own language, there is checkbox to disable it
-import langid
-import base64
-import csv
-from io import StringIO
-import datetime
-import re
-
-import gradio as gr
-from scipy.io.wavfile import write
-from pydub import AudioSegment
-
-from TTS.api import TTS
-from TTS.tts.configs.xtts_config import XttsConfig
-from TTS.tts.models.xtts import Xtts
-from TTS.utils.generic_utils import get_user_data_dir
-
-HF_TOKEN = os.environ.get("HF_TOKEN")
-
-from huggingface_hub import HfApi
-
-# will use the HF API to restart the space on an unrecoverable error
-api = HfApi(token=HF_TOKEN)
-repo_id = "coqui/xtts"
-
-# Use a newer ffmpeg binary for Ubuntu 20 so denoising can be applied to microphone input
-print("Export newer ffmpeg binary for denoise filter")
-ZipFile("ffmpeg.zip").extractall()
-print("Make ffmpeg binary executable")
-st = os.stat("ffmpeg")
-os.chmod("ffmpeg", st.st_mode | stat.S_IEXEC)
-
-# This will trigger downloading model
-print("Downloading if not downloaded Coqui XTTS V2")
-from TTS.utils.manage import ModelManager
-
-model_name = "tts_models/multilingual/multi-dataset/xtts_v2"
-ModelManager().download_model(model_name)
-model_path = os.path.join(get_user_data_dir("tts"), model_name.replace("/", "--"))
-print("XTTS downloaded")
-
-config = XttsConfig()
-config.load_json(os.path.join(model_path, "config.json"))
-
-model = Xtts.init_from_config(config)
-model.load_checkpoint(
- config,
- checkpoint_path=os.path.join(model_path, "model.pth"),
- vocab_path=os.path.join(model_path, "vocab.json"),
- eval=True,
- use_deepspeed=True,
-)
-model.cuda()
-
-# This is for debugging purposes only
-DEVICE_ASSERT_DETECTED = 0
-DEVICE_ASSERT_PROMPT = None
-DEVICE_ASSERT_LANG = None
-
-supported_languages = config.languages
-
-def predict(
- prompt,
- language,
- audio_file_pth,
- mic_file_path,
- use_mic,
- voice_cleanup,
- no_lang_auto_detect,
- agree,
-):
- if agree == True:
- if language not in supported_languages:
- gr.Warning(
- f"Language you put {language} in is not in is not in our Supported Languages, please choose from dropdown"
- )
-
- return (
- None,
- None,
- None,
- None,
- )
-
- language_predicted = langid.classify(prompt)[
- 0
- ].strip() # strip needed as there is a space at the end
-
- # tts expects chinese as zh-cn
- if language_predicted == "zh":
- # we use zh-cn
- language_predicted = "zh-cn"
-
- print(f"Detected language:{language_predicted}, Chosen language:{language}")
-
- # Trigger language detection only for text longer than 15 characters
- if len(prompt) > 15:
- # allow any language for short text as some may be common
- # If user unchecks language autodetection it will not trigger
- # You may remove this completely for own use
- if language_predicted != language and not no_lang_auto_detect:
- # Please duplicate and remove this check if you really want this
- # Or auto-detector fails to identify language (which it can on pretty short text or mixed text)
- gr.Warning(
- f"It looks like your text isn’t the language you chose , if you’re sure the text is the same language you chose, please check disable language auto-detection checkbox"
- )
-
- return (
- None,
- None,
- None,
- None,
- )
-
- if use_mic == True:
- if mic_file_path is not None:
- speaker_wav = mic_file_path
- else:
- gr.Warning(
- "Please record your voice with Microphone, or uncheck Use Microphone to use reference audios"
- )
- return (
- None,
- None,
- None,
- None,
- )
-
- else:
- speaker_wav = audio_file_pth
-
- # Filter the microphone input, which may contain background noise and silence at the beginning and end
- # This is fast, approximate filtering, not a perfect cleanup
-
- # Apply all on demand
- lowpassfilter = denoise = trim = loudness = True
-
- if lowpassfilter:
- lowpass_highpass = "lowpass=8000,highpass=75,"
- else:
- lowpass_highpass = ""
-
- if trim:
- # better to remove silence in beginning and end for microphone
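- # the areverse/silenceremove pair trims trailing silence (reverse, strip leading silence, reverse back), and the second silenceremove pass trims leading silence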
- trim_silence = "areverse,silenceremove=start_periods=1:start_silence=0:start_threshold=0.02,areverse,silenceremove=start_periods=1:start_silence=0:start_threshold=0.02,"
- else:
- trim_silence = ""
-
- if voice_cleanup:
- try:
- out_filename = (
- speaker_wav + str(uuid.uuid4()) + ".wav"
- ) # ffmpeg to know output format
-
- # we will use newer ffmpeg as that has afftn denoise filter
- shell_command = f"./ffmpeg -y -i {speaker_wav} -af {lowpass_highpass}{trim_silence} {out_filename}".split(
- " "
- )
-
- command_result = subprocess.run(
- [item for item in shell_command],
- capture_output=False,
- text=True,
- check=True,
- )
- speaker_wav = out_filename
- print("Filtered microphone input")
- except subprocess.CalledProcessError:
- # There was an error - command exited with non-zero code
- print("Error: failed filtering, use original microphone input")
- else:
- speaker_wav = speaker_wav
-
- if len(prompt) < 2:
- gr.Warning("Please give a longer prompt text")
- return (
- None,
- None,
- None,
- None,
- )
- if len(prompt) > 200:
- gr.Warning(
- "Text length limited to 200 characters for this demo, please try shorter text. You can clone this space and edit code for your own usage"
- )
- return (
- None,
- None,
- None,
- None,
- )
- global DEVICE_ASSERT_DETECTED
- if DEVICE_ASSERT_DETECTED:
- global DEVICE_ASSERT_PROMPT
- global DEVICE_ASSERT_LANG
- # It will likely never come here as we restart space on first unrecoverable error now
- print(
- f"Unrecoverable exception caused by language:{DEVICE_ASSERT_LANG} prompt:{DEVICE_ASSERT_PROMPT}"
- )
-
- # HF Space specific.. This error is unrecoverable need to restart space
- space = api.get_space_runtime(repo_id=repo_id)
- if space.stage!="BUILDING":
- api.restart_space(repo_id=repo_id)
- else:
- print("TRIED TO RESTART but space is building")
-
- try:
- metrics_text = ""
- t_latent = time.time()
-
- # note: diffusion_conditioning is not used with hifigan (the default mode); it would be empty but still has to be passed to model.inference
- try:
- (
- gpt_cond_latent,
- speaker_embedding,
- ) = model.get_conditioning_latents(audio_path=speaker_wav, gpt_cond_len=30, max_ref_length=60)
- except Exception as e:
- print("Speaker encoding error", str(e))
- gr.Warning(
- "It appears something wrong with reference, did you unmute your microphone?"
- )
- return (
- None,
- None,
- None,
- None,
- )
-
- latent_calculation_time = time.time() - t_latent
- # metrics_text=f"Embedding calculation time: {latent_calculation_time:.2f} seconds\n"
-
- # temporary comma fix
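- # doubles sentence-final ".", "。" or "?" that follow a word character and inserts a space before them (e.g. "hi." -> "hi ..")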
- prompt= re.sub("([^\x00-\x7F]|\w)(\.|\。|\?)",r"\1 \2\2",prompt)
-
- wav_chunks = []
- ## Direct mode
- """
- print("I: Generating new audio...")
- t0 = time.time()
- out = model.inference(
- prompt,
- language,
- gpt_cond_latent,
- speaker_embedding,
- diffusion_conditioning
- )
- inference_time = time.time() - t0
- print(f"I: Time to generate audio: {round(inference_time*1000)} milliseconds")
- metrics_text+=f"Time to generate audio: {round(inference_time*1000)} milliseconds\n"
- real_time_factor= (time.time() - t0) / out['wav'].shape[-1] * 24000
- print(f"Real-time factor (RTF): {real_time_factor}")
- metrics_text+=f"Real-time factor (RTF): {real_time_factor:.2f}\n"
- torchaudio.save("output.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000)
- """
-
- print("I: Generating new audio in streaming mode...")
- t0 = time.time()
- chunks = model.inference_stream(
- prompt,
- language,
- gpt_cond_latent,
- speaker_embedding,
- repetition_penalty=7.0,
- temperature=0.85,
- )
-
- first_chunk = True
- for i, chunk in enumerate(chunks):
- if first_chunk:
- first_chunk_time = time.time() - t0
- metrics_text += f"Latency to first audio chunk: {round(first_chunk_time*1000)} milliseconds\n"
- first_chunk = False
- wav_chunks.append(chunk)
- print(f"Received chunk {i} of audio length {chunk.shape[-1]}")
- inference_time = time.time() - t0
- print(
- f"I: Time to generate audio: {round(inference_time*1000)} milliseconds"
- )
- #metrics_text += (
- # f"Time to generate audio: {round(inference_time*1000)} milliseconds\n"
- #)
-
- wav = torch.cat(wav_chunks, dim=0)
- print(wav.shape)
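- # RTF = generation time / audio duration, where duration = samples / 24 kHz; values below 1 mean faster than real time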
- real_time_factor = (time.time() - t0) / wav.shape[0] * 24000
- print(f"Real-time factor (RTF): {real_time_factor}")
- metrics_text += f"Real-time factor (RTF): {real_time_factor:.2f}\n"
-
- torchaudio.save("output.wav", wav.squeeze().unsqueeze(0).cpu(), 24000)
-
- except RuntimeError as e:
- if "device-side assert" in str(e):
- # cannot do anything about a CUDA device-side assert error, need to restart
- print(
- f"Exit due to: Unrecoverable exception caused by language:{language} prompt:{prompt}",
- flush=True,
- )
- gr.Warning("Unhandled Exception encounter, please retry in a minute")
- print("Cuda device-assert Runtime encountered need restart")
- if not DEVICE_ASSERT_DETECTED:
- DEVICE_ASSERT_DETECTED = 1
- DEVICE_ASSERT_PROMPT = prompt
- DEVICE_ASSERT_LANG = language
-
- # just before restarting, save what caused the issue so we can handle it in the future
- # uploading error data only happens for unrecoverable errors
- error_time = datetime.datetime.now().strftime("%d-%m-%Y-%H:%M:%S")
- error_data = [
- error_time,
- prompt,
- language,
- audio_file_pth,
- mic_file_path,
- use_mic,
- voice_cleanup,
- no_lang_auto_detect,
- agree,
- ]
- error_data = [str(e) if type(e) != str else e for e in error_data]
- print(error_data)
- print(speaker_wav)
- write_io = StringIO()
- csv.writer(write_io).writerows([error_data])
- csv_upload = write_io.getvalue().encode()
-
- filename = error_time + "_" + str(uuid.uuid4()) + ".csv"
- print("Writing error csv")
- error_api = HfApi()
- error_api.upload_file(
- path_or_fileobj=csv_upload,
- path_in_repo=filename,
- repo_id="coqui/xtts-flagged-dataset",
- repo_type="dataset",
- )
-
- # speaker_wav
- print("Writing error reference audio")
- speaker_filename = (
- error_time + "_reference_" + str(uuid.uuid4()) + ".wav"
- )
- error_api = HfApi()
- error_api.upload_file(
- path_or_fileobj=speaker_wav,
- path_in_repo=speaker_filename,
- repo_id="coqui/xtts-flagged-dataset",
- repo_type="dataset",
- )
-
- # HF Space specific.. This error is unrecoverable need to restart space
- space = api.get_space_runtime(repo_id=repo_id)
- if space.stage!="BUILDING":
- api.restart_space(repo_id=repo_id)
- else:
- print("TRIED TO RESTART but space is building")
-
- else:
- if "Failed to decode" in str(e):
- print("Speaker encoding error", str(e))
- gr.Warning(
- "It appears something wrong with reference, did you unmute your microphone?"
- )
- else:
- print("RuntimeError: non device-side assert error:", str(e))
- gr.Warning("Something unexpected happened please retry again.")
- return (
- None,
- None,
- None,
- None,
- )
- return (
- gr.make_waveform(
- audio="output.wav",
- ),
- "output.wav",
- metrics_text,
- speaker_wav,
- )
- else:
- gr.Warning("Please accept the Terms & Condition!")
- return (
- None,
- None,
- None,
- None,
- )
-
-
-title = "Coqui🐸 XTTS"
-
-description = """
-
-
-
-XTTS is a text-to-speech model that lets you clone voices into different languages.
-
-
-
-This is the same model that powers our creator application Coqui Studio as well as the Coqui API. In production we apply modifications to make low-latency streaming possible.
-
-
-
-There are 16 languages.
-
-
-
-
-
-
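For reference, here is a condensed, non-streaming sketch of the same pipeline, mirroring the commented-out "Direct mode" block above. It is a sketch only: it assumes the checkpoint layout the script downloads, a local `reference.wav` to clone, and that `model.inference` accepts these four arguments (the unused `diffusion_conditioning` is dropped); it also skips DeepSpeed and CUDA for brevity.

```python
# Minimal, non-streaming XTTS synthesis sketch (assumptions noted in the text above)
import os
import torch
import torchaudio
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts
from TTS.utils.generic_utils import get_user_data_dir
from TTS.utils.manage import ModelManager

model_name = "tts_models/multilingual/multi-dataset/xtts_v2"
ModelManager().download_model(model_name)  # no-op if already cached
model_path = os.path.join(get_user_data_dir("tts"), model_name.replace("/", "--"))

config = XttsConfig()
config.load_json(os.path.join(model_path, "config.json"))
model = Xtts.init_from_config(config)
model.load_checkpoint(
    config,
    checkpoint_path=os.path.join(model_path, "model.pth"),
    vocab_path=os.path.join(model_path, "vocab.json"),
    eval=True,
)

# Conditioning latents from the reference voice (a local file, assumed to exist)
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
    audio_path="reference.wav", gpt_cond_len=30, max_ref_length=60
)

# Direct mode: one call instead of inference_stream()
out = model.inference("Hello world.", "en", gpt_cond_latent, speaker_embedding)
torchaudio.save("output.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000)
```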
diff --git a/spaces/diacanFperku/AutoGPT/Non Traditional Machining Processes Pk Mishra Pdf 225.md b/spaces/diacanFperku/AutoGPT/Non Traditional Machining Processes Pk Mishra Pdf 225.md
deleted file mode 100644
index 759d2fbd4bd7a323828806311021959bfddbd3f0..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Non Traditional Machining Processes Pk Mishra Pdf 225.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
Non Traditional Machining Processes Pk Mishra Pdf 225
-
-
-Machining process economics of plastics: new process steps reduce the manufacturing time of a plastic product from weeks to days. Another reason is the short setup time, which allows more flexibility and reduces start/stop times (see 8.3.1, Plastics supply chain). Most plastics can be machined into a wide range of finished products.
-
-For example, nylon can be produced from a wide range of synthetic fibers and plastics, including both rigid and flexible thermoplastics. Plastic Products - Plastic Products - (Plastics) Although plastics have traditionally been associated with such products as bottles, containers, and utensils, the high resistance to chemicals and weathering, low weight, and ease of manufacture has allowed many other items to be made from plastic.
-
-Bath and shower products such as shower trays and mats, and kitchenware such as bowls and containers can be made from polypropylene and polyethylene, which are both rigid plastics.
-
-Plastics can also be used to make soft and tough products. Factors in machining economics include machinability, machining time, manufacturing time, factory life cycle time, setup time, cost per part (production), machining cost, machining time per part (production), cost of plastic (manufacturing), cost of machinery (production), and cost of part (production).
-
-
-It is the fast and cost effective manufacturing of parts that make this type of plastic machining an attractive option.
-
-As plastics can be thermoformed into virtually any shape and used for a large range of products, there is a large market for these products.
-
-Machining has traditionally been used to create a range of finished products. Alternative manufacturing processes include machining, injection moulding, laser cutting and 3D printing.
-
-Most plastics can be machined into a wide range of finished products. The manufacturing process of plastic products is very fast. It can be machined directly from a large stock and used as structural components or finished products. The machining process of plastic parts is fast and efficient.
-
-A wide variety of plastics can be machined, for example, HDPE can be machined into many shapes such as bottles, which is suitable for numerous applications.
-
-Compared to other materials such as stainless steel, plastic has a lower manufacturing cost and faster production. It can be moulded
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Omerta City Of Gangsters 1.02 Trainerl.md b/spaces/diacanFperku/AutoGPT/Omerta City Of Gangsters 1.02 Trainerl.md
deleted file mode 100644
index fd46041775aada8c299857d728a920e46ebf475d..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Omerta City Of Gangsters 1.02 Trainerl.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-Error On The Game Artwork & Sounds - We are delighted to announce that now Omerta: City of Gangsters v1.0 (v1.02) [MULTI7] Fixed Files Error On The Game Artwork & Sounds are available for PC Windows
-
-Omerta: City of Gangsters v1.0 (v1.02) [MULTI7] Fixed Files Error On The Game Artwork & Sounds with Crack you can install and play. Also, please leave your remarks and thoughts, we are all ears. Our game Omerta: City of Gangsters v1.0 (v1.02) [MULTI7] Fixed Files Error On The Game Artwork & Sounds is played online multiplayer and allow you to play the game Omerta: City of Gangsters v1.0 (v1.02) [MULTI7] Fixed Files Error On The Game Artwork & Sounds with crack and it's so easy and fast to install.
-
-Omerta: City of Gangsters v1.0 (v1.02) [MULTI7] Fixed Files Error On The Game Artwork & Sounds Here is some information about Omerta: City of Gangsters v1.0 (v1.02) [MULTI7] Fixed Files Error On The Game Artwork & Sounds:
-
-Omerta: City of Gangsters v1.0 (v1.02) [MULTI7] Fixed Files Error On The Game Artwork & Sounds has been downloaded more than 6660 times from our fast server and rated 4.9 out of 5.
-
-You can find Omerta: City of Gangsters v1.0 (v1.02) [MULTI7] Fixed Files Error On The Game Artwork & Sounds in the following software categories: Games.
-** Software or, alternatively, in accordance with the terms contained in 4fefd39f24
-
-
-
diff --git a/spaces/digitalxingtong/Azusa-Bert-VITS2/text/english_bert_mock.py b/spaces/digitalxingtong/Azusa-Bert-VITS2/text/english_bert_mock.py
deleted file mode 100644
index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Azusa-Bert-VITS2/text/english_bert_mock.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import torch
-
-
-def get_bert_feature(norm_text, word2ph):
- return torch.zeros(1024, sum(word2ph))
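The mock simply returns a zero tensor shaped like a real BERT feature matrix: 1024 feature dimensions by `sum(word2ph)` phoneme slots. A tiny illustration (the import path and `word2ph` values are assumptions, not taken from the repository):

```python
# Assumed module path based on the file location text/english_bert_mock.py
from text.english_bert_mock import get_bert_feature

word2ph = [2, 1, 3]                      # phoneme count per word (illustrative values)
features = get_bert_feature("hello world", word2ph)
print(features.shape)                    # torch.Size([1024, 6]) == (1024, sum(word2ph))
```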
diff --git a/spaces/digitalxingtong/Xingtong-All-in-One/app.py b/spaces/digitalxingtong/Xingtong-All-in-One/app.py
deleted file mode 100644
index 2b8b5ca0caadfb72734025170c82e68b5e886ae3..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Xingtong-All-in-One/app.py
+++ /dev/null
@@ -1,195 +0,0 @@
-import streamlit as st
-
-def main():
- st.markdown("""
-
- )
-}
diff --git a/spaces/ennet/ChatDev/camel/agents/chat_agent.py b/spaces/ennet/ChatDev/camel/agents/chat_agent.py
deleted file mode 100644
index 0bb2989ac22be09b5226d836996afde571e999d1..0000000000000000000000000000000000000000
--- a/spaces/ennet/ChatDev/camel/agents/chat_agent.py
+++ /dev/null
@@ -1,229 +0,0 @@
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-# Licensed under the Apache License, Version 2.0 (the “License”);
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an “AS IS” BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-from dataclasses import dataclass
-from typing import Any, Dict, List, Optional
-
-from tenacity import retry
-from tenacity.stop import stop_after_attempt
-from tenacity.wait import wait_exponential
-
-from camel.agents import BaseAgent
-from camel.configs import ChatGPTConfig
-from camel.messages import ChatMessage, MessageType, SystemMessage
-from camel.model_backend import ModelBackend, ModelFactory
-from camel.typing import ModelType, RoleType
-from camel.utils import (
- get_model_token_limit,
- num_tokens_from_messages,
- openai_api_key_required,
-)
-
-
-@dataclass(frozen=True)
-class ChatAgentResponse:
- r"""Response of a ChatAgent.
-
- Attributes:
- msgs (List[ChatMessage]): A list of zero, one or several messages.
- If the list is empty, there is some error in message generation.
- If the list has one message, this is normal mode.
- If the list has several messages, this is the critic mode.
- terminated (bool): A boolean indicating whether the agent decided
- to terminate the chat session.
- info (Dict[str, Any]): Extra information about the chat message.
- """
- msgs: List[ChatMessage]
- terminated: bool
- info: Dict[str, Any]
-
- @property
- def msg(self):
- if self.terminated:
- raise RuntimeError("error in ChatAgentResponse, info:{}".format(str(self.info)))
- if len(self.msgs) > 1:
- raise RuntimeError("Property msg is only available for a single message in msgs")
- elif len(self.msgs) == 0:
- if len(self.info) > 0:
- raise RuntimeError("Empty msgs in ChatAgentResponse, info:{}".format(str(self.info)))
- else:
- # raise RuntimeError("Known issue that msgs is empty and there is no error info, to be fix")
- return None
- return self.msgs[0]
-
-
-class ChatAgent(BaseAgent):
- r"""Class for managing conversations of CAMEL Chat Agents.
-
- Args:
- system_message (SystemMessage): The system message for the chat agent.
- model (ModelType, optional): The LLM model to use for generating
- responses. (default :obj:`ModelType.GPT_3_5_TURBO`)
- model_config (Any, optional): Configuration options for the LLM model.
- (default: :obj:`None`)
- message_window_size (int, optional): The maximum number of previous
- messages to include in the context window. If `None`, no windowing
- is performed. (default: :obj:`None`)
- """
-
- def __init__(
- self,
- system_message: SystemMessage,
- model: Optional[ModelType] = None,
- model_config: Optional[Any] = None,
- message_window_size: Optional[int] = None,
- ) -> None:
-
- self.system_message: SystemMessage = system_message
- self.role_name: str = system_message.role_name
- self.role_type: RoleType = system_message.role_type
- self.model: ModelType = (model if model is not None else ModelType.GPT_3_5_TURBO)
- self.model_config: ChatGPTConfig = model_config or ChatGPTConfig()
- self.model_token_limit: int = get_model_token_limit(self.model)
- self.message_window_size: Optional[int] = message_window_size
- self.model_backend: ModelBackend = ModelFactory.create(self.model, self.model_config.__dict__)
- self.terminated: bool = False
- self.info: bool = False
- self.init_messages()
-
- def reset(self) -> List[MessageType]:
- r"""Resets the :obj:`ChatAgent` to its initial state and returns the
- stored messages.
-
- Returns:
- List[MessageType]: The stored messages.
- """
- self.terminated = False
- self.init_messages()
- return self.stored_messages
-
- def get_info(
- self,
- id: Optional[str],
- usage: Optional[Dict[str, int]],
- termination_reasons: List[str],
- num_tokens: int,
- ) -> Dict[str, Any]:
- r"""Returns a dictionary containing information about the chat session.
-
- Args:
- id (str, optional): The ID of the chat session.
- usage (Dict[str, int], optional): Information about the usage of
- the LLM model.
- termination_reasons (List[str]): The reasons for the termination of
- the chat session.
- num_tokens (int): The number of tokens used in the chat session.
-
- Returns:
- Dict[str, Any]: The chat session information.
- """
- return {
- "id": id,
- "usage": usage,
- "termination_reasons": termination_reasons,
- "num_tokens": num_tokens,
- }
-
- def init_messages(self) -> None:
- r"""Initializes the stored messages list with the initial system
- message.
- """
- self.stored_messages: List[MessageType] = [self.system_message]
-
- def update_messages(self, message: ChatMessage) -> List[MessageType]:
- r"""Updates the stored messages list with a new message.
-
- Args:
- message (ChatMessage): The new message to add to the stored
- messages.
-
- Returns:
- List[ChatMessage]: The updated stored messages.
- """
- self.stored_messages.append(message)
- return self.stored_messages
-
- @retry(wait=wait_exponential(min=5, max=60), stop=stop_after_attempt(5))
- @openai_api_key_required
- def step(
- self,
- input_message: ChatMessage,
- ) -> ChatAgentResponse:
- r"""Performs a single step in the chat session by generating a response
- to the input message.
-
- Args:
- input_message (ChatMessage): The input message to the agent.
-
- Returns:
- ChatAgentResponse: A struct
- containing the output messages, a boolean indicating whether
- the chat session has terminated, and information about the chat
- session.
- """
- messages = self.update_messages(input_message)
- if self.message_window_size is not None and len(
- messages) > self.message_window_size:
- messages = [self.system_message
- ] + messages[-self.message_window_size:]
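- # keep the system message plus only the most recent message_window_size messages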
- openai_messages = [message.to_openai_message() for message in messages]
- num_tokens = num_tokens_from_messages(openai_messages, self.model)
-
- # for openai_message in openai_messages:
- # # print("{}\t{}".format(openai_message.role, openai_message.content))
- # print("{}\t{}\t{}".format(openai_message["role"], hash(openai_message["content"]), openai_message["content"][:60].replace("\n", "")))
- # print()
-
- output_messages: Optional[List[ChatMessage]]
- info: Dict[str, Any]
-
- if num_tokens < self.model_token_limit:
- response = self.model_backend.run(messages=openai_messages)
- if not isinstance(response, dict):
- raise RuntimeError("OpenAI returned unexpected struct")
- output_messages = [
- ChatMessage(role_name=self.role_name, role_type=self.role_type,
- meta_dict=dict(), **dict(choice["message"]))
- for choice in response["choices"]
- ]
- info = self.get_info(
- response["id"],
- response["usage"],
- [str(choice["finish_reason"]) for choice in response["choices"]],
- num_tokens,
- )
-
- # TODO strict check, only in the beginning of the line
- # if "" in output_messages[0].content:
- if output_messages[0].content.split("\n")[-1].startswith(""):
- self.info = True
- else:
- self.terminated = True
- output_messages = []
-
- info = self.get_info(
- None,
- None,
- ["max_tokens_exceeded_by_camel"],
- num_tokens,
- )
-
- return ChatAgentResponse(output_messages, self.terminated, info)
-
- def __repr__(self) -> str:
- r"""Returns a string representation of the :obj:`ChatAgent`.
-
- Returns:
- str: The string representation of the :obj:`ChatAgent`.
- """
- return f"ChatAgent({self.role_name}, {self.role_type}, {self.model})"
diff --git a/spaces/eson/tokenizer-arena/vocab/_alpaca_7b/README.md b/spaces/eson/tokenizer-arena/vocab/_alpaca_7b/README.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/eson/tokenizer-arena/vocab/bloom/README.md b/spaces/eson/tokenizer-arena/vocab/bloom/README.md
deleted file mode 100644
index 2dd5d6afadedd6dc4bbbed7d4c0bf69cd36ad650..0000000000000000000000000000000000000000
--- a/spaces/eson/tokenizer-arena/vocab/bloom/README.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
-
-
-Vocabulary size 250680, from https://huggingface.co/bigscience/bloom#preprocessing
-"vocab_size": 250880
-
-
-## OOV
-
-有些空格没编码进去,详见`test_oov.py`
-
-## Chinese vocabulary
-
-How many token ids does a single Chinese character map to?
-
-
-## Pre-tokenizer and post-processor config
-
-```
- "pre_tokenizer": {
- "type": "Sequence",
- "pretokenizers": [
- {
- "type": "Split",
- "pattern": {
- "Regex": " ?[^(\\s|[.,!?…。,、।۔،])]+"
- },
- "behavior": "Isolated",
- "invert": false
- },
- {
- "type": "ByteLevel",
- "add_prefix_space": false,
- "trim_offsets": true,
- "use_regex": false
- }
- ]
- },
- "post_processor": {
- "type": "ByteLevel",
- "add_prefix_space": true,
- "trim_offsets": false,
- "use_regex": false
-
- },
-```
\ No newline at end of file
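To check the two questions above (unencoded whitespace and ids per Chinese character), a quick probe of the tokenizer is enough. This is a sketch and assumes the `bigscience/bloom` checkpoint (or a cached copy) is reachable via `transformers`:

```python
# Probe the BLOOM tokenizer for the whitespace and Chinese-character questions above
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom")
print(tok.vocab_size)                        # compare with the 250680 / 250880 figures above

# How many ids does a single Chinese character map to?
for ch in ["中", "你", "龍"]:
    ids = tok.encode(ch)
    print(ch, ids, tok.convert_ids_to_tokens(ids))

# Whitespace round-trip: the Split pre-tokenizer above works on whitespace,
# so runs of spaces may not survive encode/decode exactly (see test_oov.py)
s = "a  b   c"
print(repr(tok.decode(tok.encode(s))))
```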
diff --git a/spaces/exbert-project/exbert/client/src/webpack.config.js b/spaces/exbert-project/exbert/client/src/webpack.config.js
deleted file mode 100644
index 6d7543737b4b16d398ce9612fa5790a1aa6ae2c5..0000000000000000000000000000000000000000
--- a/spaces/exbert-project/exbert/client/src/webpack.config.js
+++ /dev/null
@@ -1,141 +0,0 @@
-const path = require('path');
-const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');
-const MiniCssExtractPlugin = require("mini-css-extract-plugin");
-const CopyWebpackPlugin = require('copy-webpack-plugin');
-
-module.exports = {
- entry: {
- main: './ts/main.ts',
- // captioning: './ts/captioning.ts'
- },
- module: {
- rules: [
- {
- test: /\.tsx?$/,
- exclude: [
- /node_modules/,
- /ImageContentBox\.ts/,
- /test\.ts/
- // path.resolve(__dirname,'ts/vis/ImageContentBox.ts')
- ],
- use: [{
- loader: 'cache-loader'
- },
- {
- loader: 'thread-loader',
- options: {
- // there should be 1 cpu for the fork-ts-checker-webpack-plugin
- workers: require('os').cpus().length - 1,
- },
- },
- {
- loader: 'ts-loader',
- options: {
- happyPackMode: true // IMPORTANT! use happyPackMode mode to speed-up compilation and reduce errors reported to webpack
- }
- }
- ].slice(process.env.CI ? 2 : 0) // no optimizations for CIs
- },
- {
- test: /\.s?css$/,
- use: [
- {
- loader: MiniCssExtractPlugin.loader,
- options: {
- // you can specify a publicPath here
- // by default it use publicPath in webpackOptions.output
- // publicPath: '../'
-
- }
- },
- {
- loader: 'css-loader',
- options: {
- minimize: true,
- sourceMap: true
- }
- },
- {
- loader: 'sass-loader',
- options: {
- sourceMap: true
- }
- }
- ]
-
- },
- {
- test: /\.(png|jpg)$/,
- loader: 'url-loader',
- options: {
- limit: 20000 //inline <= 10kb
- }
- },
- {
- test: /\.woff(2)?(\?v=[0-9]\.[0-9]\.[0-9])?$/,
- loader: 'url-loader',
- options: {
- limit: 20000, //inline <= 20kb
- mimetype: 'application/font-woff'
- }
- },
- {
- test: /\.svg(2)?(\?v=[0-9]\.[0-9]\.[0-9])?$/,
- loader: 'url-loader',
- options: {
- limit: 10000, //inline <= 10kb
- mimetype: 'image/svg+xml'
- }
- },
- {
- test: /\.(ttf|eot)(\?v=[0-9]\.[0-9]\.[0-9])?$/,
- loader: 'file-loader'
- }
- ]
- },
- resolve: {
- extensions: ['.ts', '.js']
- },
- plugins: [
- new MiniCssExtractPlugin({
- // Options similar to the same options in webpackOptions.output
- // both options are optional
- // filename: "style.css",
- // chunkFilename: "chunk.css"
- }),
- new ForkTsCheckerWebpackPlugin({
- checkSyntacticErrors: true
- }),
- new CopyWebpackPlugin([
- {from: 'img', to: 'img'},
- {from: "demo", to:"demo"}
- ]),
- ],
- optimization: {
- splitChunks: {
- cacheGroups: {
- vendor: {
- test: /node_modules/,
- chunks: "initial",
- name: "vendor",
- priority: 10,
- enforce: true
- }
- }
- }
- },
- output: {
- filename: '[name].js',
- path: path.resolve(__dirname, '../dist/')
- },
- devServer: {
- port: 8090,
- proxy: {
- '/api/*': {
- target: 'http://localhost:8080',
- secure: false,
- ws: true
- }
- }
- }
-};
diff --git a/spaces/falterWliame/Face_Mask_Detection/Chamararanawakasongalbumfreedownload.md b/spaces/falterWliame/Face_Mask_Detection/Chamararanawakasongalbumfreedownload.md
deleted file mode 100644
index 0380dbca4a7b9033cc1efeb70982f687032d66b9..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Chamararanawakasongalbumfreedownload.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-