If you are looking for a powerful and versatile structural analysis and design software, you might want to consider Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33. This software is widely used by engineers and architects around the world for designing various types of structures, such as buildings, bridges, towers, stadiums, and more.
-
However, if you want to use this software without paying for a license, you might be tempted to download a cracked version from the internet. But is this a good idea? What are the risks and benefits of using a cracked version of Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33? In this article, we will answer these questions and provide you with a comprehensive guide on how to install and activate/crack Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33.
What is Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33?
-
Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33 is the latest version of the STAAD.Pro software, which was released in June 2015 by Bentley Systems, Inc. This version includes several improvements and enhancements over the previous versions, such as:
-
-
New and updated design codes for steel, concrete, aluminum, and timber structures.
-
New features for seismic analysis and design, such as rigid diaphragm, IS 1893 response spectrum analysis, and Eurocode EN 1993-1-1.
-
New features for dynamic analysis and design, such as modal damping ratio, harmonic load cases, and time history analysis.
-
New features for finite element analysis and design, such as plate buckling analysis, ASME NF 3000 code, and advanced meshing options.
-
New features for interoperability and integration with other Bentley products, such as RAM Connection Mode, Advanced Slab Design Mode, and Piping Mode.
-
New features for documentation and printing, such as enhanced report generation, PDF export, and watermarking.
-
-
Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33 is compatible with Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, and Windows Server operating systems. It requires a minimum of 512 MB of RAM and 500 MB of disk space.
-
What are the advantages of using a cracked version of Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33?
-
The main advantage of using a cracked version of Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33 is that you can use it for free without paying for a license or subscription fee. This can save you a lot of money in the long run, especially if you are a student or a freelancer who needs to use the software occasionally or for personal projects.
-
Another advantage of using a cracked version of Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33 is that you can access all the features and functions of the software without any limitations or restrictions. You can use the software for any type of structure and any type of analysis and design without worrying about exceeding the limits or violating the terms of use.
-
What are the disadvantages of using a cracked version of Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33?
-
The main disadvantage of using a cracked version of Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33 is that you are exposing yourself to various risks and problems that can affect your computer system and your work quality.
-
Some of the risks and problems that you might encounter when using a cracked version of Bentley STAAD.Pro V8i (SELECTSeries 6) 20.07.11.33 are:
-
-
Viruses and malware: The crack files that you download from the internet might contain malicious code that can infect your computer with viruses or malware, putting your system and your data at risk.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/62 117 68 199 8055 Viewerframe Mode Motion.md b/spaces/1gistliPinn/ChatGPT4/Examples/62 117 68 199 8055 Viewerframe Mode Motion.md
deleted file mode 100644
index 01587baa849273ec1982fb24c99673b942353c69..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/62 117 68 199 8055 Viewerframe Mode Motion.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
ViewerFrame displays a sequence of images stored on the DIA (digital image archive) server in the DIA format. DIA is a free online archive of digital images collected by museums and universities around the world, and it provides a standard way to share, store, and organize digital images; DIA is a registered trademark. You must have a DIA account to upload images, but anyone can browse the archive. We hope this simple file viewer will become a useful resource for DIA images and their metadata. DIA currently has over 1 million images available, and we expect that number to grow. To run the viewer, download viewerframe_v1.0.tar.gz and unpack it somewhere on your file system. If the program complains about "file not found", it may not work on your computer; try a different one.
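If you prefer to unpack the archive from a script rather than a file manager, a minimal Python sketch is shown below. It only illustrates the unpacking step described above and assumes viewerframe_v1.0.tar.gz has already been downloaded into the current directory; the destination folder name is just an example.

```python
import tarfile
from pathlib import Path

archive = Path("viewerframe_v1.0.tar.gz")  # archive name taken from the paragraph above
target = Path("viewerframe")               # example destination folder

# Open the gzip-compressed tarball and extract everything into the target folder.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(target)

print(f"Unpacked to {target.resolve()}")
```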
The most common way to upload DIA images is to drag and drop them into the ViewerFrame window. After doing so, you can use the "Save" menu to save a DIA image to your local file system, the "Load" menu to load a previously saved image, and the "Open in browser" menu to open the image in your default browser.
-
You can use a keyboard shortcut to fire up ViewerFrame, and several other shortcuts are available while it is running. Most keys control the movement of the viewer frame, but certain keys affect the browser window directly:
-
Sets the motion attribute for the viewer to true. When this is set, the viewer is rendered with both the motion and shift attributes enabled, which can be useful for rendering a motion or shift animation while hiding the other animation.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cm Browser For Pc Free WORK Download Softonic.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cm Browser For Pc Free WORK Download Softonic.md
deleted file mode 100644
index 1b6e369ac1a3e65c64f2a3eebd52a783671b37f9..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Cm Browser For Pc Free WORK Download Softonic.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cara Download Game Edukasi Anak Lengkap untuk PC Aplikasi dan Tips.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cara Download Game Edukasi Anak Lengkap untuk PC Aplikasi dan Tips.md
deleted file mode 100644
index bbd72aa20460ab81797ad9d356e9a967fa3fc160..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cara Download Game Edukasi Anak Lengkap untuk PC Aplikasi dan Tips.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
Download Complete Educational Games for Kids on PC
-
Are you looking for educational games for kids that can be played on a PC? You are in the right place. In this article, we discuss what educational games for kids are, how to download and install them on a PC, and recommend engaging and useful educational games for PC. Read the full review below.
Educational games for kids are games designed to give children a fun and effective learning experience. They usually contain elements such as puzzles, quizzes, simulations, stories, or exploration that stimulate children's cognitive, social, emotional, and motor development. They can also help children build skills such as critical thinking, problem solving, creativity, collaboration, and communication.
-
Benefits of Educational Games for Kids
-
Playing educational games is not just entertainment; it also has many benefits for children's growth and development. Here are some benefits of educational games that have been studied by experts:
-
-
Increasing learning motivation. Educational games can make children more interested and enthusiastic about learning material presented in game form. They also provide feedback and rewards that can boost children's confidence and satisfaction.
-
Improving concentration and attention. Educational games challenge children to stay focused and alert while playing, and help them manage their time and resources efficiently.
-
Improving memory and understanding. Educational games help children review and reinforce the information they have learned, connect different concepts, and apply them in real-world contexts.
-
Boosting creativity and imagination. Educational games give children room to experiment, try new things, and express themselves, and they can inspire children to create their own artwork or stories.
-
Improving hand-eye coordination. Educational games train children's fine motor skills through using a mouse, keyboard, or controller, and train spatial awareness and orientation through navigating virtual environments.
-
Improving social and emotional skills. Educational games can help children learn about different values, norms, and cultures, and help them interact, cooperate, and compete with others in a healthy, sporting way.
-
-
How to Download and Install Educational Games for Kids on PC
-
There are several ways to download and install educational games for kids on a PC, depending on the source and format of the game you choose. Here are some commonly used methods:
-
Download from the Microsoft Store
-
The Microsoft Store is Microsoft's official app store and offers a wide range of educational games for kids on PCs running Windows 10. You can open the Microsoft Store from the icon on the taskbar or in the Start menu. To download and install an educational game from the Microsoft Store, follow these steps:
-
-
Open the Microsoft Store and type the name of the educational game you want into the search box.
-
Select the game you want from the search results and click the Get or Buy button. If the game is free, you do not need to pay anything; if it is paid, you will need to enter your payment information first.
-
Wait for the download and installation to finish. You can check the status under Downloads and updates.
-
Once it is done, you can open the game from the Start menu or from the Microsoft Store home screen.
-
-
Download from the Developer's Official Website
-
If you cannot find the educational game you want in the Microsoft Store, you can look for the developer's official website on the internet. Developers' official websites usually provide download links for the games they make. To download and install an educational game from a developer's official website, follow these steps:
-
-
Open your web browser and type the name of the game you want into a search engine such as Google or Bing.
-
Find the developer's official website in the search results. Make sure the site is safe and trustworthy by checking for the padlock symbol or https at the start of the address.
-
Open the developer's official website and look for the download link for the game. Download links are usually found under Download, Get, Buy, or similar.
-
Click the download link and choose where to save the game file on your PC. Wait for the download to finish.
-
Open the downloaded file and follow the instructions to install the game on your PC. You may need to accept the terms of service, choose a language, or set other preferences.
-
Once it is done, you can open the game from the shortcut on your desktop or from the folder you chose.
-
-
Download from a Digital Distribution Platform
-
Digital distribution platforms are online services that offer a wide range of PC games at affordable prices or even for free. Popular examples include Steam, Epic Games Store, GOG.com, and Origin. To download and install an educational game from a digital distribution platform, follow these steps:
-
-
-
Open the website of the digital distribution platform you have chosen and create an account if you do not already have one.
-
Download and install the platform's client application on your PC. The client application is the program that lets you access, manage, and play the games you buy or claim from that platform.
-
Open the client application on your PC and sign in with your account.
Find the educational game you want in the client application. You can use the search feature, categories, or recommendations provided.
-
Select the game you want and click the Add to Cart or Get button. If the game is free, you do not need to pay anything; if it is paid, you will need to enter your payment information first.
-
Wait for the download and installation to finish. You can check the status in the Library section.
-
Once it is done, you can launch the game from the platform's client application.
-
-
Recommended Educational Games for Kids on PC
-
Now that you know how to download and install educational games for kids on a PC, you may be wondering which games are suitable for children. Here are some recommendations we selected based on their ratings, reviews, and popularity:
-
ABC Mouse
-
ABC Mouse is an educational game designed for children aged 2-8. It offers more than 10,000 learning activities covering subjects such as reading, math, science, art, and music. It also has adjustable difficulty levels, progress reports, and virtual rewards that can increase children's motivation to learn. You can download and install ABC Mouse from the Microsoft Store for free.
-
Coloring Book
-
Coloring Book is an educational game designed for children aged 3-5. It provides more than 100 pictures that children can color using tools such as pencils, brushes, markers, and stickers. It also features sounds and music that stimulate children's hearing. You can download and install Coloring Book from the Microsoft Store for free.
-
Educational Games for Kids
-
Educational Games for Kids is an educational game designed for children aged 4-10. It offers more than 50 mini-games that teach skills such as recognizing letters, numbers, colors, shapes, animals, fruits, vegetables, professions, countries, flags, and more. It also has attractive and playful graphics and animations. You can download and install Educational Games for Kids from the developer's official website for free.
-
World of Zoo
-
World of Zoo is an educational game designed for children aged 6-12. It lets children create, manage, and explore their dream zoo with more than 90 different kinds of animals. It also teaches children about the behavior, needs, and interesting facts of those animals. You can download and install World of Zoo from the digital distribution platform Steam for Rp 69,999.
-
Conclusion
-
Educational games for kids are games that can give children a fun and effective learning experience. They have many benefits for children's development, such as improving learning motivation, concentration, memory, creativity, hand-eye coordination, and social and emotional skills. To download and install educational games for kids on a PC, you can use the Microsoft Store, the developer's official website, or a digital distribution platform. Our recommended educational games for PC are ABC Mouse, Coloring Book, Educational Games for Kids, and World of Zoo. We hope this article is useful for anyone who wants to download a complete set of educational games for PC.
-
FAQ
-
Here are some frequently asked questions about educational games for kids on PC:
-
-
Are educational games safe for children to play?
-Educational games are generally safe for children to play, as long as you choose games that match your child's age, interests, and abilities. You should also pay attention to the ratings, reviews, and reputation of the games you choose. In addition, supervise and guide children while they play, and limit their playing time so it does not become excessive.
-
Can educational games replace formal schooling?
-Educational games cannot replace formal schooling, but they can support it. They can be an engaging, interactive way to introduce or review material learned at school, and a source of new information and inspiration for children. However, they cannot replace the role of teachers, parents, or peers in providing the explanations, guidance, and feedback children need.
-
Can educational games be played offline?
-It depends on the type and source of the game you choose. Some educational games require an internet connection to be accessed, downloaded, or played, while others can be played offline after being downloaded and installed on your PC. Check the system requirements and the availability of an offline mode before downloading or playing a game.
-
Can educational games be played in multiplayer?
-It depends on the type and features of the game you choose. Some educational games are single-player only, while others support multiplayer for two or more players at the same time. Multiplayer modes can be cooperative, where players work together toward a shared goal, or competitive, where players race for the highest score. Check the availability and type of multiplayer mode before playing.
-
Are there free educational games for PC?
-Yes, there are many free educational games for PC that you can download and install from sources such as the Microsoft Store, developers' official websites, or digital distribution platforms. Examples include ABC Mouse, Coloring Book, Educational Games for Kids, and others. However, watch out for ads, in-app purchases, or viruses that free games may contain.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Parking Master Multiplayer 2 and Play with Your Friends on Android.md b/spaces/1phancelerku/anime-remove-background/Download Parking Master Multiplayer 2 and Play with Your Friends on Android.md
deleted file mode 100644
index 7c9558b8eb76ff70621b1932375959d4cffbaa46..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Parking Master Multiplayer 2 and Play with Your Friends on Android.md
+++ /dev/null
@@ -1,178 +0,0 @@
-
-
Parking Master Multiplayer 2: A Review of the Android Game
-
If you are looking for a car parking game that offers more than just parking, you might want to check out Parking Master Multiplayer 2. This is not an ordinary parking game: it offers multiplayer, an open world, next-gen graphics, and a realistic car parking experience. You can choose your character, get your car, and start playing with your friends. In this article, we will review the game and give you some tips and tricks to enjoy it more.
Parking Master Multiplayer 2 is an Android game developed by Spektra Games. It is the sequel to the popular game Parking Master: Multiplayer, which was released in 2020. The game has been improved based on player feedback and provides a realistic driving experience along with parking, racing, drifting, role playing, and more.
-
Why should you play it?
-
There are many reasons why you should play Parking Master Multiplayer 2. Here are some of them:
-
-
It is free to download and play.
-
It has a huge map with different locations such as cities, highways, mountains and more.
-
It has over 120 cars and vehicles that you can drive, including bus, truck, ambulance, fire truck, police car, taxi, school bus etc.
-
It has lots of customization options for your car, such as engine, brakes, gearbox, exhaust and drivetrain.
-
It has more than 150 levels of parking missions that will challenge your skills.
-
It has a multiplayer mode where you can play with your friends in the open world.
-
It has a singleplayer mode where you can play events such as time trial, drift and parkour.
-
It has next-gen graphics and sound that will make you feel like you are in a real car.
-
-
Features of the game
-
Multiplayer mode
-
How to play with your friends
-
One of the best features of Parking Master Multiplayer 2 is the multiplayer mode. You can play with your friends in the open world and have fun together. To play with your friends, you need to do the following:
-
-
Create an account or log in with your existing account.
-
Select the multiplayer mode from the main menu.
-
Select a server from the list or create your own server.
-
Select a character and a car from your garage.
-
Invite your friends to join your server or join their server.
-
Enjoy playing with your friends in the open world.
-
-
What to do in the open world
-
The open world of Parking Master Multiplayer 2 is full of possibilities. You can do whatever you want in the open world. Here are some of the things you can do:
-
-
Race with other players and show them who is the boss.
-
Drift in the streets and make some smoke.
-
Role play with other players using different characters, vehicles and missions.
-
Explore new areas and find secret chests with rewards.
-
Buy and sell cars in the multiplayer mode.
-
Chat with other players and make new friends.
-
Have fun and enjoy the game.
-
-
Singleplayer mode
-
How to complete parking missions
-
If you want to test your parking skills, you can play the singleplayer mode. In this mode, you have to complete parking missions that will challenge your abilities. To complete parking missions, you need to do the following:
Follow the instructions and park your car in the designated spot.
-
Avoid hitting obstacles and other cars.
-
Earn stars and coins for completing the mission.
-
-
What are the events and rewards
-
In addition to parking missions, you can also play events in the singleplayer mode. Events are special challenges that will give you more fun and rewards. There are three types of events: time trial, drift and parkour. To play events, you need to do the following:
-
-
Select the events tab from the singleplayer mode.
-
Select an event from the list.
-
Select a car from your garage.
-
Complete the event as fast as possible or with as much drift as possible or with as much parkour as possible.
-
Earn stars and coins for completing the event.
-
-
Some of the rewards you can get from playing events are:
-
-
New cars and vehicles.
-
New customization options for your car.
-
New characters and outfits.
-
New locations and maps.
-
-
Graphics and sound
-
How realistic is the game
-
Parking Master Multiplayer 2 is one of the most realistic car parking games on Android. The game has next-gen graphics that will make you feel like you are in a real car. The game has realistic physics that will affect how your car behaves on different surfaces and situations. The game has realistic damage that will show how your car gets scratched, dented or broken when you hit something. The game has realistic weather that will change how your car performs in rain, snow or fog.
-
How immersive is the game
-
Parking Master Multiplayer 2 is also one of the most immersive car parking games on Android. The game has amazing sound that will make you hear every engine roar, tire screech and horn honk. The game has dynamic camera angles that will let you see your car from different perspectives. The game has multiple control options that will let you choose how you want to drive your car. You can use tilt, buttons, steering wheel or joystick. You can also adjust the sensitivity and feedback of each control option.
-
Tips and tricks for the game
-
How to customize and upgrade your car
-
If you want to make your car look more cool and perform better, you can customize and upgrade it in Parking Master Multiplayer 2. To customize and upgrade your car, you need to do the following:
-
-
Select the garage tab from the main menu.
-
Select a car from your garage.
-
Select the customize option from the bottom menu.
-
Select a category from the top menu, such as color, wheels, spoiler etc.
-
Select an item from the list and apply it to your car.
-
Some items are free and some items cost coins or diamonds.
-
Select the upgrade option from the bottom menu.
-
Select a part from the list, such as engine, brakes, gearbox etc.
-
Select an upgrade level from 1 to 5 and apply it to your part.
-
Some upgrades are free and some upgrades cost coins or diamonds.
-
-
How to earn money and buy new cars
-
If you want to earn money and buy new cars in Parking Master Multiplayer 2, you have to play more and complete more missions and events. To earn money and buy new cars, you need to do the following:
-
-
Earn coins by completing parking missions and events in singleplayer mode or by playing with other players in multiplayer mode.
-
Earn diamonds by watching ads or by buying them with real money.
-
Select the shop tab from the main menu.
-
Select a car from the list that you want to buy.
-
Some cars are free and some cars cost coins or diamonds or both.
-
Buy the car with your coins or diamonds or both.
-
Enjoy driving your new car in the game.
-
-
How to race and drift like a pro
-
If you want to race and drift like a pro in Parking Master Multiplayer 2, you have to master the controls and the physics of the game. To race and drift like a pro, you need to do the following:
-
-
Select the best control option for you from the settings menu. You can choose between tilt, buttons, steering wheel or joystick.
-
Adjust the sensitivity and feedback of your control option to suit your preference.
-
Learn how to use the accelerator, brake, handbrake and nitro buttons effectively.
-
Learn how to steer your car smoothly and accurately.
-
Learn how to use the camera angles to see your car from different perspectives.
-
Learn how to use the drift mode to slide your car sideways and make sharp turns.
-
Learn how to use the race mode to boost your speed and overtake other cars.
-
Practice your skills in the open world or in the events.
-
-
Conclusion
-
Summary of the main points
-
Parking Master Multiplayer 2 is an impressive car parking game that offers more than just parking. It has multiplayer, an open world, next-gen graphics, and a realistic car parking experience. You can choose your character, get your car, and start playing with your friends. You can also customize and upgrade your car, earn money and buy new cars, complete parking missions and events, race and drift like a pro, and simply have fun in the game.
-
Call to action
-
If you are interested in Parking Master Multiplayer 2, you can download it for free from the Google Play Store. You can also follow the game on Facebook, Instagram and YouTube for more updates and news. You can also leave a review and rating for the game on the Play Store and share your feedback with the developers. Parking Master Multiplayer 2 is a game that you don't want to miss. Download it now and enjoy the best car parking game ever!
-
Frequently Asked Questions
-
Q: How can I play Parking Master Multiplayer 2 on PC?
-
A: You can play Parking Master Multiplayer 2 on PC by using an Android emulator such as BlueStacks or NoxPlayer. You can download the emulator from their official website and install it on your PC. Then you can download the game from the Play Store or from an APK file and run it on the emulator.
-
Q: How can I contact the developers of Parking Master Multiplayer 2?
-
A: You can contact the developers of Parking Master Multiplayer 2 by sending an email to spektragames@gmail.com or by filling out the contact form on their website. You can also follow them on social media platforms such as Facebook, Instagram and YouTube.
-
Q: How can I report a bug or a problem in Parking Master Multiplayer 2?
-
A: You can report a bug or a problem in Parking Master Multiplayer 2 by sending an email to spektragames@gmail.com or by filling out the contact form on their website. You can also leave a comment on their social media posts or on their Play Store page. Please provide as much detail as possible about the bug or problem, such as screenshots, device model, OS version etc.
-
Q: How can I get more coins and diamonds in Parking Master Multiplayer 2?
-
A: You can get more coins and diamonds in Parking Master Multiplayer 2 by completing parking missions and events, playing with other players in multiplayer mode, watching ads, buying them with real money or finding secret chests in the open world.
-
Q: How can I unlock more cars and vehicles in Parking Master Multiplayer 2?
-
A: You can unlock more cars and vehicles in Parking Master Multiplayer 2 by earning coins and diamonds, buying them with real money or completing certain levels of parking missions and events.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy GTA 5 Prologue on Your Phone APK and Cache Download Links.md b/spaces/1phancelerku/anime-remove-background/Enjoy GTA 5 Prologue on Your Phone APK and Cache Download Links.md
deleted file mode 100644
index a52f9da2fed599db3446c8ae7394197ca6c36322..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy GTA 5 Prologue on Your Phone APK and Cache Download Links.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
Download GTA 5 Prologue APK: How to Play GTA 5 on Your Mobile Device
-
GTA 5 is one of the most popular and successful video games of all time. It is an open-world action-adventure game that lets you explore the fictional city of Los Santos, based on Los Angeles, and its surrounding areas. You can play as three different characters, each with their own storylines, missions, and abilities, switch between them at any time, or play with your friends online in various modes and activities.
-
But what if you want to play GTA 5 on your mobile device? Is it possible? Well, the answer is yes, thanks to a fan-made game called GTA 5 Prologue. In this article, we will tell you everything you need to know about this game, how to download it, and what features it offers. Let's get started!
GTA 5 Prologue is not an official game by Rockstar Games, the developer of GTA 5. It is a fan-made game by R-USER Games, a team of enthusiasts who love GTA 5 and wanted to bring it to mobile devices. They built the game with Unity, a cross-platform game engine.
-
A recreation of the first mission of GTA 5
-
GTA 5 Prologue is not the full version of GTA 5. It is a recreation of the first mission of GTA 5, where you play as Michael, Trevor, and Brad as they rob a bank in North Yankton. You have to escape from the police, shoot your way through enemies, and drive to a safe location. The game follows the same storyline, dialogue, and events as the original game.
-
A free and offline game for Android devices
-
GTA 5 Prologue is a free game that you can download and play on your Android device. You don't need an internet connection to play it, as it is an offline game. You also don't need to sign up or register for anything. You just need to download the APK and cache files, install them on your device, and launch the game.
-
How to Download GTA 5 Prologue APK?
-
Step 1: Visit the official website of R-USER Games
-
The first step to download GTA 5 Prologue APK is to visit the official website of R-USER Games. You can find it at https://archive.org/details/com.rusergames.gta5prologue. This is where you can find the latest version of the game, as well as other information and updates.
-
-
Step 2: Download the APK and cache files
-
The next step is to download the APK and cache files from the website. The APK file is the application file that you need to install on your device. The cache file is the data file that contains the graphics, sound effects, and other resources of the game. You need both files to play the game properly.
-
The APK file size is about 200 MB, while the cache file size is about 1 GB. Make sure you have enough space on your device before downloading them. You can use a download manager or a browser that supports resume function to download them faster and without interruption.
-
Step 3: Install the APK and place the cache folder in SDcard/Android/obb/
-
After downloading the APK and cache files, you need to install the APK file on your device. You can do this by tapping on the file and following the instructions. You may need to enable the option to install apps from unknown sources in your device settings.
-
Next, you need to place the cache folder in the right location on your device. The cache folder is named com.rusergames.gta5prologue. You need to copy or move this folder to SDcard/Android/obb/. This is where the game will look for the data files. If you don't have this folder, you can create it manually.
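If you are copying the files from a computer instead of moving them with a file manager on the phone, a rough sketch of this same step using adb is shown below. It is only an illustration: it assumes the Android platform tools (adb) are installed on the computer, USB debugging is enabled on the phone, and the cache folder has already been extracted locally; the folder name comes from this article.

```python
import subprocess

cache_folder = "com.rusergames.gta5prologue"   # extracted cache folder on the computer
obb_dir = "/sdcard/Android/obb/"               # where the game looks for its data files

# Make sure the obb directory exists, then copy the whole cache folder to the device.
subprocess.run(["adb", "shell", "mkdir", "-p", obb_dir], check=True)
subprocess.run(["adb", "push", cache_folder, obb_dir], check=True)
```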
-
Step 4: Launch the game and enjoy
-
Now you are ready to launch the game and enjoy playing GTA 5 Prologue on your mobile device. You can find the game icon on your home screen or app drawer. Tap on it and wait for the game to load. You will see the same intro and menu as the original game. You can start playing by selecting Story Mode and choosing Prologue.
-
What are the Features of GTA 5 Prologue?
-
High-quality graphics and sound effects
-
One of the features of GTA 5 Prologue is its high-quality graphics and sound effects. The game looks very similar to the original game, with detailed textures, realistic lighting, and shadows. The game also has amazing sound effects, such as gunshots, explosions, car engines, and voices. You will feel like you are playing GTA 5 on a console or PC.
-
Smooth and realistic gameplay and controls
-
Another feature of GTA 5 Prologue is its smooth and realistic gameplay and controls. The game runs smoothly on most Android devices, without any lag or glitches. The game also has realistic physics and animations, such as ragdoll effects, bullet impacts, and car damage. The game also has intuitive and customizable controls, such as touch screen buttons, virtual joystick, gyroscope, and accelerometer. You can adjust the sensitivity, layout, and size of the controls according to your preference.
-
Multiple camera angles and perspectives
-
A third feature of GTA 5 Prologue is its multiple camera angles and perspectives. The game lets you switch between different camera angles and perspectives, such as first-person, third-person, cinematic, or free mode. You can also zoom in or out of the action, or rotate the camera around your character. This gives you more freedom and immersion in the game.
-
Compatible with most Android devices
-
A final feature of GTA 5 Prologue is its compatibility with most Android devices. The game does not require a high-end device to run smoothly. It can work on devices with at least 1 GB of RAM and Android 4.4 or higher. The game also has an option to adjust the graphics quality, such as resolution, texture quality, shadow quality, and anti-aliasing. This helps you optimize the game performance for your device.
-
Conclusion
-
GTA 5 Prologue is a fan-made game that lets you play GTA 5 on your mobile device. It is a recreation of the first mission of GTA 5, where you rob a bank in North Yankton. It is a free and offline game that you can download from the official website of R-USER Games. It has high-quality graphics and sound effects, smooth and realistic gameplay and controls, multiple camera angles and perspectives, and compatibility with most Android devices.
-
If you are a fan of GTA 5 and want to experience it on your mobile device, you should definitely try GTA 5 Prologue. It is a fun and exciting game that will keep you entertained for hours. Just follow the steps above to download it and enjoy playing it.
FAQs
Q: Is GTA 5 Prologue an official game by Rockstar Games?
A: No, GTA 5 Prologue is not an official game by Rockstar Games. It is a fan-made game by R-USER Games.
Q: Is GTA 5 Prologue safe to download and play?
A: Yes, GTA 5 Prologue is safe to download and play. It does not contain any viruses or malware.
Q: How long is GTA 5 Prologue?
A: GTA 5 Prologue is about 15 minutes long. It covers the first mission of GTA 5.
Q: Can I play GTA 5 Prologue online with my friends?
A: No, GTA 5 Prologue does not have an online mode. It is an offline game.
Q: Can I play GTA 5 Prologue on iOS devices?
A: No, GTA 5 Prologue is only available for Android devices. It is not compatible with iOS devices.
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Facebook 32 Bit How to Download and Install the Latest Version for Android.md b/spaces/1phancelerku/anime-remove-background/Facebook 32 Bit How to Download and Install the Latest Version for Android.md
deleted file mode 100644
index d2b27cbfc97fe5652a31746d99d71cdde494586d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Facebook 32 Bit How to Download and Install the Latest Version for Android.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
Facebook APK 32 Bit: What Is It and How to Download It
-
Facebook is one of the most popular social media platforms in the world, with over 2.8 billion monthly active users. However, not all devices can run the official Facebook app smoothly, especially older or low-end Android phones. That's why some users may need to download a Facebook APK 32 bit file, which is a compressed version of the app that works on devices with a 32-bit processor.
In this article, we will explain what is Facebook APK 32 bit, why you may need it, and how to download it from two different sources. By the end of this article, you will be able to enjoy Facebook on your Android device without any hassle.
-
What is Facebook APK 32 bit?
-
An APK file is an Android Package file that contains all the components of an app, such as its code, resources, and manifest. It can be used to install apps on Android devices directly, without going through the Google Play Store. A 32-bit APK is an APK built for devices with a 32-bit processor, a type of CPU that handles data in chunks of 32 bits at a time.
-
Facebook APK 32 bit is an APK file that contains a version of the Facebook app that is optimized for devices with a 32-bit processor. It has a smaller size, uses less data, and loads faster than the regular Facebook app. It also works on older Android versions that are not supported by the official app.
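If you are not sure whether your phone actually has a 32-bit processor, one quick way to check is to read its supported ABIs. The short Python sketch below is only an illustration; it assumes the Android platform tools (adb) are installed on a computer and USB debugging is enabled on the phone.

```python
import subprocess

# Read the list of ABIs the connected device supports.
abilist = subprocess.run(
    ["adb", "shell", "getprop", "ro.product.cpu.abilist"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print("Supported ABIs:", abilist)
# Entries like "arm64-v8a" or "x86_64" mean the device is 64-bit capable;
# a list containing only "armeabi-v7a" or "x86" means it is 32-bit only.
print("64-bit capable:", "arm64-v8a" in abilist or "x86_64" in abilist)
```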
-
-
Why do you need Facebook APK 32 bit?
-
You may need Facebook APK 32 bit if you have an Android device that has a 32-bit processor and cannot run the official Facebook app smoothly. Some of the reasons why you may need it are:
-
-
Your device has low storage space and cannot accommodate the large size of the official app.
-
Your device has low RAM and cannot handle the high memory usage of the official app.
-
Your device has a slow internet connection and cannot load the heavy content of the official app.
-
Your device has an old Android version that is not compatible with the latest features of the official app.
-
-
By downloading Facebook APK 32 bit, you can still access all the basic functions of Facebook, such as posting updates, liking and commenting on posts, chatting with friends, and browsing pages and groups. You can also save money by using less data and battery power.
-
How to download Facebook APK 32 bit
-
There are two ways to download Facebook APK 32 bit for your Android device. You can either download it from a third-party website like APKCombo or from the official Facebook Lite website. Here are the steps for each option:
-
Option 1: Download from APKCombo
-
APKCombo is a website that offers free downloads of various APK files for Android apps and games. You can use it to download Facebook APK 32 bit by following these steps:
Open the APKCombo website in your browser and search for Facebook. On the search results page, you will see a list of different versions of Facebook APK 32 bit, such as Facebook Lite, Facebook Messenger Lite, and Facebook for Android. Choose the one that suits your needs and preferences: if you want a lighter and faster version of Facebook, choose Facebook Lite; if you want the full-featured version, choose Facebook for Android.
-
Step 3: Choose the version and click download
-
Once you have chosen the version of Facebook APK 32 bit that you want, click on the download button next to it. This will take you to another page where you can see more details about the app, such as the size, the developer, the rating, and the description. You can also see the screenshots of the app and read the user reviews. To download the APK file, click on the green download button at the top of the page.
-
Step 4: Install the APK file on your device
-
After you have downloaded the APK file, you need to install it on your device. To do this, you need to enable the installation of apps from unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To enable this option, go to Settings > Security > Unknown Sources and toggle it on. Then, locate the APK file on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the app to be installed.
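Alternatively, if you have a computer handy, the same installation can be done over adb. The Python sketch below is only an illustration: it assumes adb is installed and USB debugging is enabled, and the APK file name is a placeholder for whatever file you downloaded from APKCombo.

```python
import subprocess

apk_path = "facebook-32bit.apk"   # placeholder name for the downloaded APK

# "adb install" sideloads the APK; -r replaces an existing installation if one is present.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
```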
-
Option 2: Download from Facebook Lite
-
Facebook Lite is an official version of Facebook that is designed for devices with low specifications and slow internet connections. It has a smaller size, uses less data, and loads faster than the regular Facebook app. It also works on older Android versions that are not supported by the official app. You can download Facebook Lite from its website by following these steps:
-
Step 1: Visit the Facebook Lite website
-
Open your browser and go to https://www.facebook.com/lite. This will take you to the official website of Facebook Lite.
-
Step 2: Click on the download button
-
On the website, you will see a blue download button that says "Get Facebook Lite". Click on it to start downloading the APK file.
-
Step 3: Install the APK file on your device
-
The installation process is similar to the one described in option 1. You need to enable unknown sources on your device settings, locate the APK file on your device storage, and tap on it to start the installation process. Follow the instructions on the screen and wait for the app to be installed.
-
Conclusion
-
In this article, we have explained what is Facebook APK 32 bit, why you may need it, and how to download it from two different sources. We hope that this article has helped you to enjoy Facebook on your Android device without any hassle.
-
If you have any questions or feedback about this article, please feel free to leave a comment below. We would love to hear from you!
-
Also, if you liked this article, please share it with your friends and family who may find it useful. Thank you for reading!
-
Frequently Asked Questions
-
-
What is the difference between Facebook APK 32 bit and 64 bit?
-
A 32-bit APK file is compatible with devices that have a 32-bit processor, while a 64-bit APK file is compatible with devices that have a 64-bit processor. A 64-bit processor can handle more data and perform faster than a 32-bit processor, but it also requires more memory and storage space.
-
Is Facebook APK 32 bit safe to download?
-
Yes, as long as you download it from a trusted source like APKCombo or Facebook Lite website. However, you should always be careful when downloading any APK file from unknown sources, as they may contain malware or viruses that can harm your device or compromise your privacy.
-
How do I update Facebook APK 32 bit?
-
You can update Facebook APK 32 bit by downloading and installing the latest version from the same source that you downloaded it from. Alternatively, you can enable automatic updates on your device settings so that your apps are updated whenever there is a new version available.
-
How do I uninstall Facebook APK 32 bit?
-
You can uninstall Facebook APK 32 bit by going to Settings > Apps > Facebook > Uninstall. This will remove the app from your device and free up some storage space.
-
What are the alternatives to Facebook APK 32 bit?
-
If you are looking for other ways to use Facebook on your Android device, you can try the following alternatives:
-
-
Use the Facebook mobile website: You can access Facebook from your browser by going to https://m.facebook.com. This will give you a similar experience as the app, but without taking up any space on your device.
-
Use a third-party app: You can use a third-party app that integrates with Facebook, such as Friendly, Swipe, or Maki. These apps offer some additional features and customization options that the official app does not have.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/unet_2d.py b/spaces/1toTree/lora_test/ppdiffusers/models/unet_2d.py
deleted file mode 100644
index b5e1fd461c1c136749416360011ce08db93f0d3b..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/models/unet_2d.py
+++ /dev/null
@@ -1,271 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import paddle
-import paddle.nn as nn
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..modeling_utils import ModelMixin
-from ..utils import BaseOutput
-from .embeddings import GaussianFourierProjection, TimestepEmbedding, Timesteps
-from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block
-
-
-@dataclass
-class UNet2DOutput(BaseOutput):
- """
- Args:
- sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)`):
- Hidden states output. Output of last layer of model.
- """
-
- sample: paddle.Tensor
-
-
-class UNet2DModel(ModelMixin, ConfigMixin):
- r"""
- UNet2DModel is a 2D UNet model that takes in a noisy sample and a timestep and returns sample shaped output.
-
- This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library
- implements for all the model (such as downloading or saving, etc.)
-
- Parameters:
- sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`):
- Height and width of input/output sample.
- in_channels (`int`, *optional*, defaults to 3): Number of channels in the input image.
- out_channels (`int`, *optional*, defaults to 3): Number of channels in the output.
- center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample.
- time_embedding_type (`str`, *optional*, defaults to `"positional"`): Type of time embedding to use.
- freq_shift (`int`, *optional*, defaults to 0): Frequency shift for fourier time embedding.
- flip_sin_to_cos (`bool`, *optional*, defaults to :
- obj:`True`): Whether to flip sin to cos for fourier time embedding.
- down_block_types (`Tuple[str]`, *optional*, defaults to :
- obj:`("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")`): Tuple of downsample block
- types.
- mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock2D"`):
- The mid block type. Choose from `UNetMidBlock2D` or `UnCLIPUNetMidBlock2D`.
- up_block_types (`Tuple[str]`, *optional*, defaults to :
- obj:`("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")`): Tuple of upsample block types.
- block_out_channels (`Tuple[int]`, *optional*, defaults to :
- obj:`(224, 448, 672, 896)`): Tuple of block output channels.
- layers_per_block (`int`, *optional*, defaults to `2`): The number of layers per block.
- mid_block_scale_factor (`float`, *optional*, defaults to `1`): The scale factor for the mid block.
- downsample_padding (`int`, *optional*, defaults to `1`): The padding for the downsample convolution.
- act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
- attention_head_dim (`int`, *optional*, defaults to `8`): The attention head dimension.
- norm_num_groups (`int`, *optional*, defaults to `32`): The number of groups for the normalization.
- norm_eps (`float`, *optional*, defaults to `1e-5`): The epsilon for the normalization.
- resnet_time_scale_shift (`str`, *optional*, defaults to `"default"`): Time scale shift config
- for resnet blocks, see [`~models.resnet.ResnetBlock2D`]. Choose from `default` or `scale_shift`.
- """
-
- @register_to_config
- def __init__(
- self,
- sample_size: Optional[Union[int, Tuple[int, int]]] = None,
- in_channels: int = 3,
- out_channels: int = 3,
- center_input_sample: bool = False,
- time_embedding_type: str = "positional",
- freq_shift: int = 0,
- flip_sin_to_cos: bool = True,
- down_block_types: Tuple[str] = ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D"),
- up_block_types: Tuple[str] = ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D"),
- block_out_channels: Tuple[int] = (224, 448, 672, 896),
- layers_per_block: int = 2,
- mid_block_scale_factor: float = 1,
- downsample_padding: int = 1,
- act_fn: str = "silu",
- attention_head_dim: int = 8,
- norm_num_groups: int = 32,
- norm_eps: float = 1e-5,
- resnet_time_scale_shift: str = "default",
- add_attention: bool = True,
- ):
- super().__init__()
-
- self.sample_size = sample_size
- time_embed_dim = block_out_channels[0] * 4
-
- # input
- self.conv_in = nn.Conv2D(in_channels, block_out_channels[0], kernel_size=3, padding=(1, 1))
-
- # time
- if time_embedding_type == "fourier":
- self.time_proj = GaussianFourierProjection(embedding_size=block_out_channels[0], scale=16)
- timestep_input_dim = 2 * block_out_channels[0]
- elif time_embedding_type == "positional":
- self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift)
- timestep_input_dim = block_out_channels[0]
-
- self.time_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim)
-
- self.down_blocks = nn.LayerList([])
- self.mid_block = None
- self.up_blocks = nn.LayerList([])
-
- # down
- output_channel = block_out_channels[0]
- for i, down_block_type in enumerate(down_block_types):
- input_channel = output_channel
- output_channel = block_out_channels[i]
- is_final_block = i == len(block_out_channels) - 1
-
- down_block = get_down_block(
- down_block_type,
- num_layers=layers_per_block,
- in_channels=input_channel,
- out_channels=output_channel,
- temb_channels=time_embed_dim,
- add_downsample=not is_final_block,
- resnet_eps=norm_eps,
- resnet_act_fn=act_fn,
- resnet_groups=norm_num_groups,
- attn_num_head_channels=attention_head_dim,
- downsample_padding=downsample_padding,
- resnet_time_scale_shift=resnet_time_scale_shift,
- )
- self.down_blocks.append(down_block)
-
- # mid
- self.mid_block = UNetMidBlock2D(
- in_channels=block_out_channels[-1],
- temb_channels=time_embed_dim,
- resnet_eps=norm_eps,
- resnet_act_fn=act_fn,
- output_scale_factor=mid_block_scale_factor,
- resnet_time_scale_shift=resnet_time_scale_shift,
- attn_num_head_channels=attention_head_dim,
- resnet_groups=norm_num_groups,
- add_attention=add_attention,
- )
-
- # up
- reversed_block_out_channels = list(reversed(block_out_channels))
- output_channel = reversed_block_out_channels[0]
- for i, up_block_type in enumerate(up_block_types):
- prev_output_channel = output_channel
- output_channel = reversed_block_out_channels[i]
- input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)]
-
- is_final_block = i == len(block_out_channels) - 1
-
- up_block = get_up_block(
- up_block_type,
- num_layers=layers_per_block + 1,
- in_channels=input_channel,
- out_channels=output_channel,
- prev_output_channel=prev_output_channel,
- temb_channels=time_embed_dim,
- add_upsample=not is_final_block,
- resnet_eps=norm_eps,
- resnet_act_fn=act_fn,
- resnet_groups=norm_num_groups,
- attn_num_head_channels=attention_head_dim,
- resnet_time_scale_shift=resnet_time_scale_shift,
- )
- self.up_blocks.append(up_block)
- prev_output_channel = output_channel
-
- # out
- num_groups_out = norm_num_groups if norm_num_groups is not None else min(block_out_channels[0] // 4, 32)
- self.conv_norm_out = nn.GroupNorm(
- num_channels=block_out_channels[0], num_groups=num_groups_out, epsilon=norm_eps
- )
- self.conv_act = nn.Silu()
- self.conv_out = nn.Conv2D(block_out_channels[0], out_channels, kernel_size=3, padding=1)
-
- def forward(
- self,
- sample: paddle.Tensor,
- timestep: Union[paddle.Tensor, float, int],
- return_dict: bool = True,
- ) -> Union[UNet2DOutput, Tuple]:
- r"""
- Args:
- sample (`paddle.Tensor`): (batch, channel, height, width) noisy inputs tensor
-            timestep (`paddle.Tensor` or `float` or `int`): (batch) timesteps
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~models.unet_2d.UNet2DOutput`] instead of a plain tuple.
-
- Returns:
- [`~models.unet_2d.UNet2DOutput`] or `tuple`: [`~models.unet_2d.UNet2DOutput`] if `return_dict` is True,
- otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
- """
- # 0. center input if necessary
- if self.config.center_input_sample:
- sample = 2 * sample - 1.0
-
- # 1. time
- timesteps = timestep
- if not paddle.is_tensor(timesteps):
- timesteps = paddle.to_tensor([timesteps], dtype="int64")
- elif paddle.is_tensor(timesteps) and len(timesteps.shape) == 0:
- timesteps = timesteps[None]
-
- # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
- timesteps = timesteps * paddle.ones((sample.shape[0],), dtype=timesteps.dtype)
-
- t_emb = self.time_proj(timesteps).cast(self.dtype)
- emb = self.time_embedding(t_emb)
-
- # 2. pre-process
- skip_sample = sample
- sample = self.conv_in(sample)
-
- # 3. down
- down_block_res_samples = (sample,)
- for downsample_block in self.down_blocks:
- if hasattr(downsample_block, "skip_conv"):
- sample, res_samples, skip_sample = downsample_block(
- hidden_states=sample, temb=emb, skip_sample=skip_sample
- )
- else:
- sample, res_samples = downsample_block(hidden_states=sample, temb=emb)
-
- down_block_res_samples += res_samples
-
- # 4. mid
- sample = self.mid_block(sample, emb)
-
- # 5. up
- skip_sample = None
- for upsample_block in self.up_blocks:
- res_samples = down_block_res_samples[-len(upsample_block.resnets) :]
- down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)]
-
- if hasattr(upsample_block, "skip_conv"):
- sample, skip_sample = upsample_block(sample, res_samples, emb, skip_sample)
- else:
- sample = upsample_block(sample, res_samples, emb)
-
- # 6. post-process
- sample = self.conv_norm_out(sample)
- sample = self.conv_act(sample)
- sample = self.conv_out(sample)
-
- if skip_sample is not None:
- sample += skip_sample
-
- if self.config.time_embedding_type == "fourier":
- timesteps = timesteps.reshape([sample.shape[0], *([1] * len(sample.shape[1:]))])
- sample = sample / timesteps
-
- if not return_dict:
- return (sample,)
-
- return UNet2DOutput(sample=sample)
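
Before moving to the next file, a minimal sketch of how the UNet2DModel defined above could be exercised. The small block configuration, shapes, and top-level import are illustrative assumptions, not values taken from the repository.

import paddle
from ppdiffusers import UNet2DModel  # assumed public import; the class lives in ppdiffusers/models/unet_2d.py

# Tiny toy configuration (hypothetical values, chosen only to keep the example light).
model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
    block_out_channels=(32, 64),
)

noisy = paddle.randn([1, 3, 32, 32])               # (batch, channel, height, width)
timestep = paddle.to_tensor([10], dtype="int64")   # one timestep per batch element
out = model(noisy, timestep)                       # returns a UNet2DOutput by default (return_dict=True)
print(out.sample.shape)                            # same shape as the input sample

In an actual diffusion loop the noisy input would come from a scheduler's noising step rather than from paddle.randn.
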
diff --git a/spaces/801artistry/RVC801/infer/lib/infer_pack/commons.py b/spaces/801artistry/RVC801/infer/lib/infer_pack/commons.py
deleted file mode 100644
index ccd334b7320543b0c3a2166f82093564c9721317..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/infer/lib/infer_pack/commons.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import math
-
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
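
A brief illustration, not part of the deleted file, of how a couple of the helpers above compose; the tensor shapes are invented.

import torch

x = torch.randn(2, 80, 100)        # (batch, channels, frames) - a toy mel-like tensor
lengths = torch.tensor([100, 60])

mask = sequence_mask(lengths, max_length=x.size(2))           # (2, 100) boolean padding mask
segments, ids_str = rand_slice_segments(x, lengths, segment_size=32)
print(mask.shape, segments.shape, ids_str.shape)              # (2, 100), (2, 80, 32), (2,)
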
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index ab523020325fa3f30676ad20125c6a9f059a9d84..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
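
For orientation, a small usage sketch of the predictor above (assumes parselmouth is installed; the silent waveform is only there to show the call signature and output shapes).

import numpy as np

predictor = PMF0Predictor(hop_length=160, f0_min=50, f0_max=1100, sampling_rate=16000)
wav = np.zeros(16000, dtype=np.float64)     # one second of silence as a stand-in recording
f0, uv = predictor.compute_f0_uv(wav)       # interpolated F0 contour plus voiced/unvoiced flags
print(f0.shape, uv.shape)                   # both are (len(wav) // hop_length,) = (100,)
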
diff --git "a/spaces/AI4PD/hexviz/hexviz/pages/1_\360\237\227\272\357\270\217Identify_Interesting_Heads.py" "b/spaces/AI4PD/hexviz/hexviz/pages/1_\360\237\227\272\357\270\217Identify_Interesting_Heads.py"
deleted file mode 100644
index c99d579fb5de58c25023505e5132be75b90351ad..0000000000000000000000000000000000000000
--- "a/spaces/AI4PD/hexviz/hexviz/pages/1_\360\237\227\272\357\270\217Identify_Interesting_Heads.py"
+++ /dev/null
@@ -1,152 +0,0 @@
-import re
-
-import streamlit as st
-
-from hexviz.attention import clean_and_validate_sequence, get_attention, res_to_1letter
-from hexviz.config import URL
-from hexviz.models import Model, ModelType
-from hexviz.plot import plot_single_heatmap, plot_tiled_heatmap
-from hexviz.view import (
- menu_items,
- select_heads_and_layers,
- select_model,
- select_pdb,
- select_protein,
- select_sequence_slice,
-)
-
-st.set_page_config(layout="wide", menu_items=menu_items)
-st.title("Identify Interesting Heads")
-
-
-for k, v in st.session_state.items():
- st.session_state[k] = v
-
-models = [
- Model(name=ModelType.TAPE_BERT, layers=12, heads=12),
- Model(name=ModelType.ZymCTRL, layers=36, heads=16),
- Model(name=ModelType.PROT_BERT, layers=30, heads=16),
- Model(name=ModelType.PROT_T5, layers=24, heads=32),
-]
-
-with st.expander("Input a PDB id, upload a PDB file or input a sequence", expanded=True):
- pdb_id = select_pdb()
- uploaded_file = st.file_uploader("2.Upload PDB", type=["pdb"])
- input_sequence = st.text_area("3.Input sequence", "", key="input_sequence", max_chars=400)
- sequence, error = clean_and_validate_sequence(input_sequence)
- if error:
- st.error(error)
- pdb_str, structure, source = select_protein(pdb_id, uploaded_file, sequence)
- st.write(f"Visualizing: {source}")
-
-selected_model = select_model(models)
-
-
-chains = list(structure.get_chains())
-chain_ids = [chain.id for chain in chains]
-if "selected_chain" not in st.session_state:
- st.session_state.selected_chain = chain_ids[0]
-chain_selection = st.sidebar.selectbox(
- label="Select Chain",
- options=chain_ids,
- key="selected_chain",
-)
-
-selected_chain = next(chain for chain in chains if chain.id == chain_selection)
-
-ec_number = ""
-if selected_model.name == ModelType.ZymCTRL:
- st.sidebar.markdown(
- """
- ZymCTRL EC number
- ---
- """
- )
- try:
- ec_number = structure.header["compound"]["1"]["ec"]
- except KeyError:
- pass
- ec_number = st.sidebar.text_input("Enzyme Comission number (EC)", ec_number)
-
- # Validate EC number
- if not re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ec_number):
- st.sidebar.error(
- """Please enter a valid Enzyme Commission number in the format of 4
- integers separated by periods (e.g., 1.2.3.21)"""
- )
-
-
-residues = [res for res in selected_chain.get_residues()]
-sequence = res_to_1letter(residues)
-
-l = len(sequence)
-slice_start, slice_end = select_sequence_slice(l)
-truncated_sequence = sequence[slice_start - 1 : slice_end]
-remove_special_tokens = st.sidebar.checkbox(
- "Hide attention to special tokens", key="remove_special_tokens"
-)
-if "fixed_scale" not in st.session_state:
- st.session_state.fixed_scale = True
-fixed_scale = st.sidebar.checkbox("Fixed scale", help="For long sequences the default fixed 0 to 1 scale can have very low contrast heatmaps, consider using a relative scale to increase the contrast between high attention and low attention areas. Note that each subplot will have separate color scales so don't compare colors between attention heads if using a non-fixed scale.", key="fixed_scale")
-if not fixed_scale:
- st.sidebar.warning("With `Fixed scale` set to False each cell in the grid has a dynamic color scale where the highest attention value in that cell is bright yellow. Colors can not be compared between cells.")
-
-
-layer_sequence, head_sequence = select_heads_and_layers(st.sidebar, selected_model)
-
-st.markdown(
- f"""Each tile is a heatmap of attention for a section of the {source} chain
- ({chain_selection}) from residue {slice_start} to {slice_end}. Adjust the
- section length and starting point in the sidebar."""
-)
-
-# TODO: Decide if you should get attention for the full sequence or just the truncated sequence
-# Attention values will change depending on what we do.
-attention, tokens = get_attention(
- sequence=truncated_sequence,
- model_type=selected_model.name,
- remove_special_tokens=remove_special_tokens,
- ec_number=ec_number,
-)
-
-fig = plot_tiled_heatmap(attention, layer_sequence=layer_sequence, head_sequence=head_sequence, fixed_scale=fixed_scale)
-
-
-st.pyplot(fig)
-
-st.subheader("Plot single head")
-
-if selected_model.name == ModelType.PROT_T5:
- # Remove leading underscores from residue tokens
- tokens = [token[1:] if str(token) != "" else token for token in tokens]
-
-left, mid, right = st.columns(3)
-with left:
- if "selected_layer" not in st.session_state:
- st.session_state["selected_layer"] = 5
- layer_one = st.selectbox(
- "Layer",
- options=[i for i in range(1, selected_model.layers + 1)],
- key="selected_layer",
- )
- layer = layer_one - 1
-with mid:
- if "selected_head" not in st.session_state:
- st.session_state["selected_head"] = 1
- head_one = st.selectbox(
- "Head",
- options=[i for i in range(1, selected_model.heads + 1)],
- key="selected_head",
- )
- head = head_one - 1
-with right:
- if "label_tokens" not in st.session_state:
- st.session_state.label_tokens = []
- tokens_to_label = st.multiselect("Label tokens", options=tokens, key="label_tokens")
-
-if len(tokens_to_label) > 0:
- tokens = [token if token in tokens_to_label else "" for token in tokens]
-
-
-single_head_fig = plot_single_heatmap(attention, layer, head, tokens=tokens, fixed_scale=fixed_scale)
-st.pyplot(single_head_fig)
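
A rough, untested sketch of driving the same hexviz helpers outside Streamlit; the toy sequence, the chosen layers/heads, and the assumption that plot_tiled_heatmap returns a matplotlib figure are mine, not the app's.

from hexviz.attention import get_attention
from hexviz.models import ModelType
from hexviz.plot import plot_tiled_heatmap

attention, tokens = get_attention(
    sequence="MKTAYIAKQR",            # toy protein sequence
    model_type=ModelType.TAPE_BERT,
    remove_special_tokens=True,
    ec_number="",
)
fig = plot_tiled_heatmap(attention, layer_sequence=[0, 3, 6, 9], head_sequence=[0, 3, 6, 9], fixed_scale=True)
fig.savefig("attention_grid.png")
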
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/modules/codebooks_patterns.py b/spaces/AIConsultant/MusicGen/audiocraft/modules/codebooks_patterns.py
deleted file mode 100644
index 3cf3bb41774700a679ffe4325236d0324a99c546..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/modules/codebooks_patterns.py
+++ /dev/null
@@ -1,539 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import namedtuple
-from dataclasses import dataclass
-from functools import lru_cache
-import logging
-import typing as tp
-
-from abc import ABC, abstractmethod
-import torch
-
-LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index)
-PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class Pattern:
- """Base implementation of a pattern over a sequence with multiple codebooks.
-
-    The codebook pattern consists of a layout, defining for each sequence step
- the list of coordinates of each codebook timestep in the resulting interleaved sequence.
- The first item of the pattern is always an empty list in order to properly insert a special token
- to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern
- and ``timesteps`` the number of timesteps corresponding to the original sequence.
-
- The pattern provides convenient methods to build and revert interleaved sequences from it:
-    ``build_pattern_sequence`` maps a dense input tensor of a multi-codebook sequence from [B, K, T]
-    to the interleaved sequence of shape [B, K, S] by applying the pattern, with B being the batch size,
-    K being the number of codebooks, T the number of original timesteps and S the number of sequence steps
- for the output sequence. The unfilled positions are replaced with a special token and the built sequence
- is returned along with a mask indicating valid tokens.
- ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment
- of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask
- to fill and specify invalid positions if needed.
- See the dedicated methods for more details.
- """
- # Pattern layout, for each sequence step, we have a list of coordinates
- # corresponding to the original codebook timestep and position.
- # The first list is always an empty list in order to properly insert
- # a special token to start with.
- layout: PatternLayout
- timesteps: int
- n_q: int
-
- def __post_init__(self):
- assert len(self.layout) > 0
- assert self.layout[0] == []
- self._validate_layout()
- self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes)
- self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes)
- logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout))
-
- def _validate_layout(self):
- """Runs checks on the layout to ensure a valid pattern is defined.
- A pattern is considered invalid if:
- - Multiple timesteps for a same codebook are defined in the same sequence step
- - The timesteps for a given codebook are not in ascending order as we advance in the sequence
- (this would mean that we have future timesteps before past timesteps).
- """
- q_timesteps = {q: 0 for q in range(self.n_q)}
- for s, seq_coords in enumerate(self.layout):
- if len(seq_coords) > 0:
- qs = set()
- for coord in seq_coords:
- qs.add(coord.q)
- last_q_timestep = q_timesteps[coord.q]
- assert coord.t >= last_q_timestep, \
- f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}"
- q_timesteps[coord.q] = coord.t
- # each sequence step contains at max 1 coordinate per codebook
- assert len(qs) == len(seq_coords), \
- f"Multiple entries for a same codebook are found at step {s}"
-
- @property
- def num_sequence_steps(self):
- return len(self.layout) - 1
-
- @property
- def max_delay(self):
- max_t_in_seq_coords = 0
- for seq_coords in self.layout[1:]:
- for coords in seq_coords:
- max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1)
- return max_t_in_seq_coords - self.timesteps
-
- @property
- def valid_layout(self):
- valid_step = len(self.layout) - self.max_delay
- return self.layout[:valid_step]
-
- def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None):
- """Get codebook coordinates in the layout that corresponds to the specified timestep t
- and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step
- and the actual codebook coordinates.
- """
- assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps"
- if q is not None:
- assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks"
- coords = []
- for s, seq_codes in enumerate(self.layout):
- for code in seq_codes:
- if code.t == t and (q is None or code.q == q):
- coords.append((s, code))
- return coords
-
- def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]:
- return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)]
-
- def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]:
- steps_with_timesteps = self.get_steps_with_timestep(t, q)
- return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None
-
- def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool,
- device: tp.Union[torch.device, str] = 'cpu'):
- """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps.
-
- Args:
-            timesteps (int): Maximum number of timesteps to consider.
- keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps.
- device (torch.device or str): Device for created tensors.
- Returns:
- indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S].
- """
- assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}"
- assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern"
- # use the proper layout based on whether we limit ourselves to valid steps only or not,
- # note that using the valid_layout will result in a truncated sequence up to the valid steps
- ref_layout = self.valid_layout if keep_only_valid_steps else self.layout
- # single item indexing being super slow with pytorch vs. numpy, so we use numpy here
- indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy()
- mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy()
- # fill indexes with last sequence step value that will correspond to our special token
- # the last value is n_q * timesteps as we have flattened z and append special token as the last token
- # which will correspond to the index: n_q * timesteps
- indexes[:] = n_q * timesteps
- # iterate over the pattern and fill scattered indexes and mask
- for s, sequence_coords in enumerate(ref_layout):
- for coords in sequence_coords:
- if coords.t < timesteps:
- indexes[coords.q, s] = coords.t + coords.q * timesteps
- mask[coords.q, s] = 1
- indexes = torch.from_numpy(indexes).to(device)
- mask = torch.from_numpy(mask).to(device)
- return indexes, mask
-
- def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False):
- """Build sequence corresponding to the pattern from the input tensor z.
- The sequence is built using up to sequence_steps if specified, and non-pattern
- coordinates are filled with the special token.
-
- Args:
- z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T].
- special_token (int): Special token used to fill non-pattern coordinates in the new sequence.
- keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps.
- Steps that are beyond valid steps will be replaced by the special_token in that case.
- Returns:
- values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S
- corresponding either to the sequence_steps if provided, otherwise to the length of the pattern.
- indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S].
- """
- B, K, T = z.shape
- indexes, mask = self._build_pattern_sequence_scatter_indexes(
- T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device)
- )
- z = z.view(B, -1)
- # we append the special token as the last index of our flattened z tensor
- z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1)
- values = z[:, indexes.view(-1)]
- values = values.view(B, K, indexes.shape[-1])
- return values, indexes, mask
-
- def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int,
- keep_only_valid_steps: bool = False,
- is_model_output: bool = False,
- device: tp.Union[torch.device, str] = 'cpu'):
- """Builds scatter indexes required to retrieve the original multi-codebook sequence
- from interleaving pattern.
-
- Args:
- sequence_steps (int): Sequence steps.
- n_q (int): Number of codebooks.
- keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps.
- Steps that are beyond valid steps will be replaced by the special_token in that case.
- is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not.
- device (torch.device or str): Device for created tensors.
- Returns:
- indexes (torch.Tensor): Indexes for reconstructing the output, of shape [K, T].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T].
- """
- ref_layout = self.valid_layout if keep_only_valid_steps else self.layout
- # TODO(jade): Do we want to further truncate to only valid timesteps here as well?
- timesteps = self.timesteps
- assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}"
- assert sequence_steps <= len(ref_layout), \
- f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}"
-
- # ensure we take the appropriate indexes to keep the model output from the first special token as well
- if is_model_output:
- ref_layout = ref_layout[1:]
-
- # single item indexing being super slow with pytorch vs. numpy, so we use numpy here
- indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy()
- mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy()
- # fill indexes with last sequence step value that will correspond to our special token
- indexes[:] = n_q * sequence_steps
- for s, sequence_codes in enumerate(ref_layout):
- if s < sequence_steps:
- for code in sequence_codes:
- if code.t < timesteps:
- indexes[code.q, code.t] = s + code.q * sequence_steps
- mask[code.q, code.t] = 1
- indexes = torch.from_numpy(indexes).to(device)
- mask = torch.from_numpy(mask).to(device)
- return indexes, mask
-
- def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False):
- """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving.
- The sequence is reverted using up to timesteps if specified, and non-pattern coordinates
- are filled with the special token.
-
- Args:
- s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S].
- special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence.
- Returns:
- values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T
- corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise.
- indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T].
- """
- B, K, S = s.shape
- indexes, mask = self._build_reverted_sequence_scatter_indexes(
- S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device)
- )
- s = s.view(B, -1)
- # we append the special token as the last index of our flattened z tensor
- s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1)
- values = s[:, indexes.view(-1)]
- values = values.view(B, K, indexes.shape[-1])
- return values, indexes, mask
-
- def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False):
- """Revert model logits obtained on a sequence built from the pattern
- back to a tensor matching the original sequence.
-
- This method is similar to ``revert_pattern_sequence`` with the following specificities:
- 1. It is designed to work with the extra cardinality dimension
- 2. We return the logits for the first sequence item that matches the special_token and
- which matching target in the original sequence is the first item of the sequence,
- while we skip the last logits as there is no matching target
- """
- B, card, K, S = logits.shape
- indexes, mask = self._build_reverted_sequence_scatter_indexes(
- S, K, keep_only_valid_steps, is_model_output=True, device=logits.device
- )
- logits = logits.reshape(B, card, -1)
- # we append the special token as the last index of our flattened z tensor
- logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S]
- values = logits[:, :, indexes.view(-1)]
- values = values.view(B, card, K, indexes.shape[-1])
- return values, indexes, mask
-
-
-class CodebooksPatternProvider(ABC):
- """Abstraction around providing pattern for interleaving codebooks.
-
-    The CodebooksPatternProvider abstraction makes it possible to implement various strategies to
-    define the interleaving pattern of sequences composed of multiple codebooks. For a given
- number of codebooks `n_q`, the pattern provider can generate a specified pattern
- corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern
- can be used to construct a new sequence from the original codes respecting the specified
- pattern. The pattern is defined as a list of list of code coordinates, code coordinate
- being a tuple with the original timestep and codebook to build the new sequence.
- Note that all patterns must start with an empty list that is then used to insert a first
- sequence step of special tokens in the newly generated sequence.
-
- Args:
- n_q (int): number of codebooks.
-        cached (bool): if True, patterns for a given length are cached. In general
-            that should be true for efficiency reasons, to avoid synchronization points.
- """
- def __init__(self, n_q: int, cached: bool = True):
- assert n_q > 0
- self.n_q = n_q
- self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore
-
- @abstractmethod
- def get_pattern(self, timesteps: int) -> Pattern:
- """Builds pattern with specific interleaving between codebooks.
-
- Args:
- timesteps (int): Total number of timesteps.
- """
- raise NotImplementedError()
-
-
-class DelayedPatternProvider(CodebooksPatternProvider):
- """Provider for delayed pattern across delayed codebooks.
- Codebooks are delayed in the sequence and sequence steps will contain codebooks
- from different timesteps.
-
- Example:
- Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- The resulting sequence obtained from the returned pattern is:
- [[S, 1, 2, 3, 4],
- [S, S, 1, 2, 3],
- [S, S, S, 1, 2]]
- (with S being a special token)
-
- Args:
- n_q (int): Number of codebooks.
- delays (list of int, optional): Delay for each of the codebooks.
- If delays not defined, each codebook is delayed by 1 compared to the previous one.
- flatten_first (int): Flatten the first N timesteps.
- empty_initial (int): Prepend with N empty list of coordinates.
- """
- def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None,
- flatten_first: int = 0, empty_initial: int = 0):
- super().__init__(n_q)
- if delays is None:
- delays = list(range(n_q))
- self.delays = delays
- self.flatten_first = flatten_first
- self.empty_initial = empty_initial
- assert len(self.delays) == self.n_q
- assert sorted(self.delays) == self.delays
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- max_delay = max(self.delays)
- if self.empty_initial:
- out += [[] for _ in range(self.empty_initial)]
- if self.flatten_first:
- for t in range(min(timesteps, self.flatten_first)):
- for q in range(self.n_q):
- out.append([LayoutCoord(t, q)])
- for t in range(self.flatten_first, timesteps + max_delay):
- v = []
- for q, delay in enumerate(self.delays):
- t_for_q = t - delay
- if t_for_q >= self.flatten_first:
- v.append(LayoutCoord(t_for_q, q))
- out.append(v)
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class ParallelPatternProvider(DelayedPatternProvider):
- """Provider for parallel pattern across codebooks.
- This pattern provider is a special case of the delayed pattern with actually no delay,
- hence delays=repeat(0, n_q).
-
- Args:
- n_q (int): Number of codebooks.
- """
- def __init__(self, n_q: int):
- super().__init__(n_q, [0] * n_q)
-
-
-class UnrolledPatternProvider(CodebooksPatternProvider):
- """Provider for unrolling codebooks pattern.
-    This pattern provider can represent the codebooks either fully or only partially flattened,
-    while also specifying a given delay between the flattened codebook representations, which allows
-    unrolling the codebooks in the sequence.
-
- Example:
- 1. Flattening of the codebooks.
- By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q),
- taking n_q = 3 and timesteps = 4:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
-        will result in:
- [[S, S, 1, S, S, 2, S, S, 3, S, S, 4],
- [S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [1, S, S, 2, S, S, 3, S, S, 4, S, S]]
-        2. Partial flattening of the codebooks. The ``flattening`` parameter specifies the inner step
-        for each codebook, defining which codebooks to flatten (or keep in parallel), for example
-        taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
-        will result in:
- [[S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [1, S, S, 2, S, S, 3, S, S, 4, S, S]]
-        3. Flattening with delay. The ``delay`` parameter further unrolls the sequence of codebooks
-        by specifying a delay per codebook. Note that the delays of codebooks flattened to the
-        same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1]
-        and delays = [0, 3, 3]:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
-        will result in:
- [[S, S, S, 1, S, 2, S, 3, S, 4],
- [S, S, S, 1, S, 2, S, 3, S, 4],
- [1, 2, 3, S, 4, S, 5, S, 6, S]]
-
- Args:
- n_q (int): Number of codebooks.
- flattening (list of int, optional): Flattening schema over the codebooks. If not defined,
- the codebooks will be flattened to 1 codebook per step, meaning that the sequence will
- have n_q extra steps for each timestep.
- delays (list of int, optional): Delay for each of the codebooks. If not defined,
- no delay is added and therefore will default to [0] * ``n_q``.
- Note that two codebooks that will be flattened to the same inner step
- should have the same delay, otherwise the pattern is considered as invalid.
- """
- FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay'])
-
- def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None,
- delays: tp.Optional[tp.List[int]] = None):
- super().__init__(n_q)
- if flattening is None:
- flattening = list(range(n_q))
- if delays is None:
- delays = [0] * n_q
- assert len(flattening) == n_q
- assert len(delays) == n_q
- assert sorted(flattening) == flattening
- assert sorted(delays) == delays
- self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening)
- self.max_delay = max(delays)
-
- def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]):
- """Build a flattened codebooks representation as a dictionary of inner step
- and the actual codebook indices corresponding to the flattened codebook. For convenience, we
- also store the delay associated to the flattened codebook to avoid maintaining an extra mapping.
- """
- flattened_codebooks: dict = {}
- for q, (inner_step, delay) in enumerate(zip(flattening, delays)):
- if inner_step not in flattened_codebooks:
- flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay)
- else:
- flat_codebook = flattened_codebooks[inner_step]
- assert flat_codebook.delay == delay, (
- "Delay and flattening between codebooks is inconsistent: ",
- "two codebooks flattened to the same position should have the same delay."
- )
- flat_codebook.codebooks.append(q)
- flattened_codebooks[inner_step] = flat_codebook
- return flattened_codebooks
-
- @property
- def _num_inner_steps(self):
- """Number of inner steps to unroll between timesteps in order to flatten the codebooks.
- """
- return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1
-
- def num_virtual_steps(self, timesteps: int) -> int:
- return timesteps * self._num_inner_steps + 1
-
- def get_pattern(self, timesteps: int) -> Pattern:
- """Builds pattern for delay across codebooks.
-
- Args:
- timesteps (int): Total number of timesteps.
- """
- # the PatternLayout is built as a tuple of sequence position and list of coordinates
- # so that it can be reordered properly given the required delay between codebooks of given timesteps
- indexed_out: list = [(-1, [])]
- max_timesteps = timesteps + self.max_delay
- for t in range(max_timesteps):
- # for each timestep, we unroll the flattened codebooks,
- # emitting the sequence step with the corresponding delay
- for step in range(self._num_inner_steps):
- if step in self._flattened_codebooks:
- # we have codebooks at this virtual step to emit
- step_codebooks = self._flattened_codebooks[step]
- t_for_q = t + step_codebooks.delay
- coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks]
- if t_for_q < max_timesteps and t < max_timesteps:
- indexed_out.append((t_for_q, coords))
- else:
- # there is no codebook in this virtual step so we emit an empty list
- indexed_out.append((t, []))
- out = [coords for _, coords in sorted(indexed_out)]
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class VALLEPattern(CodebooksPatternProvider):
- """Almost VALL-E style pattern.
- We further allow some delays for the codebooks other than the first one.
-
- Args:
- n_q (int): Number of codebooks.
-        delays (list of int, optional): Delay for each of the codebooks after the first one.
-            If delays are not defined, they default to 0 (no extra delay is applied).
- """
- def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None):
- super().__init__(n_q)
- if delays is None:
- delays = [0] * (n_q - 1)
- self.delays = delays
- assert len(self.delays) == self.n_q - 1
- assert sorted(self.delays) == self.delays
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- for t in range(timesteps):
- out.append([LayoutCoord(t, 0)])
- max_delay = max(self.delays)
- for t in range(timesteps + max_delay):
- v = []
- for q, delay in enumerate(self.delays):
- t_for_q = t - delay
- if t_for_q >= 0:
- v.append(LayoutCoord(t_for_q, q + 1))
- out.append(v)
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class MusicLMPattern(CodebooksPatternProvider):
- """Almost MusicLM style pattern. This is equivalent to full flattening
- but in a different order.
-
- Args:
- n_q (int): Number of codebooks.
- group_by (int): Number of codebooks to group together.
- """
- def __init__(self, n_q: int, group_by: int = 2):
- super().__init__(n_q)
- self.group_by = group_by
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- for offset in range(0, self.n_q, self.group_by):
- for t in range(timesteps):
- for q in range(offset, offset + self.group_by):
- out.append([LayoutCoord(t, q)])
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
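
To make the interleaving concrete, here is a small round-trip sketch (illustration only; the special token id 1000 is arbitrary).

import torch

provider = DelayedPatternProvider(n_q=2)
pattern = provider.get_pattern(timesteps=4)

codes = torch.arange(2 * 4).reshape(1, 2, 4)        # [B=1, K=2, T=4] fake codebook indices
values, idx, mask = pattern.build_pattern_sequence(codes, special_token=1000)
print(values.shape)                                 # [1, 2, S] with S = len(pattern.layout)

reverted, _, _ = pattern.revert_pattern_sequence(values, special_token=1000)
assert torch.equal(reverted, codes)                 # the delayed pattern round-trips losslessly here
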
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/tensor_utils.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/tensor_utils.py
deleted file mode 100644
index be4b69a4f135b95fcf18618668ed909314f24871..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/tensor_utils.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import torch
-import torch.distributed as dist
-
-
-def reduce_tensors(metrics):
- new_metrics = {}
- for k, v in metrics.items():
- if isinstance(v, torch.Tensor):
- dist.all_reduce(v)
- v = v / dist.get_world_size()
- if type(v) is dict:
- v = reduce_tensors(v)
- new_metrics[k] = v
- return new_metrics
-
-
-def tensors_to_scalars(tensors):
- if isinstance(tensors, torch.Tensor):
- tensors = tensors.item()
- return tensors
- elif isinstance(tensors, dict):
- new_tensors = {}
- for k, v in tensors.items():
- v = tensors_to_scalars(v)
- new_tensors[k] = v
- return new_tensors
- elif isinstance(tensors, list):
- return [tensors_to_scalars(v) for v in tensors]
- else:
- return tensors
-
-
-def tensors_to_np(tensors):
- if isinstance(tensors, dict):
- new_np = {}
- for k, v in tensors.items():
- if isinstance(v, torch.Tensor):
- v = v.cpu().numpy()
- if type(v) is dict:
- v = tensors_to_np(v)
- new_np[k] = v
- elif isinstance(tensors, list):
- new_np = []
- for v in tensors:
- if isinstance(v, torch.Tensor):
- v = v.cpu().numpy()
- if type(v) is dict:
- v = tensors_to_np(v)
- new_np.append(v)
- elif isinstance(tensors, torch.Tensor):
- v = tensors
- if isinstance(v, torch.Tensor):
- v = v.cpu().numpy()
- if type(v) is dict:
- v = tensors_to_np(v)
- new_np = v
- else:
- raise Exception(f'tensors_to_np does not support type {type(tensors)}.')
- return new_np
-
-
-def move_to_cpu(tensors):
- ret = {}
- for k, v in tensors.items():
- if isinstance(v, torch.Tensor):
- v = v.cpu()
- if type(v) is dict:
- v = move_to_cpu(v)
- ret[k] = v
- return ret
-
-
-def move_to_cuda(batch, gpu_id=0):
- # base case: object can be directly moved using `cuda` or `to`
- if callable(getattr(batch, 'cuda', None)):
- return batch.cuda(gpu_id, non_blocking=True)
- elif callable(getattr(batch, 'to', None)):
- return batch.to(torch.device('cuda', gpu_id), non_blocking=True)
- elif isinstance(batch, list):
- for i, x in enumerate(batch):
- batch[i] = move_to_cuda(x, gpu_id)
- return batch
- elif isinstance(batch, tuple):
- batch = list(batch)
- for i, x in enumerate(batch):
- batch[i] = move_to_cuda(x, gpu_id)
- return tuple(batch)
- elif isinstance(batch, dict):
- for k, v in batch.items():
- batch[k] = move_to_cuda(v, gpu_id)
- return batch
- return batch
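
A quick sketch of the conversion helpers above on a toy nested batch (values invented for illustration).

import torch

batch = {"loss": torch.tensor(0.25), "stats": {"grad_norm": torch.tensor(1.5)}}

print(tensors_to_scalars(batch))   # {'loss': 0.25, 'stats': {'grad_norm': 1.5}}
print(tensors_to_np(batch))        # same structure, numpy values instead of tensors
cpu_batch = move_to_cpu(batch)     # tensors moved to CPU, nesting preserved
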
diff --git a/spaces/AIWaves/Software_Company/src/agents/Environment/__init__.py b/spaces/AIWaves/Software_Company/src/agents/Environment/__init__.py
deleted file mode 100644
index 3612cfec012dd670048a4d5f1ac844cf776b155c..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Software_Company/src/agents/Environment/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .base_environment import Environment
\ No newline at end of file
diff --git a/spaces/Abdullahw72/bark-voice-cloning/app.py b/spaces/Abdullahw72/bark-voice-cloning/app.py
deleted file mode 100644
index 4382649733cfacbc1267ca3253d4533fd4454325..0000000000000000000000000000000000000000
--- a/spaces/Abdullahw72/bark-voice-cloning/app.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import math
-import os.path
-import uuid
-
-import gradio
-import numpy
-import torch
-
-from hubert.hubert_manager import HuBERTManager
-from hubert.pre_kmeans_hubert import CustomHubert
-from hubert.customtokenizer import CustomTokenizer
-from encodec import EncodecModel
-from encodec.utils import convert_audio
-
-
-hubert_model = CustomHubert(HuBERTManager.make_sure_hubert_installed())
-tokenizer_model = CustomTokenizer.load_from_checkpoint(
- HuBERTManager.make_sure_tokenizer_installed(model='quantifier_V1_hubert_base_ls960_23.pth'),
- map_location=torch.device('cpu')
-)
-encodec_model = EncodecModel.encodec_model_24khz()
-
-
-
-def clone(audio, *args):
- sr, wav = audio
-
- wav = torch.tensor(wav)
-
- if wav.dtype == torch.int16:
- wav = wav.float() / 32767.0
-
- if len(wav.shape) == 2:
- if wav.shape[0] == 2: # Stereo to mono if needed
- wav = wav.mean(0, keepdim=True)
- if wav.shape[1] == 2:
- wav = wav.mean(1, keepdim=False).unsqueeze(-1)
-
- wav = wav[-int(sr*20):] # Take only the last 20 seconds
-
- wav = wav.reshape(1, -1) # Reshape from gradio style to HuBERT shape. (N, 1) to (1, N)
-
- semantic_vectors = hubert_model.forward(wav, input_sample_hz=sr)
- semantic_tokens = tokenizer_model.get_token(semantic_vectors)
-
- encodec_model.set_target_bandwidth(6.0)
- wav = convert_audio(wav, sr, encodec_model.sample_rate, 1)
- wav = wav.unsqueeze(0)
-
- with torch.no_grad():
- encoded_frames = encodec_model.encode(wav)
-
- codes = torch.cat([encoded[0] for encoded in encoded_frames], dim=-1).squeeze() # [B, n_q, T]
-
- if not os.path.isdir('data/speakers'):
- os.makedirs('data/speakers')
-
- file_path = f'data/speakers/{uuid.uuid4().hex}.npz'
-
- numpy.savez(
- file_path,
- semantic_prompt=semantic_tokens,
- fine_prompt=codes,
- coarse_prompt=codes[:2, :]
- )
-
- return file_path
-
-
-
-iface = gradio.interface.Interface(fn=clone, inputs=[
- 'audio',
- gradio.Markdown(
- '''
- # Bark text to speech voice cloning
- [Model](https://huggingface.co/GitMylo/bark-voice-cloning/), [Model GitHub](https://github.com/gitmylo/bark-voice-cloning-HuBERT-quantizer), [Webui GitHub](https://github.com/gitmylo/audio-webui)
-
- For faster creation of voice clones [Duplicate this space](https://huggingface.co/spaces/GitMylo/bark-voice-cloning?duplicate=true)
-
- Uploaded audio files get cut to 20 seconds in order to keep it fast for everyone. Only the last 20 seconds will be used. (Bark only uses the last 14 seconds anyway)
-
- ## Tips for better cloning
- ### Make sure these things are **NOT** in your voice input: (in no particular order)
- * Noise (You can use a noise remover before)
- * Music (There are also music remover tools) (Unless you want music in the background)
- * A cut-off at the end (This will cause it to try and continue on the generation)
-    * Under 1 second of training data (I personally suggest around 10 seconds for good potential, but I've had great results with 5 seconds as well.)
-
- ### What makes for good prompt audio? (in no particular order)
- * Clearly spoken
- * No weird background noises
- * Only one speaker
- * Audio which ends after a sentence ends
-    * Regular/common voice (these usually have more success; it can still clone complex voices, just not as well)
- * Around 10 seconds of data
- ''')
-], outputs='file')
-iface.launch()
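
Outside the Gradio UI, the clone() function above could in principle be called directly with a (sample_rate, waveform) tuple, the way Gradio would pass it; the silent waveform below is a placeholder, and the model downloads triggered at import time are assumed to have succeeded.

import numpy as np

sr = 24000
wav = np.zeros(sr * 5, dtype=np.int16)   # 5 seconds of silence as a stand-in recording
npz_path = clone((sr, wav))              # writes data/speakers/<uuid>.npz and returns its path
print(npz_path)
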
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptForLove.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptForLove.py
deleted file mode 100644
index 53c403e16beecfe2e6f7255ef90c9a1bb8444230..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptForLove.py
+++ /dev/null
@@ -1,82 +0,0 @@
-from __future__ import annotations
-
-from aiohttp import ClientSession
-import execjs, os, json
-
-from ..typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-from .helper import format_prompt
-
-class GptForLove(AsyncGeneratorProvider):
- url = "https://ai18.gptforlove.com"
- supports_gpt_35_turbo = True
- working = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> AsyncGenerator:
- if not model:
- model = "gpt-3.5-turbo"
- headers = {
- "authority": "api.gptplus.one",
- "accept": "application/json, text/plain, */*",
- "accept-language": "de-DE,de;q=0.9,en-DE;q=0.8,en;q=0.7,en-US;q=0.6,nl;q=0.5,zh-CN;q=0.4,zh-TW;q=0.3,zh;q=0.2",
- "content-type": "application/json",
- "origin": cls.url,
- "referer": f"{cls.url}/",
- "sec-ch-ua": "\"Google Chrome\";v=\"117\", \"Not;A=Brand\";v=\"8\", \"Chromium\";v=\"117\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "Linux",
- "sec-fetch-dest": "empty",
- "sec-fetch-mode": "cors",
- "sec-fetch-site": "cross-site",
- "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36"
- }
- async with ClientSession(headers=headers) as session:
- prompt = format_prompt(messages)
- data = {
- "prompt": prompt,
- "options": {},
- "systemMessage": "You are ChatGPT, the version is GPT3.5, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.",
- "temperature": 0.8,
- "top_p": 1,
- "secret": get_secret(),
- **kwargs
- }
- async with session.post("https://api.gptplus.one/chat-process", json=data) as response:
- response.raise_for_status()
- async for line in response.content:
- try:
- line = json.loads(line)
- except:
- raise RuntimeError(f"Broken line: {line}")
- if "detail" in line:
- content = line["detail"]["choices"][0]["delta"].get("content")
- if content:
- yield content
- elif "10分钟内提问超过了5次" in line:
- raise RuntimeError("Rate limit reached")
- else:
- raise RuntimeError(f"Response: {line}")
-
-
-def get_secret() -> str:
- dir = os.path.dirname(__file__)
- dir += '/npm/node_modules/crypto-js'
- source = """
-CryptoJS = require('{dir}/crypto-js')
-var k = '14487141bvirvvG'
- , e = Math.floor(new Date().getTime() / 1e3);
-var t = CryptoJS.enc.Utf8.parse(e)
- , o = CryptoJS.AES.encrypt(t, k, {
- mode: CryptoJS.mode.ECB,
- padding: CryptoJS.pad.Pkcs7
-});
-return o.toString()
-"""
- source = source.replace('{dir}', dir)
- return execjs.compile(source).call('')
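
A hedged sketch of calling the provider above directly (requires aiohttp and PyExecJS plus the bundled crypto-js module that get_secret() expects; whether the upstream endpoint still responds is outside the scope of this example).

import asyncio

async def demo():
    messages = [{"role": "user", "content": "Hello"}]
    async for chunk in GptForLove.create_async_generator("gpt-3.5-turbo", messages):
        print(chunk, end="")

asyncio.run(demo())
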
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/RunLayout.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/RunLayout.js
deleted file mode 100644
index ec91878f029cf6e7af0fc5ffeb670a630bb6bb37..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/RunLayout.js
+++ /dev/null
@@ -1,47 +0,0 @@
-// Override
-var RunLayout = function (parent, newWidth, newHeight) {
- // Skip hidden or !dirty sizer
- if (this.ignoreLayout) {
- return this;
- }
-
- var isTopmostParent = !parent;
- // Preprocessor, top parent only
- if (isTopmostParent) {
- this.preLayout();
- }
-
- // Calculate parent width
- newWidth = this.resolveWidth(newWidth);
- // Calculate all children width, run width wrap
- if (isTopmostParent) {
- this.resolveChildrenWidth(newWidth);
- this.runWidthWrap(newWidth);
- }
- // Calculate parent height
- newHeight = this.resolveHeight(newHeight);
- // The last chance of resolving size
- this.postResolveSize(newWidth, newHeight);
- // Resize parent
- this.resize(newWidth, newHeight);
-
- if (this.sizerEventsEnable) {
- if (this.layoutedChildren === undefined) {
- this.layoutedChildren = [];
- }
- }
-
- // Layout children
- this.layoutChildren();
-
- // Layout background children
- this.layoutBackgrounds();
-
- if (this.sizerEventsEnable) {
- this.emit('postlayout', this.layoutedChildren, this);
- this.layoutedChildren.length = 0;
- }
-
- return this.postLayout();
-}
-export default RunLayout;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/input/OnTouchPad.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/input/OnTouchPad.js
deleted file mode 100644
index 08ca2c2c5225fc192771cbe61bd719e499b652c5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/input/OnTouchPad.js
+++ /dev/null
@@ -1,40 +0,0 @@
-import IsLocalPointInKnob from './IsLocalPointInKnob.js';
-
-const GetAngle = Phaser.Math.Angle.Between;
-const NormalizeAngle = Phaser.Math.Angle.Normalize;
-
-var OnTouchPad = function (pointer, localX, localY) {
- if (!this.enable) {
- return;
- }
- if (!pointer.isDown) {
- return;
- }
- var knob = this.sizerChildren.knob;
- if (!IsLocalPointInKnob(knob, localX, localY)) {
- return;
- }
-
- var centerX = knob.width / 2;
- var startAngle = knob.startAngle;
- var endAngle = GetAngle(centerX, centerX, localX, localY);
- var deltaAngle = (knob.anticlockwise) ? (startAngle - endAngle) : (endAngle - startAngle);
- var value = NormalizeAngle(deltaAngle) / (2 * Math.PI);
-
- this.stopEaseValue();
- if ((this.easeValueDuration === 0) || (Math.abs(this.value - value) < 0.1)) {
- this.value = value;
- } else {
- this.easeValueTo(value);
- }
-}
-
-var InstallEvents = function () {
- var knob = this.sizerChildren.knob;
- knob
- .on('pointerdown', OnTouchPad, this)
- .on('pointermove', OnTouchPad, this)
- .setInteractive()
-}
-
-export default InstallEvents;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/NameValueLabel.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/NameValueLabel.js
deleted file mode 100644
index 3bc17ecd1e9fa3a0451bab77f6eb3c6547bfeef5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/NameValueLabel.js
+++ /dev/null
@@ -1,161 +0,0 @@
-import Sizer from '../sizer/Sizer.js';
-import Build from './methods/Build.js';
-import SetValueMethods from './methods/SetValueMethods.js';
-
-class NameValueLabel extends Sizer {
- constructor(scene, config) {
- // Create sizer
- super(scene, config);
- this.type = 'rexNameValueLabel';
-
- Build.call(this, scene, config);
- }
-
- // Access nameText game object
- get nameText() {
- var textObject = this.childrenMap.name;
- if (textObject === undefined) {
- return '';
- }
- return textObject.text;
- }
-
- set nameText(value) {
- var textObject = this.childrenMap.name;
- if (textObject === undefined) {
- return;
- }
- textObject.setText(value);
- }
-
- setNameText(value) {
- this.nameText = value;
- return this;
- }
-
- // Access valueText game object
- get valueText() {
- var textObject = this.childrenMap.value;
- if (textObject === undefined) {
- return '';
- }
- return textObject.text;
- }
-
- set valueText(value) {
- var textObject = this.childrenMap.value;
- if (textObject === undefined) {
- return;
- }
- textObject.setText(value);
- }
-
- setValueText(value) {
- this.valueText = value;
- return this;
- }
-
-    // Access bar game object
- get barValue() {
- var bar = this.childrenMap.bar;
- if (bar === undefined) {
- return;
- }
- return bar.value;
- }
-
- set barValue(value) {
- var bar = this.childrenMap.bar;
- if (bar === undefined) {
- return;
- }
- bar.setValue(value);
- }
-
- setBarValue(value, min, max) {
- var bar = this.childrenMap.bar;
- if (bar === undefined) {
- return this;
- }
- bar.setValue(value, min, max);
- return this;
- }
-
- easeBarValueTo(value, min, max) {
- var bar = this.childrenMap.bar;
- if (bar === undefined) {
- return this;
- }
- bar.easeValueTo(value, min, max);
- return this;
- }
-
- // Access icon game object
- setTexture(key, frame) {
- var imageObject = this.childrenMap.icon;
- if (imageObject === undefined) {
- return;
- }
- imageObject.setTexture(key, frame);
- return this;
- }
-
- get texture() {
- var imageObject = this.childrenMap.icon;
- if (imageObject === undefined) {
- return undefined;
- }
- return imageObject.texture;
- }
-
- get frame() {
- var imageObject = this.childrenMap.icon;
- if (imageObject === undefined) {
- return undefined;
- }
- return imageObject.frame;
- }
-
- runLayout(parent, newWidth, newHeight) {
- if (this.ignoreLayout) {
- return this;
- }
-
- super.runLayout(parent, newWidth, newHeight);
- // Pin icon-mask to icon game object
- var iconMask = this.childrenMap.iconMask;
- if (iconMask) {
- iconMask.setPosition();
- this.resetChildPositionState(iconMask);
- }
- // Pin action-mask to action game object
- var actionMask = this.childrenMap.actionMask;
- if (actionMask) {
- actionMask.setPosition();
- this.resetChildPositionState(actionMask);
- }
- return this;
- }
-
- resize(width, height) {
- super.resize(width, height);
- // Resize icon-mask to icon game object
- var iconMask = this.childrenMap.iconMask;
- if (iconMask) {
- iconMask.resize();
- }
-        // Resize action-mask to action game object
- var actionMask = this.childrenMap.actionMask;
- if (actionMask) {
- actionMask.resize();
- }
- return this;
- }
-}
-
-Object.assign(
- NameValueLabel.prototype,
- SetValueMethods,
-)
-
-export default NameValueLabel;
\ No newline at end of file
diff --git a/spaces/AixiaGreyatt/QQsign/Dockerfile b/spaces/AixiaGreyatt/QQsign/Dockerfile
deleted file mode 100644
index 535624113f3b520e4829240a48bd3652430de828..0000000000000000000000000000000000000000
--- a/spaces/AixiaGreyatt/QQsign/Dockerfile
+++ /dev/null
@@ -1,23 +0,0 @@
-FROM openjdk:17-slim
-
-# Set the timezone
-ENV TZ Asia/Shanghai
-
-# Set the working directory
-WORKDIR /app
-
-# Copy files into the working directory
-COPY bin /app/bin
-COPY lib /app/lib
-COPY txlib /app/txlib
-
-# Set up commands
-RUN chmod -R 777 /tmp
-RUN chmod -R 777 /app
-RUN sed 's/"key": ".*"/"key": "'"$KEY_VALUE"'"/' txlib/$TXLIB_VERSION/config.json > /app/txlib/$TXLIB_VERSION/config.json
-
-# Run
-CMD bash bin/unidbg-fetch-qsign --basePath=txlib/$TXLIB_VERSION
-
-# Expose the port
-EXPOSE 7860
\ No newline at end of file
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/__init__.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/__init__.py
deleted file mode 100644
index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .fused_act import FusedLeakyReLU, fused_leaky_relu
-from .upfirdn2d import upfirdn2d
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/utilities.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/utilities.md
deleted file mode 100644
index 16143a2a66a622e1f795f5cbd2356a3f80ccedbc..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/utilities.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Utilities
-
-Utility and helper functions for working with 🤗 Diffusers.
-
-## randn_tensor
-
-[[autodoc]] diffusers.utils.randn_tensor
-
-## numpy_to_pil
-
-[[autodoc]] utils.pil_utils.numpy_to_pil
-
-## pt_to_pil
-
-[[autodoc]] utils.pil_utils.pt_to_pil
-
-## load_image
-
-[[autodoc]] utils.testing_utils.load_image
-
-## export_to_video
-
-[[autodoc]] utils.testing_utils.export_to_video
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/controlnet/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/controlnet/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/detr/README.md b/spaces/Andy1621/uniformer_image_detection/configs/detr/README.md
deleted file mode 100644
index 711a308a5549b28c36515405feabf2ca0f7c7c1f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/detr/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# DETR
-
-## Introduction
-
-[ALGORITHM]
-
-We provide the config files for DETR: [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872).
-
-```BibTeX
-@inproceedings{detr,
- author = {Nicolas Carion and
- Francisco Massa and
- Gabriel Synnaeve and
- Nicolas Usunier and
- Alexander Kirillov and
- Sergey Zagoruyko},
- title = {End-to-End Object Detection with Transformers},
- booktitle = {ECCV},
- year = {2020}
-}
-```
-
-## Results and Models
-
-| Backbone | Model | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-|:------:|:--------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
-| R-50 | DETR | 150e | 7.9 | | 40.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detr/detr_r50_8x2_150e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detr/detr_r50_8x2_150e_coco/detr_r50_8x2_150e_coco_20201130_194835-2c4b8974.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/detr/detr_r50_8x2_150e_coco/detr_r50_8x2_150e_coco_20201130_194835.log.json) |
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/dist_utils.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/dist_utils.py
deleted file mode 100644
index d3a1ef3fda5ceeb31bf15a73779da1b1903ab0fe..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/dist_utils.py
+++ /dev/null
@@ -1,164 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import functools
-import os
-import subprocess
-from collections import OrderedDict
-
-import torch
-import torch.multiprocessing as mp
-from torch import distributed as dist
-from torch._utils import (_flatten_dense_tensors, _take_tensors,
- _unflatten_dense_tensors)
-
-
-def init_dist(launcher, backend='nccl', **kwargs):
- if mp.get_start_method(allow_none=True) is None:
- mp.set_start_method('spawn')
- if launcher == 'pytorch':
- _init_dist_pytorch(backend, **kwargs)
- elif launcher == 'mpi':
- _init_dist_mpi(backend, **kwargs)
- elif launcher == 'slurm':
- _init_dist_slurm(backend, **kwargs)
- else:
- raise ValueError(f'Invalid launcher type: {launcher}')
-
-
-def _init_dist_pytorch(backend, **kwargs):
- # TODO: use local_rank instead of rank % num_gpus
- rank = int(os.environ['RANK'])
- num_gpus = torch.cuda.device_count()
- torch.cuda.set_device(rank % num_gpus)
- dist.init_process_group(backend=backend, **kwargs)
-
-
-def _init_dist_mpi(backend, **kwargs):
- # TODO: use local_rank instead of rank % num_gpus
- rank = int(os.environ['OMPI_COMM_WORLD_RANK'])
- num_gpus = torch.cuda.device_count()
- torch.cuda.set_device(rank % num_gpus)
- dist.init_process_group(backend=backend, **kwargs)
-
-
-def _init_dist_slurm(backend, port=None):
- """Initialize slurm distributed training environment.
-
- If argument ``port`` is not specified, then the master port will be system
- environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system
- environment variable, then a default port ``29500`` will be used.
-
- Args:
- backend (str): Backend of torch.distributed.
- port (int, optional): Master port. Defaults to None.
- """
- proc_id = int(os.environ['SLURM_PROCID'])
- ntasks = int(os.environ['SLURM_NTASKS'])
- node_list = os.environ['SLURM_NODELIST']
- num_gpus = torch.cuda.device_count()
- torch.cuda.set_device(proc_id % num_gpus)
- addr = subprocess.getoutput(
- f'scontrol show hostname {node_list} | head -n1')
- # specify master port
- if port is not None:
- os.environ['MASTER_PORT'] = str(port)
- elif 'MASTER_PORT' in os.environ:
- pass # use MASTER_PORT in the environment variable
- else:
- # 29500 is torch.distributed default port
- os.environ['MASTER_PORT'] = '29500'
- # use MASTER_ADDR in the environment variable if it already exists
- if 'MASTER_ADDR' not in os.environ:
- os.environ['MASTER_ADDR'] = addr
- os.environ['WORLD_SIZE'] = str(ntasks)
- os.environ['LOCAL_RANK'] = str(proc_id % num_gpus)
- os.environ['RANK'] = str(proc_id)
- dist.init_process_group(backend=backend)
-
-
-def get_dist_info():
- if dist.is_available() and dist.is_initialized():
- rank = dist.get_rank()
- world_size = dist.get_world_size()
- else:
- rank = 0
- world_size = 1
- return rank, world_size
-
-
-def master_only(func):
-
- @functools.wraps(func)
- def wrapper(*args, **kwargs):
- rank, _ = get_dist_info()
- if rank == 0:
- return func(*args, **kwargs)
-
- return wrapper
-
-
-def allreduce_params(params, coalesce=True, bucket_size_mb=-1):
- """Allreduce parameters.
-
- Args:
- params (list[torch.Parameters]): List of parameters or buffers of a
- model.
- coalesce (bool, optional): Whether allreduce parameters as a whole.
- Defaults to True.
- bucket_size_mb (int, optional): Size of bucket, the unit is MB.
- Defaults to -1.
- """
- _, world_size = get_dist_info()
- if world_size == 1:
- return
- params = [param.data for param in params]
- if coalesce:
- _allreduce_coalesced(params, world_size, bucket_size_mb)
- else:
- for tensor in params:
- dist.all_reduce(tensor.div_(world_size))
-
-
-def allreduce_grads(params, coalesce=True, bucket_size_mb=-1):
- """Allreduce gradients.
-
- Args:
- params (list[torch.Parameters]): List of parameters of a model
- coalesce (bool, optional): Whether allreduce parameters as a whole.
- Defaults to True.
- bucket_size_mb (int, optional): Size of bucket, the unit is MB.
- Defaults to -1.
- """
- grads = [
- param.grad.data for param in params
- if param.requires_grad and param.grad is not None
- ]
- _, world_size = get_dist_info()
- if world_size == 1:
- return
- if coalesce:
- _allreduce_coalesced(grads, world_size, bucket_size_mb)
- else:
- for tensor in grads:
- dist.all_reduce(tensor.div_(world_size))
-
-
-def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1):
- if bucket_size_mb > 0:
- bucket_size_bytes = bucket_size_mb * 1024 * 1024
- buckets = _take_tensors(tensors, bucket_size_bytes)
- else:
- buckets = OrderedDict()
- for tensor in tensors:
- tp = tensor.type()
- if tp not in buckets:
- buckets[tp] = []
- buckets[tp].append(tensor)
- buckets = buckets.values()
-
- for bucket in buckets:
- flat_tensors = _flatten_dense_tensors(bucket)
- dist.all_reduce(flat_tensors)
- flat_tensors.div_(world_size)
- for tensor, synced in zip(
- bucket, _unflatten_dense_tensors(flat_tensors, bucket)):
- tensor.copy_(synced)
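For orientation, here is a brief usage sketch of the helpers defined above. It assumes the process was started by a launcher that sets the `RANK` environment variable (for example `torchrun`) and that a CUDA device is available; the entry-point name is made up for illustration.

```python
def distributed_entry_point():
    """Hypothetical driver showing how init_dist, get_dist_info and master_only fit together."""
    init_dist('pytorch', backend='nccl')     # reads RANK, binds this process to a GPU, joins the group
    rank, world_size = get_dist_info()

    @master_only
    def report(message):
        # Only rank 0 executes the body; all other ranks silently return None.
        print(f'[world_size={world_size}] {message}')

    report(f'distributed setup complete, this is rank {rank}')

    # Gradients can later be synchronised manually with:
    #   allreduce_grads(model.parameters(), coalesce=True, bucket_size_mb=16)
```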
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/iter_based_runner.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/iter_based_runner.py
deleted file mode 100644
index 1df4de8c0285669dec9b014dfd1f3dd1600f0831..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/iter_based_runner.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import platform
-import shutil
-import time
-import warnings
-
-import torch
-from torch.optim import Optimizer
-
-import annotator.uniformer.mmcv as mmcv
-from .base_runner import BaseRunner
-from .builder import RUNNERS
-from .checkpoint import save_checkpoint
-from .hooks import IterTimerHook
-from .utils import get_host_info
-
-
-class IterLoader:
-
- def __init__(self, dataloader):
- self._dataloader = dataloader
- self.iter_loader = iter(self._dataloader)
- self._epoch = 0
-
- @property
- def epoch(self):
- return self._epoch
-
- def __next__(self):
- try:
- data = next(self.iter_loader)
- except StopIteration:
- self._epoch += 1
- if hasattr(self._dataloader.sampler, 'set_epoch'):
- self._dataloader.sampler.set_epoch(self._epoch)
- time.sleep(2) # Prevent possible deadlock during epoch transition
- self.iter_loader = iter(self._dataloader)
- data = next(self.iter_loader)
-
- return data
-
- def __len__(self):
- return len(self._dataloader)
-
-
-@RUNNERS.register_module()
-class IterBasedRunner(BaseRunner):
- """Iteration-based Runner.
-
-    This runner trains models iteration by iteration.
- """
-
- def train(self, data_loader, **kwargs):
- self.model.train()
- self.mode = 'train'
- self.data_loader = data_loader
- self._epoch = data_loader.epoch
- data_batch = next(data_loader)
- self.call_hook('before_train_iter')
- outputs = self.model.train_step(data_batch, self.optimizer, **kwargs)
- if not isinstance(outputs, dict):
- raise TypeError('model.train_step() must return a dict')
- if 'log_vars' in outputs:
- self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
- self.outputs = outputs
- self.call_hook('after_train_iter')
- self._inner_iter += 1
- self._iter += 1
-
- @torch.no_grad()
- def val(self, data_loader, **kwargs):
- self.model.eval()
- self.mode = 'val'
- self.data_loader = data_loader
- data_batch = next(data_loader)
- self.call_hook('before_val_iter')
- outputs = self.model.val_step(data_batch, **kwargs)
- if not isinstance(outputs, dict):
- raise TypeError('model.val_step() must return a dict')
- if 'log_vars' in outputs:
- self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
- self.outputs = outputs
- self.call_hook('after_val_iter')
- self._inner_iter += 1
-
- def run(self, data_loaders, workflow, max_iters=None, **kwargs):
- """Start running.
-
- Args:
- data_loaders (list[:obj:`DataLoader`]): Dataloaders for training
- and validation.
- workflow (list[tuple]): A list of (phase, iters) to specify the
-                running order and iterations. E.g., [('train', 10000),
- ('val', 1000)] means running 10000 iterations for training and
- 1000 iterations for validation, iteratively.
- """
- assert isinstance(data_loaders, list)
- assert mmcv.is_list_of(workflow, tuple)
- assert len(data_loaders) == len(workflow)
- if max_iters is not None:
- warnings.warn(
- 'setting max_iters in run is deprecated, '
- 'please set max_iters in runner_config', DeprecationWarning)
- self._max_iters = max_iters
- assert self._max_iters is not None, (
- 'max_iters must be specified during instantiation')
-
- work_dir = self.work_dir if self.work_dir is not None else 'NONE'
- self.logger.info('Start running, host: %s, work_dir: %s',
- get_host_info(), work_dir)
- self.logger.info('Hooks will be executed in the following order:\n%s',
- self.get_hook_info())
- self.logger.info('workflow: %s, max: %d iters', workflow,
- self._max_iters)
- self.call_hook('before_run')
-
- iter_loaders = [IterLoader(x) for x in data_loaders]
-
- self.call_hook('before_epoch')
-
- while self.iter < self._max_iters:
- for i, flow in enumerate(workflow):
- self._inner_iter = 0
- mode, iters = flow
- if not isinstance(mode, str) or not hasattr(self, mode):
- raise ValueError(
- 'runner has no method named "{}" to run a workflow'.
- format(mode))
- iter_runner = getattr(self, mode)
- for _ in range(iters):
- if mode == 'train' and self.iter >= self._max_iters:
- break
- iter_runner(iter_loaders[i], **kwargs)
-
- time.sleep(1) # wait for some hooks like loggers to finish
- self.call_hook('after_epoch')
- self.call_hook('after_run')
-
- def resume(self,
- checkpoint,
- resume_optimizer=True,
- map_location='default'):
- """Resume model from checkpoint.
-
- Args:
- checkpoint (str): Checkpoint to resume from.
- resume_optimizer (bool, optional): Whether resume the optimizer(s)
- if the checkpoint file includes optimizer(s). Default to True.
- map_location (str, optional): Same as :func:`torch.load`.
- Default to 'default'.
- """
- if map_location == 'default':
- device_id = torch.cuda.current_device()
- checkpoint = self.load_checkpoint(
- checkpoint,
- map_location=lambda storage, loc: storage.cuda(device_id))
- else:
- checkpoint = self.load_checkpoint(
- checkpoint, map_location=map_location)
-
- self._epoch = checkpoint['meta']['epoch']
- self._iter = checkpoint['meta']['iter']
- self._inner_iter = checkpoint['meta']['iter']
- if 'optimizer' in checkpoint and resume_optimizer:
- if isinstance(self.optimizer, Optimizer):
- self.optimizer.load_state_dict(checkpoint['optimizer'])
- elif isinstance(self.optimizer, dict):
- for k in self.optimizer.keys():
- self.optimizer[k].load_state_dict(
- checkpoint['optimizer'][k])
- else:
- raise TypeError(
- 'Optimizer should be dict or torch.optim.Optimizer '
- f'but got {type(self.optimizer)}')
-
- self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}')
-
- def save_checkpoint(self,
- out_dir,
- filename_tmpl='iter_{}.pth',
- meta=None,
- save_optimizer=True,
- create_symlink=True):
- """Save checkpoint to file.
-
- Args:
- out_dir (str): Directory to save checkpoint files.
- filename_tmpl (str, optional): Checkpoint file template.
- Defaults to 'iter_{}.pth'.
- meta (dict, optional): Metadata to be saved in checkpoint.
- Defaults to None.
- save_optimizer (bool, optional): Whether save optimizer.
- Defaults to True.
- create_symlink (bool, optional): Whether create symlink to the
- latest checkpoint file. Defaults to True.
- """
- if meta is None:
- meta = {}
- elif not isinstance(meta, dict):
- raise TypeError(
- f'meta should be a dict or None, but got {type(meta)}')
- if self.meta is not None:
- meta.update(self.meta)
- # Note: meta.update(self.meta) should be done before
- # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise
- # there will be problems with resumed checkpoints.
- # More details in https://github.com/open-mmlab/mmcv/pull/1108
- meta.update(epoch=self.epoch + 1, iter=self.iter)
-
- filename = filename_tmpl.format(self.iter + 1)
- filepath = osp.join(out_dir, filename)
- optimizer = self.optimizer if save_optimizer else None
- save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta)
- # in some environments, `os.symlink` is not supported, you may need to
- # set `create_symlink` to False
- if create_symlink:
- dst_file = osp.join(out_dir, 'latest.pth')
- if platform.system() != 'Windows':
- mmcv.symlink(filename, dst_file)
- else:
- shutil.copy(filepath, dst_file)
-
- def register_training_hooks(self,
- lr_config,
- optimizer_config=None,
- checkpoint_config=None,
- log_config=None,
- momentum_config=None,
- custom_hooks_config=None):
- """Register default hooks for iter-based training.
-
- Checkpoint hook, optimizer stepper hook and logger hooks will be set to
- `by_epoch=False` by default.
-
- Default hooks include:
-
- +----------------------+-------------------------+
- | Hooks | Priority |
- +======================+=========================+
- | LrUpdaterHook | VERY_HIGH (10) |
- +----------------------+-------------------------+
- | MomentumUpdaterHook | HIGH (30) |
- +----------------------+-------------------------+
- | OptimizerStepperHook | ABOVE_NORMAL (40) |
- +----------------------+-------------------------+
- | CheckpointSaverHook | NORMAL (50) |
- +----------------------+-------------------------+
- | IterTimerHook | LOW (70) |
- +----------------------+-------------------------+
- | LoggerHook(s) | VERY_LOW (90) |
- +----------------------+-------------------------+
- | CustomHook(s) | defaults to NORMAL (50) |
- +----------------------+-------------------------+
-
- If custom hooks have same priority with default hooks, custom hooks
- will be triggered after default hooks.
- """
- if checkpoint_config is not None:
- checkpoint_config.setdefault('by_epoch', False)
- if lr_config is not None:
- lr_config.setdefault('by_epoch', False)
- if log_config is not None:
- for info in log_config['hooks']:
- info.setdefault('by_epoch', False)
- super(IterBasedRunner, self).register_training_hooks(
- lr_config=lr_config,
- momentum_config=momentum_config,
- optimizer_config=optimizer_config,
- checkpoint_config=checkpoint_config,
- log_config=log_config,
- timer_config=IterTimerHook(),
- custom_hooks_config=custom_hooks_config)
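To make the `run()` workflow contract concrete, here is a hedged sketch of driving this runner. The model, optimizer, and data loaders are assumed to exist, the model is assumed to implement `train_step`/`val_step`, and the hook configs follow the usual mmcv dict conventions; none of this is taken verbatim from the repository.

```python
import logging

def train_with_iter_runner(model, optimizer, train_loader, val_loader):
    """Sketch: 90k training iterations, validating for 1k iterations after every 10k train iterations."""
    runner = IterBasedRunner(
        model=model,
        optimizer=optimizer,
        work_dir='./work_dirs/demo',
        logger=logging.getLogger(__name__),
        max_iters=90000,
    )
    runner.register_training_hooks(
        lr_config=dict(policy='step', by_epoch=False, step=[60000, 80000]),
        optimizer_config=dict(grad_clip=None),
        checkpoint_config=dict(interval=10000),
        log_config=dict(interval=50, hooks=[dict(type='TextLoggerHook')]),
    )
    # Workflow format exactly as documented in run(): (phase, number of iterations).
    runner.run([train_loader, val_loader], [('train', 10000), ('val', 1000)])
```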
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Apex-X/Tm/roop/utilities.py b/spaces/Apex-X/Tm/roop/utilities.py
deleted file mode 100644
index 90c8d981f5f159a459ca0c08cc23dfac8d04c068..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/Tm/roop/utilities.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import glob
-import mimetypes
-import os
-import platform
-import shutil
-import ssl
-import subprocess
-import urllib
-from pathlib import Path
-from typing import List, Any
-from tqdm import tqdm
-
-import roop.globals
-
-TEMP_FILE = 'temp.mp4'
-TEMP_DIRECTORY = 'temp'
-
-# monkey patch ssl for mac
-if platform.system().lower() == 'darwin':
- ssl._create_default_https_context = ssl._create_unverified_context
-
-
-def run_ffmpeg(args: List[str]) -> bool:
- commands = ['ffmpeg', '-hide_banner', '-hwaccel', 'auto', '-loglevel', roop.globals.log_level]
- commands.extend(args)
- try:
- subprocess.check_output(commands, stderr=subprocess.STDOUT)
- return True
- except Exception:
- pass
- return False
-
-
-def detect_fps(target_path: str) -> float:
- command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path]
- output = subprocess.check_output(command).decode().strip().split('/')
- try:
- numerator, denominator = map(int, output)
- return numerator / denominator
- except Exception:
- pass
- return 30.0
-
-
-def extract_frames(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- run_ffmpeg(['-i', target_path, '-pix_fmt', 'rgb24', os.path.join(temp_directory_path, '%04d.png')])
-
-
-def create_video(target_path: str, fps: float = 30.0) -> None:
- temp_output_path = get_temp_output_path(target_path)
- temp_directory_path = get_temp_directory_path(target_path)
- run_ffmpeg(['-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.png'), '-c:v', roop.globals.video_encoder, '-crf', str(roop.globals.video_quality), '-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path])
-
-
-def restore_audio(target_path: str, output_path: str) -> None:
- temp_output_path = get_temp_output_path(target_path)
- done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path])
- if not done:
- move_temp(target_path, output_path)
-
-
-def get_temp_frame_paths(target_path: str) -> List[str]:
- temp_directory_path = get_temp_directory_path(target_path)
- return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.png')))
-
-
-def get_temp_directory_path(target_path: str) -> str:
- target_name, _ = os.path.splitext(os.path.basename(target_path))
- target_directory_path = os.path.dirname(target_path)
- return os.path.join(target_directory_path, TEMP_DIRECTORY, target_name)
-
-
-def get_temp_output_path(target_path: str) -> str:
- temp_directory_path = get_temp_directory_path(target_path)
- return os.path.join(temp_directory_path, TEMP_FILE)
-
-
-def normalize_output_path(source_path: str, target_path: str, output_path: str) -> Any:
- if source_path and target_path:
- source_name, _ = os.path.splitext(os.path.basename(source_path))
- target_name, target_extension = os.path.splitext(os.path.basename(target_path))
- if os.path.isdir(output_path):
- return os.path.join(output_path, source_name + '-' + target_name + target_extension)
- return output_path
-
-
-def create_temp(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- Path(temp_directory_path).mkdir(parents=True, exist_ok=True)
-
-
-def move_temp(target_path: str, output_path: str) -> None:
- temp_output_path = get_temp_output_path(target_path)
- if os.path.isfile(temp_output_path):
- if os.path.isfile(output_path):
- os.remove(output_path)
- shutil.move(temp_output_path, output_path)
-
-
-def clean_temp(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- parent_directory_path = os.path.dirname(temp_directory_path)
- if not roop.globals.keep_frames and os.path.isdir(temp_directory_path):
- shutil.rmtree(temp_directory_path)
- if os.path.exists(parent_directory_path) and not os.listdir(parent_directory_path):
- os.rmdir(parent_directory_path)
-
-
-def has_image_extension(image_path: str) -> bool:
- return image_path.lower().endswith(('png', 'jpg', 'jpeg', 'webp'))
-
-
-def is_image(image_path: str) -> bool:
- if image_path and os.path.isfile(image_path):
- mimetype, _ = mimetypes.guess_type(image_path)
- return bool(mimetype and mimetype.startswith('image/'))
- return False
-
-
-def is_video(video_path: str) -> bool:
- if video_path and os.path.isfile(video_path):
- mimetype, _ = mimetypes.guess_type(video_path)
- return bool(mimetype and mimetype.startswith('video/'))
- return False
-
-
-def conditional_download(download_directory_path: str, urls: List[str]) -> None:
- if not os.path.exists(download_directory_path):
- os.makedirs(download_directory_path)
- for url in urls:
- download_file_path = os.path.join(download_directory_path, os.path.basename(url))
- if not os.path.exists(download_file_path):
- request = urllib.request.urlopen(url) # type: ignore[attr-defined]
- total = int(request.headers.get('Content-Length', 0))
- with tqdm(total=total, desc='Downloading', unit='B', unit_scale=True, unit_divisor=1024) as progress:
- urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size)) # type: ignore[attr-defined]
-
-
-def resolve_relative_path(path: str) -> str:
- return os.path.abspath(os.path.join(os.path.dirname(__file__), path))
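As a quick illustration of what the temp-path helpers above produce (the file name here is made up):

```python
target = '/videos/clip.mp4'

print(get_temp_directory_path(target))  # /videos/temp/clip           (TEMP_DIRECTORY joined with the stem)
print(get_temp_output_path(target))     # /videos/temp/clip/temp.mp4  (TEMP_FILE inside that directory)
```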
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/johabprober.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/johabprober.py
deleted file mode 100644
index d7364ba61eca930aa1c868abe3b322cceb995a6b..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/johabprober.py
+++ /dev/null
@@ -1,47 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is mozilla.org code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .chardistribution import JOHABDistributionAnalysis
-from .codingstatemachine import CodingStateMachine
-from .mbcharsetprober import MultiByteCharSetProber
-from .mbcssm import JOHAB_SM_MODEL
-
-
-class JOHABProber(MultiByteCharSetProber):
- def __init__(self) -> None:
- super().__init__()
- self.coding_sm = CodingStateMachine(JOHAB_SM_MODEL)
- self.distribution_analyzer = JOHABDistributionAnalysis()
- self.reset()
-
- @property
- def charset_name(self) -> str:
- return "Johab"
-
- @property
- def language(self) -> str:
- return "Korean"
diff --git a/spaces/Avkash/WhisperUI/app.py b/spaces/Avkash/WhisperUI/app.py
deleted file mode 100644
index 475c968b855fbe242305f74aa33e003a54e81604..0000000000000000000000000000000000000000
--- a/spaces/Avkash/WhisperUI/app.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import gradio as gr
-from whisperui import WhisperModelUI
-
-
-if __name__ == '__main__':
- my_app = gr.Blocks()
- ui_obj = WhisperModelUI(my_app)
- ui_obj.create_whisper_ui()
- ui_obj.launch_ui()
-
diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/train_ms.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/train_ms.py
deleted file mode 100644
index 395f17b791002eb99c07fca2c922d43e2c56f41e..0000000000000000000000000000000000000000
--- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/train_ms.py
+++ /dev/null
@@ -1,396 +0,0 @@
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from tqdm import tqdm
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-import commons
-import utils
-from data_utils import (
- TextAudioSpeakerLoader,
- TextAudioSpeakerCollate,
- DistributedBucketSampler
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
- DurationDiscriminator,
-)
-from losses import (
- generator_loss,
- discriminator_loss,
- feature_loss,
- kl_loss
-)
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-torch.backends.cudnn.benchmark = True
-torch.backends.cuda.matmul.allow_tf32 = True
-torch.backends.cudnn.allow_tf32 = True
-torch.set_float32_matmul_precision('medium')
-torch.backends.cuda.sdp_kernel("flash")
-torch.backends.cuda.enable_flash_sdp(True)
-torch.backends.cuda.enable_mem_efficient_sdp(True)
-torch.backends.cuda.enable_math_sdp(True)
-global_step = 0
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = '65280'
-
- hps = utils.get_hparams()
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- dist.init_process_group(backend='gloo', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32, 300, 400, 500, 600, 700, 800, 900, 1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True)
- collate_fn = TextAudioSpeakerCollate()
- train_loader = DataLoader(train_dataset, num_workers=4, shuffle=False, pin_memory=True,
- collate_fn=collate_fn, batch_sampler=train_sampler,
-                              persistent_workers=True, prefetch_factor=4)  # data loader settings sized for a machine with ~256 GB of RAM
- if rank == 0:
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data)
- eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False,
- batch_size=1, pin_memory=True,
- drop_last=False, collate_fn=collate_fn)
- if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True:
- print("Using noise scaled MAS for VITS2")
- use_noise_scaled_mas = True
- mas_noise_scale_initial = 0.01
- noise_scale_delta = 2e-6
- else:
- print("Using normal MAS for VITS1")
- use_noise_scaled_mas = False
- mas_noise_scale_initial = 0.0
- noise_scale_delta = 0.0
- if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True:
- print("Using duration discriminator for VITS2")
- use_duration_discriminator = True
- net_dur_disc = DurationDiscriminator(
- hps.model.hidden_channels,
- hps.model.hidden_channels,
- 3,
- 0.1,
- gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0,
- ).cuda(rank)
- if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True:
- if hps.data.n_speakers == 0:
- raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model")
- use_spk_conditioned_encoder = True
- else:
- print("Using normal encoder for VITS1")
- use_spk_conditioned_encoder = False
-
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- mas_noise_scale_initial = mas_noise_scale_initial,
- noise_scale_delta = noise_scale_delta,
- **hps.model).cuda(rank)
-
- freeze_enc = getattr(hps.model, "freeze_enc", False)
- if freeze_enc:
- print("freeze encoder !!!")
- for param in net_g.enc_p.parameters():
- param.requires_grad = False
-
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
- optim_g = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, net_g.parameters()),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- if net_dur_disc is not None:
- optim_dur_disc = torch.optim.AdamW(
- net_dur_disc.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- else:
- optim_dur_disc = None
- net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
- net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
- if net_dur_disc is not None:
- net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True)
- try:
- if net_dur_disc is not None:
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=True)
- _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g,
- optim_g, skip_optimizer=True)
- _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d,
- optim_d, skip_optimizer=True)
-
- epoch_str = max(epoch_str, 1)
- global_step = (epoch_str - 1) * len(train_loader)
- except Exception as e:
- print(e)
- epoch_str = 1
- global_step = 0
-
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay)
- scheduler_g.last_epoch = epoch_str - 2
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay)
- scheduler_d.last_epoch = epoch_str - 2
- if net_dur_disc is not None:
- scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay)
- scheduler_dur_disc.last_epoch = epoch_str - 2
- else:
- scheduler_dur_disc = None
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
- if net_dur_disc is not None:
- scheduler_dur_disc.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
- net_g, net_d, net_dur_disc = nets
- optim_g, optim_d, optim_dur_disc = optims
- scheduler_g, scheduler_d, scheduler_dur_disc = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- if net_dur_disc is not None:
- net_dur_disc.train()
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)):
- if net_g.module.use_noise_scaled_mas:
- current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step
- net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0)
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
- speakers = speakers.cuda(rank, non_blocking=True)
- tone = tone.cuda(rank, non_blocking=True)
- language = language.cuda(rank, non_blocking=True)
- bert = bert.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, l_length, attn, ids_slice, x_mask, z_mask, \
- (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert)
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach())
- with autocast(enabled=False):
- # TODO: I think need to mean using the mask, but for now, just mean all
-                    # TODO: the mean should probably be taken over the mask only; for now we simply average over everything
- loss_dur_disc_all = loss_dur_disc
- optim_dur_disc.zero_grad()
- scaler.scale(loss_dur_disc_all).backward()
- scaler.unscale_(optim_dur_disc)
- grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None)
- scaler.step(optim_dur_disc)
-
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
- if net_dur_disc is not None:
- loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g)
- loss_gen_all += loss_dur_gen
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr,
- "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
- scalar_dict.update(
- {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
-
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy())
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict)
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- if net_dur_disc is not None:
- utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)))
- keep_ckpts = getattr(hps.train, 'keep_ckpts', 5)
- if keep_ckpts > 0:
- utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True)
-
-
- global_step += 1
-
- if rank == 0:
- logger.info('====> Epoch: {}'.format(epoch))
-
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- image_dict = {}
- audio_dict = {}
- print("Evaluating ...")
- with torch.no_grad():
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader):
- x, x_lengths = x.cuda(), x_lengths.cuda()
- spec, spec_lengths = spec.cuda(), spec_lengths.cuda()
- y, y_lengths = y.cuda(), y_lengths.cuda()
- speakers = speakers.cuda()
- bert = bert.cuda()
- tone = tone.cuda()
- language = language.cuda()
- for use_sdp in [True, False]:
- y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0)
- y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- image_dict.update({
- f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
- })
- audio_dict.update({
- f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]]
- })
- image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
- audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-if __name__ == "__main__":
- main()
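One detail of the loop above that is easy to miss: the MAS noise scale decays linearly with the global step and is clamped at zero, so with the defaults chosen in `run()` (initial 0.01, delta 2e-6) it vanishes after roughly 5,000 steps. A tiny sketch of that schedule:

```python
def current_mas_noise_scale(step, initial=0.01, delta=2e-6):
    # Mirrors train_and_evaluate(): linear decay in the global step, floored at zero.
    return max(initial - delta * step, 0.0)

print(current_mas_noise_scale(0))      # 0.01  - start of training
print(current_mas_noise_scale(2500))   # 0.005 - halfway through the decay
print(current_mas_noise_scale(6000))   # 0.0   - fully decayed after ~5000 steps
```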
diff --git a/spaces/Banbri/zcvzcv/src/components/ui/vertical-slider.tsx b/spaces/Banbri/zcvzcv/src/components/ui/vertical-slider.tsx
deleted file mode 100644
index b28a1200cb06d1f26e3c640c85e655c99e88954e..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/components/ui/vertical-slider.tsx
+++ /dev/null
@@ -1,27 +0,0 @@
-"use client"
-
-import * as React from "react"
-import * as SliderPrimitive from "@radix-ui/react-slider"
-
-import { cn } from "@/lib/utils"
-
-const VerticalSlider = React.forwardRef<
-  React.ElementRef<typeof SliderPrimitive.Root>,
-  React.ComponentPropsWithoutRef<typeof SliderPrimitive.Root>
->(({ className, ...props }, ref) => (
-
-
-
-
-
-
-))
-VerticalSlider.displayName = "VerticalSlider"
-export { VerticalSlider }
diff --git a/spaces/Bart92/RVC_HF/infer/lib/train/mel_processing.py b/spaces/Bart92/RVC_HF/infer/lib/train/mel_processing.py
deleted file mode 100644
index f458775bf62b79f791b419ca7ed62c550ae252d5..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer/lib/train/mel_processing.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-import logging
-
-logger = logging.getLogger(__name__)
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- return dynamic_range_compression_torch(magnitudes)
-
-
-def spectral_de_normalize_torch(magnitudes):
- return dynamic_range_decompression_torch(magnitudes)
-
-
-# Reusable banks
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- """Convert waveform into Linear-frequency Linear-amplitude spectrogram.
-
- Args:
- y :: (B, T) - Audio waveforms
- n_fft
- sampling_rate
- hop_size
- win_size
- center
- Returns:
- :: (B, Freq, Frame) - Linear-frequency Linear-amplitude spectrogram
- """
- # Validation
- if torch.min(y) < -1.07:
- logger.debug("min value is %s", str(torch.min(y)))
- if torch.max(y) > 1.07:
- logger.debug("max value is %s", str(torch.max(y)))
-
- # Window - Cache if needed
- global hann_window
- dtype_device = str(y.dtype) + "_" + str(y.device)
- wnsize_dtype_device = str(win_size) + "_" + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
- dtype=y.dtype, device=y.device
- )
-
- # Padding
- y = torch.nn.functional.pad(
- y.unsqueeze(1),
- (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
- mode="reflect",
- )
- y = y.squeeze(1)
-
- # Complex Spectrogram :: (B, T) -> (B, Freq, Frame, RealComplex=2)
- spec = torch.stft(
- y,
- n_fft,
- hop_length=hop_size,
- win_length=win_size,
- window=hann_window[wnsize_dtype_device],
- center=center,
- pad_mode="reflect",
- normalized=False,
- onesided=True,
- return_complex=False,
- )
-
- # Linear-frequency Linear-amplitude spectrogram :: (B, Freq, Frame, RealComplex=2) -> (B, Freq, Frame)
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- # MelBasis - Cache if needed
- global mel_basis
- dtype_device = str(spec.dtype) + "_" + str(spec.device)
- fmax_dtype_device = str(fmax) + "_" + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(
- sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax
- )
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
- dtype=spec.dtype, device=spec.device
- )
-
- # Mel-frequency Log-amplitude spectrogram :: (B, Freq=num_mels, Frame)
- melspec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- melspec = spectral_normalize_torch(melspec)
- return melspec
-
-
-def mel_spectrogram_torch(
- y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False
-):
- """Convert waveform into Mel-frequency Log-amplitude spectrogram.
-
- Args:
- y :: (B, T) - Waveforms
- Returns:
- melspec :: (B, Freq, Frame) - Mel-frequency Log-amplitude spectrogram
- """
- # Linear-frequency Linear-amplitude spectrogram :: (B, T) -> (B, Freq, Frame)
- spec = spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center)
-
- # Mel-frequency Log-amplitude spectrogram :: (B, Freq, Frame) -> (B, Freq=num_mels, Frame)
- melspec = spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax)
-
- return melspec
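A short usage sketch of the two entry points above, with tensor shapes following the docstrings; the hyperparameter values here are illustrative (roughly VITS-style defaults), not taken from this repository's config files.

```python
import torch

# Batch of 2 one-second waveforms, shape (B, T) as described in the docstrings.
y = torch.randn(2, 22050).clamp_(-1.0, 1.0)

spec = spectrogram_torch(y, n_fft=1024, sampling_rate=22050, hop_size=256, win_size=1024)
mel = mel_spectrogram_torch(y, n_fft=1024, num_mels=80, sampling_rate=22050,
                            hop_size=256, win_size=1024, fmin=0.0, fmax=None)

print(spec.shape)  # (2, 513, frames) - linear-frequency bins = n_fft // 2 + 1
print(mel.shape)   # (2, 80, frames)  - mel bins = num_mels
```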
diff --git a/spaces/Benson/text-generation/Examples/5apps.md b/spaces/Benson/text-generation/Examples/5apps.md
deleted file mode 100644
index ae70f8a308b301df3e7d229342db5d80472fcd1c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/5apps.md
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
5apps: Una plataforma para crear y alojar aplicaciones web del lado del cliente
-
Si usted es un desarrollador web que ama el uso de tecnologías de plataformas web como JavaScript, HTML5 y CSS, es posible que esté interesado en 5apps. 5apps es una plataforma que ofrece tres servicios para ayudarle a crear, implementar, alojar y administrar sus aplicaciones web del lado del cliente. En este artículo, explicaremos qué es 5apps, por qué deberías usarlas y cómo empezar.
-
¿Qué es 5apps?
-
5apps es una plataforma que ofrece tres servicios para desarrolladores web:
5apps Implementar: Una plataforma de implementación y alojamiento llave en mano para aplicaciones web del lado del cliente. Puedes usar cualquier framework que quieras, y simplemente pulsar tu código a través de Git. 5apps configurará e implementará su aplicación en todos los formatos disponibles y la preparará para su envío a las tiendas.
-
5apps Almacenamiento: Una nube de datos personales basada en remoteStorage, un protocolo abierto para el almacenamiento de datos de usuario. Puede permitir que cualquier aplicación compatible acceda a su cuenta, y puede mover sus datos a cualquier proveedor o servidor compatible que desee, en cualquier momento.
-
5apps Noticias: Un sitio de noticias sociales para HTML5, JS y amigos. Puede mantenerse actualizado sobre las últimas tendencias y tecnologías, compartir y discutir sus propios proyectos e ideas, y unirse a una comunidad de desarrolladores de ideas afines.
-
-
¿Por qué usar 5apps?
-
Usar 5apps para tus proyectos de desarrollo web tiene muchos beneficios. Estos son algunos de ellos:
-
Beneficios de la implementación de 5apps
-
-
Entrega de aplicaciones profesionales: Hay más en la entrega de aplicaciones web que alojar archivos estáticos. 5apps maneja todos los detalles técnicos para usted, como certificados SSL, almacenamiento en caché, compresión, entrega de CDN, encabezados CORS, trabajadores de servicio, archivos de manifiesto, etc.
-
-
Gratis para código abierto: Si elige una licencia de código abierto para su aplicación, 5apps la alojará e implementará de forma gratuita. No hay límites, el acceso del equipo incluido.
-
-
Ventajas del almacenamiento de 5apps
-
-
Propiedad y portabilidad de datos: Tienes control total sobre tus datos. Puedes elegir dónde guardarlo, cómo acceder a él y con quién compartirlo. También puede cambiar de proveedor o servidor en cualquier momento que desee, sin perder sus datos o romper sus aplicaciones.
-
Conectar y autorizar aplicaciones: Puede conectar su cuenta de almacenamiento a cualquier aplicación que admita remoteStorage. También puede dar o revocar el permiso a aplicaciones específicas para acceder a partes específicas de su almacenamiento.
-
Administrar aplicaciones y datos: Puede ver todas las aplicaciones que están conectadas a su cuenta de almacenamiento y administrar sus datos en una interfaz web. También puede sincronizar sus datos entre dispositivos y respaldarlos.
-
-
Beneficios de 5apps Noticias
-
-
Manténgase actualizado sobre las últimas tendencias y tecnologías: Puede navegar, buscar y filtrar artículos de noticias de varias fuentes relacionadas con HTML5, JS y otras tecnologías de plataformas web. También puede suscribirse a RSS feeds y boletines.
-
Comparte y discute tus propios proyectos e ideas: Puedes enviar tus propios artículos, proyectos, tutoriales, demos, etc. a 5apps News y obtener comentarios de otros desarrolladores. También puedes comentar sobre otros envíos y votar por los que te gustan.
-
Únete a una comunidad de desarrolladores de ideas afines: Puedes seguir a otros usuarios, unirte a grupos, chatear con otros y participar en eventos y desafíos. También puedes ganar insignias y puntos de reputación por tus contribuciones.
-
-
¿Cómo empezar con 5apps?
-
Comenzar con 5apps es fácil y rápido. Estos son los pasos que debes seguir:
-
Registrarse para una cuenta gratuita
-
-
Elegir un servicio (Implementar, Almacenamiento o Noticias)
-
Puede elegir qué servicio desea usar primero desde el panel. Puede cambiar entre ellos en cualquier momento.
-
Siga las instrucciones y la documentación
-
Cada servicio tiene sus propias instrucciones y documentación para ayudarle a empezar. Puede encontrarlas en el sitio web o en la aplicación. Por ejemplo, para Implementar, necesitarás crear un repositorio, agregar una clave de implementación, enviar tu código y configurar tu aplicación. Para Almacenamiento, necesitarás crear una cuenta de almacenamiento, conectar aplicaciones y administrar tus datos. Para Noticias, necesitarás navegar, enviar, comentar y votar sobre los artículos.
-
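As mentioned above, deploying boils down to a handful of Git commands. The sketch below drives them from Python; the remote URL is a placeholder (5apps shows you the real repository URL in the dashboard when you create one), so treat this as an illustration rather than exact instructions.

```python
# Rough sketch of the Deploy workflow, assuming a hypothetical remote URL.
import subprocess

DEPLOY_REMOTE = "git@5apps.example:myapp.git"  # placeholder, not a real URL

def run(*cmd):
    """Run a git command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

run("git", "init")
run("git", "add", ".")
run("git", "commit", "-m", "Initial deploy")
run("git", "remote", "add", "5apps", DEPLOY_REMOTE)
run("git", "push", "5apps", "master")  # pushing triggers the deployment
```
-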
Conclusion
-
5apps is a platform that offers three services for web developers who love building with web platform technologies: Deploy, Storage, and News. With 5apps, you can build, deploy, host, and manage your client-side web apps, own and control your data in a personal cloud, and stay up to date and connected with a community of like-minded developers. If you are interested in trying 5apps, sign up for a free account today and start building great web apps!
-
Frequently asked questions
-
-
What are the pricing plans for 5apps?
-
5apps offers a free plan for open-source apps and personal data storage. It also offers paid plans for private apps and more storage space. You can check the pricing on the website.
-
-
What are the technical requirements for using 5apps?
-
You will need a modern web browser that supports HTML5, JS, and CSS features. You will also need a Git client to push your code to Deploy. For Storage, you will need apps that support the remoteStorage protocol.
-
What are some examples of apps that use 5apps?
-
-
How can I contact 5apps support?
-
You can contact 5apps support by email at support@5apps.com or on Twitter at @5apps. You can also check the FAQ section on the website or the documentation for each service.
-
How can I contribute to 5apps?
-
You can contribute to 5apps by using it, sharing it with others, giving feedback, reporting bugs, suggesting features, writing articles, creating apps, and more. You can also join the 5apps community on News or GitHub.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Capcut Video Editor Apk Free Download.md b/spaces/Benson/text-generation/Examples/Capcut Video Editor Apk Free Download.md
deleted file mode 100644
index 3370ba6361b15a28dc7fa7869cb12a725cd1d40a..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Capcut Video Editor Apk Free Download.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
CapCut Video Editor APK Free Download: A Powerful and Easy-to-Use Tool for TikTok and More
-
If you are looking for a free video editor and video maker app that can help you create stunning, high-quality videos for TikTok, Instagram, YouTube, or any other social media platform, you might want to check out CapCut Video Editor APK. CapCut is TikTok's official video editing app, and it offers a wide range of features and functions to make your videos stand out. In this article, we will tell you what CapCut Video Editor APK is, what features it has, how to download and install it, and what its pros and cons are.
CapCut Video Editor APK is an Android app that lets you edit and make videos with ease. It is developed by Bytedance Pte. Ltd., the same company that owns TikTok, and it is designed to be compatible with TikTok's format and style. You can use CapCut to trim, cut, merge, speed up, slow down, zoom in, zoom out, reverse, freeze, animate, and add transitions, effects, filters, stickers, text, music, sound effects, and more to your videos. You can also use CapCut's advanced features, such as keyframe animation, smooth slow motion, chroma key, picture-in-picture (PIP), stabilization, auto captions, background removal, trending styles, and more to create professional-looking videos. You can export your videos in HD quality (up to 4K 60fps) and share them directly to TikTok or other social media platforms.
-
Features of CapCut Video Editor APK
-
CapCut Video Editor APK has many features that make it a versatile and powerful video editing tool. Here are some of the main features you can enjoy with CapCut:
-
Basic video editing
-
-
Trim and crop clips, and split or merge videos.
-
Adjust video speed from 0.1x to 100x, and apply speed curves to clips.
-
Animate video clips with eye-catching zoom in/out effects.
-
-
Highlight the best moments of clips and vlogs with the freeze feature.
-
Explore transition options with impressive effects at the cut points between clips.
-
-
Advanced video editing
-
-
Keyframe video animation is available for all settings.
-
Edit videos to create smooth slow motion with the optical-flow feature and the speed-curve tool.
-
Use chroma key to remove specific colors from videos.
-
Apply the picture-in-picture (PIP) feature to add video and photo layers above the main clip and splice them in easily.
-
The stabilization feature keeps video footage steady.
-
-
Special features
-
-
Auto captions: automatic speech recognition and subtitling for your videos.
-
Background removal: automatically removes people from videos for free.
-
Trending styles: enjoy creative, constantly updated options such as 3D zoom, auto velocity, and more.
-
-
Text and stickers
-
-
Add text to videos with different fonts and styles, and find the best caption font with unique text templates. Caption font formats can be imported.
-
Captions can be added to the timeline of the video tracks and can be moved and adjusted in a single step.
-
Add stickers to videos from a huge sticker library or import your own stickers.
-
-
Trending effects and filters
-
-
Combine video content with diverse filters that are updated weekly with the latest trends and seasons.
-
Add effects to videos from a variety of options, such as glitch, VHS, retro, neon, and more.
-
-
Music and sound effects
-
-
Add music to videos from a huge song library or import your own music.
-
Adjust the volume of the music and of the video's original sound.
-
Add sound effects to videos from a variety of categories, such as animals, cartoons, explosions, and more.
-
-
-
How do you download and install CapCut Video Editor APK?
-
If you want to download and install CapCut Video Editor APK on your Android device, you can follow these simple steps:
-
-
Go to the official CapCut website or any other trusted source that provides the CapCut APK file.
-
Download the CapCut APK file to your device.
-
Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and turning it on.
-
Locate the downloaded APK file on your device and tap it to start the installation process.
-
Follow the on-screen instructions and wait for the installation to complete.
-
Launch CapCut Video Editor APK and enjoy editing and creating videos.
-
-
Pros and cons of CapCut Video Editor APK
-
CapCut Video Editor APK is a great app for editing and making videos, but it also has some pros and cons that you should be aware of. Here are some of them:
-
-
Pros
Cons
-
- Free to use and download
- May contain ads and in-app purchases
-
- Compatible with TikTok and other social media platforms
- May not work on some devices or in some regions
-
- Offers a wide range of features and functions
- May consume a lot of storage space and battery power
-
- Supports HD-quality (up to 4K 60fps) video export
- May have some bugs or technical glitches
-
- User-friendly, easy-to-use interface
- May require an internet connection for some features
-
-
Conclusion
-
-
Frequently asked questions
-
Here are some frequently asked questions about CapCut Video Editor APK:
-
-
Is CapCut Video Editor APK safe to use?
-
Yes, CapCut Video Editor APK is safe to use as long as you download it from the official website or another trusted source. However, you should always be careful when installing apps from unknown sources and check the permissions they request.
-
-
Is CapCut Video Editor APK available for iOS devices?
-
No, CapCut Video Editor APK is only available for Android devices. However, there is an iOS version of CapCut that you can download from the App Store.
-
Can I use CapCut Video Editor APK offline?
-
Yes, you can use CapCut Video Editor APK offline for most of its features. However, some features may require an internet connection, such as downloading music, stickers, effects, filters, and so on.
-
How can I share the videos I make with CapCut Video Editor APK?
-
You can share videos made with CapCut Video Editor APK directly to TikTok or any other social media platform. You can also save your videos to your device or upload them to cloud storage services.
-
How can I contact the developers of CapCut Video Editor APK?
-
You can contact the developers of CapCut Video Editor APK by sending an email to feedback@capcut.com or by visiting their official website at https://www.capcut.com/.
", unsafe_allow_html=True)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/ema.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/ema.py
deleted file mode 100644
index 66d101120c7e3f719a1db5d4cf8f79a4f5016052..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/ema.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# --------------------------------------------------------
-# Stable-Diffusion-Torch
-# Based on Stable-Diffusion (https://github.com/CompVis/stable-diffusion)
-# Modified by Zigang Geng (zigang@mail.ustc.edu.cn) and Tiankai Hang
-# --------------------------------------------------------
-
-import torch
-from torch import nn
-
-
-class LitEma(nn.Module):
- def __init__(self, model, decay=0.9999, decay_resume=0.9999, use_num_upates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError('Decay must be between 0 and 1')
-
- self.m_name2s_name = {}
- self.decay_resume = decay_resume
- self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
- self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates
- else torch.tensor(-1,dtype=torch.int))
-
- for name, p in model.named_parameters():
- if p.requires_grad:
- #remove as '.'-character is not allowed in buffers
- s_name = name.replace('.','')
- self.m_name2s_name.update({name:s_name})
- self.register_buffer(s_name,p.clone().float().detach().data)
-
- self.collected_params = []
-
- def forward(self, model):
- decay = self.decay
-
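-        # EMA warm-up: while num_updates is tracked, the effective decay is the
-        # smaller of `decay` and (1 + n) / (10 + n), so the shadow weights follow
-        # the model closely early in training and only later settle at `decay`.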
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates))
-
- if self.decay_resume != 0.9999:
- decay = min(self.decay_resume,(1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
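-            # In-place EMA update: shadow <- shadow - (1 - decay) * (shadow - param),
-            # i.e. shadow = decay * shadow + (1 - decay) * param. key[7:] strips the
-            # "module." prefix added when the model is wrapped in (Distributed)DataParallel.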
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key[7:]]
- shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
- else:
- assert not key in self.m_name2s_name
-
- def copy_to(self, model, test=False):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- if test:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key[7:]]].data)
- else:
- assert not key in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- # self.collected_params = [param.clone() for param in parameters]
- self.collected_params = [param.clone().detach().cpu() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- # param.data.copy_(c_param.data)
- param.data.copy_(c_param.data.clone().to(param.device))
\ No newline at end of file
diff --git a/spaces/Kevin676/Voice-Cloning-with-Voice-Fixer/README.md b/spaces/Kevin676/Voice-Cloning-with-Voice-Fixer/README.md
deleted file mode 100644
index 84a45cda207b3a0da73829b49bf21b11bf23a460..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Voice-Cloning-with-Voice-Fixer/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Voice Cloning
-emoji: ⚡
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: true
-license: mit
-duplicated_from: BilalSardar/Voice-Cloning
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/audio.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/audio.py
deleted file mode 100644
index f4a1c18b2888947ece8b15594ead0c4c5166cb57..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/audio.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import librosa
-import numpy as np
-import av
-from io import BytesIO
-import ffmpeg
-import os
-import sys
-
-import random
-from lib.infer.infer_libs.csvutil import CSVutil
-#import csv
-
-platform_stft_mapping = {
- 'linux': 'stftpitchshift',
- 'darwin': 'stftpitchshift',
- 'win32': 'stftpitchshift.exe',
-}
-
-stft = platform_stft_mapping.get(sys.platform)
-
-def wav2(i, o, format):
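-    # Transcode the audio in file-like object `i` into container/codec `format`,
-    # writing to file-like object `o` via PyAV (m4a/mp4 -> aac, ogg -> libvorbis).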
- inp = av.open(i, 'rb')
- if format == "m4a": format = "mp4"
- out = av.open(o, 'wb', format=format)
- if format == "ogg": format = "libvorbis"
- if format == "mp4": format = "aac"
-
- ostream = out.add_stream(format)
-
- for frame in inp.decode(audio=0):
- for p in ostream.encode(frame): out.mux(p)
-
- for p in ostream.encode(None): out.mux(p)
-
- out.close()
- inp.close()
-
-def audio2(i, o, format, sr):
- inp = av.open(i, 'rb')
- out = av.open(o, 'wb', format=format)
- if format == "ogg": format = "libvorbis"
- if format == "f32le": format = "pcm_f32le"
-
- ostream = out.add_stream(format, channels=1)
- ostream.sample_rate = sr
-
- for frame in inp.decode(audio=0):
- for p in ostream.encode(frame): out.mux(p)
-
- out.close()
- inp.close()
-
-def load_audion(file, sr):
- try:
- file = (
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )  # guard against copied paths carrying stray leading/trailing spaces, quotes, or newlines
- with open(file, "rb") as f:
- with BytesIO() as out:
- audio2(f, out, "f32le", sr)
- return np.frombuffer(out.getvalue(), np.float32).flatten()
-
- except AttributeError:
- audio = file[1] / 32768.0
- if len(audio.shape) == 2:
- audio = np.mean(audio, -1)
- return librosa.resample(audio, orig_sr=file[0], target_sr=16000)
-
- except Exception as e:
- raise RuntimeError(f"Failed to load audio: {e}")
-
-
-
-
-def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0):
- converted = False
- DoFormant, Quefrency, Timbre = CSVutil("lib/csvdb/formanting.csv", "r", "formanting")
- DoFormant, Quefrency, Timbre = bool(DoFormant), float(Quefrency), float(Timbre)
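-    # Note: the formant settings passed in as arguments are immediately replaced by
-    # the values read from lib/csvdb/formanting.csv just above.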
-
- try:
- file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-
- if not file.endswith(".wav"):
- converted = True
-            # Format conversion using ffmpeg
- converting = (
- ffmpeg.input(file, threads=0)
- .output(f"{file}.wav")
- .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
- )
- file = f"{file}.wav"
- print(f" · File converted to Wav format: {file}\n")
-
- if DoFormant == False:
-            # Formant processing using stftpitchshift
- command = (
- f'{stft} -i "{file}" -q "{Quefrency}" '
- f'-t "{Timbre}" -o "{file}FORMANTED.wav"'
- )
- os.system(command)
- file = f"{file}FORMANTED.wav"
- print(f" · Formanted {file}!\n")
-
- with open(file, "rb") as f:
- with BytesIO() as out:
- audio2(f, out, "f32le", sr)
- audio_data = np.frombuffer(out.getvalue(), np.float32).flatten()
-
- if converted:
- try: os.remove(file)
- except Exception as e: pass; print(f"Couldn't remove converted type of file due to {e}")
- converted = False
-
- return audio_data
- except AttributeError:
- audio = file[1] / 32768.0
- if len(audio.shape) == 2:
- audio = np.mean(audio, -1)
- return librosa.resample(audio, orig_sr=file[0], target_sr=16000)
- except Exception as e:
- raise RuntimeError(f"Failed to load audio: {e}")
-
-
-def check_audio_duration(file):
- try:
- file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-
- probe = ffmpeg.probe(file)
-
- duration = float(probe['streams'][0]['duration'])
-
- if duration < 0.76:
- print(
- f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results."
- )
- return False
-
- return True
- except Exception as e:
- raise RuntimeError(f"Failed to check audio duration: {e}")
\ No newline at end of file
diff --git a/spaces/Letheoricien/MLPC_2023_NATHEO/app.py b/spaces/Letheoricien/MLPC_2023_NATHEO/app.py
deleted file mode 100644
index 9ddb3fc3dfe9af5e13cbc1ca54c445e06a62774a..0000000000000000000000000000000000000000
--- a/spaces/Letheoricien/MLPC_2023_NATHEO/app.py
+++ /dev/null
@@ -1,261 +0,0 @@
-#Import all packages we will need.
-import json
-import numpy as np
-import random
-import nltk
-#import utils as u
-nltk.download('punkt')
-nltk.download('wordnet')
-nltk.download('omw-1.4')
-from keras.models import Sequential
-from keras.layers import Dense, Activation, Dropout,LSTM,Attention
-#from keras.optimizers import gradient_descent_v2
-from tensorflow.keras.optimizers.legacy import SGD
-
-from keras.models import load_model
-import json
-import random
-
-#import utils as u
-import pickle
-
-# Create pickle files to store the Python objects we will use in the prediction process
-def create_pickle(list, pkl_url):
- return pickle.dump(list, open(pkl_url,'wb'))
-
-
-def load_pickle(pkl_url):
- return pickle.load(open(pkl_url,'rb'))
-class ChatModel:
-
- def __init__(self):
- #Call tokenizing procedure
- w, words, documents, classes, self._intents = self.tokenizing('po.json')
-
- #Call lemmatizing procedure
- w, words, documents, classes, lemmatizer = self.lemmatizing(w, words, documents, classes)
-
- #Call training_data procedure
- self._train_x, self._train_y = self.training_data(w, words, documents, classes, lemmatizer)
-
- #Call tokenizing procedure
- self._model = self.training(self._train_x, self._train_y)
-
-
- def tokenizing(self,url):
- words=[]
- classes = []
- documents = []
- intents = json.loads(open(url).read())
-
- for intent in intents['intents']:
- for pattern in intent['patterns']:
- #tokenize each word
- w = nltk.word_tokenize(pattern)
- words.extend(w)
- #add documents in the corpus
- documents.append((w, intent['tag']))
- # add to our classes list
- if intent['tag'] not in classes:
- classes.append(intent['tag'])
-
- return w, words, documents, classes, intents
-
- def lemmatizing(self, w, words, documents, classes):
- ignore_words = ['?', '!']
- lemmatizer = nltk.stem.WordNetLemmatizer()
-
- # lemmatize, lower each word and remove duplicates
- words = [lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_words]
-
- # sort classes and words
- classes = sorted(list(set(classes)))
- words = sorted(list(set(words)))
- # documents = combination between patterns and intents
- print (len(documents), "documents")
-
- # classes = intents
- print (len(classes), "classes", classes)
-
- # words = all words, vocabulary
- print (len(words), "unique lemmatized words", words)
-
- create_pickle(words, 'words_po.pkl')
- create_pickle(classes, 'classes_po.pkl')
- return w, words, documents, classes, lemmatizer
-
- def training_data(self, w, words, documents, classes, lemmatizer):
- # create our training data
- training = []
- train_x = []
- train_y = []
- # create an empty array for our output
- output_empty = [0] * len(classes)
-
- # training set, bag of words for each sentence
- for doc in documents:
- # initialize our bag of words
- bag = []
- # list of tokenized words for the pattern
- pattern_words = doc[0]
- # lemmatize each word - create base word, in attempt to represent related words
- pattern_words = [lemmatizer.lemmatize(word.lower()) for word in pattern_words]
- # create our bag of words array with 1, if word match found in current pattern
-
- for w in words:
- bag.append(1) if w in pattern_words else bag.append(0)
-
- # output is a '0' for each tag and '1' for current tag (for each pattern)
- output_row = list(output_empty)
- output_row[classes.index(doc[1])] = 1
- training.append([bag, output_row])
-
- # shuffle our features and turn into np.array
- random.shuffle(training)
- training = np.array(training)
- # create train and test lists. X - patterns, Y - intents
- train_x = list(training[:,0])
- train_y = list(training[:,1])
-
-
- return train_x, train_y
-
- def training(self,train_x, train_y):
- #Sequential from Keras
- # Create model - 3 layers. First layer 128 neurons, second layer 64 neurons and 3rd output layer contains number of neurons
- # equal to number of intents to predict output intent with softmax
- model = Sequential()
- #model.add(LSTM(64, return_sequences=True))
- #model.add(Dropout(0.5))
-
- model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
- model.add(Dropout(0.5))
- model.add(Dense(64, activation='relu'))
- model.add(Dropout(0.5))
- model.add(Dense(len(train_y[0]), activation='softmax'))
- #input_layer = Input(shape=(len(train_x[0]),))
- #embedding_layer = Embedding(len(train_x), 128)(input_layer)
- #lstm_layer = LSTM(128)(embedding_layer)
- #dropout_layer = Dropout(0.5)(lstm_layer)
- #attention_layer = Attention()([dropout_layer, lstm_layer])
- #dense_layer = Dense(64, activation='relu')(attention_layer)
- #output_layer = Dense(len(train_y[0]), activation='softmax')(dense_layer)
- #model = Model(inputs=input_layer, outputs=output_layer)
-
- # Compile model. Stochastic gradient descent with Nesterov accelerated gradient gives good results for this model
- #sgd = gradient_descent_v2.SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
- sgd = SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
-        model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
- #fitting and saving the model
- hist = model.fit(np.array(train_x), np.array(train_y), epochs=150, batch_size=5, verbose=1)
- model.save('chatbotpo_model.h5', hist)
-
-
- return model
-
- def get_train_x(self):
- return self._train_x
-
- def get_train_y(self):
- return self._train_y
-
- def get_model(self):
- return self._model
-
- def get_intents(self):
- return self._intents
-class ChatApp:
-
- def __init__(self):
- self.cM = ChatModel()
- self._lemmatizer = nltk.stem.WordNetLemmatizer()
- self._model = load_model('chatbotpo_model.h5')
- self._intents = self.cM.get_intents()
- self._words = load_pickle('words_po.pkl')
- self._classes = load_pickle('classes_po.pkl')
-
- def clean_up_sentence(self,sentence):
- # tokenize the pattern - split words into array
- sentence_words = nltk.word_tokenize(sentence)
- # stem each word - create short form for word
- sentence_words = [self._lemmatizer.lemmatize(word.lower()) for word in sentence_words]
- return sentence_words
-
- # return bag of words array: 0 or 1 for each word in the bag that exists in the sentence
- def bow(self, sentence, words, show_details=True):
- # tokenize the pattern
- sentence_words = self.clean_up_sentence(sentence)
- # bag of words - matrix of N words, vocabulary matrix
- bag = [0]*len(words)
- for s in sentence_words:
- for i,w in enumerate(words):
- if w == s:
- # assign 1 if current word is in the vocabulary position
- bag[i] = 1
- if show_details:
- print ("found in bag: %s" % w)
- return(np.array(bag))
-
- def predict_class(self, sentence, model):
- ERROR_THRESHOLD = 0.35
- # filter out predictions below a threshold
- p = self.bow(sentence, self._words, show_details=False)
- res = self._model.predict(np.array([p]))[0]
-
- results = [[i,r] for i,r in enumerate(res) if r>ERROR_THRESHOLD]
- # sort by strength of probability
- results.sort(key=lambda x: x[1], reverse=True)
- return_list = []
- for r in results:
- return_list.append({"intent": self._classes[r[0]], "probability": str(r[1])})
- return return_list
-
-
- def getResponse(self, ints, intents_json):
- tag = ints[0]['intent']
- list_of_intents = intents_json['intents']
- for i in list_of_intents:
- if(i['tag']== tag):
- result = random.choice(i['responses'])
- break
- return result
-
- def chatbot_response(self, text):
- ints = self.predict_class(text, self._model)
- res = self.getResponse(ints, self._intents)
- return res
-
-myChat = ChatApp()
-import os
-
-import gradio as gr
-
-
-prompt = "The following is a conversation with NATHEO. How can I help you today?\nHuman: "
-
-def chatgpt_clone(input, history):
- history = history or []
- s = list(sum(history, ()))
- s.append(input)
- inp = ' '.join(s)
-
- output = myChat.chatbot_response(input)
- history.append((input, output))
- return history, history
-
-
-block = gr.Blocks()
-
-
-with block:
- gr.Markdown("""
NATHEO: An intelligent chatbot for ENSPY students
- """)
-
- css = """ Chatbot {background-color : "pink"}"""
- chatbot = gr.Chatbot()
- message = gr.Textbox(placeholder=prompt)
- state = gr.State()
- submit = gr.Button("SEND")
- submit.click(chatgpt_clone, inputs=[message, state], outputs=[chatbot, state])
-
-block.launch(debug = True)
diff --git a/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/transformer.py b/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/transformer.py
deleted file mode 100644
index 56a465b861d7d018d0eca2779bbd392f07e411a9..0000000000000000000000000000000000000000
--- a/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/demucs3/transformer.py
+++ /dev/null
@@ -1,839 +0,0 @@
-# Copyright (c) 2019-present, Meta, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# First author is Simon Rouard.
-
-import random
-import typing as tp
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import numpy as np
-import math
-from einops import rearrange
-
-
-def create_sin_embedding(
- length: int, dim: int, shift: int = 0, device="cpu", max_period=10000
-):
- # We aim for TBC format
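-    # Standard sinusoidal positional embedding: phase = pos / max_period**(i / (dim/2 - 1))
-    # for channel index i; returns cos/sin pairs with shape (length, 1, dim).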
- assert dim % 2 == 0
- pos = shift + torch.arange(length, device=device).view(-1, 1, 1)
- half_dim = dim // 2
- adim = torch.arange(dim // 2, device=device).view(1, 1, -1)
- phase = pos / (max_period ** (adim / (half_dim - 1)))
- return torch.cat(
- [
- torch.cos(phase),
- torch.sin(phase),
- ],
- dim=-1,
- )
-
-
-def create_2d_sin_embedding(d_model, height, width, device="cpu", max_period=10000):
- """
- :param d_model: dimension of the model
- :param height: height of the positions
- :param width: width of the positions
- :return: d_model*height*width position matrix
- """
- if d_model % 4 != 0:
- raise ValueError(
- "Cannot use sin/cos positional encoding with "
- "odd dimension (got dim={:d})".format(d_model)
- )
- pe = torch.zeros(d_model, height, width)
- # Each dimension use half of d_model
- d_model = int(d_model / 2)
- div_term = torch.exp(
- torch.arange(0.0, d_model, 2) * -(math.log(max_period) / d_model)
- )
- pos_w = torch.arange(0.0, width).unsqueeze(1)
- pos_h = torch.arange(0.0, height).unsqueeze(1)
- pe[0:d_model:2, :, :] = (
- torch.sin(pos_w * div_term).transpose(0, 1).unsqueeze(1).repeat(1, height, 1)
- )
- pe[1:d_model:2, :, :] = (
- torch.cos(pos_w * div_term).transpose(0, 1).unsqueeze(1).repeat(1, height, 1)
- )
- pe[d_model::2, :, :] = (
- torch.sin(pos_h * div_term).transpose(0, 1).unsqueeze(2).repeat(1, 1, width)
- )
- pe[d_model + 1:: 2, :, :] = (
- torch.cos(pos_h * div_term).transpose(0, 1).unsqueeze(2).repeat(1, 1, width)
- )
-
- return pe[None, :].to(device)
-
-
-def create_sin_embedding_cape(
- length: int,
- dim: int,
- batch_size: int,
- mean_normalize: bool,
- augment: bool, # True during training
- max_global_shift: float = 0.0, # delta max
- max_local_shift: float = 0.0, # epsilon max
- max_scale: float = 1.0,
- device: str = "cpu",
- max_period: float = 10000.0,
-):
- # We aim for TBC format
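-    # CAPE-style embedding: during training the positions are randomly shifted
-    # globally (delta), per token (epsilon) and rescaled (exp(lambda)) before the
-    # usual sinusoidal encoding, which augments the positional signal.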
- assert dim % 2 == 0
- pos = 1.0 * torch.arange(length).view(-1, 1, 1) # (length, 1, 1)
- pos = pos.repeat(1, batch_size, 1) # (length, batch_size, 1)
- if mean_normalize:
- pos -= torch.nanmean(pos, dim=0, keepdim=True)
-
- if augment:
- delta = np.random.uniform(
- -max_global_shift, +max_global_shift, size=[1, batch_size, 1]
- )
- delta_local = np.random.uniform(
- -max_local_shift, +max_local_shift, size=[length, batch_size, 1]
- )
- log_lambdas = np.random.uniform(
- -np.log(max_scale), +np.log(max_scale), size=[1, batch_size, 1]
- )
- pos = (pos + delta + delta_local) * np.exp(log_lambdas)
-
- pos = pos.to(device)
-
- half_dim = dim // 2
- adim = torch.arange(dim // 2, device=device).view(1, 1, -1)
- phase = pos / (max_period ** (adim / (half_dim - 1)))
- return torch.cat(
- [
- torch.cos(phase),
- torch.sin(phase),
- ],
- dim=-1,
- ).float()
-
-
-def get_causal_mask(length):
- pos = torch.arange(length)
- return pos > pos[:, None]
-
-
-def get_elementary_mask(
- T1,
- T2,
- mask_type,
- sparse_attn_window,
- global_window,
- mask_random_seed,
- sparsity,
- device,
-):
- """
- When the input of the Decoder has length T1 and the output T2
- The mask matrix has shape (T2, T1)
- """
- assert mask_type in ["diag", "jmask", "random", "global"]
-
- if mask_type == "global":
- mask = torch.zeros(T2, T1, dtype=torch.bool)
- mask[:, :global_window] = True
- line_window = int(global_window * T2 / T1)
- mask[:line_window, :] = True
-
- if mask_type == "diag":
-
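-        # Banded ("diag") mask: row t2 may attend to columns within ±sparse_attn_window
-        # of its rescaled position t2 * T1 / T2.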
- mask = torch.zeros(T2, T1, dtype=torch.bool)
- rows = torch.arange(T2)[:, None]
- cols = (
- (T1 / T2 * rows + torch.arange(-sparse_attn_window, sparse_attn_window + 1))
- .long()
- .clamp(0, T1 - 1)
- )
- mask.scatter_(1, cols, torch.ones(1, dtype=torch.bool).expand_as(cols))
-
- elif mask_type == "jmask":
- mask = torch.zeros(T2 + 2, T1 + 2, dtype=torch.bool)
- rows = torch.arange(T2 + 2)[:, None]
- t = torch.arange(0, int((2 * T1) ** 0.5 + 1))
- t = (t * (t + 1) / 2).int()
- t = torch.cat([-t.flip(0)[:-1], t])
- cols = (T1 / T2 * rows + t).long().clamp(0, T1 + 1)
- mask.scatter_(1, cols, torch.ones(1, dtype=torch.bool).expand_as(cols))
- mask = mask[1:-1, 1:-1]
-
- elif mask_type == "random":
- gene = torch.Generator(device=device)
- gene.manual_seed(mask_random_seed)
- mask = (
- torch.rand(T1 * T2, generator=gene, device=device).reshape(T2, T1)
- > sparsity
- )
-
- mask = mask.to(device)
- return mask
-
-
-def get_mask(
- T1,
- T2,
- mask_type,
- sparse_attn_window,
- global_window,
- mask_random_seed,
- sparsity,
- device,
-):
- """
- Return a SparseCSRTensor mask that is a combination of elementary masks
- mask_type can be a combination of multiple masks: for instance "diag_jmask_random"
- """
- from xformers.sparse import SparseCSRTensor
- # create a list
- mask_types = mask_type.split("_")
-
- all_masks = [
- get_elementary_mask(
- T1,
- T2,
- mask,
- sparse_attn_window,
- global_window,
- mask_random_seed,
- sparsity,
- device,
- )
- for mask in mask_types
- ]
-
- final_mask = torch.stack(all_masks).sum(axis=0) > 0
-
- return SparseCSRTensor.from_dense(final_mask[None])
-
-
-class ScaledEmbedding(nn.Module):
- def __init__(
- self,
- num_embeddings: int,
- embedding_dim: int,
- scale: float = 1.0,
- boost: float = 3.0,
- ):
- super().__init__()
- self.embedding = nn.Embedding(num_embeddings, embedding_dim)
- self.embedding.weight.data *= scale / boost
- self.boost = boost
-
- @property
- def weight(self):
- return self.embedding.weight * self.boost
-
- def forward(self, x):
- return self.embedding(x) * self.boost
-
-
-class LayerScale(nn.Module):
- """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf).
- This rescales diagonaly residual outputs close to 0 initially, then learnt.
- """
-
- def __init__(self, channels: int, init: float = 0, channel_last=False):
- """
- channel_last = False corresponds to (B, C, T) tensors
- channel_last = True corresponds to (T, B, C) tensors
- """
- super().__init__()
- self.channel_last = channel_last
- self.scale = nn.Parameter(torch.zeros(channels, requires_grad=True))
- self.scale.data[:] = init
-
- def forward(self, x):
- if self.channel_last:
- return self.scale * x
- else:
- return self.scale[:, None] * x
-
-
-class MyGroupNorm(nn.GroupNorm):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- def forward(self, x):
- """
- x: (B, T, C)
- if num_groups=1: Normalisation on all T and C together for each B
- """
- x = x.transpose(1, 2)
- return super().forward(x).transpose(1, 2)
-
-
-class MyTransformerEncoderLayer(nn.TransformerEncoderLayer):
- def __init__(
- self,
- d_model,
- nhead,
- dim_feedforward=2048,
- dropout=0.1,
- activation=F.relu,
- group_norm=0,
- norm_first=False,
- norm_out=False,
- layer_norm_eps=1e-5,
- layer_scale=False,
- init_values=1e-4,
- device=None,
- dtype=None,
- sparse=False,
- mask_type="diag",
- mask_random_seed=42,
- sparse_attn_window=500,
- global_window=50,
- auto_sparsity=False,
- sparsity=0.95,
- batch_first=False,
- ):
- factory_kwargs = {"device": device, "dtype": dtype}
- super().__init__(
- d_model=d_model,
- nhead=nhead,
- dim_feedforward=dim_feedforward,
- dropout=dropout,
- activation=activation,
- layer_norm_eps=layer_norm_eps,
- batch_first=batch_first,
- norm_first=norm_first,
- device=device,
- dtype=dtype,
- )
- self.sparse = sparse
- self.auto_sparsity = auto_sparsity
- if sparse:
- if not auto_sparsity:
- self.mask_type = mask_type
- self.sparse_attn_window = sparse_attn_window
- self.global_window = global_window
- self.sparsity = sparsity
- if group_norm:
- self.norm1 = MyGroupNorm(int(group_norm), d_model, eps=layer_norm_eps, **factory_kwargs)
- self.norm2 = MyGroupNorm(int(group_norm), d_model, eps=layer_norm_eps, **factory_kwargs)
-
- self.norm_out = None
- if self.norm_first & norm_out:
- self.norm_out = MyGroupNorm(num_groups=int(norm_out), num_channels=d_model)
- self.gamma_1 = (
- LayerScale(d_model, init_values, True) if layer_scale else nn.Identity()
- )
- self.gamma_2 = (
- LayerScale(d_model, init_values, True) if layer_scale else nn.Identity()
- )
-
- if sparse:
- self.self_attn = MultiheadAttention(
- d_model, nhead, dropout=dropout, batch_first=batch_first,
- auto_sparsity=sparsity if auto_sparsity else 0,
- )
- self.__setattr__("src_mask", torch.zeros(1, 1))
- self.mask_random_seed = mask_random_seed
-
- def forward(self, src, src_mask=None, src_key_padding_mask=None):
- """
- if batch_first = False, src shape is (T, B, C)
- the case where batch_first=True is not covered
- """
- device = src.device
- x = src
- T, B, C = x.shape
- if self.sparse and not self.auto_sparsity:
- assert src_mask is None
- src_mask = self.src_mask
- if src_mask.shape[-1] != T:
- src_mask = get_mask(
- T,
- T,
- self.mask_type,
- self.sparse_attn_window,
- self.global_window,
- self.mask_random_seed,
- self.sparsity,
- device,
- )
- self.__setattr__("src_mask", src_mask)
-
- if self.norm_first:
- x = x + self.gamma_1(
- self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)
- )
- x = x + self.gamma_2(self._ff_block(self.norm2(x)))
-
- if self.norm_out:
- x = self.norm_out(x)
- else:
- x = self.norm1(
- x + self.gamma_1(self._sa_block(x, src_mask, src_key_padding_mask))
- )
- x = self.norm2(x + self.gamma_2(self._ff_block(x)))
-
- return x
-
-
-class CrossTransformerEncoderLayer(nn.Module):
- def __init__(
- self,
- d_model: int,
- nhead: int,
- dim_feedforward: int = 2048,
- dropout: float = 0.1,
- activation=F.relu,
- layer_norm_eps: float = 1e-5,
- layer_scale: bool = False,
- init_values: float = 1e-4,
- norm_first: bool = False,
- group_norm: bool = False,
- norm_out: bool = False,
- sparse=False,
- mask_type="diag",
- mask_random_seed=42,
- sparse_attn_window=500,
- global_window=50,
- sparsity=0.95,
- auto_sparsity=None,
- device=None,
- dtype=None,
- batch_first=False,
- ):
- factory_kwargs = {"device": device, "dtype": dtype}
- super().__init__()
-
- self.sparse = sparse
- self.auto_sparsity = auto_sparsity
- if sparse:
- if not auto_sparsity:
- self.mask_type = mask_type
- self.sparse_attn_window = sparse_attn_window
- self.global_window = global_window
- self.sparsity = sparsity
-
- self.cross_attn: nn.Module
- self.cross_attn = nn.MultiheadAttention(
- d_model, nhead, dropout=dropout, batch_first=batch_first)
- # Implementation of Feedforward model
- self.linear1 = nn.Linear(d_model, dim_feedforward, **factory_kwargs)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(dim_feedforward, d_model, **factory_kwargs)
-
- self.norm_first = norm_first
- self.norm1: nn.Module
- self.norm2: nn.Module
- self.norm3: nn.Module
- if group_norm:
- self.norm1 = MyGroupNorm(int(group_norm), d_model, eps=layer_norm_eps, **factory_kwargs)
- self.norm2 = MyGroupNorm(int(group_norm), d_model, eps=layer_norm_eps, **factory_kwargs)
- self.norm3 = MyGroupNorm(int(group_norm), d_model, eps=layer_norm_eps, **factory_kwargs)
- else:
- self.norm1 = nn.LayerNorm(d_model, eps=layer_norm_eps, **factory_kwargs)
- self.norm2 = nn.LayerNorm(d_model, eps=layer_norm_eps, **factory_kwargs)
- self.norm3 = nn.LayerNorm(d_model, eps=layer_norm_eps, **factory_kwargs)
-
- self.norm_out = None
- if self.norm_first & norm_out:
- self.norm_out = MyGroupNorm(num_groups=int(norm_out), num_channels=d_model)
-
- self.gamma_1 = (
- LayerScale(d_model, init_values, True) if layer_scale else nn.Identity()
- )
- self.gamma_2 = (
- LayerScale(d_model, init_values, True) if layer_scale else nn.Identity()
- )
-
- self.dropout1 = nn.Dropout(dropout)
- self.dropout2 = nn.Dropout(dropout)
-
- # Legacy string support for activation function.
- if isinstance(activation, str):
- self.activation = self._get_activation_fn(activation)
- else:
- self.activation = activation
-
- if sparse:
- self.cross_attn = MultiheadAttention(
- d_model, nhead, dropout=dropout, batch_first=batch_first,
- auto_sparsity=sparsity if auto_sparsity else 0)
- if not auto_sparsity:
- self.__setattr__("mask", torch.zeros(1, 1))
- self.mask_random_seed = mask_random_seed
-
- def forward(self, q, k, mask=None):
- """
- Args:
- q: tensor of shape (T, B, C)
- k: tensor of shape (S, B, C)
- mask: tensor of shape (T, S)
-
- """
- device = q.device
- T, B, C = q.shape
- S, B, C = k.shape
- if self.sparse and not self.auto_sparsity:
- assert mask is None
- mask = self.mask
- if mask.shape[-1] != S or mask.shape[-2] != T:
- mask = get_mask(
- S,
- T,
- self.mask_type,
- self.sparse_attn_window,
- self.global_window,
- self.mask_random_seed,
- self.sparsity,
- device,
- )
- self.__setattr__("mask", mask)
-
- if self.norm_first:
- x = q + self.gamma_1(self._ca_block(self.norm1(q), self.norm2(k), mask))
- x = x + self.gamma_2(self._ff_block(self.norm3(x)))
- if self.norm_out:
- x = self.norm_out(x)
- else:
- x = self.norm1(q + self.gamma_1(self._ca_block(q, k, mask)))
- x = self.norm2(x + self.gamma_2(self._ff_block(x)))
-
- return x
-
- # self-attention block
- def _ca_block(self, q, k, attn_mask=None):
- x = self.cross_attn(q, k, k, attn_mask=attn_mask, need_weights=False)[0]
- return self.dropout1(x)
-
- # feed forward block
- def _ff_block(self, x):
- x = self.linear2(self.dropout(self.activation(self.linear1(x))))
- return self.dropout2(x)
-
- def _get_activation_fn(self, activation):
- if activation == "relu":
- return F.relu
- elif activation == "gelu":
- return F.gelu
-
- raise RuntimeError("activation should be relu/gelu, not {}".format(activation))
-
-
-# ----------------- MULTI-BLOCKS MODELS: -----------------------
-
-
-class CrossTransformerEncoder(nn.Module):
- def __init__(
- self,
- dim: int,
- emb: str = "sin",
- hidden_scale: float = 4.0,
- num_heads: int = 8,
- num_layers: int = 6,
- cross_first: bool = False,
- dropout: float = 0.0,
- max_positions: int = 1000,
- norm_in: bool = True,
- norm_in_group: bool = False,
- group_norm: int = False,
- norm_first: bool = False,
- norm_out: bool = False,
- max_period: float = 10000.0,
- weight_decay: float = 0.0,
- lr: tp.Optional[float] = None,
- layer_scale: bool = False,
- gelu: bool = True,
- sin_random_shift: int = 0,
- weight_pos_embed: float = 1.0,
- cape_mean_normalize: bool = True,
- cape_augment: bool = True,
- cape_glob_loc_scale: list = [5000.0, 1.0, 1.4],
- sparse_self_attn: bool = False,
- sparse_cross_attn: bool = False,
- mask_type: str = "diag",
- mask_random_seed: int = 42,
- sparse_attn_window: int = 500,
- global_window: int = 50,
- auto_sparsity: bool = False,
- sparsity: float = 0.95,
- ):
- super().__init__()
- """
- """
- assert dim % num_heads == 0
-
- hidden_dim = int(dim * hidden_scale)
-
- self.num_layers = num_layers
- # classic parity = 1 means that if idx%2 == 1 there is a
- # classical encoder else there is a cross encoder
- self.classic_parity = 1 if cross_first else 0
- self.emb = emb
- self.max_period = max_period
- self.weight_decay = weight_decay
- self.weight_pos_embed = weight_pos_embed
- self.sin_random_shift = sin_random_shift
- if emb == "cape":
- self.cape_mean_normalize = cape_mean_normalize
- self.cape_augment = cape_augment
- self.cape_glob_loc_scale = cape_glob_loc_scale
- if emb == "scaled":
- self.position_embeddings = ScaledEmbedding(max_positions, dim, scale=0.2)
-
- self.lr = lr
-
- activation: tp.Any = F.gelu if gelu else F.relu
-
- self.norm_in: nn.Module
- self.norm_in_t: nn.Module
- if norm_in:
- self.norm_in = nn.LayerNorm(dim)
- self.norm_in_t = nn.LayerNorm(dim)
- elif norm_in_group:
- self.norm_in = MyGroupNorm(int(norm_in_group), dim)
- self.norm_in_t = MyGroupNorm(int(norm_in_group), dim)
- else:
- self.norm_in = nn.Identity()
- self.norm_in_t = nn.Identity()
-
- # spectrogram layers
- self.layers = nn.ModuleList()
- # temporal layers
- self.layers_t = nn.ModuleList()
-
- kwargs_common = {
- "d_model": dim,
- "nhead": num_heads,
- "dim_feedforward": hidden_dim,
- "dropout": dropout,
- "activation": activation,
- "group_norm": group_norm,
- "norm_first": norm_first,
- "norm_out": norm_out,
- "layer_scale": layer_scale,
- "mask_type": mask_type,
- "mask_random_seed": mask_random_seed,
- "sparse_attn_window": sparse_attn_window,
- "global_window": global_window,
- "sparsity": sparsity,
- "auto_sparsity": auto_sparsity,
- "batch_first": True,
- }
-
- kwargs_classic_encoder = dict(kwargs_common)
- kwargs_classic_encoder.update({
- "sparse": sparse_self_attn,
- })
- kwargs_cross_encoder = dict(kwargs_common)
- kwargs_cross_encoder.update({
- "sparse": sparse_cross_attn,
- })
-
- for idx in range(num_layers):
- if idx % 2 == self.classic_parity:
-
- self.layers.append(MyTransformerEncoderLayer(**kwargs_classic_encoder))
- self.layers_t.append(
- MyTransformerEncoderLayer(**kwargs_classic_encoder)
- )
-
- else:
- self.layers.append(CrossTransformerEncoderLayer(**kwargs_cross_encoder))
-
- self.layers_t.append(
- CrossTransformerEncoderLayer(**kwargs_cross_encoder)
- )
-
- def forward(self, x, xt):
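-        # x is the spectrogram branch (B, C, Fr, T1), xt the temporal branch (B, C, T2);
-        # layers alternate between per-branch self-attention and cross-attention
-        # between the two branches, depending on classic_parity.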
- B, C, Fr, T1 = x.shape
- pos_emb_2d = create_2d_sin_embedding(
- C, Fr, T1, x.device, self.max_period
- ) # (1, C, Fr, T1)
- pos_emb_2d = rearrange(pos_emb_2d, "b c fr t1 -> b (t1 fr) c")
- x = rearrange(x, "b c fr t1 -> b (t1 fr) c")
- x = self.norm_in(x)
- x = x + self.weight_pos_embed * pos_emb_2d
-
- B, C, T2 = xt.shape
- xt = rearrange(xt, "b c t2 -> b t2 c") # now T2, B, C
- pos_emb = self._get_pos_embedding(T2, B, C, x.device)
- pos_emb = rearrange(pos_emb, "t2 b c -> b t2 c")
- xt = self.norm_in_t(xt)
- xt = xt + self.weight_pos_embed * pos_emb
-
- for idx in range(self.num_layers):
- if idx % 2 == self.classic_parity:
- x = self.layers[idx](x)
- xt = self.layers_t[idx](xt)
- else:
- old_x = x
- x = self.layers[idx](x, xt)
- xt = self.layers_t[idx](xt, old_x)
-
- x = rearrange(x, "b (t1 fr) c -> b c fr t1", t1=T1)
- xt = rearrange(xt, "b t2 c -> b c t2")
- return x, xt
-
- def _get_pos_embedding(self, T, B, C, device):
- if self.emb == "sin":
- shift = random.randrange(self.sin_random_shift + 1)
- pos_emb = create_sin_embedding(
- T, C, shift=shift, device=device, max_period=self.max_period
- )
- elif self.emb == "cape":
- if self.training:
- pos_emb = create_sin_embedding_cape(
- T,
- C,
- B,
- device=device,
- max_period=self.max_period,
- mean_normalize=self.cape_mean_normalize,
- augment=self.cape_augment,
- max_global_shift=self.cape_glob_loc_scale[0],
- max_local_shift=self.cape_glob_loc_scale[1],
- max_scale=self.cape_glob_loc_scale[2],
- )
- else:
- pos_emb = create_sin_embedding_cape(
- T,
- C,
- B,
- device=device,
- max_period=self.max_period,
- mean_normalize=self.cape_mean_normalize,
- augment=False,
- )
-
- elif self.emb == "scaled":
- pos = torch.arange(T, device=device)
- pos_emb = self.position_embeddings(pos)[:, None]
-
- return pos_emb
-
- def make_optim_group(self):
- group = {"params": list(self.parameters()), "weight_decay": self.weight_decay}
- if self.lr is not None:
- group["lr"] = self.lr
- return group
-
-
-# Attention Modules
-
-
-class MultiheadAttention(nn.Module):
- def __init__(
- self,
- embed_dim,
- num_heads,
- dropout=0.0,
- bias=True,
- add_bias_kv=False,
- add_zero_attn=False,
- kdim=None,
- vdim=None,
- batch_first=False,
- auto_sparsity=None,
- ):
- super().__init__()
- assert auto_sparsity is not None, "sanity check"
- self.num_heads = num_heads
- self.q = torch.nn.Linear(embed_dim, embed_dim, bias=bias)
- self.k = torch.nn.Linear(embed_dim, embed_dim, bias=bias)
- self.v = torch.nn.Linear(embed_dim, embed_dim, bias=bias)
- self.attn_drop = torch.nn.Dropout(dropout)
- self.proj = torch.nn.Linear(embed_dim, embed_dim, bias)
- self.proj_drop = torch.nn.Dropout(dropout)
- self.batch_first = batch_first
- self.auto_sparsity = auto_sparsity
-
- def forward(
- self,
- query,
- key,
- value,
- key_padding_mask=None,
- need_weights=True,
- attn_mask=None,
- average_attn_weights=True,
- ):
-
- if not self.batch_first: # N, B, C
- query = query.permute(1, 0, 2) # B, N_q, C
- key = key.permute(1, 0, 2) # B, N_k, C
- value = value.permute(1, 0, 2) # B, N_k, C
- B, N_q, C = query.shape
- B, N_k, C = key.shape
-
- q = (
- self.q(query)
- .reshape(B, N_q, self.num_heads, C // self.num_heads)
- .permute(0, 2, 1, 3)
- )
- q = q.flatten(0, 1)
- k = (
- self.k(key)
- .reshape(B, N_k, self.num_heads, C // self.num_heads)
- .permute(0, 2, 1, 3)
- )
- k = k.flatten(0, 1)
- v = (
- self.v(value)
- .reshape(B, N_k, self.num_heads, C // self.num_heads)
- .permute(0, 2, 1, 3)
- )
- v = v.flatten(0, 1)
-
- if self.auto_sparsity:
- assert attn_mask is None
- x = dynamic_sparse_attention(q, k, v, sparsity=self.auto_sparsity)
- else:
- x = scaled_dot_product_attention(q, k, v, attn_mask, dropout=self.attn_drop)
- x = x.reshape(B, self.num_heads, N_q, C // self.num_heads)
-
- x = x.transpose(1, 2).reshape(B, N_q, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- if not self.batch_first:
- x = x.permute(1, 0, 2)
- return x, None
-
-
-def scaled_query_key_softmax(q, k, att_mask):
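-    # Computes softmax(q @ k^T / sqrt(d_k)); att_mask is applied inside xformers'
-    # masked_matmul so masked positions never enter the softmax.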
- from xformers.ops import masked_matmul
- q = q / (k.size(-1)) ** 0.5
- att = masked_matmul(q, k.transpose(-2, -1), att_mask)
- att = torch.nn.functional.softmax(att, -1)
- return att
-
-
-def scaled_dot_product_attention(q, k, v, att_mask, dropout):
- att = scaled_query_key_softmax(q, k, att_mask=att_mask)
- att = dropout(att)
- y = att @ v
- return y
-
-
-def _compute_buckets(x, R):
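-    # LSH-style bucketing: project x with random matrix R, take the argmax over the
-    # concatenated [proj, -proj] directions, and use the winning direction index as
-    # the hash bucket for each token and hash round.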
- qq = torch.einsum('btf,bfhi->bhti', x, R)
- qq = torch.cat([qq, -qq], dim=-1)
- buckets = qq.argmax(dim=-1)
-
- return buckets.permute(0, 2, 1).byte().contiguous()
-
-
-def dynamic_sparse_attention(query, key, value, sparsity, infer_sparsity=True, attn_bias=None):
- # assert False, "The code for the custom sparse kernel is not ready for release yet."
- from xformers.ops import find_locations, sparse_memory_efficient_attention
- n_hashes = 32
- proj_size = 4
- query, key, value = [x.contiguous() for x in [query, key, value]]
- with torch.no_grad():
- R = torch.randn(1, query.shape[-1], n_hashes, proj_size // 2, device=query.device)
- bucket_query = _compute_buckets(query, R)
- bucket_key = _compute_buckets(key, R)
- row_offsets, column_indices = find_locations(
- bucket_query, bucket_key, sparsity, infer_sparsity)
- return sparse_memory_efficient_attention(
- query, key, value, row_offsets, column_indices, attn_bias)
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/util/misc.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/util/misc.py
deleted file mode 100644
index d64b84ef24bea0c98e76824feb1903f6bfebe7a5..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/util/misc.py
+++ /dev/null
@@ -1,717 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Misc functions, including distributed helpers.
-
-Mostly copy-paste from torchvision references.
-"""
-import colorsys
-import datetime
-import functools
-import io
-import json
-import os
-import pickle
-import subprocess
-import time
-from collections import OrderedDict, defaultdict, deque
-from typing import List, Optional
-
-import numpy as np
-import torch
-import torch.distributed as dist
-
-# needed due to empty tensor bug in pytorch and torchvision 0.5
-import torchvision
-from torch import Tensor
-
-__torchvision_need_compat_flag = float(torchvision.__version__.split(".")[1]) < 7
-if __torchvision_need_compat_flag:
- from torchvision.ops import _new_empty_tensor
- from torchvision.ops.misc import _output_size
-
-
-class SmoothedValue(object):
- """Track a series of values and provide access to smoothed values over a
- window or the global series average.
- """
-
- def __init__(self, window_size=20, fmt=None):
- if fmt is None:
- fmt = "{median:.4f} ({global_avg:.4f})"
- self.deque = deque(maxlen=window_size)
- self.total = 0.0
- self.count = 0
- self.fmt = fmt
-
- def update(self, value, n=1):
- self.deque.append(value)
- self.count += n
- self.total += value * n
-
- def synchronize_between_processes(self):
- """
- Warning: does not synchronize the deque!
- """
- if not is_dist_avail_and_initialized():
- return
- t = torch.tensor([self.count, self.total], dtype=torch.float64, device="cuda")
- dist.barrier()
- dist.all_reduce(t)
- t = t.tolist()
- self.count = int(t[0])
- self.total = t[1]
-
- @property
- def median(self):
- d = torch.tensor(list(self.deque))
- if d.shape[0] == 0:
- return 0
- return d.median().item()
-
- @property
- def avg(self):
- d = torch.tensor(list(self.deque), dtype=torch.float32)
- return d.mean().item()
-
- @property
- def global_avg(self):
- if os.environ.get("SHILONG_AMP", None) == "1":
- eps = 1e-4
- else:
- eps = 1e-6
- return self.total / (self.count + eps)
-
- @property
- def max(self):
- return max(self.deque)
-
- @property
- def value(self):
- return self.deque[-1]
-
- def __str__(self):
- return self.fmt.format(
- median=self.median,
- avg=self.avg,
- global_avg=self.global_avg,
- max=self.max,
- value=self.value,
- )
-
-
-@functools.lru_cache()
-def _get_global_gloo_group():
- """
- Return a process group based on gloo backend, containing all the ranks
- The result is cached.
- """
-
- if dist.get_backend() == "nccl":
- return dist.new_group(backend="gloo")
-
- return dist.group.WORLD
-
-
-def all_gather_cpu(data):
- """
- Run all_gather on arbitrary picklable data (not necessarily tensors)
- Args:
- data: any picklable object
- Returns:
- list[data]: list of data gathered from each rank
- """
-
- world_size = get_world_size()
- if world_size == 1:
- return [data]
-
- cpu_group = _get_global_gloo_group()
-
- buffer = io.BytesIO()
- torch.save(data, buffer)
- data_view = buffer.getbuffer()
- device = "cuda" if cpu_group is None else "cpu"
- tensor = torch.ByteTensor(data_view).to(device)
-
- # obtain Tensor size of each rank
- local_size = torch.tensor([tensor.numel()], device=device, dtype=torch.long)
- size_list = [torch.tensor([0], device=device, dtype=torch.long) for _ in range(world_size)]
- if cpu_group is None:
- dist.all_gather(size_list, local_size)
- else:
- print("gathering on cpu")
- dist.all_gather(size_list, local_size, group=cpu_group)
- size_list = [int(size.item()) for size in size_list]
- max_size = max(size_list)
- assert isinstance(local_size.item(), int)
- local_size = int(local_size.item())
-
- # receiving Tensor from all ranks
- # we pad the tensor because torch all_gather does not support
- # gathering tensors of different shapes
- tensor_list = []
- for _ in size_list:
- tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=device))
- if local_size != max_size:
- padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device=device)
- tensor = torch.cat((tensor, padding), dim=0)
- if cpu_group is None:
- dist.all_gather(tensor_list, tensor)
- else:
- dist.all_gather(tensor_list, tensor, group=cpu_group)
-
- data_list = []
- for size, tensor in zip(size_list, tensor_list):
- tensor = torch.split(tensor, [size, max_size - size], dim=0)[0]
- buffer = io.BytesIO(tensor.cpu().numpy())
- obj = torch.load(buffer)
- data_list.append(obj)
-
- return data_list
-
-
-def all_gather(data):
- """
- Run all_gather on arbitrary picklable data (not necessarily tensors)
- Args:
- data: any picklable object
- Returns:
- list[data]: list of data gathered from each rank
- """
-
- if os.getenv("CPU_REDUCE") == "1":
- return all_gather_cpu(data)
-
- world_size = get_world_size()
- if world_size == 1:
- return [data]
-
- # serialized to a Tensor
- buffer = pickle.dumps(data)
- storage = torch.ByteStorage.from_buffer(buffer)
- tensor = torch.ByteTensor(storage).to("cuda")
-
- # obtain Tensor size of each rank
- local_size = torch.tensor([tensor.numel()], device="cuda")
- size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)]
- dist.all_gather(size_list, local_size)
- size_list = [int(size.item()) for size in size_list]
- max_size = max(size_list)
-
- # receiving Tensor from all ranks
- # we pad the tensor because torch all_gather does not support
- # gathering tensors of different shapes
- tensor_list = []
- for _ in size_list:
- tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda"))
- if local_size != max_size:
- padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda")
- tensor = torch.cat((tensor, padding), dim=0)
- dist.all_gather(tensor_list, tensor)
-
- data_list = []
- for size, tensor in zip(size_list, tensor_list):
- buffer = tensor.cpu().numpy().tobytes()[:size]
- data_list.append(pickle.loads(buffer))
-
- return data_list
-
-
-def reduce_dict(input_dict, average=True):
- """
- Args:
- input_dict (dict): all the values will be reduced
- average (bool): whether to do average or sum
- Reduce the values in the dictionary from all processes so that all processes
- have the averaged results. Returns a dict with the same fields as
- input_dict, after reduction.
- """
- world_size = get_world_size()
- if world_size < 2:
- return input_dict
- with torch.no_grad():
- names = []
- values = []
- # sort the keys so that they are consistent across processes
- for k in sorted(input_dict.keys()):
- names.append(k)
- values.append(input_dict[k])
- values = torch.stack(values, dim=0)
- dist.all_reduce(values)
- if average:
- values /= world_size
- reduced_dict = {k: v for k, v in zip(names, values)}
- return reduced_dict
-
-
-class MetricLogger(object):
- def __init__(self, delimiter="\t"):
- self.meters = defaultdict(SmoothedValue)
- self.delimiter = delimiter
-
- def update(self, **kwargs):
- for k, v in kwargs.items():
- if isinstance(v, torch.Tensor):
- v = v.item()
- assert isinstance(v, (float, int))
- self.meters[k].update(v)
-
- def __getattr__(self, attr):
- if attr in self.meters:
- return self.meters[attr]
- if attr in self.__dict__:
- return self.__dict__[attr]
- raise AttributeError("'{}' object has no attribute '{}'".format(type(self).__name__, attr))
-
- def __str__(self):
- loss_str = []
- for name, meter in self.meters.items():
- # print(name, str(meter))
- # import ipdb;ipdb.set_trace()
- if meter.count > 0:
- loss_str.append("{}: {}".format(name, str(meter)))
- return self.delimiter.join(loss_str)
-
- def synchronize_between_processes(self):
- for meter in self.meters.values():
- meter.synchronize_between_processes()
-
- def add_meter(self, name, meter):
- self.meters[name] = meter
-
- def log_every(self, iterable, print_freq, header=None, logger=None):
- if logger is None:
- print_func = print
- else:
- print_func = logger.info
-
- i = 0
- if not header:
- header = ""
- start_time = time.time()
- end = time.time()
- iter_time = SmoothedValue(fmt="{avg:.4f}")
- data_time = SmoothedValue(fmt="{avg:.4f}")
- space_fmt = ":" + str(len(str(len(iterable)))) + "d"
- if torch.cuda.is_available():
- log_msg = self.delimiter.join(
- [
- header,
- "[{0" + space_fmt + "}/{1}]",
- "eta: {eta}",
- "{meters}",
- "time: {time}",
- "data: {data}",
- "max mem: {memory:.0f}",
- ]
- )
- else:
- log_msg = self.delimiter.join(
- [
- header,
- "[{0" + space_fmt + "}/{1}]",
- "eta: {eta}",
- "{meters}",
- "time: {time}",
- "data: {data}",
- ]
- )
- MB = 1024.0 * 1024.0
- for obj in iterable:
- data_time.update(time.time() - end)
- yield obj
- # import ipdb; ipdb.set_trace()
- iter_time.update(time.time() - end)
- if i % print_freq == 0 or i == len(iterable) - 1:
- eta_seconds = iter_time.global_avg * (len(iterable) - i)
- eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
- if torch.cuda.is_available():
- print_func(
- log_msg.format(
- i,
- len(iterable),
- eta=eta_string,
- meters=str(self),
- time=str(iter_time),
- data=str(data_time),
- memory=torch.cuda.max_memory_allocated() / MB,
- )
- )
- else:
- print_func(
- log_msg.format(
- i,
- len(iterable),
- eta=eta_string,
- meters=str(self),
- time=str(iter_time),
- data=str(data_time),
- )
- )
- i += 1
- end = time.time()
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print_func(
- "{} Total time: {} ({:.4f} s / it)".format(
- header, total_time_str, total_time / len(iterable)
- )
- )
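# Illustrative sketch (not part of the original module): driving a loop through
# MetricLogger.log_every. The iterable and the loss value are placeholders.
def _example_metric_logger():
    metric_logger = MetricLogger(delimiter="  ")
    for _batch in metric_logger.log_every(range(100), print_freq=10, header="Epoch: [0]"):
        loss = 0.5  # stand-in for the scalar returned by a training step
        metric_logger.update(loss=loss)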
-
-
-def get_sha():
- cwd = os.path.dirname(os.path.abspath(__file__))
-
- def _run(command):
- return subprocess.check_output(command, cwd=cwd).decode("ascii").strip()
-
- sha = "N/A"
- diff = "clean"
- branch = "N/A"
- try:
- sha = _run(["git", "rev-parse", "HEAD"])
- subprocess.check_output(["git", "diff"], cwd=cwd)
- diff = _run(["git", "diff-index", "HEAD"])
-        diff = "has uncommitted changes" if diff else "clean"
- branch = _run(["git", "rev-parse", "--abbrev-ref", "HEAD"])
- except Exception:
- pass
- message = f"sha: {sha}, status: {diff}, branch: {branch}"
- return message
-
-
-def collate_fn(batch):
- # import ipdb; ipdb.set_trace()
- batch = list(zip(*batch))
- batch[0] = nested_tensor_from_tensor_list(batch[0])
- return tuple(batch)
-
-
-def _max_by_axis(the_list):
- # type: (List[List[int]]) -> List[int]
- maxes = the_list[0]
- for sublist in the_list[1:]:
- for index, item in enumerate(sublist):
- maxes[index] = max(maxes[index], item)
- return maxes
-
-
-class NestedTensor(object):
- def __init__(self, tensors, mask: Optional[Tensor]):
- self.tensors = tensors
- self.mask = mask
- if mask == "auto":
- self.mask = torch.zeros_like(tensors).to(tensors.device)
- if self.mask.dim() == 3:
- self.mask = self.mask.sum(0).to(bool)
- elif self.mask.dim() == 4:
- self.mask = self.mask.sum(1).to(bool)
- else:
- raise ValueError(
- "tensors dim must be 3 or 4 but {}({})".format(
- self.tensors.dim(), self.tensors.shape
- )
- )
-
- def imgsize(self):
- res = []
- for i in range(self.tensors.shape[0]):
- mask = self.mask[i]
- maxH = (~mask).sum(0).max()
- maxW = (~mask).sum(1).max()
- res.append(torch.Tensor([maxH, maxW]))
- return res
-
- def to(self, device):
- # type: (Device) -> NestedTensor # noqa
- cast_tensor = self.tensors.to(device)
- mask = self.mask
- if mask is not None:
- assert mask is not None
- cast_mask = mask.to(device)
- else:
- cast_mask = None
- return NestedTensor(cast_tensor, cast_mask)
-
- def to_img_list_single(self, tensor, mask):
- assert tensor.dim() == 3, "dim of tensor should be 3 but {}".format(tensor.dim())
- maxH = (~mask).sum(0).max()
- maxW = (~mask).sum(1).max()
- img = tensor[:, :maxH, :maxW]
- return img
-
- def to_img_list(self):
- """remove the padding and convert to img list
-
- Returns:
- [type]: [description]
- """
- if self.tensors.dim() == 3:
- return self.to_img_list_single(self.tensors, self.mask)
- else:
- res = []
- for i in range(self.tensors.shape[0]):
- tensor_i = self.tensors[i]
- mask_i = self.mask[i]
- res.append(self.to_img_list_single(tensor_i, mask_i))
- return res
-
- @property
- def device(self):
- return self.tensors.device
-
- def decompose(self):
- return self.tensors, self.mask
-
- def __repr__(self):
- return str(self.tensors)
-
- @property
- def shape(self):
- return {"tensors.shape": self.tensors.shape, "mask.shape": self.mask.shape}
-
-
-def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
- # TODO make this more general
- if tensor_list[0].ndim == 3:
- if torchvision._is_tracing():
- # nested_tensor_from_tensor_list() does not export well to ONNX
- # call _onnx_nested_tensor_from_tensor_list() instead
- return _onnx_nested_tensor_from_tensor_list(tensor_list)
-
- # TODO make it support different-sized images
- max_size = _max_by_axis([list(img.shape) for img in tensor_list])
- # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
- batch_shape = [len(tensor_list)] + max_size
- b, c, h, w = batch_shape
- dtype = tensor_list[0].dtype
- device = tensor_list[0].device
- tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
- mask = torch.ones((b, h, w), dtype=torch.bool, device=device)
- for img, pad_img, m in zip(tensor_list, tensor, mask):
- pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- m[: img.shape[1], : img.shape[2]] = False
- else:
- raise ValueError("not supported")
- return NestedTensor(tensor, mask)
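# Illustrative sketch (not part of the original module): padding two differently sized images
# into one NestedTensor. The image shapes are arbitrary examples.
def _example_nested_tensor():
    imgs = [torch.rand(3, 480, 640), torch.rand(3, 512, 384)]
    nt = nested_tensor_from_tensor_list(imgs)
    tensors, mask = nt.decompose()
    # tensors: (2, 3, 512, 640) zero-padded batch; mask is True where padding was added
    print(tensors.shape, mask.shape)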
-
-
-# _onnx_nested_tensor_from_tensor_list() is an implementation of
-# nested_tensor_from_tensor_list() that is supported by ONNX tracing.
-@torch.jit.unused
-def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor:
- max_size = []
- for i in range(tensor_list[0].dim()):
- max_size_i = torch.max(
- torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32)
- ).to(torch.int64)
- max_size.append(max_size_i)
- max_size = tuple(max_size)
-
- # work around for
- # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- # m[: img.shape[1], :img.shape[2]] = False
- # which is not yet supported in onnx
- padded_imgs = []
- padded_masks = []
- for img in tensor_list:
- padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
- padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
- padded_imgs.append(padded_img)
-
- m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
- padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
- padded_masks.append(padded_mask.to(torch.bool))
-
- tensor = torch.stack(padded_imgs)
- mask = torch.stack(padded_masks)
-
- return NestedTensor(tensor, mask=mask)
-
-
-def setup_for_distributed(is_master):
- """
- This function disables printing when not in master process
- """
- import builtins as __builtin__
-
- builtin_print = __builtin__.print
-
- def print(*args, **kwargs):
- force = kwargs.pop("force", False)
- if is_master or force:
- builtin_print(*args, **kwargs)
-
- __builtin__.print = print
-
-
-def is_dist_avail_and_initialized():
- if not dist.is_available():
- return False
- if not dist.is_initialized():
- return False
- return True
-
-
-def get_world_size():
- if not is_dist_avail_and_initialized():
- return 1
- return dist.get_world_size()
-
-
-def get_rank():
- if not is_dist_avail_and_initialized():
- return 0
- return dist.get_rank()
-
-
-def is_main_process():
- return get_rank() == 0
-
-
-def save_on_master(*args, **kwargs):
- if is_main_process():
- torch.save(*args, **kwargs)
-
-
-def init_distributed_mode(args):
- if "WORLD_SIZE" in os.environ and os.environ["WORLD_SIZE"] != "": # 'RANK' in os.environ and
- args.rank = int(os.environ["RANK"])
- args.world_size = int(os.environ["WORLD_SIZE"])
- args.gpu = args.local_rank = int(os.environ["LOCAL_RANK"])
-
- # launch by torch.distributed.launch
- # Single node
- # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 1 --rank 0 ...
- # Multi nodes
- # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 0 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ...
- # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 1 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ...
- # args.rank = int(os.environ.get('OMPI_COMM_WORLD_RANK'))
- # local_world_size = int(os.environ['GPU_PER_NODE_COUNT'])
- # args.world_size = args.world_size * local_world_size
- # args.gpu = args.local_rank = int(os.environ['LOCAL_RANK'])
- # args.rank = args.rank * local_world_size + args.local_rank
- print(
- "world size: {}, rank: {}, local rank: {}".format(
- args.world_size, args.rank, args.local_rank
- )
- )
- print(json.dumps(dict(os.environ), indent=2))
- elif "SLURM_PROCID" in os.environ:
- args.rank = int(os.environ["SLURM_PROCID"])
- args.gpu = args.local_rank = int(os.environ["SLURM_LOCALID"])
- args.world_size = int(os.environ["SLURM_NPROCS"])
-
- print(
- "world size: {}, world rank: {}, local rank: {}, device_count: {}".format(
- args.world_size, args.rank, args.local_rank, torch.cuda.device_count()
- )
- )
- else:
- print("Not using distributed mode")
- args.distributed = False
- args.world_size = 1
- args.rank = 0
- args.local_rank = 0
- return
-
- print("world_size:{} rank:{} local_rank:{}".format(args.world_size, args.rank, args.local_rank))
- args.distributed = True
- torch.cuda.set_device(args.local_rank)
- args.dist_backend = "nccl"
- print("| distributed init (rank {}): {}".format(args.rank, args.dist_url), flush=True)
-
- torch.distributed.init_process_group(
- backend=args.dist_backend,
- world_size=args.world_size,
- rank=args.rank,
- init_method=args.dist_url,
- )
-
- print("Before torch.distributed.barrier()")
- torch.distributed.barrier()
- print("End torch.distributed.barrier()")
- setup_for_distributed(args.rank == 0)
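# Illustrative sketch (not part of the original module): the attribute bag init_distributed_mode
# expects. dist_url is whatever --dist-url the launcher passes; when started with torchrun, the
# RANK/WORLD_SIZE/LOCAL_RANK environment variables are set and the function fills in args.rank,
# args.world_size, args.gpu and args.distributed.
def _example_init_distributed():
    import argparse
    args = argparse.Namespace(dist_url="env://")
    init_distributed_mode(args)
    return args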
-
-
-@torch.no_grad()
-def accuracy(output, target, topk=(1,)):
- """Computes the precision@k for the specified values of k"""
- if target.numel() == 0:
- return [torch.zeros([], device=output.device)]
- maxk = max(topk)
- batch_size = target.size(0)
-
- _, pred = output.topk(maxk, 1, True, True)
- pred = pred.t()
- correct = pred.eq(target.view(1, -1).expand_as(pred))
-
- res = []
- for k in topk:
- correct_k = correct[:k].view(-1).float().sum(0)
- res.append(correct_k.mul_(100.0 / batch_size))
- return res
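# Illustrative worked example (not part of the original module): with these logits the top-1
# prediction is right for one of the two samples, so accuracy() returns 50.0.
def _example_topk_accuracy():
    output = torch.tensor([[0.1, 0.9], [0.8, 0.2]])  # 2 samples, 2 classes
    target = torch.tensor([1, 1])
    (top1,) = accuracy(output, target, topk=(1,))
    print(top1.item())  # 50.0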
-
-
-@torch.no_grad()
-def accuracy_onehot(pred, gt):
- """_summary_
-
- Args:
- pred (_type_): n, c
- gt (_type_): n, c
- """
- tp = ((pred - gt).abs().sum(-1) < 1e-4).float().sum()
- acc = tp / gt.shape[0] * 100
- return acc
-
-
-def interpolate(input, size=None, scale_factor=None, mode="nearest", align_corners=None):
- # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor
- """
- Equivalent to nn.functional.interpolate, but with support for empty batch sizes.
- This will eventually be supported natively by PyTorch, and this
- class can go away.
- """
- if __torchvision_need_compat_flag < 0.7:
- if input.numel() > 0:
- return torch.nn.functional.interpolate(input, size, scale_factor, mode, align_corners)
-
- output_shape = _output_size(2, input, size, scale_factor)
- output_shape = list(input.shape[:-2]) + list(output_shape)
- return _new_empty_tensor(input, output_shape)
- else:
- return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners)
-
-
-class color_sys:
- def __init__(self, num_colors) -> None:
- self.num_colors = num_colors
- colors = []
- for i in np.arange(0.0, 360.0, 360.0 / num_colors):
- hue = i / 360.0
- lightness = (50 + np.random.rand() * 10) / 100.0
- saturation = (90 + np.random.rand() * 10) / 100.0
- colors.append(
- tuple([int(j * 255) for j in colorsys.hls_to_rgb(hue, lightness, saturation)])
- )
- self.colors = colors
-
- def __call__(self, idx):
- return self.colors[idx]
-
-
-def inverse_sigmoid(x, eps=1e-3):
- x = x.clamp(min=0, max=1)
- x1 = x.clamp(min=eps)
- x2 = (1 - x).clamp(min=eps)
- return torch.log(x1 / x2)
-
-
-def clean_state_dict(state_dict):
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- if k[:7] == "module.":
- k = k[7:] # remove `module.`
- new_state_dict[k] = v
- return new_state_dict
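A hedged usage sketch for the clean_state_dict helper above (the model and checkpoint below are hypothetical): it strips the "module." prefix that DataParallel/DistributedDataParallel adds to parameter names, so a wrapped checkpoint loads into an unwrapped model.

import torch

model = torch.nn.Linear(4, 2)
checkpoint = {"module.weight": torch.zeros(2, 4), "module.bias": torch.zeros(2)}
model.load_state_dict(clean_state_dict(checkpoint))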
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/tailwind-indicator.tsx b/spaces/Makiing/coolb-in-gtest/src/components/tailwind-indicator.tsx
deleted file mode 100644
index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/tailwind-indicator.tsx
+++ /dev/null
@@ -1,14 +0,0 @@
-export function TailwindIndicator() {
- if (process.env.NODE_ENV === 'production') return null
-
- return (
-    <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
-      <div className="block sm:hidden">xs</div>
-      <div className="hidden sm:block md:hidden">sm</div>
-      <div className="hidden md:block lg:hidden">md</div>
-      <div className="hidden lg:block xl:hidden">lg</div>
-      <div className="hidden xl:block 2xl:hidden">xl</div>
-      <div className="hidden 2xl:block">2xl</div>
-    </div>
- )
-}
diff --git a/spaces/Manjushri/MusicGen/audiocraft/models/loaders.py b/spaces/Manjushri/MusicGen/audiocraft/models/loaders.py
deleted file mode 100644
index 19837d4cc98189bd38fdce0f46f51acacb893947..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/MusicGen/audiocraft/models/loaders.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility functions to load from the checkpoints.
-Each checkpoint is a torch.saved dict with the following keys:
-- 'xp.cfg': the hydra config as dumped during training. This should be used
- to rebuild the object using the audiocraft.models.builders functions,
-- 'model_best_state': a readily loadable best state for the model, including
- the conditioner. The model obtained from `xp.cfg` should be compatible
- with this state dict. In the case of a LM, the encodec model would not be
- bundled along but instead provided separately.
-
-Those functions also support loading from a remote location with the Torch Hub API.
-They also support overriding some parameters, in particular the device and dtype
-of the returned model.
-"""
-
-from pathlib import Path
-from huggingface_hub import hf_hub_download
-import typing as tp
-import os
-
-from omegaconf import OmegaConf
-import torch
-
-from . import builders
-
-
-HF_MODEL_CHECKPOINTS_MAP = {
- "small": "facebook/musicgen-small",
- "medium": "facebook/musicgen-medium",
- "large": "facebook/musicgen-large",
- "melody": "facebook/musicgen-melody",
-}
-
-
-def _get_state_dict(
- file_or_url_or_id: tp.Union[Path, str],
- filename: tp.Optional[str] = None,
- device='cpu',
- cache_dir: tp.Optional[str] = None,
-):
- # Return the state dict either from a file or url
- file_or_url_or_id = str(file_or_url_or_id)
- assert isinstance(file_or_url_or_id, str)
-
- if os.path.isfile(file_or_url_or_id):
- return torch.load(file_or_url_or_id, map_location=device)
-
- elif file_or_url_or_id.startswith('https://'):
- return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True)
-
- elif file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP:
- assert filename is not None, "filename needs to be defined if using HF checkpoints"
-
- repo_id = HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id]
- file = hf_hub_download(repo_id=repo_id, filename=filename, cache_dir=cache_dir)
- return torch.load(file, map_location=device)
-
- else:
- raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.")
-
-
-def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
- pkg = _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir)
- cfg = OmegaConf.create(pkg['xp.cfg'])
- cfg.device = str(device)
- model = builders.get_compression_model(cfg)
- model.load_state_dict(pkg['best_state'])
- model.eval()
- return model
-
-
-def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
- pkg = _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir)
- cfg = OmegaConf.create(pkg['xp.cfg'])
- cfg.device = str(device)
- if cfg.device == 'cpu':
- cfg.dtype = 'float32'
- else:
- cfg.dtype = 'float16'
- model = builders.get_lm_model(cfg)
- model.load_state_dict(pkg['best_state'])
- model.eval()
- model.cfg = cfg
- return model
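A hedged sketch of how the loaders above are typically used; the import path follows the file location in this diff, and the model name "small" and the device string are illustrative.

import torch
from audiocraft.models.loaders import load_compression_model, load_lm_model

device = "cuda" if torch.cuda.is_available() else "cpu"
compression_model = load_compression_model("small", device=device)  # EnCodec-style codec
lm = load_lm_model("small", device=device)  # language model over audio codes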
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/data_loader_cache.py b/spaces/Mellow-ai/PhotoAI_Mellow/data_loader_cache.py
deleted file mode 100644
index 75dfb518be94c1ce9eb4d5271abaa9711a7613c3..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/data_loader_cache.py
+++ /dev/null
@@ -1,385 +0,0 @@
-## data loader
-## Acknowledgement:
-## We would like to thank Dr. Ibrahim Almakky (https://scholar.google.co.uk/citations?user=T9MTcK0AAAAJ&hl=en)
-## for his help in implementing the cache mechanism of our DIS dataloader.
-from __future__ import print_function, division
-
-import numpy as np
-import random
-from copy import deepcopy
-import json
-from tqdm import tqdm
-from skimage import io
-import os
-from glob import glob
-
-import torch
-from torch.utils.data import Dataset, DataLoader
-from torchvision import transforms, utils
-from torchvision.transforms.functional import normalize
-import torch.nn.functional as F
-
-#### --------------------- DIS dataloader cache ---------------------####
-
-def get_im_gt_name_dict(datasets, flag='valid'):
- print("------------------------------", flag, "--------------------------------")
- name_im_gt_list = []
- for i in range(len(datasets)):
- print("--->>>", flag, " dataset ",i,"/",len(datasets)," ",datasets[i]["name"],"<<<---")
- tmp_im_list, tmp_gt_list = [], []
- tmp_im_list = glob(datasets[i]["im_dir"]+os.sep+'*'+datasets[i]["im_ext"])
-
- # img_name_dict[im_dirs[i][0]] = tmp_im_list
- print('-im-',datasets[i]["name"],datasets[i]["im_dir"], ': ',len(tmp_im_list))
-
- if(datasets[i]["gt_dir"]==""):
- print('-gt-', datasets[i]["name"], datasets[i]["gt_dir"], ': ', 'No Ground Truth Found')
- tmp_gt_list = []
- else:
- tmp_gt_list = [datasets[i]["gt_dir"]+os.sep+x.split(os.sep)[-1].split(datasets[i]["im_ext"])[0]+datasets[i]["gt_ext"] for x in tmp_im_list]
-
- # lbl_name_dict[im_dirs[i][0]] = tmp_gt_list
- print('-gt-', datasets[i]["name"],datasets[i]["gt_dir"], ': ',len(tmp_gt_list))
-
-
- if flag=="train": ## combine multiple training sets into one dataset
- if len(name_im_gt_list)==0:
- name_im_gt_list.append({"dataset_name":datasets[i]["name"],
- "im_path":tmp_im_list,
- "gt_path":tmp_gt_list,
- "im_ext":datasets[i]["im_ext"],
- "gt_ext":datasets[i]["gt_ext"],
- "cache_dir":datasets[i]["cache_dir"]})
- else:
- name_im_gt_list[0]["dataset_name"] = name_im_gt_list[0]["dataset_name"] + "_" + datasets[i]["name"]
- name_im_gt_list[0]["im_path"] = name_im_gt_list[0]["im_path"] + tmp_im_list
- name_im_gt_list[0]["gt_path"] = name_im_gt_list[0]["gt_path"] + tmp_gt_list
- if datasets[i]["im_ext"]!=".jpg" or datasets[i]["gt_ext"]!=".png":
-                    print("Error: Please make sure all your images and ground truth masks are in jpg and png format respectively !!!")
- exit()
- name_im_gt_list[0]["im_ext"] = ".jpg"
- name_im_gt_list[0]["gt_ext"] = ".png"
- name_im_gt_list[0]["cache_dir"] = os.sep.join(datasets[i]["cache_dir"].split(os.sep)[0:-1])+os.sep+name_im_gt_list[0]["dataset_name"]
- else: ## keep different validation or inference datasets as separate ones
- name_im_gt_list.append({"dataset_name":datasets[i]["name"],
- "im_path":tmp_im_list,
- "gt_path":tmp_gt_list,
- "im_ext":datasets[i]["im_ext"],
- "gt_ext":datasets[i]["gt_ext"],
- "cache_dir":datasets[i]["cache_dir"]})
-
- return name_im_gt_list
-
-def create_dataloaders(name_im_gt_list, cache_size=[], cache_boost=True, my_transforms=[], batch_size=1, shuffle=False):
- ## model="train": return one dataloader for training
- ## model="valid": return a list of dataloaders for validation or testing
-
- gos_dataloaders = []
- gos_datasets = []
-
- if(len(name_im_gt_list)==0):
- return gos_dataloaders, gos_datasets
-
- num_workers_ = 1
- if(batch_size>1):
- num_workers_ = 2
- if(batch_size>4):
- num_workers_ = 4
- if(batch_size>8):
- num_workers_ = 8
-
- for i in range(0,len(name_im_gt_list)):
- gos_dataset = GOSDatasetCache([name_im_gt_list[i]],
- cache_size = cache_size,
- cache_path = name_im_gt_list[i]["cache_dir"],
- cache_boost = cache_boost,
- transform = transforms.Compose(my_transforms))
- gos_dataloaders.append(DataLoader(gos_dataset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers_))
- gos_datasets.append(gos_dataset)
-
- return gos_dataloaders, gos_datasets
-
-def im_reader(im_path):
- return io.imread(im_path)
-
-def im_preprocess(im,size):
- if len(im.shape) < 3:
- im = im[:, :, np.newaxis]
- if im.shape[2] == 1:
- im = np.repeat(im, 3, axis=2)
- im_tensor = torch.tensor(im.copy(), dtype=torch.float32)
- im_tensor = torch.transpose(torch.transpose(im_tensor,1,2),0,1)
- if(len(size)<2):
- return im_tensor, im.shape[0:2]
- else:
- im_tensor = torch.unsqueeze(im_tensor,0)
- im_tensor = F.upsample(im_tensor, size, mode="bilinear")
- im_tensor = torch.squeeze(im_tensor,0)
-
- return im_tensor.type(torch.uint8), im.shape[0:2]
-
-def gt_preprocess(gt,size):
- if len(gt.shape) > 2:
- gt = gt[:, :, 0]
-
- gt_tensor = torch.unsqueeze(torch.tensor(gt, dtype=torch.uint8),0)
-
- if(len(size)<2):
- return gt_tensor.type(torch.uint8), gt.shape[0:2]
- else:
- gt_tensor = torch.unsqueeze(torch.tensor(gt_tensor, dtype=torch.float32),0)
- gt_tensor = F.upsample(gt_tensor, size, mode="bilinear")
- gt_tensor = torch.squeeze(gt_tensor,0)
-
- return gt_tensor.type(torch.uint8), gt.shape[0:2]
- # return gt_tensor, gt.shape[0:2]
-
-class GOSRandomHFlip(object):
- def __init__(self,prob=0.5):
- self.prob = prob
- def __call__(self,sample):
- imidx, image, label, shape = sample['imidx'], sample['image'], sample['label'], sample['shape']
-
- # random horizontal flip
- if random.random() >= self.prob:
- image = torch.flip(image,dims=[2])
- label = torch.flip(label,dims=[2])
-
- return {'imidx':imidx,'image':image, 'label':label, 'shape':shape}
-
-class GOSResize(object):
- def __init__(self,size=[320,320]):
- self.size = size
- def __call__(self,sample):
- imidx, image, label, shape = sample['imidx'], sample['image'], sample['label'], sample['shape']
-
- # import time
- # start = time.time()
-
- image = torch.squeeze(F.upsample(torch.unsqueeze(image,0),self.size,mode='bilinear'),dim=0)
- label = torch.squeeze(F.upsample(torch.unsqueeze(label,0),self.size,mode='bilinear'),dim=0)
-
- # print("time for resize: ", time.time()-start)
-
- return {'imidx':imidx,'image':image, 'label':label, 'shape':shape}
-
-class GOSRandomCrop(object):
- def __init__(self,size=[288,288]):
- self.size = size
- def __call__(self,sample):
- imidx, image, label, shape = sample['imidx'], sample['image'], sample['label'], sample['shape']
-
- h, w = image.shape[1:]
- new_h, new_w = self.size
-
- top = np.random.randint(0, h - new_h)
- left = np.random.randint(0, w - new_w)
-
- image = image[:,top:top+new_h,left:left+new_w]
- label = label[:,top:top+new_h,left:left+new_w]
-
- return {'imidx':imidx,'image':image, 'label':label, 'shape':shape}
-
-
-class GOSNormalize(object):
- def __init__(self, mean=[0.485,0.456,0.406], std=[0.229,0.224,0.225]):
- self.mean = mean
- self.std = std
-
- def __call__(self,sample):
-
- imidx, image, label, shape = sample['imidx'], sample['image'], sample['label'], sample['shape']
- image = normalize(image,self.mean,self.std)
-
- return {'imidx':imidx,'image':image, 'label':label, 'shape':shape}
-
-
-class GOSDatasetCache(Dataset):
-
- def __init__(self, name_im_gt_list, cache_size=[], cache_path='./cache', cache_file_name='dataset.json', cache_boost=False, transform=None):
-
-
- self.cache_size = cache_size
- self.cache_path = cache_path
- self.cache_file_name = cache_file_name
- self.cache_boost_name = ""
-
- self.cache_boost = cache_boost
- # self.ims_npy = None
- # self.gts_npy = None
-
- ## cache all the images and ground truth into a single pytorch tensor
- self.ims_pt = None
- self.gts_pt = None
-
- ## we will cache the npy as well regardless of the cache_boost
- # if(self.cache_boost):
- self.cache_boost_name = cache_file_name.split('.json')[0]
-
- self.transform = transform
-
- self.dataset = {}
-
- ## combine different datasets into one
- dataset_names = []
- dt_name_list = [] # dataset name per image
- im_name_list = [] # image name
- im_path_list = [] # im path
- gt_path_list = [] # gt path
- im_ext_list = [] # im ext
- gt_ext_list = [] # gt ext
- for i in range(0,len(name_im_gt_list)):
- dataset_names.append(name_im_gt_list[i]["dataset_name"])
- # dataset name repeated based on the number of images in this dataset
- dt_name_list.extend([name_im_gt_list[i]["dataset_name"] for x in name_im_gt_list[i]["im_path"]])
- im_name_list.extend([x.split(os.sep)[-1].split(name_im_gt_list[i]["im_ext"])[0] for x in name_im_gt_list[i]["im_path"]])
- im_path_list.extend(name_im_gt_list[i]["im_path"])
- gt_path_list.extend(name_im_gt_list[i]["gt_path"])
- im_ext_list.extend([name_im_gt_list[i]["im_ext"] for x in name_im_gt_list[i]["im_path"]])
- gt_ext_list.extend([name_im_gt_list[i]["gt_ext"] for x in name_im_gt_list[i]["gt_path"]])
-
-
- self.dataset["data_name"] = dt_name_list
- self.dataset["im_name"] = im_name_list
- self.dataset["im_path"] = im_path_list
- self.dataset["ori_im_path"] = deepcopy(im_path_list)
- self.dataset["gt_path"] = gt_path_list
- self.dataset["ori_gt_path"] = deepcopy(gt_path_list)
- self.dataset["im_shp"] = []
- self.dataset["gt_shp"] = []
- self.dataset["im_ext"] = im_ext_list
- self.dataset["gt_ext"] = gt_ext_list
-
-
- self.dataset["ims_pt_dir"] = ""
- self.dataset["gts_pt_dir"] = ""
-
- self.dataset = self.manage_cache(dataset_names)
-
- def manage_cache(self,dataset_names):
- if not os.path.exists(self.cache_path): # create the folder for cache
- os.makedirs(self.cache_path)
- cache_folder = os.path.join(self.cache_path, "_".join(dataset_names)+"_"+"x".join([str(x) for x in self.cache_size]))
- if not os.path.exists(cache_folder): # check if the cache files are there, if not then cache
- return self.cache(cache_folder)
- return self.load_cache(cache_folder)
-
- def cache(self,cache_folder):
- os.mkdir(cache_folder)
- cached_dataset = deepcopy(self.dataset)
-
- # ims_list = []
- # gts_list = []
- ims_pt_list = []
- gts_pt_list = []
- for i, im_path in tqdm(enumerate(self.dataset["im_path"]), total=len(self.dataset["im_path"])):
-
- im_id = cached_dataset["im_name"][i]
- print("im_path: ", im_path)
- im = im_reader(im_path)
- im, im_shp = im_preprocess(im,self.cache_size)
- im_cache_file = os.path.join(cache_folder,self.dataset["data_name"][i]+"_"+im_id + "_im.pt")
- torch.save(im,im_cache_file)
-
- cached_dataset["im_path"][i] = im_cache_file
- if(self.cache_boost):
- ims_pt_list.append(torch.unsqueeze(im,0))
- # ims_list.append(im.cpu().data.numpy().astype(np.uint8))
-
- gt = np.zeros(im.shape[0:2])
- if len(self.dataset["gt_path"])!=0:
- gt = im_reader(self.dataset["gt_path"][i])
- gt, gt_shp = gt_preprocess(gt,self.cache_size)
- gt_cache_file = os.path.join(cache_folder,self.dataset["data_name"][i]+"_"+im_id + "_gt.pt")
- torch.save(gt,gt_cache_file)
- if len(self.dataset["gt_path"])>0:
- cached_dataset["gt_path"][i] = gt_cache_file
- else:
- cached_dataset["gt_path"].append(gt_cache_file)
- if(self.cache_boost):
- gts_pt_list.append(torch.unsqueeze(gt,0))
- # gts_list.append(gt.cpu().data.numpy().astype(np.uint8))
-
- # im_shp_cache_file = os.path.join(cache_folder,im_id + "_im_shp.pt")
- # torch.save(gt_shp, shp_cache_file)
- cached_dataset["im_shp"].append(im_shp)
- # self.dataset["im_shp"].append(im_shp)
-
- # shp_cache_file = os.path.join(cache_folder,im_id + "_gt_shp.pt")
- # torch.save(gt_shp, shp_cache_file)
- cached_dataset["gt_shp"].append(gt_shp)
- # self.dataset["gt_shp"].append(gt_shp)
-
- if(self.cache_boost):
- cached_dataset["ims_pt_dir"] = os.path.join(cache_folder, self.cache_boost_name+'_ims.pt')
- cached_dataset["gts_pt_dir"] = os.path.join(cache_folder, self.cache_boost_name+'_gts.pt')
- self.ims_pt = torch.cat(ims_pt_list,dim=0)
- self.gts_pt = torch.cat(gts_pt_list,dim=0)
- torch.save(torch.cat(ims_pt_list,dim=0),cached_dataset["ims_pt_dir"])
- torch.save(torch.cat(gts_pt_list,dim=0),cached_dataset["gts_pt_dir"])
-
- try:
- json_file = open(os.path.join(cache_folder, self.cache_file_name),"w")
- json.dump(cached_dataset, json_file)
- json_file.close()
- except Exception:
- raise FileNotFoundError("Cannot create JSON")
- return cached_dataset
-
- def load_cache(self, cache_folder):
- json_file = open(os.path.join(cache_folder,self.cache_file_name),"r")
- dataset = json.load(json_file)
- json_file.close()
-        ## if cache_boost is true, we load the cached image and ground-truth tensors into RAM once
-        ## otherwise each sample's pytorch tensor is loaded from its .pt file on disk in __getitem__
- if(self.cache_boost):
- # self.ims_npy = np.load(dataset["ims_npy_dir"])
- # self.gts_npy = np.load(dataset["gts_npy_dir"])
- self.ims_pt = torch.load(dataset["ims_pt_dir"], map_location='cpu')
- self.gts_pt = torch.load(dataset["gts_pt_dir"], map_location='cpu')
- return dataset
-
- def __len__(self):
- return len(self.dataset["im_path"])
-
- def __getitem__(self, idx):
-
- im = None
- gt = None
- if(self.cache_boost and self.ims_pt is not None):
-
- # start = time.time()
- im = self.ims_pt[idx]#.type(torch.float32)
- gt = self.gts_pt[idx]#.type(torch.float32)
- # print(idx, 'time for pt loading: ', time.time()-start)
-
- else:
- # import time
- # start = time.time()
- # print("tensor***")
- im_pt_path = os.path.join(self.cache_path,os.sep.join(self.dataset["im_path"][idx].split(os.sep)[-2:]))
- im = torch.load(im_pt_path)#(self.dataset["im_path"][idx])
- gt_pt_path = os.path.join(self.cache_path,os.sep.join(self.dataset["gt_path"][idx].split(os.sep)[-2:]))
- gt = torch.load(gt_pt_path)#(self.dataset["gt_path"][idx])
- # print(idx,'time for tensor loading: ', time.time()-start)
-
-
- im_shp = self.dataset["im_shp"][idx]
- # print("time for loading im and gt: ", time.time()-start)
-
- # start_time = time.time()
- im = torch.divide(im,255.0)
- gt = torch.divide(gt,255.0)
- # print(idx, 'time for normalize torch divide: ', time.time()-start_time)
-
- sample = {
- "imidx": torch.from_numpy(np.array(idx)),
- "image": im,
- "label": gt,
- "shape": torch.from_numpy(np.array(im_shp)),
- }
-
- if self.transform:
- sample = self.transform(sample)
-
- return sample
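A hedged sketch of wiring the cache dataloader above together; the dataset name, directory paths, and cache size are placeholders.

dataset_cfg = [{
    "name": "DIS-VD",
    "im_dir": "/data/DIS-VD/im", "im_ext": ".jpg",
    "gt_dir": "/data/DIS-VD/gt", "gt_ext": ".png",
    "cache_dir": "/data/cache/DIS-VD",
}]
name_im_gt_list = get_im_gt_name_dict(dataset_cfg, flag="valid")
dataloaders, datasets = create_dataloaders(
    name_im_gt_list,
    cache_size=[1024, 1024],
    cache_boost=False,
    my_transforms=[GOSNormalize()],
    batch_size=1,
    shuffle=False,
)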
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/session_factory.py b/spaces/Mellow-ai/PhotoAI_Mellow/rembg/session_factory.py
deleted file mode 100644
index b1e1cb3f4d2669b1ff8a91a4d46544b839466902..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/session_factory.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import os
-from typing import Type
-
-import onnxruntime as ort
-
-from .sessions import sessions_class
-from .sessions.base import BaseSession
-from .sessions.u2net import U2netSession
-
-
-def new_session(model_name: str = "u2net", *args, **kwargs) -> BaseSession:
- session_class: Type[BaseSession] = U2netSession
-
- for sc in sessions_class:
- if sc.name() == model_name:
- session_class = sc
- break
-
- sess_opts = ort.SessionOptions()
-
- if "OMP_NUM_THREADS" in os.environ:
- sess_opts.inter_op_num_threads = int(os.environ["OMP_NUM_THREADS"])
-
- return session_class(model_name, sess_opts, *args, **kwargs)
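A hedged usage sketch for new_session above, assuming the surrounding rembg package exposes it together with remove() as upstream rembg does; the file names are placeholders.

from rembg import new_session, remove

session = new_session("u2net")
with open("input.png", "rb") as fin, open("output.png", "wb") as fout:
    fout.write(remove(fin.read(), session=session))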
diff --git a/spaces/Michale1017/xray/index.js b/spaces/Michale1017/xray/index.js
deleted file mode 100644
index 720c4f8b30199ee5df97bd1135d01728bd6b8068..0000000000000000000000000000000000000000
--- a/spaces/Michale1017/xray/index.js
+++ /dev/null
@@ -1,38 +0,0 @@
-const express = require("express");
-const app = express();
-const { spawn } = require('child_process');
-const { createProxyMiddleware } = require("http-proxy-middleware");
-const port= process.env.PORT||7860;
-const shellFilePath = './start.sh';
-const childProcess = spawn('sh', [shellFilePath]);
-
-// listen for child process output
-childProcess.stdout.on('data', (data) => {
- console.log(`stdout: ${data}`);
-});
-
-childProcess.stderr.on('data', (data) => {
- console.error(`stderr: ${data}`);
-});
-
-childProcess.on('close', (code) => {
-  console.log(`Child process exited, exit code: ${code}`);
-});
-// http
-app.get("/", function(req, res) {
- res.send("Hello world!");
-});
-app.use(
- "/",
- createProxyMiddleware({
- changeOrigin: true,
- onProxyReq: function onProxyReq(proxyReq, req, res) { },
- pathRewrite: {
- "^/": "/",
- },
- target: "http://127.0.0.1:8080/",
- ws: true,
- })
-);
-
-app.listen(port, () => console.log(`server is running on port:${port}!`));
\ No newline at end of file
diff --git a/spaces/MikeTrizna/bhl_flickr_search/README.md b/spaces/MikeTrizna/bhl_flickr_search/README.md
deleted file mode 100644
index 086d0b372b652583accf74db7b617b77d8255106..0000000000000000000000000000000000000000
--- a/spaces/MikeTrizna/bhl_flickr_search/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: BHL Flickr Search
-emoji: 🐢
-colorFrom: indigo
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-This is a Hugging Face implementation of https://github.com/miketrizna/bhl_flickr_search
diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_imagenet_test.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_imagenet_test.py
deleted file mode 100644
index 45c35d539ce2d7fcd0df30ed1d520e47e51312fa..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_imagenet_test.py
+++ /dev/null
@@ -1,249 +0,0 @@
-# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Test the keras ResNet model with ImageNet data."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from absl.testing import parameterized
-import tensorflow as tf
-
-from tensorflow.python.eager import context
-from official.benchmark.models import resnet_imagenet_main
-from official.utils.testing import integration
-from official.vision.image_classification.resnet import imagenet_preprocessing
-
-
-@parameterized.parameters(
- "resnet",
- # "resnet_polynomial_decay", b/151854314
- "mobilenet",
- # "mobilenet_polynomial_decay" b/151854314
-)
-class KerasImagenetTest(tf.test.TestCase):
- """Unit tests for Keras Models with ImageNet."""
- _default_flags_dict = [
- "-batch_size", "4",
- "-train_steps", "1",
- "-use_synthetic_data", "true",
- "-data_format", "channels_last",
- ]
- _extra_flags_dict = {
- "resnet": [
- "-model", "resnet50_v1.5",
- "-optimizer", "resnet50_default",
- ],
- "resnet_polynomial_decay": [
- "-model", "resnet50_v1.5",
- "-optimizer", "resnet50_default",
- "-pruning_method", "polynomial_decay",
- ],
- "mobilenet": [
- "-model", "mobilenet",
- "-optimizer", "mobilenet_default",
- ],
- "mobilenet_polynomial_decay": [
- "-model", "mobilenet",
- "-optimizer", "mobilenet_default",
- "-pruning_method", "polynomial_decay",
- ],
- }
- _tempdir = None
-
- @classmethod
- def setUpClass(cls): # pylint: disable=invalid-name
- super(KerasImagenetTest, cls).setUpClass()
- resnet_imagenet_main.define_imagenet_keras_flags()
-
- def setUp(self):
- super(KerasImagenetTest, self).setUp()
- imagenet_preprocessing.NUM_IMAGES["validation"] = 4
- self.policy = \
- tf.keras.mixed_precision.experimental.global_policy()
-
- def tearDown(self):
- super(KerasImagenetTest, self).tearDown()
- tf.io.gfile.rmtree(self.get_temp_dir())
- tf.keras.mixed_precision.experimental.set_policy(self.policy)
-
- def get_extra_flags_dict(self, flags_key):
- return self._extra_flags_dict[flags_key] + self._default_flags_dict
-
- def test_end_to_end_no_dist_strat(self, flags_key):
- """Test Keras model with 1 GPU, no distribution strategy."""
-
- extra_flags = [
- "-distribution_strategy", "off",
- ]
- extra_flags = extra_flags + self.get_extra_flags_dict(flags_key)
-
- integration.run_synthetic(
- main=resnet_imagenet_main.run,
- tmp_root=self.get_temp_dir(),
- extra_flags=extra_flags
- )
-
- def test_end_to_end_graph_no_dist_strat(self, flags_key):
- """Test Keras model in legacy graph mode with 1 GPU, no dist strat."""
- extra_flags = [
- "-enable_eager", "false",
- "-distribution_strategy", "off",
- ]
- extra_flags = extra_flags + self.get_extra_flags_dict(flags_key)
-
- integration.run_synthetic(
- main=resnet_imagenet_main.run,
- tmp_root=self.get_temp_dir(),
- extra_flags=extra_flags
- )
-
- def test_end_to_end_1_gpu(self, flags_key):
- """Test Keras model with 1 GPU."""
-
- if context.num_gpus() < 1:
- self.skipTest(
- "{} GPUs are not available for this test. {} GPUs are available".
- format(1, context.num_gpus()))
-
- extra_flags = [
- "-num_gpus", "1",
- "-distribution_strategy", "mirrored",
- "-enable_checkpoint_and_export", "1",
- ]
- extra_flags = extra_flags + self.get_extra_flags_dict(flags_key)
-
- integration.run_synthetic(
- main=resnet_imagenet_main.run,
- tmp_root=self.get_temp_dir(),
- extra_flags=extra_flags
- )
-
- def test_end_to_end_1_gpu_fp16(self, flags_key):
- """Test Keras model with 1 GPU and fp16."""
-
- if context.num_gpus() < 1:
- self.skipTest(
- "{} GPUs are not available for this test. {} GPUs are available"
- .format(1, context.num_gpus()))
-
- extra_flags = [
- "-num_gpus", "1",
- "-dtype", "fp16",
- "-distribution_strategy", "mirrored",
- ]
- extra_flags = extra_flags + self.get_extra_flags_dict(flags_key)
-
- if "polynomial_decay" in extra_flags:
- self.skipTest("Pruning with fp16 is not currently supported.")
-
- integration.run_synthetic(
- main=resnet_imagenet_main.run,
- tmp_root=self.get_temp_dir(),
- extra_flags=extra_flags
- )
-
- def test_end_to_end_2_gpu(self, flags_key):
- """Test Keras model with 2 GPUs."""
-
- if context.num_gpus() < 2:
- self.skipTest(
- "{} GPUs are not available for this test. {} GPUs are available".
- format(2, context.num_gpus()))
-
- extra_flags = [
- "-num_gpus", "2",
- "-distribution_strategy", "mirrored",
- ]
- extra_flags = extra_flags + self.get_extra_flags_dict(flags_key)
-
- integration.run_synthetic(
- main=resnet_imagenet_main.run,
- tmp_root=self.get_temp_dir(),
- extra_flags=extra_flags
- )
-
- def test_end_to_end_xla_2_gpu(self, flags_key):
- """Test Keras model with XLA and 2 GPUs."""
-
- if context.num_gpus() < 2:
- self.skipTest(
- "{} GPUs are not available for this test. {} GPUs are available".
- format(2, context.num_gpus()))
-
- extra_flags = [
- "-num_gpus", "2",
- "-enable_xla", "true",
- "-distribution_strategy", "mirrored",
- ]
- extra_flags = extra_flags + self.get_extra_flags_dict(flags_key)
-
- integration.run_synthetic(
- main=resnet_imagenet_main.run,
- tmp_root=self.get_temp_dir(),
- extra_flags=extra_flags
- )
-
- def test_end_to_end_2_gpu_fp16(self, flags_key):
- """Test Keras model with 2 GPUs and fp16."""
-
- if context.num_gpus() < 2:
- self.skipTest(
- "{} GPUs are not available for this test. {} GPUs are available".
- format(2, context.num_gpus()))
-
- extra_flags = [
- "-num_gpus", "2",
- "-dtype", "fp16",
- "-distribution_strategy", "mirrored",
- ]
- extra_flags = extra_flags + self.get_extra_flags_dict(flags_key)
-
- if "polynomial_decay" in extra_flags:
- self.skipTest("Pruning with fp16 is not currently supported.")
-
- integration.run_synthetic(
- main=resnet_imagenet_main.run,
- tmp_root=self.get_temp_dir(),
- extra_flags=extra_flags
- )
-
- def test_end_to_end_xla_2_gpu_fp16(self, flags_key):
- """Test Keras model with XLA, 2 GPUs and fp16."""
- if context.num_gpus() < 2:
- self.skipTest(
- "{} GPUs are not available for this test. {} GPUs are available".
- format(2, context.num_gpus()))
-
- extra_flags = [
- "-num_gpus", "2",
- "-dtype", "fp16",
- "-enable_xla", "true",
- "-distribution_strategy", "mirrored",
- ]
- extra_flags = extra_flags + self.get_extra_flags_dict(flags_key)
-
- if "polynomial_decay" in extra_flags:
- self.skipTest("Pruning with fp16 is not currently supported.")
-
- integration.run_synthetic(
- main=resnet_imagenet_main.run,
- tmp_root=self.get_temp_dir(),
- extra_flags=extra_flags
- )
-
-
-if __name__ == "__main__":
- tf.test.main()
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/squad_utils.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/squad_utils.py
deleted file mode 100644
index efab6da6f80658213317e13dee86b09b2cb94c63..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/squad_utils.py
+++ /dev/null
@@ -1,973 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-# coding=utf-8
-"""Utilities used in SQUAD task."""
-from __future__ import absolute_import
-from __future__ import division
-# from __future__ import google_type_annotations
-from __future__ import print_function
-
-import collections
-import gc
-import json
-import math
-import os
-import pickle
-import re
-import string
-
-from absl import logging
-import numpy as np
-import six
-import tensorflow as tf
-
-from official.nlp.xlnet import data_utils
-from official.nlp.xlnet import preprocess_utils
-
-SPIECE_UNDERLINE = u"▁"
-
-
-class InputFeatures(object):
- """A single set of features of data."""
-
- def __init__(self,
- unique_id,
- example_index,
- doc_span_index,
- tok_start_to_orig_index,
- tok_end_to_orig_index,
- token_is_max_context,
- input_ids,
- input_mask,
- p_mask,
- segment_ids,
- paragraph_len,
- cls_index,
- start_position=None,
- end_position=None,
- is_impossible=None):
- self.unique_id = unique_id
- self.example_index = example_index
- self.doc_span_index = doc_span_index
- self.tok_start_to_orig_index = tok_start_to_orig_index
- self.tok_end_to_orig_index = tok_end_to_orig_index
- self.token_is_max_context = token_is_max_context
- self.input_ids = input_ids
- self.input_mask = input_mask
- self.p_mask = p_mask
- self.segment_ids = segment_ids
- self.paragraph_len = paragraph_len
- self.cls_index = cls_index
- self.start_position = start_position
- self.end_position = end_position
- self.is_impossible = is_impossible
-
-
-def make_qid_to_has_ans(dataset):
- qid_to_has_ans = {}
- for article in dataset:
- for p in article["paragraphs"]:
- for qa in p["qas"]:
- qid_to_has_ans[qa["id"]] = bool(qa["answers"])
- return qid_to_has_ans
-
-
-def get_raw_scores(dataset, preds):
- """Gets exact scores and f1 scores."""
- exact_scores = {}
- f1_scores = {}
- for article in dataset:
- for p in article["paragraphs"]:
- for qa in p["qas"]:
- qid = qa["id"]
- gold_answers = [
- a["text"] for a in qa["answers"] if normalize_answer(a["text"])
- ]
- if not gold_answers:
- # For unanswerable questions, only correct answer is empty string
- gold_answers = [""]
- if qid not in preds:
- print("Missing prediction for %s" % qid)
- continue
- a_pred = preds[qid]
- # Take max over all gold answers
- exact_scores[qid] = max(compute_exact(a, a_pred) for a in gold_answers)
- f1_scores[qid] = max(compute_f1(a, a_pred) for a in gold_answers)
- return exact_scores, f1_scores
-
-
-def normalize_answer(s):
- """Lower text and remove punctuation, articles and extra whitespace."""
-
- def remove_articles(text):
- regex = re.compile(r"\b(a|an|the)\b", re.UNICODE)
- return re.sub(regex, " ", text)
-
- def white_space_fix(text):
- return " ".join(text.split())
-
- def remove_punc(text):
- exclude = set(string.punctuation)
- return "".join(ch for ch in text if ch not in exclude)
-
- def lower(text):
- return text.lower()
-
- return white_space_fix(remove_articles(remove_punc(lower(s))))
-
-
-def compute_exact(a_gold, a_pred):
- return int(normalize_answer(a_gold) == normalize_answer(a_pred))
-
-
-def get_tokens(s):
- if not s:
- return []
- return normalize_answer(s).split()
-
-
-def compute_f1(a_gold, a_pred):
- """Computes f1 score."""
- gold_toks = get_tokens(a_gold)
- pred_toks = get_tokens(a_pred)
- common = collections.Counter(gold_toks) & collections.Counter(pred_toks)
- num_same = sum(common.values())
- # pylint: disable=g-explicit-length-test
- if len(gold_toks) == 0 or len(pred_toks) == 0:
- # If either is no-answer, then F1 is 1 if they agree, 0 otherwise
- return int(gold_toks == pred_toks)
- if num_same == 0:
- return 0
- precision = 1.0 * num_same / len(pred_toks)
- recall = 1.0 * num_same / len(gold_toks)
- f1 = (2 * precision * recall) / (precision + recall)
- return f1
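# Illustrative worked example (not part of the original module): normalize_answer lowercases and
# strips punctuation and articles, so the exact-match check below passes; for F1, the prediction
# covers 2 of 3 gold tokens with no spurious tokens (precision 1.0, recall 2/3, F1 = 0.8).
def _example_squad_scores():
    assert compute_exact("New York City", "new york city.") == 1
    assert abs(compute_f1("New York City", "York City") - 0.8) < 1e-6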
-
-
-def find_best_thresh(preds, scores, na_probs, qid_to_has_ans):
- """Finds best threshold."""
- num_no_ans = sum(1 for k in qid_to_has_ans if not qid_to_has_ans[k])
- cur_score = num_no_ans
- best_score = cur_score
- best_thresh = 0.0
- qid_list = sorted(na_probs, key=lambda k: na_probs[k])
- for qid in qid_list:
- if qid not in scores:
- continue
- if qid_to_has_ans[qid]:
- diff = scores[qid]
- else:
- if preds[qid]:
- diff = -1
- else:
- diff = 0
- cur_score += diff
- if cur_score > best_score:
- best_score = cur_score
- best_thresh = na_probs[qid]
-
- has_ans_score, has_ans_cnt = 0, 0
- for qid in qid_list:
- if not qid_to_has_ans[qid]:
- continue
- has_ans_cnt += 1
-
- if qid not in scores:
- continue
- has_ans_score += scores[qid]
-
- return 100.0 * best_score / len(
- scores), best_thresh, 1.0 * has_ans_score / has_ans_cnt
-
-
-def find_all_best_thresh(main_eval, preds, exact_raw, f1_raw, na_probs,
- qid_to_has_ans):
- """Finds all best threshold."""
- best_exact, exact_thresh, has_ans_exact = find_best_thresh(
- preds, exact_raw, na_probs, qid_to_has_ans)
- best_f1, f1_thresh, has_ans_f1 = find_best_thresh(preds, f1_raw, na_probs,
- qid_to_has_ans)
- main_eval["best_exact"] = best_exact
- main_eval["best_exact_thresh"] = exact_thresh
- main_eval["best_f1"] = best_f1
- main_eval["best_f1_thresh"] = f1_thresh
- main_eval["has_ans_exact"] = has_ans_exact
- main_eval["has_ans_f1"] = has_ans_f1
-
-
-_PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
- "PrelimPrediction", [
- "feature_index", "start_index", "end_index", "start_log_prob",
- "end_log_prob"
- ])
-
-_NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
- "NbestPrediction", ["text", "start_log_prob", "end_log_prob"])
-RawResult = collections.namedtuple("RawResult", [
- "unique_id", "start_top_log_probs", "start_top_index", "end_top_log_probs",
- "end_top_index", "cls_logits"
-])
-
-
-def _compute_softmax(scores):
- """Computes softmax probability over raw logits."""
- if not scores:
- return []
-
- max_score = None
- for score in scores:
- if max_score is None or score > max_score:
- max_score = score
-
- exp_scores = []
- total_sum = 0.0
- for score in scores:
- x = math.exp(score - max_score)
- exp_scores.append(x)
- total_sum += x
-
- probs = []
- for score in exp_scores:
- probs.append(score / total_sum)
- return probs
-
-
-class SquadExample(object):
- """A single training/test example for simple sequence classification.
-
- For examples without an answer, the start and end position are -1.
- """
-
- def __init__(self,
- qas_id,
- question_text,
- paragraph_text,
- orig_answer_text=None,
- start_position=None,
- is_impossible=False):
- self.qas_id = qas_id
- self.question_text = question_text
- self.paragraph_text = paragraph_text
- self.orig_answer_text = orig_answer_text
- self.start_position = start_position
- self.is_impossible = is_impossible
-
- def __str__(self):
- return self.__repr__()
-
- def __repr__(self):
- s = ""
- s += "qas_id: %s" % (preprocess_utils.printable_text(self.qas_id))
- s += ", question_text: %s" % (
- preprocess_utils.printable_text(self.question_text))
- s += ", paragraph_text: [%s]" % (" ".join(self.paragraph_text))
- if self.start_position:
- s += ", start_position: %d" % (self.start_position)
- if self.start_position:
- s += ", is_impossible: %r" % (self.is_impossible)
- return s
-
-
-def write_predictions(all_examples, all_features, all_results, n_best_size,
- max_answer_length, output_prediction_file,
- output_nbest_file, output_null_log_odds_file, orig_data,
- start_n_top, end_n_top):
- """Writes final predictions to the json file and log-odds of null if needed."""
- logging.info("Writing predictions to: %s", (output_prediction_file))
-
- example_index_to_features = collections.defaultdict(list)
- for feature in all_features:
- example_index_to_features[feature.example_index].append(feature)
-
- unique_id_to_result = {}
- for result in all_results:
- unique_id_to_result[result.unique_id] = result
-
- all_predictions = collections.OrderedDict()
- all_nbest_json = collections.OrderedDict()
- scores_diff_json = collections.OrderedDict()
-
- for (example_index, example) in enumerate(all_examples):
- features = example_index_to_features[example_index]
-
- prelim_predictions = []
- # keep track of the minimum score of null start+end of position 0
- score_null = 1000000 # large and positive
-
- for (feature_index, feature) in enumerate(features):
- result = unique_id_to_result[feature.unique_id]
-
- cur_null_score = result.cls_logits
-
- # if we could have irrelevant answers, get the min score of irrelevant
- score_null = min(score_null, cur_null_score)
-
- for i in range(start_n_top):
- for j in range(end_n_top):
- start_log_prob = result.start_top_log_probs[i]
- start_index = result.start_top_index[i]
-
- j_index = i * end_n_top + j
-
- end_log_prob = result.end_top_log_probs[j_index]
- end_index = result.end_top_index[j_index]
-
- # We could hypothetically create invalid predictions, e.g., predict
- # that the start of the span is in the question. We throw out all
- # invalid predictions.
- if start_index >= feature.paragraph_len - 1:
- continue
- if end_index >= feature.paragraph_len - 1:
- continue
-
- if not feature.token_is_max_context.get(start_index, False):
- continue
- if end_index < start_index:
- continue
- length = end_index - start_index + 1
- if length > max_answer_length:
- continue
-
- prelim_predictions.append(
- _PrelimPrediction(
- feature_index=feature_index,
- start_index=start_index,
- end_index=end_index,
- start_log_prob=start_log_prob,
- end_log_prob=end_log_prob))
-
- prelim_predictions = sorted(
- prelim_predictions,
- key=lambda x: (x.start_log_prob + x.end_log_prob),
- reverse=True)
-
- seen_predictions = {}
- nbest = []
- for pred in prelim_predictions:
- if len(nbest) >= n_best_size:
- break
- feature = features[pred.feature_index]
-
- tok_start_to_orig_index = feature.tok_start_to_orig_index
- tok_end_to_orig_index = feature.tok_end_to_orig_index
- start_orig_pos = tok_start_to_orig_index[pred.start_index]
- end_orig_pos = tok_end_to_orig_index[pred.end_index]
-
- paragraph_text = example.paragraph_text
- final_text = paragraph_text[start_orig_pos:end_orig_pos + 1].strip()
-
- if final_text in seen_predictions:
- continue
-
- seen_predictions[final_text] = True
-
- nbest.append(
- _NbestPrediction(
- text=final_text,
- start_log_prob=pred.start_log_prob,
- end_log_prob=pred.end_log_prob))
-
- # In very rare edge cases we could have no valid predictions. So we
- # just create a nonce prediction in this case to avoid failure.
- if not nbest:
- nbest.append(
- _NbestPrediction(text="", start_log_prob=-1e6, end_log_prob=-1e6))
-
- total_scores = []
- best_non_null_entry = None
- for entry in nbest:
- total_scores.append(entry.start_log_prob + entry.end_log_prob)
- if not best_non_null_entry:
- best_non_null_entry = entry
-
- probs = _compute_softmax(total_scores)
-
- nbest_json = []
- for (i, entry) in enumerate(nbest):
- output = collections.OrderedDict()
- output["text"] = entry.text
- output["probability"] = probs[i]
- output["start_log_prob"] = entry.start_log_prob
- output["end_log_prob"] = entry.end_log_prob
- nbest_json.append(output)
-
- assert len(nbest_json) >= 1
- assert best_non_null_entry is not None
-
- score_diff = score_null
- scores_diff_json[example.qas_id] = score_diff
-
- all_predictions[example.qas_id] = best_non_null_entry.text
-
- all_nbest_json[example.qas_id] = nbest_json
-
- with tf.io.gfile.GFile(output_prediction_file, "w") as writer:
- writer.write(json.dumps(all_predictions, indent=4) + "\n")
-
- with tf.io.gfile.GFile(output_nbest_file, "w") as writer:
- writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
-
- with tf.io.gfile.GFile(output_null_log_odds_file, "w") as writer:
- writer.write(json.dumps(scores_diff_json, indent=4) + "\n")
-
- qid_to_has_ans = make_qid_to_has_ans(orig_data)
- exact_raw, f1_raw = get_raw_scores(orig_data, all_predictions)
- out_eval = {}
-
- find_all_best_thresh(out_eval, all_predictions, exact_raw, f1_raw,
- scores_diff_json, qid_to_has_ans)
-
- return out_eval
-
-
-def read_squad_examples(input_file, is_training):
- """Reads a SQuAD json file into a list of SquadExample."""
- with tf.io.gfile.GFile(input_file, "r") as reader:
- input_data = json.load(reader)["data"]
-
- examples = []
- for entry in input_data:
- for paragraph in entry["paragraphs"]:
- paragraph_text = paragraph["context"]
-
- for qa in paragraph["qas"]:
- qas_id = qa["id"]
- question_text = qa["question"]
- start_position = None
- orig_answer_text = None
- is_impossible = False
-
- if is_training:
- is_impossible = qa["is_impossible"]
- if (len(qa["answers"]) != 1) and (not is_impossible):
- raise ValueError(
- "For training, each question should have exactly 1 answer.")
- if not is_impossible:
- answer = qa["answers"][0]
- orig_answer_text = answer["text"]
- start_position = answer["answer_start"]
- else:
- start_position = -1
- orig_answer_text = ""
-
- example = SquadExample(
- qas_id=qas_id,
- question_text=question_text,
- paragraph_text=paragraph_text,
- orig_answer_text=orig_answer_text,
- start_position=start_position,
- is_impossible=is_impossible)
- examples.append(example)
-
- return examples
-
-
-# pylint: disable=invalid-name
-def _convert_index(index, pos, M=None, is_start=True):
- """Converts index."""
- if index[pos] is not None:
- return index[pos]
- N = len(index)
- rear = pos
- while rear < N - 1 and index[rear] is None:
- rear += 1
- front = pos
- while front > 0 and index[front] is None:
- front -= 1
- assert index[front] is not None or index[rear] is not None
- if index[front] is None:
- if index[rear] >= 1:
- if is_start:
- return 0
- else:
- return index[rear] - 1
- return index[rear]
- if index[rear] is None:
- if M is not None and index[front] < M - 1:
- if is_start:
- return index[front] + 1
- else:
- return M - 1
- return index[front]
- if is_start:
- if index[rear] > index[front] + 1:
- return index[front] + 1
- else:
- return index[rear]
- else:
- if index[rear] > index[front] + 1:
- return index[rear] - 1
- else:
- return index[front]
-
-
-def convert_examples_to_features(examples, sp_model, max_seq_length, doc_stride,
- max_query_length, is_training, output_fn,
- uncased):
- """Loads a data file into a list of `InputBatch`s."""
-
- cnt_pos, cnt_neg = 0, 0
- unique_id = 1000000000
- max_N, max_M = 1024, 1024
- f = np.zeros((max_N, max_M), dtype=np.float32)
-
- for (example_index, example) in enumerate(examples):
- # pylint: disable=logging-format-interpolation
- if example_index % 100 == 0:
- logging.info("Converting {}/{} pos {} neg {}".format(
- example_index, len(examples), cnt_pos, cnt_neg))
-
- query_tokens = preprocess_utils.encode_ids(
- sp_model,
- preprocess_utils.preprocess_text(example.question_text, lower=uncased))
-
- if len(query_tokens) > max_query_length:
- query_tokens = query_tokens[0:max_query_length]
-
- paragraph_text = example.paragraph_text
- para_tokens = preprocess_utils.encode_pieces(
- sp_model,
- preprocess_utils.preprocess_text(example.paragraph_text, lower=uncased))
-
- chartok_to_tok_index = []
- tok_start_to_chartok_index = []
- tok_end_to_chartok_index = []
- char_cnt = 0
- for i, token in enumerate(para_tokens):
- chartok_to_tok_index.extend([i] * len(token))
- tok_start_to_chartok_index.append(char_cnt)
- char_cnt += len(token)
- tok_end_to_chartok_index.append(char_cnt - 1)
-
- tok_cat_text = "".join(para_tokens).replace(SPIECE_UNDERLINE, " ")
- N, M = len(paragraph_text), len(tok_cat_text)
-
- if N > max_N or M > max_M:
- max_N = max(N, max_N)
- max_M = max(M, max_M)
- f = np.zeros((max_N, max_M), dtype=np.float32)
- gc.collect()
-
- g = {}
-
- # pylint: disable=cell-var-from-loop
- def _lcs_match(max_dist):
- """LCS match."""
- f.fill(0)
- g.clear()
-
-      ### longest common subsequence
- # f[i, j] = max(f[i - 1, j], f[i, j - 1], f[i - 1, j - 1] + match(i, j))
- for i in range(N):
-
-        # note(zhiliny):
-        # unlike standard LCS, the inner loop only searches a band of width
-        # max_dist around the diagonal, since the mismatch between sentence
-        # pieces and the original text is expected to be small
- for j in range(i - max_dist, i + max_dist):
- if j >= M or j < 0:
- continue
-
- if i > 0:
- g[(i, j)] = 0
- f[i, j] = f[i - 1, j]
-
- if j > 0 and f[i, j - 1] > f[i, j]:
- g[(i, j)] = 1
- f[i, j] = f[i, j - 1]
-
- f_prev = f[i - 1, j - 1] if i > 0 and j > 0 else 0
- if (preprocess_utils.preprocess_text(
- paragraph_text[i], lower=uncased,
- remove_space=False) == tok_cat_text[j] and f_prev + 1 > f[i, j]):
- g[(i, j)] = 2
- f[i, j] = f_prev + 1
-
- max_dist = abs(N - M) + 5
- for _ in range(2):
- _lcs_match(max_dist)
- if f[N - 1, M - 1] > 0.8 * N:
- break
- max_dist *= 2
-
- orig_to_chartok_index = [None] * N
- chartok_to_orig_index = [None] * M
- i, j = N - 1, M - 1
- while i >= 0 and j >= 0:
- if (i, j) not in g:
- break
- if g[(i, j)] == 2:
- orig_to_chartok_index[i] = j
- chartok_to_orig_index[j] = i
- i, j = i - 1, j - 1
- elif g[(i, j)] == 1:
- j = j - 1
- else:
- i = i - 1
-
- if all(
- v is None for v in orig_to_chartok_index) or f[N - 1, M - 1] < 0.8 * N:
- print("MISMATCH DETECTED!")
- continue
-
- tok_start_to_orig_index = []
- tok_end_to_orig_index = []
- for i in range(len(para_tokens)):
- start_chartok_pos = tok_start_to_chartok_index[i]
- end_chartok_pos = tok_end_to_chartok_index[i]
- start_orig_pos = _convert_index(
- chartok_to_orig_index, start_chartok_pos, N, is_start=True)
- end_orig_pos = _convert_index(
- chartok_to_orig_index, end_chartok_pos, N, is_start=False)
-
- tok_start_to_orig_index.append(start_orig_pos)
- tok_end_to_orig_index.append(end_orig_pos)
-
- if not is_training:
- tok_start_position = tok_end_position = None
-
- if is_training and example.is_impossible:
- tok_start_position = -1
- tok_end_position = -1
-
- if is_training and not example.is_impossible:
- start_position = example.start_position
- end_position = start_position + len(example.orig_answer_text) - 1
-
- start_chartok_pos = _convert_index(
- orig_to_chartok_index, start_position, is_start=True)
- tok_start_position = chartok_to_tok_index[start_chartok_pos]
-
- end_chartok_pos = _convert_index(
- orig_to_chartok_index, end_position, is_start=False)
- tok_end_position = chartok_to_tok_index[end_chartok_pos]
- assert tok_start_position <= tok_end_position
-
- def _piece_to_id(x):
- if six.PY2 and isinstance(x, unicode):
- x = x.encode("utf-8")
- return sp_model.PieceToId(x)
-
- all_doc_tokens = list(map(_piece_to_id, para_tokens))
-
- # The -3 accounts for [CLS], [SEP] and [SEP]
- max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
-
- # We can have documents that are longer than the maximum sequence length.
- # To deal with this we do a sliding window approach, where we take chunks
-    # of up to our max length with a stride of `doc_stride`.
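-    # Illustrative sketch: with 10 document tokens, max_tokens_for_doc=6 and
-    # doc_stride=3, the loop below would produce spans (start=0, length=6),
-    # (start=3, length=6) and (start=6, length=4).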
- _DocSpan = collections.namedtuple( # pylint: disable=invalid-name
- "DocSpan", ["start", "length"])
- doc_spans = []
- start_offset = 0
- while start_offset < len(all_doc_tokens):
- length = len(all_doc_tokens) - start_offset
- if length > max_tokens_for_doc:
- length = max_tokens_for_doc
- doc_spans.append(_DocSpan(start=start_offset, length=length))
- if start_offset + length == len(all_doc_tokens):
- break
- start_offset += min(length, doc_stride)
-
- for (doc_span_index, doc_span) in enumerate(doc_spans):
- tokens = []
- token_is_max_context = {}
- segment_ids = []
- p_mask = []
-
- cur_tok_start_to_orig_index = []
- cur_tok_end_to_orig_index = []
-
- for i in range(doc_span.length):
- split_token_index = doc_span.start + i
-
- cur_tok_start_to_orig_index.append(
- tok_start_to_orig_index[split_token_index])
- cur_tok_end_to_orig_index.append(
- tok_end_to_orig_index[split_token_index])
-
- is_max_context = _check_is_max_context(doc_spans, doc_span_index,
- split_token_index)
- token_is_max_context[len(tokens)] = is_max_context
- tokens.append(all_doc_tokens[split_token_index])
- segment_ids.append(data_utils.SEG_ID_P)
- p_mask.append(0)
-
- paragraph_len = len(tokens)
-
- tokens.append(data_utils.SEP_ID)
- segment_ids.append(data_utils.SEG_ID_P)
- p_mask.append(1)
-
- # note(zhiliny): we put P before Q
- # because during pretraining, B is always shorter than A
- for token in query_tokens:
- tokens.append(token)
- segment_ids.append(data_utils.SEG_ID_Q)
- p_mask.append(1)
- tokens.append(data_utils.SEP_ID)
- segment_ids.append(data_utils.SEG_ID_Q)
- p_mask.append(1)
-
- cls_index = len(segment_ids)
- tokens.append(data_utils.CLS_ID)
- segment_ids.append(data_utils.SEG_ID_CLS)
- p_mask.append(0)
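-      # At this point the sequence reads [paragraph] SEP [query] SEP CLS;
-      # padding is appended below and cls_index points at this final CLS token.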
-
- input_ids = tokens
-
- # The mask has 0 for real tokens and 1 for padding tokens. Only real
- # tokens are attended to.
- input_mask = [0] * len(input_ids)
-
- # Zero-pad up to the sequence length.
- while len(input_ids) < max_seq_length:
- input_ids.append(0)
- input_mask.append(1)
- segment_ids.append(data_utils.SEG_ID_PAD)
- p_mask.append(1)
-
- assert len(input_ids) == max_seq_length
- assert len(input_mask) == max_seq_length
- assert len(segment_ids) == max_seq_length
- assert len(p_mask) == max_seq_length
-
- span_is_impossible = example.is_impossible
- start_position = None
- end_position = None
- if is_training and not span_is_impossible:
- # For training, if our document chunk does not contain an annotation
- # we throw it out, since there is nothing to predict.
- doc_start = doc_span.start
- doc_end = doc_span.start + doc_span.length - 1
- out_of_span = False
- if not (tok_start_position >= doc_start and
- tok_end_position <= doc_end):
- out_of_span = True
- if out_of_span:
- # continue
- start_position = 0
- end_position = 0
- span_is_impossible = True
- else:
- # note: we put P before Q, so doc_offset should be zero.
- # doc_offset = len(query_tokens) + 2
- doc_offset = 0
- start_position = tok_start_position - doc_start + doc_offset
- end_position = tok_end_position - doc_start + doc_offset
-
- if is_training and span_is_impossible:
- start_position = cls_index
- end_position = cls_index
-
- if example_index < 20:
- logging.info("*** Example ***")
- logging.info("unique_id: %s", unique_id)
- logging.info("example_index: %s", example_index)
- logging.info("doc_span_index: %s", doc_span_index)
- logging.info("tok_start_to_orig_index: %s",
- " ".join([str(x) for x in cur_tok_start_to_orig_index]))
- logging.info("tok_end_to_orig_index: %s",
- " ".join([str(x) for x in cur_tok_end_to_orig_index]))
- logging.info(
- "token_is_max_context: %s", " ".join([
- "%d:%s" % (x, y)
- for (x, y) in six.iteritems(token_is_max_context)
- ]))
- logging.info("input_ids: %s", " ".join([str(x) for x in input_ids]))
- logging.info("input_mask: %s", " ".join([str(x) for x in input_mask]))
- logging.info("segment_ids: %s", " ".join([str(x) for x in segment_ids]))
-
- if is_training and span_is_impossible:
- logging.info("impossible example span")
-
- if is_training and not span_is_impossible:
- pieces = [
- sp_model.IdToPiece(token)
- for token in tokens[start_position:(end_position + 1)]
- ]
- answer_text = sp_model.DecodePieces(pieces)
- logging.info("start_position: %d", start_position)
- logging.info("end_position: %d", end_position)
- logging.info("answer: %s",
- preprocess_utils.printable_text(answer_text))
-
-      # With multiprocessing, the example_index is only the index within the
-      # current process, so we use example_index=None for training features to
-      # avoid it being used downstream. The current code does not use
-      # example_index of training data.
- if is_training:
- feat_example_index = None
- else:
- feat_example_index = example_index
-
- feature = InputFeatures(
- unique_id=unique_id,
- example_index=feat_example_index,
- doc_span_index=doc_span_index,
- tok_start_to_orig_index=cur_tok_start_to_orig_index,
- tok_end_to_orig_index=cur_tok_end_to_orig_index,
- token_is_max_context=token_is_max_context,
- input_ids=input_ids,
- input_mask=input_mask,
- p_mask=p_mask,
- segment_ids=segment_ids,
- paragraph_len=paragraph_len,
- cls_index=cls_index,
- start_position=start_position,
- end_position=end_position,
- is_impossible=span_is_impossible)
-
- # Run callback
- output_fn(feature)
-
- unique_id += 1
- if span_is_impossible:
- cnt_neg += 1
- else:
- cnt_pos += 1
-
- logging.info("Total number of instances: %d = pos %d + neg %d",
- cnt_pos + cnt_neg, cnt_pos, cnt_neg)
-
-
-def _check_is_max_context(doc_spans, cur_span_index, position):
- """Check if this is the "max context" doc span for the token."""
-
- # Because of the sliding window approach taken to scoring documents, a single
- # token can appear in multiple documents. E.g.
- # Doc: the man went to the store and bought a gallon of milk
- # Span A: the man went to the
- # Span B: to the store and bought
- # Span C: and bought a gallon of
- # ...
- #
- # Now the word "bought" will have two scores from spans B and C. We only
- # want to consider the score with "maximum context", which we define as
- # the *minimum* of its left and right context (the *sum* of left and
- # right context will always be the same, of course).
- #
- # In the example the maximum context for "bought" would be span C since
- # it has 1 left context and 3 right context, while span B has 4 left context
- # and 0 right context.
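-  # With the scoring below (min(left, right) + 0.01 * span length), span B
-  # would score min(4, 0) + 0.05 = 0.05 and span C min(1, 3) + 0.05 = 1.05
-  # (both example spans have 5 tokens), so span C wins.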
- best_score = None
- best_span_index = None
- for (span_index, doc_span) in enumerate(doc_spans):
- end = doc_span.start + doc_span.length - 1
- if position < doc_span.start:
- continue
- if position > end:
- continue
- num_left_context = position - doc_span.start
- num_right_context = end - position
- score = min(num_left_context, num_right_context) + 0.01 * doc_span.length
- if best_score is None or score > best_score:
- best_score = score
- best_span_index = span_index
-
- return cur_span_index == best_span_index
-
-
-class FeatureWriter(object):
- """Writes InputFeature to TF example file."""
-
- def __init__(self, filename, is_training):
- self.filename = filename
- self.is_training = is_training
- self.num_features = 0
- self._writer = tf.io.TFRecordWriter(filename)
-
- def process_feature(self, feature):
- """Write a InputFeature to the TFRecordWriter as a tf.train.Example."""
- self.num_features += 1
-
- def create_int_feature(values):
- feature = tf.train.Feature(
- int64_list=tf.train.Int64List(value=list(values)))
- return feature
-
- def create_float_feature(values):
- f = tf.train.Feature(float_list=tf.train.FloatList(value=list(values)))
- return f
-
- features = collections.OrderedDict()
- features["unique_ids"] = create_int_feature([feature.unique_id])
- features["input_ids"] = create_int_feature(feature.input_ids)
- features["input_mask"] = create_float_feature(feature.input_mask)
- features["p_mask"] = create_float_feature(feature.p_mask)
- features["segment_ids"] = create_int_feature(feature.segment_ids)
-
- features["cls_index"] = create_int_feature([feature.cls_index])
-
- if self.is_training:
- features["start_positions"] = create_int_feature([feature.start_position])
- features["end_positions"] = create_int_feature([feature.end_position])
- impossible = 0
- if feature.is_impossible:
- impossible = 1
- features["is_impossible"] = create_float_feature([impossible])
-
- tf_example = tf.train.Example(features=tf.train.Features(feature=features))
- self._writer.write(tf_example.SerializeToString())
-
- def close(self):
- self._writer.close()
-
-
-def create_eval_data(spm_basename,
- sp_model,
- eval_examples,
- max_seq_length,
- max_query_length,
- doc_stride,
- uncased,
- output_dir=None):
- """Creates evaluation tfrecords."""
- eval_features = []
- eval_writer = None
- if output_dir:
- eval_rec_file = os.path.join(
- output_dir,
- "{}.slen-{}.qlen-{}.eval.tf_record".format(spm_basename, max_seq_length,
- max_query_length))
- eval_feature_file = os.path.join(
- output_dir,
- "{}.slen-{}.qlen-{}.eval.features.pkl".format(spm_basename,
- max_seq_length,
- max_query_length))
-
- eval_writer = FeatureWriter(filename=eval_rec_file, is_training=False)
-
- def append_feature(feature):
- eval_features.append(feature)
- if eval_writer:
- eval_writer.process_feature(feature)
-
- convert_examples_to_features(
- examples=eval_examples,
- sp_model=sp_model,
- max_seq_length=max_seq_length,
- doc_stride=doc_stride,
- max_query_length=max_query_length,
- is_training=False,
- output_fn=append_feature,
- uncased=uncased)
-
- if eval_writer:
- eval_writer.close()
- with tf.io.gfile.GFile(eval_feature_file, "wb") as fout:
- pickle.dump(eval_features, fout)
-
- return eval_features
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/class_utils.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/class_utils.py
deleted file mode 100644
index cce9cf982bbbce7b90ee44e67ebe65997b7a91da..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/class_utils.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Utility functions for handling dataset object categories."""
-
-
-def coco_split_class_ids(split_name):
- """Return the COCO class split ids based on split name and training mode.
-
- Args:
- split_name: The name of dataset split.
-
- Returns:
-    class_ids: a python list of integers.
- """
- if split_name == 'all':
- return []
-
- elif split_name == 'voc':
- return [
- 1, 2, 3, 4, 5, 6, 7, 9, 16, 17, 18, 19, 20, 21, 44, 62, 63, 64, 67, 72
- ]
-
- elif split_name == 'nonvoc':
- return [
- 8, 10, 11, 13, 14, 15, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 35, 36,
- 37, 38, 39, 40, 41, 42, 43, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56,
- 57, 58, 59, 60, 61, 65, 70, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84,
- 85, 86, 87, 88, 89, 90
- ]
-
- else:
- raise ValueError('Invalid split name {}!!!'.format(split_name))
diff --git a/spaces/Nee001/bing0/src/lib/storage.ts b/spaces/Nee001/bing0/src/lib/storage.ts
deleted file mode 100644
index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/lib/storage.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import { getMany, set, del, clear } from 'idb-keyval';
-
-export const Storage = {
-  async get(key: string | string[] | null): Promise<Record<string, any> | null> {
- if (key === null) return null;
- if (typeof key === 'string') {
- key = [key]
- }
-    const returnData: Record<string, any> = {}
- const values = await getMany(key)
- key.forEach((k, idx)=> {
- returnData[k] = values[idx]
- })
- return returnData;
- },
- async set(object: any) {
- for (let key of Object.keys(object)) {
- await set(key, object[key])
- }
- },
- async remove(key: string) {
- return del(key);
- },
- async clear() {
- return clear();
- }
-}
diff --git a/spaces/NeuML/similarity/app.py b/spaces/NeuML/similarity/app.py
deleted file mode 100644
index 8bf12e0f7b6119f0c1bd112cc54f8659f7f900d1..0000000000000000000000000000000000000000
--- a/spaces/NeuML/similarity/app.py
+++ /dev/null
@@ -1,70 +0,0 @@
-"""
-Basic similarity search example. Used in the original txtai demo.
-"""
-
-import os
-
-import streamlit as st
-
-from txtai.embeddings import Embeddings
-
-
-class Application:
- """
- Main application.
- """
-
- def __init__(self):
- """
- Creates a new application.
- """
-
- # Create embeddings model, backed by sentence-transformers & transformers
- self.embeddings = Embeddings({"path": "sentence-transformers/nli-mpnet-base-v2"})
-
- def run(self):
- """
- Runs a Streamlit application.
- """
-
- st.title("Similarity Search")
- st.markdown("This application runs a basic similarity search that identifies the best matching row for a query.")
-
- data = [
- "US tops 5 million confirmed virus cases",
- "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
- "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
- "The National Park Service warns against sacrificing slower friends in a bear attack",
- "Maine man wins $1M from $25 lottery ticket",
- "Make huge profits without work, earn up to $100,000 a day",
- ]
-
- data = st.text_area("Data", value="\n".join(data))
- query = st.text_input("Query")
-
- data = data.split("\n")
-
- if query:
- # Get index of best section that best matches query
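-            # similarity() is expected to return (id, score) pairs ranked by
-            # score, so [0][0] below picks the row index of the best match.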
- uid = self.embeddings.similarity(query, data)[0][0]
- st.write(data[uid])
-
-
-@st.cache(allow_output_mutation=True)
-def create():
- """
- Creates and caches a Streamlit application.
-
- Returns:
- Application
- """
-
- return Application()
-
-
-if __name__ == "__main__":
- os.environ["TOKENIZERS_PARALLELISM"] = "false"
-
- # Create and run application
- app = create()
- app.run()
diff --git a/spaces/NoriZC/vits-models/attentions.py b/spaces/NoriZC/vits-models/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/NoriZC/vits-models/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-    # Concat extra elements so the flat tensor can be reshaped to (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-    # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
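-    # e.g. kernel_size=3 pads two frames on the left and none on the right,
-    # so each output frame depends only on current and past inputs.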
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/Nyashi/rvc-models-epic/README.md b/spaces/Nyashi/rvc-models-epic/README.md
deleted file mode 100644
index bc769dc406fa0c8c46849a02f2f32987c29aae8b..0000000000000000000000000000000000000000
--- a/spaces/Nyashi/rvc-models-epic/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Rvc Model Runner
-emoji: 🎤
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: Nyashi/rvc-gura
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/constrained_decoding/tok.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/constrained_decoding/tok.py
deleted file mode 100644
index b1f888a8c0d1b8ec7174859476cc3222456e0d2c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/constrained_decoding/tok.py
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/usr/bin/env python3
-#
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-import sacremoses
-
-
-def main(args):
- """Tokenizes, preserving tabs"""
- mt = sacremoses.MosesTokenizer(lang=args.lang)
-
- def tok(s):
- return mt.tokenize(s, return_str=True)
-
- for line in sys.stdin:
- parts = list(map(tok, line.split("\t")))
- print(*parts, sep="\t", flush=True)
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--lang", "-l", default="en")
- parser.add_argument("--penn", "-p", action="store_true")
- parser.add_argument("--fields", "-f", help="fields to tokenize")
- args = parser.parse_args()
-
- main(args)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py
deleted file mode 100644
index a5dd7ae6c15b358206e067385be260c94021bf20..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py
+++ /dev/null
@@ -1,128 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-import os.path as osp
-import numpy as np
-import tqdm
-import torch
-import sys
-
-import faiss
-import torch.nn.functional as F
-
-from wav2vec_cluster_faiss import parse_faiss_specs, Wav2VecFeatureReader
-
-
-def get_parser():
- parser = argparse.ArgumentParser(description="apply clusters")
- # fmt: off
- parser.add_argument('data', help='location of tsv files')
- parser.add_argument('--split', help='split to process', required=True)
-    parser.add_argument('--labels', help='label file extension to load (e.g. phn)', default="phn")
- parser.add_argument('--path', help='path to pca and centroids', required=True)
- parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec model (if using wav2vec features)', required=True)
- parser.add_argument('--layer', '-l', type=int, help='which layer to read', default=14)
- parser.add_argument('--max-tsz', type=int, help='batch kmeans up to this much', default=14)
- # fmt: on
-
- return parser
-
-
-def get_iterator(args):
- label_path = osp.join(args.data, f"{args.split}.{args.labels}")
- if osp.exists(label_path):
- lp = open(label_path, "r")
- else:
- lp = None
-
- with open(osp.join(args.data, f"{args.split}.tsv"), "r") as fp:
- lines = fp.read().split("\n")
- root = lines.pop(0).strip()
- files = [line.rstrip() for line in lines if len(line) > 0]
-
- if lp is not None:
- lbls = [line.rstrip() for line in lp]
- else:
- lbls = [None] * len(files)
-
- num = len(files)
- reader = Wav2VecFeatureReader(args.checkpoint, args.layer)
-
- def iterate():
- for fname, lbl in zip(files, lbls):
- file = osp.join(root, fname.split("\t")[0])
- feats = reader.get_feats(file)
- yield feats.data, fname, lbl
-
- return iterate, num, root
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- spec = osp.basename(args.path)
-
- try:
- faiss_spec = parse_faiss_specs(spec.rstrip("/"))[0]
- except:
- print(spec)
- raise
-
- print("Faiss Spec:", faiss_spec, file=sys.stderr)
-
- if faiss_spec.pca:
- A = torch.from_numpy(np.load(osp.join(args.path, "pca_A.npy"))).cuda()
- b = torch.from_numpy(np.load(osp.join(args.path, "pca_b.npy"))).cuda()
- print("Loaded PCA", file=sys.stderr)
-
- centroids = np.load(osp.join(args.path, "centroids.npy"))
- print("Loaded centroids", centroids.shape, file=sys.stderr)
-
- res = faiss.StandardGpuResources()
- index_flat = (
- faiss.IndexFlatL2(centroids.shape[1])
- if not faiss_spec.sphere
- else faiss.IndexFlatIP(centroids.shape[1])
- )
- faiss_index = faiss.index_cpu_to_gpu(res, 0, index_flat)
- faiss_index.add(centroids)
-
- generator, num, root = get_iterator(args)
- iterator = generator()
-
- had_labels = False
- label_path = osp.join(args.path, f"{args.split}.{args.labels}")
-
- with torch.no_grad():
- with open(osp.join(args.path, f"{args.split}.src"), "w") as fp, open(
- osp.join(args.path, f"{args.split}.tsv"), "w"
- ) as pp, open(label_path, "w") as lp:
- print(root, file=pp)
- for f, fname, lbl in tqdm.tqdm(iterator, total=num):
- if faiss_spec.pca:
- f = torch.mm(f, A) + b
- if faiss_spec.norm:
- f = F.normalize(f, p=2, dim=-1)
-
- f = f.cpu().numpy()
-
- _, z = faiss_index.search(f, 1)
-
- print(" ".join(str(x.item()) for x in z), file=fp)
- print(fname, file=pp)
-
- if lbl is not None:
- print(lbl, file=lp)
- had_labels = True
- if not had_labels:
- os.remove(label_path)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/configs.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/configs.py
deleted file mode 100644
index 8e8cec92814f55a504d36f80fb79c3e0f8280eee..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/configs.py
+++ /dev/null
@@ -1,1058 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-from dataclasses import _MISSING_TYPE, dataclass, field
-from typing import Any, List, Optional
-
-import torch
-
-from fairseq.dataclass.constants import (
- DATASET_IMPL_CHOICES,
- DDP_BACKEND_CHOICES,
- DDP_COMM_HOOK_CHOICES,
- GENERATION_CONSTRAINTS_CHOICES,
- GENERATION_DECODING_FORMAT_CHOICES,
- LOG_FORMAT_CHOICES,
- PIPELINE_CHECKPOINT_CHOICES,
- PRINT_ALIGNMENT_CHOICES,
- ZERO_SHARDING_CHOICES,
-)
-
-from omegaconf import II, MISSING
-
-
-@dataclass
-class FairseqDataclass:
- """fairseq base dataclass that supported fetching attributes and metas"""
-
- _name: Optional[str] = None
-
- @staticmethod
- def name():
- return None
-
- def _get_all_attributes(self) -> List[str]:
- return [k for k in self.__dataclass_fields__.keys()]
-
- def _get_meta(
- self, attribute_name: str, meta: str, default: Optional[Any] = None
- ) -> Any:
- return self.__dataclass_fields__[attribute_name].metadata.get(meta, default)
-
- def _get_name(self, attribute_name: str) -> str:
- return self.__dataclass_fields__[attribute_name].name
-
- def _get_default(self, attribute_name: str) -> Any:
- if hasattr(self, attribute_name):
- if str(getattr(self, attribute_name)).startswith("${"):
- return str(getattr(self, attribute_name))
- elif str(self.__dataclass_fields__[attribute_name].default).startswith(
- "${"
- ):
- return str(self.__dataclass_fields__[attribute_name].default)
- elif (
- getattr(self, attribute_name)
- != self.__dataclass_fields__[attribute_name].default
- ):
- return getattr(self, attribute_name)
-
- f = self.__dataclass_fields__[attribute_name]
- if not isinstance(f.default_factory, _MISSING_TYPE):
- return f.default_factory()
- return f.default
-
- def _get_type(self, attribute_name: str) -> Any:
- return self.__dataclass_fields__[attribute_name].type
-
- def _get_help(self, attribute_name: str) -> Any:
- return self._get_meta(attribute_name, "help")
-
- def _get_argparse_const(self, attribute_name: str) -> Any:
- return self._get_meta(attribute_name, "argparse_const")
-
- def _get_argparse_alias(self, attribute_name: str) -> Any:
- return self._get_meta(attribute_name, "argparse_alias")
-
- def _get_choices(self, attribute_name: str) -> Any:
- return self._get_meta(attribute_name, "choices")
-
- @classmethod
- def from_namespace(cls, args):
- if isinstance(args, cls):
- return args
- else:
- config = cls()
- for k in config.__dataclass_fields__.keys():
- if k.startswith("_"):
- # private member, skip
- continue
- if hasattr(args, k):
- setattr(config, k, getattr(args, k))
-
- return config
-
-
-
-@dataclass
-class CommonConfig(FairseqDataclass):
- # This is the core dataclass including common parameters shared by all different jobs. Please append your params to other dataclasses if they were
- # used for a particular purpose or task, such as those dedicated for `distributed training`, `optimization`, etc.
- no_progress_bar: bool = field(
- default=False, metadata={"help": "disable progress bar"}
- )
- log_interval: int = field(
- default=100,
- metadata={
- "help": "log progress every N batches (when progress bar is disabled)"
- },
- )
- log_format: Optional[LOG_FORMAT_CHOICES] = field(
- default=None, metadata={"help": "log format to use"}
- )
- log_file: Optional[str] = field(
- default=None, metadata={"help": "log file to copy metrics to."}
- )
- tensorboard_logdir: Optional[str] = field(
- default=None,
- metadata={
- "help": "path to save logs for tensorboard, should match --logdir "
- "of running tensorboard (default: no tensorboard logging)"
- },
- )
- wandb_project: Optional[str] = field(
- default=None,
- metadata={"help": "Weights and Biases project name to use for logging"},
- )
- azureml_logging: Optional[bool] = field(
- default=False, metadata={"help": "Log scalars to AzureML context"},
- )
- seed: int = field(
- default=1, metadata={"help": "pseudo random number generator seed"}
- )
- cpu: bool = field(default=False, metadata={"help": "use CPU instead of CUDA"})
- tpu: bool = field(default=False, metadata={"help": "use TPU instead of CUDA"})
- bf16: bool = field(default=False, metadata={"help": "use bfloat16; implies --tpu"})
- memory_efficient_bf16: bool = field(
- default=False,
- metadata={
- "help": "use a memory-efficient version of BF16 training; implies --bf16"
- },
- )
- fp16: bool = field(default=False, metadata={"help": "use FP16"})
- memory_efficient_fp16: bool = field(
- default=False,
- metadata={
- "help": "use a memory-efficient version of FP16 training; implies --fp16"
- },
- )
- fp16_no_flatten_grads: bool = field(
- default=False, metadata={"help": "don't flatten FP16 grads tensor"}
- )
- fp16_init_scale: int = field(
- default=2 ** 7, metadata={"help": "default FP16 loss scale"}
- )
- fp16_scale_window: Optional[int] = field(
- default=None,
- metadata={"help": "number of updates before increasing loss scale"},
- )
- fp16_scale_tolerance: float = field(
- default=0.0,
- metadata={
- "help": "pct of updates that can overflow before decreasing the loss scale"
- },
- )
- on_cpu_convert_precision: bool = field(
- default=False,
- metadata={
- "help": "if set, the floating point conversion to fp16/bf16 runs on CPU. "
- "This reduces bus transfer time and GPU memory usage."
- }
- )
- min_loss_scale: float = field(
- default=1e-4,
- metadata={"help": "minimum FP16/AMP loss scale, after which training is stopped"},
- )
- threshold_loss_scale: Optional[float] = field(
- default=None, metadata={"help": "threshold FP16 loss scale from below"}
- )
- amp: bool = field(default=False, metadata={"help": "use automatic mixed precision"})
- amp_batch_retries: int = field(
- default=2,
- metadata={"help": "number of retries of same batch after reducing loss scale with AMP"},
- )
- amp_init_scale: int = field(
- default=2 ** 7, metadata={"help": "default AMP loss scale"}
- )
- amp_scale_window: Optional[int] = field(
- default=None,
- metadata={"help": "number of updates before increasing AMP loss scale"},
- )
- user_dir: Optional[str] = field(
- default=None,
- metadata={
- "help": "path to a python module containing custom extensions (tasks and/or architectures)"
- },
- )
- empty_cache_freq: int = field(
- default=0,
- metadata={"help": "how often to clear the PyTorch CUDA cache (0 to disable)"},
- )
- all_gather_list_size: int = field(
- default=16384,
- metadata={"help": "number of bytes reserved for gathering stats from workers"},
- )
- model_parallel_size: int = field(
- default=1, metadata={"help": "total number of GPUs to parallelize model over"}
- )
- quantization_config_path: Optional[str] = field(
- default=None, metadata={"help": "path to quantization config file"}
- )
- profile: bool = field(
- default=False, metadata={"help": "enable autograd profiler emit_nvtx"}
- )
- reset_logging: bool = field(
- default=False,
- metadata={
- "help": "when using Hydra, reset the logging at the beginning of training"
- },
- )
- suppress_crashes: bool = field(
- default=False,
- metadata={
- "help": "suppress crashes when training with the hydra_train entry point so that the "
- "main method can return a value (useful for sweeps)"
- },
- )
- use_plasma_view: bool = field(
- default=False, metadata={"help": "Store indices and sizes in shared memory"}
- )
- plasma_path: Optional[str] = field(
- default="/tmp/plasma",
- metadata={
- "help": "path to run plasma_store, defaults to /tmp/plasma. Paths outside /tmp tend to fail."
- },
- )
-
-
-@dataclass
-class DistributedTrainingConfig(FairseqDataclass):
- distributed_world_size: int = field(
- default=max(1, torch.cuda.device_count()),
- metadata={
- "help": "total number of GPUs across all nodes (default: all visible GPUs)"
- },
- )
- distributed_num_procs: Optional[int] = field(
- default=max(1, torch.cuda.device_count()),
- metadata={
- "help": "total number of processes to fork (default: all visible GPUs)"
- },
- )
- distributed_rank: Optional[int] = field(
- default=0, metadata={"help": "rank of the current worker"}
- )
- distributed_backend: str = field(
- default="nccl", metadata={"help": "distributed backend"}
- )
- distributed_init_method: Optional[str] = field(
- default=None,
- metadata={
- "help": "typically tcp://hostname:port that will be used to "
- "establish initial connetion"
- },
- )
- distributed_port: int = field(
- default=-1,
- metadata={
- "help": "port number (not required if using --distributed-init-method)"
- },
- )
- device_id: int = field(
- default=0,
- metadata={
- "help": "which GPU to use (usually configured automatically)",
- "argparse_alias": "--local_rank",
- },
- )
- distributed_no_spawn: bool = field(
- default=False,
- metadata={
- "help": "do not spawn multiple processes even if multiple GPUs are visible"
- },
- )
- ddp_backend: DDP_BACKEND_CHOICES = field(
- default="pytorch_ddp", metadata={"help": "DistributedDataParallel backend"}
- )
- ddp_comm_hook: DDP_COMM_HOOK_CHOICES = field(
- default="none", metadata={"help": "communication hook"}
- )
- bucket_cap_mb: int = field(
- default=25, metadata={"help": "bucket size for reduction"}
- )
- fix_batches_to_gpus: bool = field(
- default=False,
- metadata={
- "help": "don't shuffle batches between GPUs; this reduces overall "
- "randomness and may affect precision but avoids the cost of re-reading the data"
- },
- )
- find_unused_parameters: bool = field(
- default=False,
- metadata={
- "help": "disable unused parameter detection (not applicable to "
- "--ddp-backend=legacy_ddp)"
- },
- )
- gradient_as_bucket_view: bool = field(
- default=False,
- metadata={
- "help": "when set to True, gradients will be views pointing to different offsets of allreduce communication buckets. This can reduce peak memory usage, where the saved memory size will be equal to the total gradients size. "
- "--gradient-as-bucket-view=gradient_as_bucket_view)"
- },
- )
- fast_stat_sync: bool = field(
- default=False,
- metadata={"help": "[deprecated] this is now defined per Criterion"},
- )
- heartbeat_timeout: int = field(
- default=-1,
- metadata={
- "help": "kill the job if no progress is made in N seconds; "
- "set to -1 to disable"
- },
- )
- broadcast_buffers: bool = field(
- default=False,
- metadata={
- "help": "Copy non-trainable parameters between GPUs, such as "
- "batchnorm population statistics"
- },
- )
- slowmo_momentum: Optional[float] = field(
- default=None,
- metadata={
- "help": "SlowMo momentum term; by default use 0.0 for 16 GPUs, "
- "0.2 for 32 GPUs; 0.5 for 64 GPUs, 0.6 for > 64 GPUs"
- },
- )
- slowmo_algorithm: str = field(
- default="LocalSGD", metadata={"help": "whether to use LocalSGD or SGP"}
- )
- localsgd_frequency: int = field(
- default=3, metadata={"help": "Local SGD allreduce frequency"}
- )
- nprocs_per_node: int = field(
- default=max(1, torch.cuda.device_count()),
- metadata={
- "help": "number of GPUs in each node. An allreduce operation across GPUs in "
- "a node is very fast. Hence, we do allreduce across GPUs in a node, "
- "and gossip across different nodes"
- },
- )
- pipeline_model_parallel: bool = field(
- default=False,
- metadata={"help": "if set, use pipeline model parallelism across GPUs"},
- )
- pipeline_balance: Optional[str] = field(
- default=None,
- metadata={
- "help": "partition the model into N_K pieces, where each piece "
- "contains N_i layers. The sum(args.pipeline_balance) "
- "should equal the total number of layers in the model"
- },
- )
- pipeline_devices: Optional[str] = field(
- default=None,
- metadata={
- "help": "a list of device indices indicating which device to place "
- "each of the N_K partitions. The length of this list should "
- "equal the length of the --pipeline-balance argument"
- },
- )
- pipeline_chunks: Optional[int] = field(
- default=0, metadata={"help": "microbatch count for pipeline model parallelism"}
- )
- pipeline_encoder_balance: Optional[str] = field(
- default=None,
- metadata={
- "help": "partition the pipeline parallel encoder into N_K pieces, where each piece "
- "contains N_i layers. The sum(args.pipeline_encoder_balance) "
- "should equal the total number of encoder layers in the model"
- },
- )
- pipeline_encoder_devices: Optional[str] = field(
- default=None,
- metadata={
- "help": "a list of device indices indicating which device to place "
- "each of the N_K partitions. The length of this list should "
- "equal the length of the --pipeline-encoder-balance argument"
- },
- )
- pipeline_decoder_balance: Optional[str] = field(
- default=None,
- metadata={
- "help": "partition the pipeline parallel decoder into N_K pieces, where each piece "
- "contains N_i layers. The sum(args.pipeline_decoder_balance) "
- "should equal the total number of decoder layers in the model"
- },
- )
- pipeline_decoder_devices: Optional[str] = field(
- default=None,
- metadata={
- "help": "a list of device indices indicating which device to place "
- "each of the N_K partitions. The length of this list should "
- "equal the length of the --pipeline-decoder-balance argument"
- },
- )
- pipeline_checkpoint: PIPELINE_CHECKPOINT_CHOICES = field(
- default="never",
- metadata={"help": "checkpointing mode for pipeline model parallelism"},
- )
- zero_sharding: ZERO_SHARDING_CHOICES = field(
- default="none", metadata={"help": "ZeRO sharding"}
- )
- fp16: bool = II("common.fp16")
- memory_efficient_fp16: bool = II("common.memory_efficient_fp16")
- tpu: bool = II("common.tpu")
- # configuration for --ddp-backend=fully_sharded
- no_reshard_after_forward: bool = field(
- default=False, metadata={"help": "don't reshard parameters after forward pass"},
- )
- fp32_reduce_scatter: bool = field(
- default=False, metadata={"help": "reduce-scatter grads in FP32"},
- )
- cpu_offload: bool = field(
- default=False, metadata={"help": "offload FP32 params to CPU"}
- )
- use_sharded_state: bool = field(
- default=False, metadata={"help": "use sharded checkpoint files"},
- )
-
-
-@dataclass
-class DatasetConfig(FairseqDataclass):
- num_workers: int = field(
- default=1, metadata={"help": "how many subprocesses to use for data loading"}
- )
- skip_invalid_size_inputs_valid_test: bool = field(
- default=False,
- metadata={"help": "ignore too long or too short lines in valid and test set"},
- )
- max_tokens: Optional[int] = field(
- default=None, metadata={"help": "maximum number of tokens in a batch"}
- )
- batch_size: Optional[int] = field(
- default=None,
- metadata={
- "help": "number of examples in a batch",
- "argparse_alias": "--max-sentences",
- },
- )
- required_batch_size_multiple: int = field(
- default=8, metadata={"help": "batch size will be a multiplier of this value"}
- )
- required_seq_len_multiple: int = field(
- default=1,
- metadata={
- "help": "maximum sequence length in batch will be a multiplier of this value"
- },
- )
- dataset_impl: Optional[DATASET_IMPL_CHOICES] = field(
- default=None, metadata={"help": "output dataset implementation"}
- )
- data_buffer_size: int = field(
- default=10, metadata={"help": "Number of batches to preload"}
- )
- train_subset: str = field(
- default="train",
- metadata={"help": "data subset to use for training (e.g. train, valid, test)"},
- )
- valid_subset: str = field(
- default="valid",
- metadata={
- "help": "comma separated list of data subsets to use for validation"
- " (e.g. train, valid, test)"
- },
- )
- combine_valid_subsets: Optional[bool] = field(
- default=None,
- metadata={
- "help": "comma separated list of data subsets to use for validation"
- " (e.g. train, valid, test)",
- "argparse_alias": "--combine-val",
- },
- )
- ignore_unused_valid_subsets: Optional[bool] = field(
- default=False,
- metadata={"help": "do not raise error if valid subsets are ignored"},
- )
-
- validate_interval: int = field(
- default=1, metadata={"help": "validate every N epochs"}
- )
- validate_interval_updates: int = field(
- default=0, metadata={"help": "validate every N updates"}
- )
- validate_after_updates: int = field(
- default=0, metadata={"help": "dont validate until reaching this many updates"}
- )
- fixed_validation_seed: Optional[int] = field(
- default=None, metadata={"help": "specified random seed for validation"}
- )
- disable_validation: bool = field(
- default=False, metadata={"help": "disable validation"}
- )
- max_tokens_valid: Optional[int] = field(
- default=II("dataset.max_tokens"),
- metadata={
- "help": "maximum number of tokens in a validation batch"
- " (defaults to --max-tokens)"
- },
- )
- batch_size_valid: Optional[int] = field(
- default=II("dataset.batch_size"),
- metadata={
- "help": "batch size of the validation batch (defaults to --batch-size)",
- "argparse_alias": "--max-sentences-valid",
- },
- )
-    max_valid_steps: Optional[int] = field(
-        default=None,
-        metadata={"help": "How many batches to evaluate", "argparse_alias": "--nval"},
-    )
- curriculum: int = field(
- default=0, metadata={"help": "don't shuffle batches for first N epochs"}
- )
- gen_subset: str = field(
- default="test",
- metadata={"help": "data subset to generate (train, valid, test)"},
- )
- num_shards: int = field(
- default=1, metadata={"help": "shard generation over N shards"}
- )
- shard_id: int = field(
- default=0, metadata={"help": "id of the shard to generate (id < num_shards)"}
- )
-
-
-@dataclass
-class OptimizationConfig(FairseqDataclass):
- max_epoch: int = field(
- default=0, metadata={"help": "force stop training at specified epoch"}
- )
- max_update: int = field(
- default=0, metadata={"help": "force stop training at specified update"}
- )
- stop_time_hours: float = field(
- default=0,
- metadata={
- "help": "force stop training after specified cumulative time (if >0)"
- },
- )
- clip_norm: float = field(
- default=0.0, metadata={"help": "clip threshold of gradients"}
- )
- sentence_avg: bool = field(
- default=False,
- metadata={
- "help": "normalize gradients by the number of sentences in a batch"
- " (default is to normalize by number of tokens)"
- },
- )
- update_freq: List[int] = field(
- default_factory=lambda: [1],
- metadata={"help": "update parameters every N_i batches, when in epoch i"},
- )
- lr: List[float] = field(
- default_factory=lambda: [0.25],
- metadata={
- "help": "learning rate for the first N epochs; all epochs >N using LR_N"
- " (note: this may be interpreted differently depending on --lr-scheduler)"
- },
- )
- stop_min_lr: float = field(
- default=-1.0,
- metadata={"help": "stop training when the learning rate reaches this minimum"},
- )
- use_bmuf: bool = field(
- default=False,
- metadata={
- "help": "specify global optimizer for syncing models on different GPUs/shards"
- },
- )
-
-
-@dataclass
-class CheckpointConfig(FairseqDataclass):
- save_dir: str = field(
- default="checkpoints", metadata={"help": "path to save checkpoints"}
- )
- restore_file: str = field(
- default="checkpoint_last.pt",
- metadata={
- "help": "filename from which to load checkpoint "
- "(default: /checkpoint_last.pt"
- },
- )
- finetune_from_model: Optional[str] = field(
- default=None,
- metadata={
- "help": "finetune from a pretrained model; note that meters and lr scheduler will be reset"
- },
- )
- reset_dataloader: bool = field(
- default=False,
- metadata={
- "help": "if set, does not reload dataloader state from the checkpoint"
- },
- )
- reset_lr_scheduler: bool = field(
- default=False,
- metadata={
- "help": "if set, does not load lr scheduler state from the checkpoint"
- },
- )
- reset_meters: bool = field(
- default=False,
- metadata={"help": "if set, does not load meters from the checkpoint"},
- )
- reset_optimizer: bool = field(
- default=False,
- metadata={"help": "if set, does not load optimizer state from the checkpoint"},
- )
- optimizer_overrides: str = field(
- default="{}",
- metadata={
- "help": "a dictionary used to override optimizer args when loading a checkpoint"
- },
- )
- save_interval: int = field(
- default=1, metadata={"help": "save a checkpoint every N epochs"}
- )
- save_interval_updates: int = field(
- default=0, metadata={"help": "save a checkpoint (and validate) every N updates"}
- )
- keep_interval_updates: int = field(
- default=-1,
- metadata={
- "help": "keep the last N checkpoints saved with --save-interval-updates"
- },
- )
- keep_interval_updates_pattern: int = field(
- default=-1,
- metadata={
- "help": "when used with --keep-interval-updates, skips deleting "
- "any checkpoints with update X where "
- "X %% keep_interval_updates_pattern == 0"
- },
- )
- keep_last_epochs: int = field(
- default=-1, metadata={"help": "keep last N epoch checkpoints"}
- )
- keep_best_checkpoints: int = field(
- default=-1, metadata={"help": "keep best N checkpoints based on scores"}
- )
- no_save: bool = field(
- default=False, metadata={"help": "don't save models or checkpoints"}
- )
- no_epoch_checkpoints: bool = field(
- default=False, metadata={"help": "only store last and best checkpoints"}
- )
- no_last_checkpoints: bool = field(
- default=False, metadata={"help": "don't store last checkpoints"}
- )
- no_save_optimizer_state: bool = field(
- default=False,
- metadata={"help": "don't save optimizer-state as part of checkpoint"},
- )
- best_checkpoint_metric: str = field(
- default="loss", metadata={"help": 'metric to use for saving "best" checkpoints'}
- )
- maximize_best_checkpoint_metric: bool = field(
- default=False,
- metadata={
- "help": 'select the largest metric value for saving "best" checkpoints'
- },
- )
- patience: int = field(
- default=-1,
- metadata={
- "help": (
- "early stop training if valid performance doesn't "
- "improve for N consecutive validation runs; note "
- "that this is influenced by --validate-interval"
- )
- },
- )
- checkpoint_suffix: str = field(
- default="", metadata={"help": "suffix to add to the checkpoint file name"}
- )
- checkpoint_shard_count: int = field(
- default=1,
- metadata={
- "help": "Number of shards containing the checkpoint - "
- "if the checkpoint is over 300GB, it is preferable "
- "to split it into shards to prevent OOM on CPU while loading "
- "the checkpoint"
- },
- )
- load_checkpoint_on_all_dp_ranks: bool = field(
- default=False,
- metadata={
- "help": "load checkpoints on all data parallel devices "
- "(default: only load on rank 0 and broadcast to other devices)"
- },
- )
- write_checkpoints_asynchronously: bool = field(
- default=False,
- metadata={
- "help": (
- "Write checkpoints asynchronously in a separate "
- "thread. NOTE: This feature is currently being tested."
- ),
- "argparse_alias": "--save-async",
- },
- )
- model_parallel_size: int = II("common.model_parallel_size")
- use_ema_weights_to_init_param: bool = field(
- default=False,
- metadata={
- "help": "if the checkpoint has ema weights, then use it to init the model param"
- "(default: false, use noema weights to init the model param)"
- },
- )
- use_latest_weights_to_init_ema: bool = field(
- default=False,
- metadata={
- "help": "if the model has ema params, then force to use the latest weights in the ckpt to init the ema param, even ema weights exist in the ckpt"
- "(default: false, use ema weights (if exist) to init the ema param)"
- },
- )
-
-
-@dataclass
-class FairseqBMUFConfig(FairseqDataclass):
- block_lr: float = field(
- default=1, metadata={"help": "block learning rate for bmuf"}
- )
- block_momentum: float = field(
- default=0.875, metadata={"help": "block momentum for bmuf"}
- )
- global_sync_iter: int = field(
- default=50, metadata={"help": "Iteration for syncing global model"}
- )
- warmup_iterations: int = field(
- default=500, metadata={"help": "warmup iterations for model to broadcast"}
- )
- use_nbm: bool = field(
- default=False,
- metadata={"help": "Specify whether you want to use classical BM / Nesterov BM"},
- )
- average_sync: bool = field(
- default=False,
- metadata={
- "help": "Specify whether you want to average the local momentum after each sync"
- },
- )
- distributed_world_size: int = II("distributed_training.distributed_world_size")
-
-
-@dataclass
-class GenerationConfig(FairseqDataclass):
- beam: int = field(
- default=5, metadata={"help": "beam size"},
- )
- nbest: int = field(
- default=1, metadata={"help": "number of hypotheses to output"},
- )
- max_len_a: float = field(
- default=0,
- metadata={
- "help": "generate sequences of maximum length ax + b, where x is the source length"
- },
- )
- max_len_b: int = field(
- default=200,
- metadata={
- "help": "generate sequences of maximum length ax + b, where x is the source length"
- },
- )
- min_len: int = field(
- default=1, metadata={"help": "minimum generation length"},
- )
- match_source_len: bool = field(
- default=False, metadata={"help": "generations should match the source length"},
- )
- unnormalized: bool = field(
- default=False, metadata={"help": "compare unnormalized hypothesis scores"},
- )
- no_early_stop: bool = field(
- default=False, metadata={"help": "deprecated"},
- )
- no_beamable_mm: bool = field(
- default=False, metadata={"help": "don't use BeamableMM in attention layers"},
- )
- lenpen: float = field(
- default=1,
- metadata={
- "help": "length penalty: <1.0 favors shorter, >1.0 favors longer sentences"
- },
- )
- unkpen: float = field(
- default=0,
- metadata={
- "help": "unknown word penalty: <0 produces more unks, >0 produces fewer"
- },
- )
- replace_unk: Optional[str] = field(
- default=None,
- metadata={
- "help": "perform unknown replacement (optionally with alignment dictionary)",
- "argparse_const": "@@ ",
- },
- )
- sacrebleu: bool = field(
- default=False, metadata={"help": "score with sacrebleu"},
- )
- score_reference: bool = field(
- default=False, metadata={"help": "just score the reference translation"},
- )
- prefix_size: int = field(
- default=0,
- metadata={"help": "initialize generation by target prefix of given length"},
- )
- no_repeat_ngram_size: int = field(
- default=0,
- metadata={
- "help": "ngram blocking such that this size ngram cannot be repeated in the generation"
- },
- )
- sampling: bool = field(
- default=False,
- metadata={"help": "sample hypotheses instead of using beam search"},
- )
- sampling_topk: int = field(
- default=-1,
- metadata={"help": "sample from top K likely next words instead of all words"},
- )
- sampling_topp: float = field(
- default=-1.0,
- metadata={
- "help": "sample from the smallest set whose cumulative probability mass exceeds p for next words"
- },
- )
- constraints: Optional[GENERATION_CONSTRAINTS_CHOICES] = field(
- default=None,
- metadata={
- "help": "enables lexically constrained decoding",
- "argparse_const": "ordered",
- },
- )
- temperature: float = field(
- default=1.0, metadata={"help": "temperature for generation"},
- )
- diverse_beam_groups: int = field(
- default=-1, metadata={"help": "number of groups for Diverse Beam Search"},
- )
- diverse_beam_strength: float = field(
- default=0.5,
- metadata={"help": "strength of diversity penalty for Diverse Beam Search"},
- )
- diversity_rate: float = field(
- default=-1.0,
- metadata={"help": "strength of diversity penalty for Diverse Siblings Search"},
- )
- print_alignment: Optional[PRINT_ALIGNMENT_CHOICES] = field(
- default=None,
- metadata={
- "help": "if set, uses attention feedback to compute and print alignment to source tokens "
- "(valid options are: hard, soft, otherwise treated as hard alignment)",
- "argparse_const": "hard",
- },
- )
- print_step: bool = field(
- default=False, metadata={"help": "print steps"},
- )
- lm_path: Optional[str] = field(
- default=None, metadata={"help": "path to lm checkpoint for lm fusion"},
- )
- lm_weight: float = field(
- default=0.0, metadata={"help": "weight for lm probs for lm fusion"},
- )
-
- # arguments for iterative refinement generator
- iter_decode_eos_penalty: float = field(
- default=0.0,
- metadata={"help": "if > 0.0, it penalized early-stopping in decoding."},
- )
- iter_decode_max_iter: int = field(
- default=10, metadata={"help": "maximum iterations for iterative refinement."},
- )
- iter_decode_force_max_iter: bool = field(
- default=False,
- metadata={
- "help": "if set, run exact the maximum number of iterations without early stop"
- },
- )
- iter_decode_with_beam: int = field(
- default=1,
- metadata={
- "help": "if > 1, model will generate translations varying by the lengths."
- },
- )
- iter_decode_with_external_reranker: bool = field(
- default=False,
- metadata={
- "help": "if set, the last checkpoint are assumed to be a reranker to rescore the translations"
- },
- )
- retain_iter_history: bool = field(
- default=False,
- metadata={
- "help": "if set, decoding returns the whole history of iterative refinement"
- },
- )
- retain_dropout: bool = field(
- default=False, metadata={"help": "Use dropout at inference time"},
- )
- # temporarily set to Any until https://github.com/facebookresearch/hydra/issues/1117 is fixed
- # retain_dropout_modules: Optional[List[str]] = field(
- retain_dropout_modules: Any = field(
- default=None,
- metadata={
- "help": "if set, only retain dropout for the specified modules; "
- "if not set, then dropout will be retained for all modules"
- },
- )
- # special decoding format for advanced decoding.
- decoding_format: Optional[GENERATION_DECODING_FORMAT_CHOICES] = field(
- default=None,
- metadata={"help": "special decoding format for advanced decoding."},
- )
- no_seed_provided: bool = field(
- default=False,
- metadata={"help": "if set, dont use seed for initializing random generators"},
- )
-
-
-@dataclass
-class CommonEvalConfig(FairseqDataclass):
- path: Optional[str] = field(
- default=None, metadata={"help": "path(s) to model file(s), colon separated"},
- )
- post_process: Optional[str] = field(
- default=None,
- metadata={
- "help": (
- "post-process text by removing BPE, letter segmentation, etc. "
- "Valid options can be found in fairseq.data.utils.post_process."
- ),
- "argparse_const": "subword_nmt",
- "argparse_alias": "--remove-bpe",
- },
- )
- quiet: bool = field(default=False, metadata={"help": "only print final scores"})
- model_overrides: str = field(
- default="{}",
- metadata={
- "help": "a dictionary used to override model args at generation that were used during model training"
- },
- )
- results_path: Optional[str] = field(
- default=None, metadata={"help": "path to save eval results (optional)"}
- )
-
-
-@dataclass
-class EvalLMConfig(FairseqDataclass):
- output_word_probs: bool = field(
- default=False,
- metadata={
- "help": "if set, outputs words and their predicted log probabilities to standard output"
- },
- )
- output_word_stats: bool = field(
- default=False,
- metadata={
- "help": "if set, outputs word statistics such as word count, average probability, etc"
- },
- )
- context_window: int = field(
- default=0,
- metadata={
- "help": "ensures that every evaluated token has access to a context of at least this size, if possible"
- },
- )
- softmax_batch: int = field(
- default=sys.maxsize,
- metadata={
- "help": "if BxT is more than this, will batch the softmax over vocab to this amount of tokens, in order to fit into GPU memory"
- },
- )
-
-
-@dataclass
-class InteractiveConfig(FairseqDataclass):
- buffer_size: int = field(
- default=0,
- metadata={
- "help": "read this many sentences into a buffer before processing them"
- },
- )
- input: str = field(
- default="-", metadata={"help": "file to read from; use - for stdin"},
- )
-
-
-@dataclass
-class EMAConfig(FairseqDataclass):
- store_ema: bool = field(
- default=False,
- metadata={"help": "store exponential moving average shadow model"},
- )
- ema_decay: float = field(
- default=0.9999, metadata={
- "help": 'decay for exponential moving average model'
- }
- )
- ema_start_update: int = field(
- default=0, metadata={"help": "start EMA update after this many model updates"}
- )
- ema_seed_model: Optional[str] = field(
- default=None, metadata={
- "help": "Seed to load EMA model from. "
- "Used to load EMA model separately from the actual model."
- }
- )
- ema_update_freq: int = field(
- default=1, metadata={"help": "Do EMA update every this many model updates"}
- )
- ema_fp32: bool = field(
- default=False,
- metadata={"help": "If true, store EMA model in fp32 even if model is in fp16"},
- )
-
-
-@dataclass
-class FairseqConfig(FairseqDataclass):
- common: CommonConfig = CommonConfig()
- common_eval: CommonEvalConfig = CommonEvalConfig()
- distributed_training: DistributedTrainingConfig = DistributedTrainingConfig()
- dataset: DatasetConfig = DatasetConfig()
- optimization: OptimizationConfig = OptimizationConfig()
- checkpoint: CheckpointConfig = CheckpointConfig()
- bmuf: FairseqBMUFConfig = FairseqBMUFConfig()
- generation: GenerationConfig = GenerationConfig()
- eval_lm: EvalLMConfig = EvalLMConfig()
- interactive: InteractiveConfig = InteractiveConfig()
- model: Any = MISSING
- task: Any = None
- criterion: Any = None
- optimizer: Any = None
- lr_scheduler: Any = None
- scoring: Any = None
- bpe: Any = None
- tokenizer: Any = None
- ema: EMAConfig = EMAConfig()
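
The fairseq config module deleted above follows the Hydra/OmegaConf structured-config pattern: each training concern (optimization, checkpointing, generation, EMA, ...) is a dataclass whose field metadata doubles as CLI help text, and FairseqConfig simply nests them. Below is a minimal, self-contained sketch of that pattern, not fairseq's actual API; the class and field names (OptimSketch, RootSketch) are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import List

from omegaconf import OmegaConf  # assumes omegaconf is installed


@dataclass
class OptimSketch:
    max_epoch: int = 0  # force stop training at this epoch (0 = no limit)
    lr: List[float] = field(default_factory=lambda: [0.25])


@dataclass
class RootSketch:
    optimization: OptimSketch = field(default_factory=OptimSketch)


# Build a typed config object, then override one field from a dotlist,
# which is roughly how command-line flags land in the nested dataclasses.
cfg = OmegaConf.structured(RootSketch)
cfg = OmegaConf.merge(cfg, OmegaConf.from_dotlist(["optimization.lr=[0.001]"]))
print(OmegaConf.to_yaml(cfg))

Overriding optimization.lr through a dotlist mirrors how a flag such as --lr would be routed into the nested OptimizationConfig above.
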
diff --git a/spaces/ORI-Muchim/BlueArchiveTTS/attentions.py b/spaces/ORI-Muchim/BlueArchiveTTS/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/BlueArchiveTTS/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so that it adds up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
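
The windowed relative attention in MultiHeadAttention above converts relative-position logits of shape [b, h, l, 2l-1] into absolute scores of shape [b, h, l, l] with a pad-and-reshape trick (_relative_position_to_absolute_position). A small standalone shape check of that trick, written independently of the classes above:

import torch
import torch.nn.functional as F


def rel_to_abs(x):
    # x: [batch, heads, length, 2*length-1] relative logits
    b, h, l, _ = x.size()
    x = F.pad(x, (0, 1))                 # pad the last dim to 2*length
    x = x.view(b, h, l * 2 * l)          # flatten per head
    x = F.pad(x, (0, l - 1))             # pad so the next view lines up
    x = x.view(b, h, l + 1, 2 * l - 1)
    return x[:, :, :l, l - 1:]           # [batch, heads, length, length]


print(rel_to_abs(torch.randn(2, 4, 5, 9)).shape)  # torch.Size([2, 4, 5, 5])

The final slice keeps, for every query position, exactly the keys whose relative offset falls inside the [-(l-1), l-1] window.
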
diff --git a/spaces/ORI-Muchim/RaidenTTS/models.py b/spaces/ORI-Muchim/RaidenTTS/models.py
deleted file mode 100644
index fe004e94bbe9074ec736f14325268f4515a53420..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/RaidenTTS/models.py
+++ /dev/null
@@ -1,540 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels  # this override needs to be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
- logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- if self.n_vocab != 0:
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- if self.n_vocab != 0:
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
- gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
- gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 1:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2),
- s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 1:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1,
- 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:, :, :max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
- assert self.n_speakers > 1, "n_speakers have to be larger than 1."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
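
In SynthesizerTrn.infer above, predicted log-durations are exponentiated, scaled by length_scale, ceiled to whole frames, and summed per utterance to decide how many output frames to synthesize before the flow and decoder run. A toy, self-contained illustration of that length/mask bookkeeping (the numbers are hypothetical and this is not the model's API):

import torch

# Hypothetical predicted log-durations for one utterance of 4 text tokens.
logw = torch.tensor([[[0.0, 0.7, -0.2, 1.1]]])  # [batch, 1, text_len]
x_mask = torch.ones(1, 1, 4)
length_scale = 1.0

w_ceil = torch.ceil(torch.exp(logw) * x_mask * length_scale)       # frames per token
y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()   # total frames

# Frame-level mask, analogous to commons.sequence_mask(y_lengths).
max_len = int(y_lengths.max())
y_mask = (torch.arange(max_len).unsqueeze(0) < y_lengths.unsqueeze(1)).float()

print(w_ceil.squeeze().tolist(), y_lengths.tolist(), tuple(y_mask.shape))
# [1.0, 3.0, 1.0, 4.0] [9] (1, 9)
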
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py
deleted file mode 100644
index 373e6bf00a39c040ff1da49d6dcd39a54a0b69a7..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py
+++ /dev/null
@@ -1,307 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import numpy as np
-from contextlib import contextmanager
-from itertools import count
-from typing import List
-import torch
-from fvcore.transforms import HFlipTransform, NoOpTransform
-from torch import nn
-from torch.nn.parallel import DistributedDataParallel
-
-from detectron2.config import configurable
-from detectron2.data.detection_utils import read_image
-from detectron2.data.transforms import (
- RandomFlip,
- ResizeShortestEdge,
- ResizeTransform,
- apply_augmentations,
-)
-from detectron2.structures import Boxes, Instances
-
-from .meta_arch import GeneralizedRCNN
-from .postprocessing import detector_postprocess
-from .roi_heads.fast_rcnn import fast_rcnn_inference_single_image
-
-__all__ = ["DatasetMapperTTA", "GeneralizedRCNNWithTTA"]
-
-
-class DatasetMapperTTA:
- """
- Implement test-time augmentation for detection data.
- It is a callable which takes a dataset dict from a detection dataset,
- and returns a list of dataset dicts where the images
- are augmented from the input image by the transformations defined in the config.
- This is used for test-time augmentation.
- """
-
- @configurable
- def __init__(self, min_sizes: List[int], max_size: int, flip: bool):
- """
- Args:
- min_sizes: list of short-edge size to resize the image to
- max_size: maximum height or width of resized images
- flip: whether to apply flipping augmentation
- """
- self.min_sizes = min_sizes
- self.max_size = max_size
- self.flip = flip
-
- @classmethod
- def from_config(cls, cfg):
- return {
- "min_sizes": cfg.TEST.AUG.MIN_SIZES,
- "max_size": cfg.TEST.AUG.MAX_SIZE,
- "flip": cfg.TEST.AUG.FLIP,
- }
-
- def __call__(self, dataset_dict):
- """
- Args:
- dict: a dict in standard model input format. See tutorials for details.
-
- Returns:
- list[dict]:
- a list of dicts, which contain augmented version of the input image.
- The total number of dicts is ``len(min_sizes) * (2 if flip else 1)``.
- Each dict has field "transforms" which is a TransformList,
- containing the transforms that are used to generate this image.
- """
- numpy_image = dataset_dict["image"].permute(1, 2, 0).numpy()
- shape = numpy_image.shape
- orig_shape = (dataset_dict["height"], dataset_dict["width"])
- if shape[:2] != orig_shape:
- # It transforms the "original" image in the dataset to the input image
- pre_tfm = ResizeTransform(orig_shape[0], orig_shape[1], shape[0], shape[1])
- else:
- pre_tfm = NoOpTransform()
-
- # Create all combinations of augmentations to use
- aug_candidates = [] # each element is a list[Augmentation]
- for min_size in self.min_sizes:
- resize = ResizeShortestEdge(min_size, self.max_size)
- aug_candidates.append([resize]) # resize only
- if self.flip:
- flip = RandomFlip(prob=1.0)
- aug_candidates.append([resize, flip]) # resize + flip
-
- # Apply all the augmentations
- ret = []
- for aug in aug_candidates:
- new_image, tfms = apply_augmentations(aug, np.copy(numpy_image))
- torch_image = torch.from_numpy(np.ascontiguousarray(new_image.transpose(2, 0, 1)))
-
- dic = copy.deepcopy(dataset_dict)
- dic["transforms"] = pre_tfm + tfms
- dic["image"] = torch_image
- ret.append(dic)
- return ret
-
-
-class GeneralizedRCNNWithTTA(nn.Module):
- """
- A GeneralizedRCNN with test-time augmentation enabled.
- Its :meth:`__call__` method has the same interface as :meth:`GeneralizedRCNN.forward`.
- """
-
- def __init__(self, cfg, model, tta_mapper=None, batch_size=3):
- """
- Args:
- cfg (CfgNode):
- model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on.
- tta_mapper (callable): takes a dataset dict and returns a list of
- augmented versions of the dataset dict. Defaults to
- `DatasetMapperTTA(cfg)`.
- batch_size (int): batch the augmented images into this batch size for inference.
- """
- super().__init__()
- if isinstance(model, DistributedDataParallel):
- model = model.module
- assert isinstance(
- model, GeneralizedRCNN
- ), "TTA is only supported on GeneralizedRCNN. Got a model of type {}".format(type(model))
- self.cfg = cfg.clone()
- assert not self.cfg.MODEL.KEYPOINT_ON, "TTA for keypoint is not supported yet"
- assert (
- not self.cfg.MODEL.LOAD_PROPOSALS
- ), "TTA for pre-computed proposals is not supported yet"
-
- self.model = model
-
- if tta_mapper is None:
- tta_mapper = DatasetMapperTTA(cfg)
- self.tta_mapper = tta_mapper
- self.batch_size = batch_size
-
- @contextmanager
- def _turn_off_roi_heads(self, attrs):
- """
- Open a context where some heads in `model.roi_heads` are temporarily turned off.
- Args:
- attrs (list[str]): the attributes in `model.roi_heads` which can be used
- to turn off a specific head, e.g., "mask_on", "keypoint_on".
- """
- roi_heads = self.model.roi_heads
- old = {}
- for attr in attrs:
- try:
- old[attr] = getattr(roi_heads, attr)
- except AttributeError:
- # The head may not be implemented in certain ROIHeads
- pass
-
- if len(old.keys()) == 0:
- yield
- else:
- for attr in old.keys():
- setattr(roi_heads, attr, False)
- yield
- for attr in old.keys():
- setattr(roi_heads, attr, old[attr])
-
- def _batch_inference(self, batched_inputs, detected_instances=None):
- """
- Execute inference on a list of inputs,
- using batch size = self.batch_size, instead of the length of the list.
-
- Inputs & outputs have the same format as :meth:`GeneralizedRCNN.inference`
- """
- if detected_instances is None:
- detected_instances = [None] * len(batched_inputs)
-
- outputs = []
- inputs, instances = [], []
- for idx, input, instance in zip(count(), batched_inputs, detected_instances):
- inputs.append(input)
- instances.append(instance)
- if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1:
- outputs.extend(
- self.model.inference(
- inputs,
- instances if instances[0] is not None else None,
- do_postprocess=False,
- )
- )
- inputs, instances = [], []
- return outputs
-
- def __call__(self, batched_inputs):
- """
- Same input/output format as :meth:`GeneralizedRCNN.forward`
- """
-
- def _maybe_read_image(dataset_dict):
- ret = copy.copy(dataset_dict)
- if "image" not in ret:
- image = read_image(ret.pop("file_name"), self.model.input_format)
- image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW
- ret["image"] = image
- if "height" not in ret and "width" not in ret:
- ret["height"] = image.shape[1]
- ret["width"] = image.shape[2]
- return ret
-
- return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs]
-
- def _inference_one_image(self, input):
- """
- Args:
- input (dict): one dataset dict with "image" field being a CHW tensor
-
- Returns:
- dict: one output dict
- """
- orig_shape = (input["height"], input["width"])
- augmented_inputs, tfms = self._get_augmented_inputs(input)
- # Detect boxes from all augmented versions
- with self._turn_off_roi_heads(["mask_on", "keypoint_on"]):
- # temporarily disable roi heads
- all_boxes, all_scores, all_classes = self._get_augmented_boxes(augmented_inputs, tfms)
- # merge all detected boxes to obtain final predictions for boxes
- merged_instances = self._merge_detections(all_boxes, all_scores, all_classes, orig_shape)
-
- if self.cfg.MODEL.MASK_ON:
- # Use the detected boxes to obtain masks
- augmented_instances = self._rescale_detected_boxes(
- augmented_inputs, merged_instances, tfms
- )
- # run forward on the detected boxes
- outputs = self._batch_inference(augmented_inputs, augmented_instances)
- # Delete now useless variables to avoid being out of memory
- del augmented_inputs, augmented_instances
- # average the predictions
- merged_instances.pred_masks = self._reduce_pred_masks(outputs, tfms)
- merged_instances = detector_postprocess(merged_instances, *orig_shape)
- return {"instances": merged_instances}
- else:
- return {"instances": merged_instances}
-
- def _get_augmented_inputs(self, input):
- augmented_inputs = self.tta_mapper(input)
- tfms = [x.pop("transforms") for x in augmented_inputs]
- return augmented_inputs, tfms
-
- def _get_augmented_boxes(self, augmented_inputs, tfms):
- # 1: forward with all augmented images
- outputs = self._batch_inference(augmented_inputs)
- # 2: union the results
- all_boxes = []
- all_scores = []
- all_classes = []
- for output, tfm in zip(outputs, tfms):
- # Need to inverse the transforms on boxes, to obtain results on original image
- pred_boxes = output.pred_boxes.tensor
- original_pred_boxes = tfm.inverse().apply_box(pred_boxes.cpu().numpy())
- all_boxes.append(torch.from_numpy(original_pred_boxes).to(pred_boxes.device))
-
- all_scores.extend(output.scores)
- all_classes.extend(output.pred_classes)
- all_boxes = torch.cat(all_boxes, dim=0)
- return all_boxes, all_scores, all_classes
-
- def _merge_detections(self, all_boxes, all_scores, all_classes, shape_hw):
- # select from the union of all results
- num_boxes = len(all_boxes)
- num_classes = self.cfg.MODEL.ROI_HEADS.NUM_CLASSES
- # +1 because fast_rcnn_inference expects background scores as well
- all_scores_2d = torch.zeros(num_boxes, num_classes + 1, device=all_boxes.device)
- for idx, cls, score in zip(count(), all_classes, all_scores):
- all_scores_2d[idx, cls] = score
-
- merged_instances, _ = fast_rcnn_inference_single_image(
- all_boxes,
- all_scores_2d,
- shape_hw,
- 1e-8,
- self.cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST,
- self.cfg.TEST.DETECTIONS_PER_IMAGE,
- )
-
- return merged_instances
-
- def _rescale_detected_boxes(self, augmented_inputs, merged_instances, tfms):
- augmented_instances = []
- for input, tfm in zip(augmented_inputs, tfms):
- # Transform the target box to the augmented image's coordinate space
- pred_boxes = merged_instances.pred_boxes.tensor.cpu().numpy()
- pred_boxes = torch.from_numpy(tfm.apply_box(pred_boxes))
-
- aug_instances = Instances(
- image_size=input["image"].shape[1:3],
- pred_boxes=Boxes(pred_boxes),
- pred_classes=merged_instances.pred_classes,
- scores=merged_instances.scores,
- )
- augmented_instances.append(aug_instances)
- return augmented_instances
-
- def _reduce_pred_masks(self, outputs, tfms):
- # Should apply inverse transforms on masks.
- # We assume only resize & flip are used. pred_masks is a scale-invariant
- # representation, so we handle flip specially
- for output, tfm in zip(outputs, tfms):
- if any(isinstance(t, HFlipTransform) for t in tfm.transforms):
- output.pred_masks = output.pred_masks.flip(dims=[3])
- all_pred_masks = torch.stack([o.pred_masks for o in outputs], dim=0)
- avg_pred_masks = torch.mean(all_pred_masks, dim=0)
- return avg_pred_masks
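
GeneralizedRCNNWithTTA above maps every prediction back to the original image by inverting each augmentation on the boxes (tfm.inverse().apply_box) before merging; for a pure horizontal flip that inverse is another flip, since flipping is an involution. A tiny standalone check of that round trip on an xyxy box (illustrative numbers, not detectron2's API):

import numpy as np


def hflip_boxes(boxes_xyxy, img_width):
    # Mirror the x-coordinates and swap x0/x1 so that x0 <= x1 still holds.
    x0, y0, x1, y1 = boxes_xyxy.T
    return np.stack([img_width - x1, y0, img_width - x0, y1], axis=1)


boxes = np.array([[10.0, 20.0, 50.0, 80.0]])
print(hflip_boxes(boxes, 640))                    # [[590.  20. 630.  80.]]
print(hflip_boxes(hflip_boxes(boxes, 640), 640))  # back to [[10. 20. 50. 80.]]
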
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/th.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/th.py
deleted file mode 100644
index ca6ef9385e3b5c0a439579d3fd7aa73b5dc62758..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/th.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import torch
-from torch.autograd import Variable
-import numpy as np
-import collections
-
-__all__ = ['as_variable', 'as_numpy', 'mark_volatile']
-
-def as_variable(obj):
- if isinstance(obj, Variable):
- return obj
- if isinstance(obj, collections.Sequence):
- return [as_variable(v) for v in obj]
- elif isinstance(obj, collections.Mapping):
- return {k: as_variable(v) for k, v in obj.items()}
- else:
- return Variable(obj)
-
-def as_numpy(obj):
- if isinstance(obj, collections.Sequence):
- return [as_numpy(v) for v in obj]
- elif isinstance(obj, collections.Mapping):
- return {k: as_numpy(v) for k, v in obj.items()}
- elif isinstance(obj, Variable):
- return obj.data.cpu().numpy()
- elif torch.is_tensor(obj):
- return obj.cpu().numpy()
- else:
- return np.array(obj)
-
-def mark_volatile(obj):
- if torch.is_tensor(obj):
- obj = Variable(obj)
- if isinstance(obj, Variable):
- obj.no_grad = True
- return obj
- elif isinstance(obj, collections.Mapping):
- return {k: mark_volatile(o) for k, o in obj.items()}
- elif isinstance(obj, collections.Sequence):
- return [mark_volatile(o) for o in obj]
- else:
- return obj
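
th.py above is a thin recursion over sequences and mappings that converts nested containers of tensors to numpy. The same idea in current Python is sketched below; it uses collections.abc (the bare collections.Sequence / collections.Mapping aliases used above were removed in Python 3.10) and is illustrative, not a drop-in replacement.

import collections.abc

import numpy as np
import torch


def to_numpy(obj):
    # Recursively convert tensors inside dicts/lists to numpy arrays.
    if isinstance(obj, collections.abc.Mapping):
        return {k: to_numpy(v) for k, v in obj.items()}
    if isinstance(obj, collections.abc.Sequence) and not isinstance(obj, (str, bytes)):
        return [to_numpy(v) for v in obj]
    if torch.is_tensor(obj):
        return obj.detach().cpu().numpy()
    return np.asarray(obj)


print(to_numpy({"a": [torch.ones(2), 3.0]}))
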
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/safe-r5rs.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/safe-r5rs.go
deleted file mode 100644
index 24f566b0f1661521e1ae120bfa6a64807dd8b986..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/safe-r5rs.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-9.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-9.go
deleted file mode 100644
index 471355593b44403adc25d650b9f5b48509e7b832..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-9.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/book_texinfo.py b/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/book_texinfo.py
deleted file mode 100644
index d1e429f6cc2c1cf25299621be862360462801e1c..0000000000000000000000000000000000000000
--- a/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/book_texinfo.py
+++ /dev/null
@@ -1,437 +0,0 @@
-# book_texinfo.py
-# -*- coding: utf-8 -*-
-#
-# This file is part of LilyPond, the GNU music typesetter.
-#
-# Copyright (C) 2010--2022 Reinhold Kainhofer
-#
-# LilyPond is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# LilyPond is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with LilyPond.  If not, see <https://www.gnu.org/licenses/>.
-
-
-import copy
-import os
-import re
-import subprocess
-import sys
-import tempfile
-
-import book_base
-import book_snippets
-import lilylib as ly
-
-# See `book_latex.py` for some regex documentation.
-
-TexInfo_snippet_res = {
- 'include': r'''(?mx)
- ^
- (?P<match>
- @include
- \s+
- (?P<filename> \S+ )
- )''',
-
- 'lilypond': r'''(?smx)
- ^
- [^\n]*? (?! @c \s+ ) [^\n]*?
- (?P<match>
- @lilypond
- \s*
- ( \[ \s* (?P<options> [^\[\]]*? ) \s* \] )?
- \s*
- { (?P<code>''' + book_base.brace_matcher(10) + r''') }
- )''',
-
- 'lilypond_block': r'''(?smx)
- ^
- (?P<match>
- @lilypond
- \s*
- ( \[ \s* (?P<options> [^\[\]]*? ) \s* \] )?
- \s+?
- ^ (?P<code> .*? )
- ^ @end \s+ lilypond
- ) \s''',
-
- 'lilypond_file': r'''(?mx)
- ^
- (?P<match>
- @lilypondfile
- \s*
- ( \[ \s* (?P<options> [^\[\]]*? ) \s* \] )?
- \s*
- { (?P<filename> \S+ ) }
- )''',
-
- 'multiline_comment': r'''(?smx)
- ^
- (?P<match>
- (?P<code>
- @ignore
- \s .*?
- @end \s+ ignore
- )
- ) \s''',
-
- 'musicxml_file': r'''(?mx)
- ^
- (?P<match>
- @musicxmlfile
- \s*
- ( \[ \s* (?P<options> [^\[\]]*? ) \s* \] )?
- \s*
- { (?P<filename> \S+ ) }
- )''',
-
- 'singleline_comment': r'''(?mx)
- ^
- .*
- (?P<match>
- (?P<code>
- @c ( [ \t] [^\n]* | ) \n
- )
- )''',
-
- # Don't do this: It interferes with @code{@{}.
- # 'verb': r'''(?P@code{.*?})''',
-
- 'verbatim': r'''(?sx)
- (?P<match>
- (?P<code>
- @example
- \s .*?
- @end \s+ example \s
- )
- )''',
-
- 'lilypondversion': r'''(?mx)
- [^@]
- (?P<match> @lilypondversion )
- [^a-zA-Z]
- ''',
-}
-
-
-TexInfo_output = {
- book_snippets.FILTER: r'''@lilypond[%(options)s]
-%(code)s@end lilypond''',
-
- book_snippets.OUTPUT: r'''@iftex
-@include %(base)s-systems.texi
-@end iftex''',
-
- book_snippets.OUTPUTIMAGE: r'''@ifinfo
-@image{%(info_image_path)s,,,%(alt)s}
-@end ifinfo
-@html
-%(para_pre)s<a href="%(base)s%(ext)s">
- <img align="middle" src="%(image)s" alt="%(alt)s">
-</a>%(para_post)s
-@end html
-''',
-
- # TODO: The `@html` environment should be replaced with `@inlineraw`
- # for recent `texi2any` versions.
- #
- # There must be an empty line at the end to ensure that the following
- # images are typeset in vertical mode (and not in inline mode).
- book_snippets.PRINTFILENAME: '''@html
-<a href="%(base)s%(ext)s">
-@end html
-@file{%(filename)s}
-@html
-</a>
-@end html
-
-''',
-
- book_snippets.QUOTE: r'''@quotation
-%(str)s
-@end quotation''',
-
- book_snippets.VERBATIM: r'''@verbatim
-%(verb)s@end verbatim
-''',
-
- book_snippets.VERSION: r'''%(program_version)s''',
-}
-
-
-texinfo_line_widths = {
- '@afourpaper': '160\\mm',
- '@afourwide': '6.5\\in',
- '@afourlatex': '150\\mm',
- '@smallbook': '5\\in',
- '@letterpaper': '6\\in',
-}
-
-
-###
-# Retrieve dimensions from texinfo
-TEXINFO_INSPECTION_DOCUMENT = r'''
-\input texinfo
-@setfilename Texinfo_width_test
-@settitle Texinfo width test
-%(preamble)s
-
-@message{Global: textwidth=@the@hsize,exampleindent=@the@lispnarrowing}
-
-dummy
-
-@bye
-'''
-
-
-def get_texinfo_width_indent(source, global_options):
- # TODO: Check for end of header command "@c %**end of header"
- # only use material before that comment ?
-
- # extract all relevant paper settings from the input:
- pagesize = None
- texinfo_paper_size_regexp = r'''(@(?:afourpaper|afourwide|afourlatex|afivepaper|smallbook|letterpaper))'''
- m = re.search(texinfo_paper_size_regexp, source)
- if m:
- pagesize = m.group(1)
-
- relevant_settings_regexp = r'''(@(?:fonttextsize|pagesizes|cropmarks|exampleindent).*)\n'''
- m = re.findall(relevant_settings_regexp, source)
- if pagesize:
- m.insert(0, pagesize)
- # all relevant options to insert into the test document:
- preamble = "\n".join(m)
-
- texinfo_document = TEXINFO_INSPECTION_DOCUMENT % {'preamble': preamble}
-
- (handle, tmpfile) = tempfile.mkstemp('.texi')
- outfile = os.path.splitext(tmpfile)[0] + '.pdf'
-
- tmp_handle = open(handle, 'w', encoding='utf-8')
- tmp_handle.write(texinfo_document)
- tmp_handle.close()
-
- # Work around a texi2pdf bug: if LANG=C is not given, a broken regexp is
- # used to detect relative/absolute paths, so the absolute path is not
- # detected as such and this command fails:
- ly.progress(
- _("Running texi2pdf on file %s to detect default page settings.\n") % tmpfile)
-
- # execute the command and pipe stdout to the parameter_string:
- cmd = '%s --batch -c -o %s %s' % (
- global_options.texinfo_program, outfile, tmpfile)
- ly.debug_output("Executing: %s\n" % cmd)
- run_env = os.environ.copy()
- run_env['LC_ALL'] = 'C'
-
- # unknown why this is necessary
- universal_newlines = True
- if sys.platform == 'mingw32':
- universal_newlines = False
- # use os.system to avoid weird sleep() problems on
- # GUB's python 2.4.2 on mingw
- # make file to write to
- output_dir = tempfile.mkdtemp()
- output_filename = os.path.join(output_dir, 'output.txt')
- # call command
- cmd += " > %s" % output_filename
- returncode = os.system(cmd)
- parameter_string = open(output_filename, encoding="utf8").read()
- if returncode != 0:
- ly.warning(_("Unable to auto-detect default settings:\n"))
- # clean up
- os.remove(output_filename)
- os.rmdir(output_dir)
- else:
- proc = subprocess.Popen(cmd,
- env=run_env,
- universal_newlines=universal_newlines,
- shell=True,
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- (parameter_string, error_string) = proc.communicate()
- if proc.returncode != 0:
- ly.warning(_("Unable to auto-detect default settings:\n%s")
- % error_string)
- os.unlink(tmpfile)
- if os.path.exists(outfile):
- os.unlink(outfile)
-
- # Find textwidth and exampleindent and format it as \\mm or \\in
- # Use defaults if they cannot be extracted
- textwidth = 0
- m = re.search('textwidth=([0-9.]+)pt', parameter_string)
- if m:
- val = float(m.group(1))/72.27
- if pagesize and pagesize.startswith("@afour"):
- textwidth = "%g\\mm" % round(val*25.4, 3)
- else:
- textwidth = "%g\\in" % round(val, 3)
- else:
- textwidth = texinfo_line_widths.get(pagesize, "6\\in")
-
- exampleindent = 0
- m = re.search('exampleindent=([0-9.]+)pt', parameter_string)
- if m:
- val = float(m.group(1))/72.27
- if pagesize and pagesize.startswith("@afour"):
- exampleindent = "%g\\mm" % round(val*25.4, 3)
- else:
- exampleindent = "%g\\in" % round(val, 3)
- else:
- exampleindent = "0.4\\in"
-
- retval = {book_snippets.LINE_WIDTH: textwidth,
- book_snippets.EXAMPLEINDENT: exampleindent}
- ly.debug_output("Auto-detected values are: %s\n" % retval)
- return retval
-
-
-texinfo_lang_re = re.compile('(?m)^@documentlanguage (.*?)( |$)')
-
-
-class BookTexinfoOutputFormat (book_base.BookOutputFormat):
- def __init__(self):
- book_base.BookOutputFormat.__init__(self)
- self.format = "texinfo"
- self.default_extension = ".texi"
- self.snippet_res = TexInfo_snippet_res
- self.output = TexInfo_output
- self.handled_extensions = ['.itely', '.tely', '.texi', '.texinfo']
- self.snippet_option_separator = r'\s*,\s*'
-
- def can_handle_format(self, format):
- return (book_base.BookOutputFormat.can_handle_format(self, format) or
- (format in ['texi-html', 'texi']))
-
- def process_options(self, global_options):
- self.process_options_pdfnotdefault(global_options)
-
- def get_document_language(self, source):
- m = texinfo_lang_re.search(source)
- if m and not m.group(1).startswith('en'):
- return m.group(1)
- else:
- return ''
-
- def init_default_snippet_options(self, source):
- texinfo_defaults = get_texinfo_width_indent(
- source, self.global_options)
- self.default_snippet_options.update(texinfo_defaults)
- book_base.BookOutputFormat.init_default_snippet_options(self, source)
-
- def adjust_snippet_command(self, cmd):
- if '-dseparate-page-formats' not in cmd:
- cmd += ' -dseparate-page-formats=png,pdf '
- if '-dtall-page-formats' not in cmd:
- # TODO: the EPS output here is useless for cairo, but the
- # rest of lilypond-book expects it to be there.
- formats = ['eps']
- if not self.global_options.skip_png_check:
- formats.append('png')
-
- cmd += ' -dtall-page-formats=%s ' % ','.join(formats)
- return cmd
-
- def output_info(self, basename, snippet):
- s = ''
- rep = snippet.get_replacements()
- rep['base'] = basename
- rep['filename'] = os.path.basename(snippet.filename)
- rep['ext'] = snippet.ext
- if book_snippets.INLINE not in snippet.option_dict:
- # URGH: For recent `texi2any` versions, both values must be
- # empty.
- rep['para_pre'] = '<p>'
- rep['para_post'] = '</p>'
- else:
- rep['para_pre'] = ''
- # URGH: The empty line after `</p>` is necessary for texi2html
- # 1.82, which swallows the last newline of the `@ifhtml`
- # region. The `@html` environment should be replaced with
- # `@inlineraw` for recent `texi2any` versions.
- rep['para_post'] = '\n'
-
- for image in snippet.get_images():
- rep1 = copy.copy(rep)
- rep1['base'] = os.path.splitext(image)[0]
- rep1['image'] = image
- rep1['alt'] = snippet.option_dict[book_snippets.ALT]
- rep1['info_image_path'] = os.path.join(
- self.global_options.info_images_dir, rep1['base'])
- s += self.output[book_snippets.OUTPUTIMAGE] % rep1
-
- s += self.output[book_snippets.OUTPUT] % rep
- return s
-
- def snippet_output(self, basename, snippet):
- def find(fn):
- p = os.path.join(self.global_options.output_dir, fn)
- if os.path.exists(p):
- return p
- return ''
-
- s = ''
- base = basename
- if book_snippets.DOCTITLE in snippet.option_dict:
- doctitle = base + '.doctitle'
- translated_doctitle = doctitle + self.document_language
- for t in [translated_doctitle, doctitle]:
- fullpath = find(t)
- if fullpath:
- doctitle = open(fullpath, 'r', encoding='utf-8').read()
- doctitle = doctitle.replace(",", "@comma{}")
- s += '\n@lydoctitle %s\n\n' % doctitle
- break
-
- if book_snippets.TEXIDOC in snippet.option_dict:
- texidoc = base + '.texidoc'
- translated_texidoc = texidoc + self.document_language
- for t in [translated_texidoc, texidoc]:
- fullpath = find(t)
- if fullpath:
- # We need two empty lines to enforce a new paragraph
- # in case the included file doesn't end with a newline
- # character.
- s += '@include %s\n\n\n' % t
- break
-
- s += self.output_print_filename(basename, snippet)
-
- if book_snippets.INLINE not in snippet.option_dict:
- s += '\n'
-
- substr = ''
- rep = snippet.get_replacements()
- if book_snippets.VERBATIM in snippet.option_dict:
- ly_code = snippet.verb_ly()
- if self.global_options.highlight:
- from auxiliar.book_highlight import highlight_ly
- substr = highlight_ly(ly_code)
- else:
- rep['verb'] = ly_code
- substr = self.output[book_snippets.VERBATIM] % rep
- substr += self.output_info(basename, snippet)
- if book_snippets.QUOTE in snippet.option_dict:
- substr = self.output[book_snippets.QUOTE] % {'str': substr}
- s += substr
-
- if book_snippets.INLINE not in snippet.option_dict:
- s += '\n'
-
- return s
-
- def required_files(self, snippet, base, full, required_files):
- return self.required_files_png(snippet, base, full, required_files)
-
-
-book_base.register_format(BookTexinfoOutputFormat())
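`get_texinfo_width_indent` above recovers the page text width by running texi2pdf on a probe document and parsing `textwidth=...pt` from its output; TeX points are converted at 72.27 pt per inch, then to millimetres for A4-style paper sizes. A small standalone sketch of just that conversion step, with a made-up parameter string (the function name and values here are illustrative, not from the repository):

import re

def extract_width(parameter_string, a4=True):
    # Mirrors the pt -> inch -> mm conversion used by get_texinfo_width_indent.
    m = re.search(r'textwidth=([0-9.]+)pt', parameter_string)
    if not m:
        return '6\\in'                      # fallback default in the script
    inches = float(m.group(1)) / 72.27      # TeX points per inch
    if a4:
        return '%g\\mm' % round(inches * 25.4, 3)
    return '%g\\in' % round(inches, 3)

print(extract_width('Global: textwidth=418.25368pt,exampleindent=28.45274pt'))
# about 147\mm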
diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/util/helper.py b/spaces/PeepDaSlan9/Bark-Voice-Cloning/util/helper.py
deleted file mode 100644
index 185613661a2f450e55a5d2add1a1e75bc08f5c19..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/Bark-Voice-Cloning/util/helper.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import os
-from datetime import datetime
-from mutagen.wave import WAVE
-from mutagen.id3._frames import *
-
-def create_filename(path, seed, name, extension):
- now = datetime.now()
- date_str =now.strftime("%m-%d-%Y")
- outputs_folder = os.path.join(os.getcwd(), path)
- if not os.path.exists(outputs_folder):
- os.makedirs(outputs_folder)
-
- sub_folder = os.path.join(outputs_folder, date_str)
- if not os.path.exists(sub_folder):
- os.makedirs(sub_folder)
-
- time_str = now.strftime("%H-%M-%S")
- if seed == None:
- file_name = f"{name}_{time_str}{extension}"
- else:
- file_name = f"{name}_{time_str}_s{seed}{extension}"
- return os.path.join(sub_folder, file_name)
-
-
-def add_id3_tag(filename, text, speakername, seed):
- audio = WAVE(filename)
- if speakername == None:
- speakername = "Unconditional"
-
- # write id3 tag with text truncated to 60 chars, as a precaution...
- audio["TIT2"] = TIT2(encoding=3, text=text[:60])
- audio["TPE1"] = TPE1(encoding=3, text=f"Voice {speakername} using Seed={seed}")
- audio["TPUB"] = TPUB(encoding=3, text="Bark by Suno AI")
- audio["COMMENT"] = COMM(encoding=3, text="Generated with Bark GUI - Text-Prompted Generative Audio Model. Visit https://github.com/C0untFloyd/bark-gui")
- audio.save()
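`create_filename` above nests generated audio under a per-day folder and stamps the time (and optional seed) into the name. A tiny side-effect-free sketch of the same naming scheme, for illustration only (the function name is hypothetical):

import os
from datetime import datetime

def preview_filename(path, seed, name, extension):
    # Same layout as create_filename, without creating any directories.
    now = datetime.now()
    folder = os.path.join(path, now.strftime("%m-%d-%Y"))
    suffix = "" if seed is None else f"_s{seed}"
    return os.path.join(folder, f"{name}_{now.strftime('%H-%M-%S')}{suffix}{extension}")

print(preview_filename("outputs", 1234, "speech", ".wav"))
# e.g. outputs/02-14-2024/speech_14-30-05_s1234.wav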
diff --git a/spaces/PeepDaSlan9/OpenAssistant-reward-model-deberta-v3-large-v2/app.py b/spaces/PeepDaSlan9/OpenAssistant-reward-model-deberta-v3-large-v2/app.py
deleted file mode 100644
index e902d9cc8321998d3d54ce486565a1b486b07fb0..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/OpenAssistant-reward-model-deberta-v3-large-v2/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/OpenAssistant/reward-model-deberta-v3-large-v2").launch()
\ No newline at end of file
diff --git a/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/node.ts b/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/node.ts
deleted file mode 100644
index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/node.ts
+++ /dev/null
@@ -1,26 +0,0 @@
-import Debug from 'debug'
-
-const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici')
-const { HttpsProxyAgent } = require('https-proxy-agent')
-const ws = require('ws')
-
-const debug = Debug('bingo')
-
-const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY;
-let WebSocket = ws.WebSocket
-
-if (httpProxy) {
- setGlobalDispatcher(new ProxyAgent(httpProxy))
- const agent = new HttpsProxyAgent(httpProxy)
- // @ts-ignore
- WebSocket = class extends ws.WebSocket {
- constructor(address: string | URL, options: typeof ws.WebSocket) {
- super(address, {
- ...options,
- agent,
- })
- }
- }
-}
-
-export default { fetch, WebSocket, debug }
diff --git a/spaces/Pennywise881/wiki-chat-v2/QuestionAnswer.py b/spaces/Pennywise881/wiki-chat-v2/QuestionAnswer.py
deleted file mode 100644
index f27742aed5b5edba85a9a6a0d9c5f0038747afc2..0000000000000000000000000000000000000000
--- a/spaces/Pennywise881/wiki-chat-v2/QuestionAnswer.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import torch
-import numpy as np
-# # from transformers import AutoTokenizer, AutoModelForQuestionAnswering
-
-
-class QuestionAnswer:
-
- def __init__(self, data, model, tokenizer, torch_device):
-
- self.max_length = 384
- self.doc_stride = 128
-
- self.tokenizer = tokenizer
- self.model = model
- self.data = data
- self.torch_device = torch_device
-
- self.output = None
- self.features = None
- self.results = None
-
- def get_output_from_model(self):
- # data = {'question': question, 'context': context}
-
- with torch.no_grad():
- tokenized_data = self.tokenizer(
- self.data['question'],
- self.data['context'],
- truncation='only_second',
- max_length=self.max_length,
- stride=self.doc_stride,
- return_overflowing_tokens=True,
- return_offsets_mapping=True,
- padding='max_length',
- return_tensors='pt'
- ).to(self.torch_device)
-
- output = self.model(tokenized_data['input_ids'], tokenized_data['attention_mask'])
-
- return output
-
- # print(output.keys())
- # print(output['start_logits'].shape)
- # print(output['end_logits'].shape)
- # print(tokenized_data.keys())
-
- def prepare_features(self, example):
- tokenized_example = self.tokenizer(
- example['question'],
- example['context'],
- truncation='only_second',
- max_length=self.max_length,
- stride=self.doc_stride,
- return_overflowing_tokens=True,
- return_offsets_mapping=True,
- padding='max_length',
- )
-
- # sample_mapping = tokenized_example.pop("overflow_to_sample_mapping")
-
- for i in range(len(tokenized_example['input_ids'])):
- sequence_ids = tokenized_example.sequence_ids(i)
- # print(sequence_ids)
- context_index = 1
-
- # sample_index = sample_mapping[i]
-
- tokenized_example["offset_mapping"][i] = [
- (o if sequence_ids[k] == context_index else None)
- for k, o in enumerate(tokenized_example["offset_mapping"][i])
- ]
-
- return tokenized_example
-
- def postprocess_qa_predictions(self, data, features, raw_predictions, top_n_answers=5, max_answer_length=30):
- all_start_logits, all_end_logits = raw_predictions.start_logits, raw_predictions.end_logits
-
- # print(all_start_logits)
-
- results = []
- context = data['context']
-
- # print(len(features['input_ids']))
- for i in range(len(features['input_ids'])):
- start_logits = all_start_logits[i].cpu().numpy()
- end_logits = all_end_logits[i].cpu().numpy()
-
- # print(start_logits)
-
- offset_mapping = features['offset_mapping'][i]
-
- start_indices = np.argsort(start_logits)[-1: -top_n_answers - 1: -1].tolist()
- end_indices = np.argsort(end_logits)[-1: -top_n_answers - 1: -1].tolist()
-
- for start_index in start_indices:
- for end_index in end_indices:
- if (
- start_index >= len(offset_mapping)
- or end_index >= len(offset_mapping)
- or offset_mapping[start_index] is None
- or offset_mapping[end_index] is None
- or end_index < start_index
- or end_index - start_index + 1 > max_answer_length
- ):
- continue
-
- start_char = offset_mapping[start_index][0]
- end_char = offset_mapping[end_index][1]
-
- # print(start_logits[start_index])
- # print(end_logits[end_index])
- score = start_logits[start_index] + end_logits[end_index]
- results.append(
- {
- 'score': float('%.*g' % (3, score)),
- 'text': context[start_char: end_char]
- }
- )
-
- results = sorted(results, key=lambda x: x["score"], reverse=True)[:top_n_answers]
- return results
-
-
- def get_results(self):
- self.output = self.get_output_from_model()
- self.features = self.prepare_features(self.data)
- self.results = self.postprocess_qa_predictions(self.data, self.features, self.output)
-
- return self.results
\ No newline at end of file
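`postprocess_qa_predictions` above scores candidate answer spans by adding the start and end logits, keeping only spans that map into the context, run forwards, and stay under `max_answer_length`. A self-contained toy illustration of that ranking rule (all values are made up):

import numpy as np

start_logits = np.array([0.1, 2.0, 0.3, 0.2])
end_logits = np.array([0.0, 0.1, 1.5, 0.4])
top_n, max_len = 2, 3

# Top-n candidate start/end positions, as in the method above.
starts = np.argsort(start_logits)[-1:-top_n - 1:-1]
ends = np.argsort(end_logits)[-1:-top_n - 1:-1]

spans = [(int(s), int(e), float(start_logits[s] + end_logits[e]))
         for s in starts for e in ends
         if s <= e and e - s + 1 <= max_len]
print(sorted(spans, key=lambda x: x[2], reverse=True)[0])
# (1, 2, 3.5): the best span covers tokens 1..2 with score 2.0 + 1.5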
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/image.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/image.py
deleted file mode 100644
index 61a56c75b67f593c298408462c63c0468be8e276..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/image.py
+++ /dev/null
@@ -1,152 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import cv2
-import numpy as np
-
-from annotator.uniformer.mmcv.image import imread, imwrite
-from .color import color_val
-
-
-def imshow(img, win_name='', wait_time=0):
- """Show an image.
-
- Args:
- img (str or ndarray): The image to be displayed.
- win_name (str): The window name.
- wait_time (int): Value of waitKey param.
- """
- cv2.imshow(win_name, imread(img))
- if wait_time == 0: # prevent from hanging if windows was closed
- while True:
- ret = cv2.waitKey(1)
-
- closed = cv2.getWindowProperty(win_name, cv2.WND_PROP_VISIBLE) < 1
- # if user closed window or if some key pressed
- if closed or ret != -1:
- break
- else:
- ret = cv2.waitKey(wait_time)
-
-
-def imshow_bboxes(img,
- bboxes,
- colors='green',
- top_k=-1,
- thickness=1,
- show=True,
- win_name='',
- wait_time=0,
- out_file=None):
- """Draw bboxes on an image.
-
- Args:
- img (str or ndarray): The image to be displayed.
- bboxes (list or ndarray): A list of ndarray of shape (k, 4).
- colors (list[str or tuple or Color]): A list of colors.
- top_k (int): Plot the first k bboxes only if set positive.
- thickness (int): Thickness of lines.
- show (bool): Whether to show the image.
- win_name (str): The window name.
- wait_time (int): Value of waitKey param.
- out_file (str, optional): The filename to write the image.
-
- Returns:
- ndarray: The image with bboxes drawn on it.
- """
- img = imread(img)
- img = np.ascontiguousarray(img)
-
- if isinstance(bboxes, np.ndarray):
- bboxes = [bboxes]
- if not isinstance(colors, list):
- colors = [colors for _ in range(len(bboxes))]
- colors = [color_val(c) for c in colors]
- assert len(bboxes) == len(colors)
-
- for i, _bboxes in enumerate(bboxes):
- _bboxes = _bboxes.astype(np.int32)
- if top_k <= 0:
- _top_k = _bboxes.shape[0]
- else:
- _top_k = min(top_k, _bboxes.shape[0])
- for j in range(_top_k):
- left_top = (_bboxes[j, 0], _bboxes[j, 1])
- right_bottom = (_bboxes[j, 2], _bboxes[j, 3])
- cv2.rectangle(
- img, left_top, right_bottom, colors[i], thickness=thickness)
-
- if show:
- imshow(img, win_name, wait_time)
- if out_file is not None:
- imwrite(img, out_file)
- return img
-
-
-def imshow_det_bboxes(img,
- bboxes,
- labels,
- class_names=None,
- score_thr=0,
- bbox_color='green',
- text_color='green',
- thickness=1,
- font_scale=0.5,
- show=True,
- win_name='',
- wait_time=0,
- out_file=None):
- """Draw bboxes and class labels (with scores) on an image.
-
- Args:
- img (str or ndarray): The image to be displayed.
- bboxes (ndarray): Bounding boxes (with scores), shaped (n, 4) or
- (n, 5).
- labels (ndarray): Labels of bboxes.
- class_names (list[str]): Names of each classes.
- score_thr (float): Minimum score of bboxes to be shown.
- bbox_color (str or tuple or :obj:`Color`): Color of bbox lines.
- text_color (str or tuple or :obj:`Color`): Color of texts.
- thickness (int): Thickness of lines.
- font_scale (float): Font scales of texts.
- show (bool): Whether to show the image.
- win_name (str): The window name.
- wait_time (int): Value of waitKey param.
- out_file (str or None): The filename to write the image.
-
- Returns:
- ndarray: The image with bboxes drawn on it.
- """
- assert bboxes.ndim == 2
- assert labels.ndim == 1
- assert bboxes.shape[0] == labels.shape[0]
- assert bboxes.shape[1] == 4 or bboxes.shape[1] == 5
- img = imread(img)
- img = np.ascontiguousarray(img)
-
- if score_thr > 0:
- assert bboxes.shape[1] == 5
- scores = bboxes[:, -1]
- inds = scores > score_thr
- bboxes = bboxes[inds, :]
- labels = labels[inds]
-
- bbox_color = color_val(bbox_color)
- text_color = color_val(text_color)
-
- for bbox, label in zip(bboxes, labels):
- bbox_int = bbox.astype(np.int32)
- left_top = (bbox_int[0], bbox_int[1])
- right_bottom = (bbox_int[2], bbox_int[3])
- cv2.rectangle(
- img, left_top, right_bottom, bbox_color, thickness=thickness)
- label_text = class_names[
- label] if class_names is not None else f'cls {label}'
- if len(bbox) > 4:
- label_text += f'|{bbox[-1]:.02f}'
- cv2.putText(img, label_text, (bbox_int[0], bbox_int[1] - 2),
- cv2.FONT_HERSHEY_COMPLEX, font_scale, text_color)
-
- if show:
- imshow(img, win_name, wait_time)
- if out_file is not None:
- imwrite(img, out_file)
- return img
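`imshow_det_bboxes` above expects boxes shaped (n, 5) when a score threshold is used, with the score in the last column. The filtering step in isolation, as a quick sanity check with toy values:

import numpy as np

bboxes = np.array([[10, 10, 50, 50, 0.9],
                   [20, 20, 60, 60, 0.3]], dtype=np.float32)
labels = np.array([0, 1])
score_thr = 0.5

# Same selection logic as the score_thr branch above.
keep = bboxes[:, -1] > score_thr
bboxes, labels = bboxes[keep, :], labels[keep]
print(bboxes.shape, labels)  # (1, 5) [0]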
diff --git a/spaces/Purple11/Grounded-Diffusion/ldm/modules/diffusionmodules/__init__.py b/spaces/Purple11/Grounded-Diffusion/ldm/modules/diffusionmodules/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Purple11/Grounded-Diffusion/src/CLIP/clip/clip.py b/spaces/Purple11/Grounded-Diffusion/src/CLIP/clip/clip.py
deleted file mode 100644
index 257511e1d40c120e0d64a0f1562d44b2b8a40a17..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/src/CLIP/clip/clip.py
+++ /dev/null
@@ -1,237 +0,0 @@
-import hashlib
-import os
-import urllib
-import warnings
-from typing import Any, Union, List
-from pkg_resources import packaging
-
-import torch
-from PIL import Image
-from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
-from tqdm import tqdm
-
-from .model import build_model
-from .simple_tokenizer import SimpleTokenizer as _Tokenizer
-
-try:
- from torchvision.transforms import InterpolationMode
- BICUBIC = InterpolationMode.BICUBIC
-except ImportError:
- BICUBIC = Image.BICUBIC
-
-
-if packaging.version.parse(torch.__version__) < packaging.version.parse("1.7.1"):
- warnings.warn("PyTorch version 1.7.1 or higher is recommended")
-
-
-__all__ = ["available_models", "load", "tokenize"]
-_tokenizer = _Tokenizer()
-
-_MODELS = {
- "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
- "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
- "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
- "RN50x16": "https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt",
- "RN50x64": "https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt",
- "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
- "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt",
- "ViT-L/14": "https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt",
- "ViT-L/14@336px": "https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt",
-}
-
-
-def _download(url: str, root: str):
- os.makedirs(root, exist_ok=True)
- filename = os.path.basename(url)
-
- expected_sha256 = url.split("/")[-2]
- download_target = os.path.join(root, filename)
-
- if os.path.exists(download_target) and not os.path.isfile(download_target):
- raise RuntimeError(f"{download_target} exists and is not a regular file")
-
- if os.path.isfile(download_target):
- if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
- return download_target
- else:
- warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")
-
- with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
- with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True, unit_divisor=1024) as loop:
- while True:
- buffer = source.read(8192)
- if not buffer:
- break
-
- output.write(buffer)
- loop.update(len(buffer))
-
- if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
- raise RuntimeError("Model has been downloaded but the SHA256 checksum does not match")
-
- return download_target
-
-
-def _convert_image_to_rgb(image):
- return image.convert("RGB")
-
-
-def _transform(n_px):
- return Compose([
- Resize(n_px, interpolation=BICUBIC),
- CenterCrop(n_px),
- _convert_image_to_rgb,
- ToTensor(),
- Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
- ])
-
-
-def available_models() -> List[str]:
- """Returns the names of available CLIP models"""
- return list(_MODELS.keys())
-
-
-def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit: bool = False, download_root: str = None):
- """Load a CLIP model
-
- Parameters
- ----------
- name : str
- A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict
-
- device : Union[str, torch.device]
- The device to put the loaded model
-
- jit : bool
- Whether to load the optimized JIT model or more hackable non-JIT model (default).
-
- download_root: str
- path to download the model files; by default, it uses "~/.cache/clip"
-
- Returns
- -------
- model : torch.nn.Module
- The CLIP model
-
- preprocess : Callable[[PIL.Image], torch.Tensor]
- A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
- """
- if name in _MODELS:
- model_path = _download(_MODELS[name], download_root or os.path.expanduser("~/.cache/clip"))
- elif os.path.isfile(name):
- model_path = name
- else:
- raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
-
- with open(model_path, 'rb') as opened_file:
- try:
- # loading JIT archive
- model = torch.jit.load(opened_file, map_location=device if jit else "cpu").eval()
- state_dict = None
- except RuntimeError:
- # loading saved state dict
- if jit:
- warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
- jit = False
- state_dict = torch.load(opened_file, map_location="cpu")
-
- if not jit:
- model = build_model(state_dict or model.state_dict()).to(device)
- if str(device) == "cpu":
- model.float()
- return model, _transform(model.visual.input_resolution)
-
- # patch the device names
- device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
- device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]
-
- def patch_device(module):
- try:
- graphs = [module.graph] if hasattr(module, "graph") else []
- except RuntimeError:
- graphs = []
-
- if hasattr(module, "forward1"):
- graphs.append(module.forward1.graph)
-
- for graph in graphs:
- for node in graph.findAllNodes("prim::Constant"):
- if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
- node.copyAttributes(device_node)
-
- model.apply(patch_device)
- patch_device(model.encode_image)
- patch_device(model.encode_text)
-
- # patch dtype to float32 on CPU
- if str(device) == "cpu":
- float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
- float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
- float_node = float_input.node()
-
- def patch_float(module):
- try:
- graphs = [module.graph] if hasattr(module, "graph") else []
- except RuntimeError:
- graphs = []
-
- if hasattr(module, "forward1"):
- graphs.append(module.forward1.graph)
-
- for graph in graphs:
- for node in graph.findAllNodes("aten::to"):
- inputs = list(node.inputs())
- for i in [1, 2]: # dtype can be the second or third argument to aten::to()
- if inputs[i].node()["value"] == 5:
- inputs[i].node().copyAttributes(float_node)
-
- model.apply(patch_float)
- patch_float(model.encode_image)
- patch_float(model.encode_text)
-
- model.float()
-
- return model, _transform(model.input_resolution.item())
-
-
-def tokenize(texts: Union[str, List[str]], context_length: int = 77, truncate: bool = False) -> Union[torch.IntTensor, torch.LongTensor]:
- """
- Returns the tokenized representation of given input string(s)
-
- Parameters
- ----------
- texts : Union[str, List[str]]
- An input string or a list of input strings to tokenize
-
- context_length : int
- The context length to use; all CLIP models use 77 as the context length
-
- truncate: bool
- Whether to truncate the text in case its encoding is longer than the context length
-
- Returns
- -------
- A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length].
- We return LongTensor when torch version is <1.8.0, since older index_select requires indices to be long.
- """
- if isinstance(texts, str):
- texts = [texts]
-
- sot_token = _tokenizer.encoder["<|startoftext|>"]
- eot_token = _tokenizer.encoder["<|endoftext|>"]
- all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
- if packaging.version.parse(torch.__version__) < packaging.version.parse("1.8.0"):
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
- else:
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.int)
-
- for i, tokens in enumerate(all_tokens):
- if len(tokens) > context_length:
- if truncate:
- tokens = tokens[:context_length]
- tokens[-1] = eot_token
- else:
- raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
- result[i, :len(tokens)] = torch.tensor(tokens)
-
- return result
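`tokenize` above wraps every string in start/end-of-text tokens and right-pads to a fixed context length of 77, optionally truncating. A toy sketch of just that padding step with precomputed token ids (the SOT/EOT ids below are CLIP's usual values, but treat them and the function name as illustrative):

import torch

def pad_tokens(token_lists, context_length=77, sot=49406, eot=49407, truncate=False):
    # Same shape and truncation rules as clip.tokenize, minus the BPE step.
    result = torch.zeros(len(token_lists), context_length, dtype=torch.long)
    for i, tokens in enumerate(token_lists):
        tokens = [sot] + list(tokens) + [eot]
        if len(tokens) > context_length:
            if not truncate:
                raise RuntimeError("input is too long for the context length")
            tokens = tokens[:context_length]
            tokens[-1] = eot
        result[i, :len(tokens)] = torch.tensor(tokens)
    return result

print(pad_tokens([[320, 1125], [518]]).shape)  # torch.Size([2, 77])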
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/charsetprober.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/charsetprober.py
deleted file mode 100644
index 9f1afd999c18fa6997c92f0c8b231e68b70ffdc0..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/charsetprober.py
+++ /dev/null
@@ -1,138 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Universal charset detector code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 2001
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-# Shy Shalom - original C code
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-import logging
-import re
-
-from .enums import ProbingState
-
-INTERNATIONAL_WORDS_PATTERN = re.compile(
- b"[a-zA-Z]*[\x80-\xFF]+[a-zA-Z]*[^a-zA-Z\x80-\xFF]?"
-)
-
-
-class CharSetProber:
-
- SHORTCUT_THRESHOLD = 0.95
-
- def __init__(self, lang_filter=None):
- self._state = None
- self.lang_filter = lang_filter
- self.logger = logging.getLogger(__name__)
-
- def reset(self):
- self._state = ProbingState.DETECTING
-
- @property
- def charset_name(self):
- return None
-
- def feed(self, byte_str):
- raise NotImplementedError
-
- @property
- def state(self):
- return self._state
-
- def get_confidence(self):
- return 0.0
-
- @staticmethod
- def filter_high_byte_only(buf):
- buf = re.sub(b"([\x00-\x7F])+", b" ", buf)
- return buf
-
- @staticmethod
- def filter_international_words(buf):
- """
- We define three types of bytes:
- alphabet: english alphabets [a-zA-Z]
- international: international characters [\x80-\xFF]
- marker: everything else [^a-zA-Z\x80-\xFF]
- The input buffer can be thought to contain a series of words delimited
- by markers. This function works to filter all words that contain at
- least one international character. All contiguous sequences of markers
- are replaced by a single space ascii character.
- This filter applies to all scripts which do not use English characters.
- """
- filtered = bytearray()
-
- # This regex expression filters out only words that have at-least one
- # international character. The word may include one marker character at
- # the end.
- words = INTERNATIONAL_WORDS_PATTERN.findall(buf)
-
- for word in words:
- filtered.extend(word[:-1])
-
- # If the last character in the word is a marker, replace it with a
- # space as markers shouldn't affect our analysis (they are used
- # similarly across all languages and may thus have similar
- # frequencies).
- last_char = word[-1:]
- if not last_char.isalpha() and last_char < b"\x80":
- last_char = b" "
- filtered.extend(last_char)
-
- return filtered
-
- @staticmethod
- def remove_xml_tags(buf):
- """
- Returns a copy of ``buf`` that retains only the sequences of English
- alphabet and high byte characters that are not between <> characters.
- This filter can be applied to all scripts which contain both English
- characters and extended ASCII characters, but is currently only used by
- ``Latin1Prober``.
- """
- filtered = bytearray()
- in_tag = False
- prev = 0
- buf = memoryview(buf).cast("c")
-
- for curr, buf_char in enumerate(buf):
- # Check if we're coming out of or entering an XML tag
- if buf_char == b">":
- prev = curr + 1
- in_tag = False
- elif buf_char == b"<":
- if curr > prev and not in_tag:
- # Keep everything after last non-extended-ASCII,
- # non-alphabetic character
- filtered.extend(buf[prev:curr])
- # Output a space to delimit stretch we kept
- filtered.extend(b" ")
- in_tag = True
-
- # If we're not in a tag...
- if not in_tag:
- # Keep everything after last non-extended-ASCII, non-alphabetic
- # character
- filtered.extend(buf[prev:])
-
- return filtered
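`filter_international_words` above keeps only words that contain at least one high byte (\x80-\xFF) and replaces a trailing marker byte with a space, so frequency analysis sees clean word boundaries. A quick standalone check of that behaviour (same regex and rule as the static method, repackaged as a free function):

import re

INTERNATIONAL_WORDS_PATTERN = re.compile(
    b"[a-zA-Z]*[\x80-\xFF]+[a-zA-Z]*[^a-zA-Z\x80-\xFF]?"
)

def filter_international_words(buf):
    filtered = bytearray()
    for word in INTERNATIONAL_WORDS_PATTERN.findall(buf):
        filtered.extend(word[:-1])           # keep the word minus its marker
        last_char = word[-1:]
        if not last_char.isalpha() and last_char < b"\x80":
            last_char = b" "                 # collapse the marker to a space
        filtered.extend(last_char)
    return filtered

print(bytes(filter_international_words(b"hello caf\xc3\xa9, plain ascii")))
# b'caf\xc3\xa9 ' -- only the word containing high bytes survives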
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/reporters.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/reporters.py
deleted file mode 100644
index 6695480fff4c87608ac2002dfb341f90ed1a5ce4..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/reporters.py
+++ /dev/null
@@ -1,43 +0,0 @@
-class BaseReporter(object):
- """Delegate class to provider progress reporting for the resolver."""
-
- def starting(self):
- """Called before the resolution actually starts."""
-
- def starting_round(self, index):
- """Called before each round of resolution starts.
-
- The index is zero-based.
- """
-
- def ending_round(self, index, state):
- """Called before each round of resolution ends.
-
- This is NOT called if the resolution ends at this round. Use `ending`
- if you want to report finalization. The index is zero-based.
- """
-
- def ending(self, state):
- """Called before the resolution ends successfully."""
-
- def adding_requirement(self, requirement, parent):
- """Called when adding a new requirement into the resolve criteria.
-
- :param requirement: The additional requirement to be applied to filter
- the available candidates.
- :param parent: The candidate that requires ``requirement`` as a
- dependency, or None if ``requirement`` is one of the root
- requirements passed in from ``Resolver.resolve()``.
- """
-
- def resolving_conflicts(self, causes):
- """Called when starting to attempt requirement conflict resolution.
-
- :param causes: The information on the collision that caused the backtracking.
- """
-
- def backtracking(self, candidate):
- """Called when rejecting a candidate during backtracking."""
-
- def pinning(self, candidate):
- """Called when adding a candidate to the potential solution."""
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_itertools.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_itertools.py
deleted file mode 100644
index b8bf6d210aec669b6b948942eda1db953e8725fa..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_itertools.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from setuptools.extern.more_itertools import consume # noqa: F401
-
-
-# copied from jaraco.itertools 6.1
-def ensure_unique(iterable, key=lambda x: x):
- """
- Wrap an iterable to raise a ValueError if non-unique values are encountered.
-
- >>> list(ensure_unique('abc'))
- ['a', 'b', 'c']
- >>> consume(ensure_unique('abca'))
- Traceback (most recent call last):
- ...
- ValueError: Duplicate element 'a' encountered.
- """
- seen = set()
- seen_add = seen.add
- for element in iterable:
- k = key(element)
- if k in seen:
- raise ValueError(f"Duplicate element {element!r} encountered.")
- seen_add(k)
- yield element
diff --git a/spaces/Redgon/bingo/src/components/ui/sheet.tsx b/spaces/Redgon/bingo/src/components/ui/sheet.tsx
deleted file mode 100644
index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000
--- a/spaces/Redgon/bingo/src/components/ui/sheet.tsx
+++ /dev/null
@@ -1,122 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as SheetPrimitive from '@radix-ui/react-dialog'
-
-import { cn } from '@/lib/utils'
-import { IconClose } from '@/components/ui/icons'
-
-const Sheet = SheetPrimitive.Root
-
-const SheetTrigger = SheetPrimitive.Trigger
-
-const SheetClose = SheetPrimitive.Close
-
-const SheetPortal = ({
- className,
- children,
- ...props
-}: SheetPrimitive.DialogPortalProps) => (
-
- {children}
-
-)
-SheetPortal.displayName = SheetPrimitive.Portal.displayName
-
-const SheetOverlay = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, children, ...props }, ref) => (
-
-))
-SheetOverlay.displayName = SheetPrimitive.Overlay.displayName
-
-const SheetContent = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, children, ...props }, ref) => (
-
-
- {children}
-
-
- Close
-
-
-
-))
-SheetContent.displayName = SheetPrimitive.Content.displayName
-
-const SheetHeader = ({
- className,
- ...props
-}: React.HTMLAttributes) => (
-
-)
-SheetHeader.displayName = 'SheetHeader'
-
-const SheetFooter = ({
- className,
- ...props
-}: React.HTMLAttributes) => (
-
-)
-SheetFooter.displayName = 'SheetFooter'
-
-const SheetTitle = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-SheetTitle.displayName = SheetPrimitive.Title.displayName
-
-const SheetDescription = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-SheetDescription.displayName = SheetPrimitive.Description.displayName
-
-export {
- Sheet,
- SheetTrigger,
- SheetClose,
- SheetContent,
- SheetHeader,
- SheetFooter,
- SheetTitle,
- SheetDescription
-}
diff --git a/spaces/Rifd/Sdallmodels/index.html b/spaces/Rifd/Sdallmodels/index.html
deleted file mode 100644
index 6250c2958a7186a4e64f21c02b0359ff5ecd7e97..0000000000000000000000000000000000000000
--- a/spaces/Rifd/Sdallmodels/index.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/Rimi98/Relax-Teacher/README.md b/spaces/Rimi98/Relax-Teacher/README.md
deleted file mode 100644
index 0ae519ba6017193c0d861c226ee4778abe78b6b2..0000000000000000000000000000000000000000
--- a/spaces/Rimi98/Relax-Teacher/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Online Class
-emoji: 🐢
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/evaluation/mean_ap.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/evaluation/mean_ap.py
deleted file mode 100644
index 1d653a35497f6a0135c4374a09eb7c11399e3244..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/evaluation/mean_ap.py
+++ /dev/null
@@ -1,469 +0,0 @@
-from multiprocessing import Pool
-
-import mmcv
-import numpy as np
-from mmcv.utils import print_log
-from terminaltables import AsciiTable
-
-from .bbox_overlaps import bbox_overlaps
-from .class_names import get_classes
-
-
-def average_precision(recalls, precisions, mode='area'):
- """Calculate average precision (for single or multiple scales).
-
- Args:
- recalls (ndarray): shape (num_scales, num_dets) or (num_dets, )
- precisions (ndarray): shape (num_scales, num_dets) or (num_dets, )
- mode (str): 'area' or '11points', 'area' means calculating the area
- under precision-recall curve, '11points' means calculating
- the average precision of recalls at [0, 0.1, ..., 1]
-
- Returns:
- float or ndarray: calculated average precision
- """
- no_scale = False
- if recalls.ndim == 1:
- no_scale = True
- recalls = recalls[np.newaxis, :]
- precisions = precisions[np.newaxis, :]
- assert recalls.shape == precisions.shape and recalls.ndim == 2
- num_scales = recalls.shape[0]
- ap = np.zeros(num_scales, dtype=np.float32)
- if mode == 'area':
- zeros = np.zeros((num_scales, 1), dtype=recalls.dtype)
- ones = np.ones((num_scales, 1), dtype=recalls.dtype)
- mrec = np.hstack((zeros, recalls, ones))
- mpre = np.hstack((zeros, precisions, zeros))
- for i in range(mpre.shape[1] - 1, 0, -1):
- mpre[:, i - 1] = np.maximum(mpre[:, i - 1], mpre[:, i])
- for i in range(num_scales):
- ind = np.where(mrec[i, 1:] != mrec[i, :-1])[0]
- ap[i] = np.sum(
- (mrec[i, ind + 1] - mrec[i, ind]) * mpre[i, ind + 1])
- elif mode == '11points':
- for i in range(num_scales):
- for thr in np.arange(0, 1 + 1e-3, 0.1):
- precs = precisions[i, recalls[i, :] >= thr]
- prec = precs.max() if precs.size > 0 else 0
- ap[i] += prec
- ap /= 11
- else:
- raise ValueError(
- 'Unrecognized mode, only "area" and "11points" are supported')
- if no_scale:
- ap = ap[0]
- return ap
-
-
-def tpfp_imagenet(det_bboxes,
- gt_bboxes,
- gt_bboxes_ignore=None,
- default_iou_thr=0.5,
- area_ranges=None):
- """Check if detected bboxes are true positive or false positive.
-
- Args:
- det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5).
- gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4).
- gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image,
- of shape (k, 4). Default: None
- default_iou_thr (float): IoU threshold to be considered as matched for
- medium and large bboxes (small ones have special rules).
- Default: 0.5.
- area_ranges (list[tuple] | None): Range of bbox areas to be evaluated,
- in the format [(min1, max1), (min2, max2), ...]. Default: None.
-
- Returns:
- tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of
- each array is (num_scales, m).
- """
- # an indicator of ignored gts
- gt_ignore_inds = np.concatenate(
- (np.zeros(gt_bboxes.shape[0], dtype=np.bool),
- np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool)))
- # stack gt_bboxes and gt_bboxes_ignore for convenience
- gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore))
-
- num_dets = det_bboxes.shape[0]
- num_gts = gt_bboxes.shape[0]
- if area_ranges is None:
- area_ranges = [(None, None)]
- num_scales = len(area_ranges)
- # tp and fp are of shape (num_scales, num_gts), each row is tp or fp
- # of a certain scale.
- tp = np.zeros((num_scales, num_dets), dtype=np.float32)
- fp = np.zeros((num_scales, num_dets), dtype=np.float32)
- if gt_bboxes.shape[0] == 0:
- if area_ranges == [(None, None)]:
- fp[...] = 1
- else:
- det_areas = (det_bboxes[:, 2] - det_bboxes[:, 0]) * (
- det_bboxes[:, 3] - det_bboxes[:, 1])
- for i, (min_area, max_area) in enumerate(area_ranges):
- fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1
- return tp, fp
- ious = bbox_overlaps(det_bboxes, gt_bboxes - 1)
- gt_w = gt_bboxes[:, 2] - gt_bboxes[:, 0]
- gt_h = gt_bboxes[:, 3] - gt_bboxes[:, 1]
- iou_thrs = np.minimum((gt_w * gt_h) / ((gt_w + 10.0) * (gt_h + 10.0)),
- default_iou_thr)
- # sort all detections by scores in descending order
- sort_inds = np.argsort(-det_bboxes[:, -1])
- for k, (min_area, max_area) in enumerate(area_ranges):
- gt_covered = np.zeros(num_gts, dtype=bool)
- # if no area range is specified, gt_area_ignore is all False
- if min_area is None:
- gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool)
- else:
- gt_areas = gt_w * gt_h
- gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area)
- for i in sort_inds:
- max_iou = -1
- matched_gt = -1
- # find best overlapped available gt
- for j in range(num_gts):
- # different from PASCAL VOC: allow finding other gts if the
- # best overlapped ones are already matched by other det bboxes
- if gt_covered[j]:
- continue
- elif ious[i, j] >= iou_thrs[j] and ious[i, j] > max_iou:
- max_iou = ious[i, j]
- matched_gt = j
- # there are 4 cases for a det bbox:
- # 1. it matches a gt, tp = 1, fp = 0
- # 2. it matches an ignored gt, tp = 0, fp = 0
- # 3. it matches no gt and within area range, tp = 0, fp = 1
- # 4. it matches no gt but is beyond area range, tp = 0, fp = 0
- if matched_gt >= 0:
- gt_covered[matched_gt] = 1
- if not (gt_ignore_inds[matched_gt]
- or gt_area_ignore[matched_gt]):
- tp[k, i] = 1
- elif min_area is None:
- fp[k, i] = 1
- else:
- bbox = det_bboxes[i, :4]
- area = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
- if area >= min_area and area < max_area:
- fp[k, i] = 1
- return tp, fp
-
-
-def tpfp_default(det_bboxes,
- gt_bboxes,
- gt_bboxes_ignore=None,
- iou_thr=0.5,
- area_ranges=None):
- """Check if detected bboxes are true positive or false positive.
-
- Args:
- det_bbox (ndarray): Detected bboxes of this image, of shape (m, 5).
- gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 4).
- gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image,
- of shape (k, 4). Default: None
- iou_thr (float): IoU threshold to be considered as matched.
- Default: 0.5.
- area_ranges (list[tuple] | None): Range of bbox areas to be evaluated,
- in the format [(min1, max1), (min2, max2), ...]. Default: None.
-
- Returns:
- tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of
- each array is (num_scales, m).
- """
- # an indicator of ignored gts
- gt_ignore_inds = np.concatenate(
- (np.zeros(gt_bboxes.shape[0], dtype=np.bool),
- np.ones(gt_bboxes_ignore.shape[0], dtype=np.bool)))
- # stack gt_bboxes and gt_bboxes_ignore for convenience
- gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore))
-
- num_dets = det_bboxes.shape[0]
- num_gts = gt_bboxes.shape[0]
- if area_ranges is None:
- area_ranges = [(None, None)]
- num_scales = len(area_ranges)
- # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of
- # a certain scale
- tp = np.zeros((num_scales, num_dets), dtype=np.float32)
- fp = np.zeros((num_scales, num_dets), dtype=np.float32)
-
- # if there is no gt bboxes in this image, then all det bboxes
- # within area range are false positives
- if gt_bboxes.shape[0] == 0:
- if area_ranges == [(None, None)]:
- fp[...] = 1
- else:
- det_areas = (det_bboxes[:, 2] - det_bboxes[:, 0]) * (
- det_bboxes[:, 3] - det_bboxes[:, 1])
- for i, (min_area, max_area) in enumerate(area_ranges):
- fp[i, (det_areas >= min_area) & (det_areas < max_area)] = 1
- return tp, fp
-
- ious = bbox_overlaps(det_bboxes, gt_bboxes)
- # for each det, the max iou with all gts
- ious_max = ious.max(axis=1)
- # for each det, which gt overlaps most with it
- ious_argmax = ious.argmax(axis=1)
- # sort all dets in descending order by scores
- sort_inds = np.argsort(-det_bboxes[:, -1])
- for k, (min_area, max_area) in enumerate(area_ranges):
- gt_covered = np.zeros(num_gts, dtype=bool)
- # if no area range is specified, gt_area_ignore is all False
- if min_area is None:
- gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool)
- else:
- gt_areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * (
- gt_bboxes[:, 3] - gt_bboxes[:, 1])
- gt_area_ignore = (gt_areas < min_area) | (gt_areas >= max_area)
- for i in sort_inds:
- if ious_max[i] >= iou_thr:
- matched_gt = ious_argmax[i]
- if not (gt_ignore_inds[matched_gt]
- or gt_area_ignore[matched_gt]):
- if not gt_covered[matched_gt]:
- gt_covered[matched_gt] = True
- tp[k, i] = 1
- else:
- fp[k, i] = 1
- # otherwise ignore this detected bbox, tp = 0, fp = 0
- elif min_area is None:
- fp[k, i] = 1
- else:
- bbox = det_bboxes[i, :4]
- area = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
- if area >= min_area and area < max_area:
- fp[k, i] = 1
- return tp, fp
-
-
-def get_cls_results(det_results, annotations, class_id):
- """Get det results and gt information of a certain class.
-
- Args:
- det_results (list[list]): Same as `eval_map()`.
- annotations (list[dict]): Same as `eval_map()`.
- class_id (int): ID of a specific class.
-
- Returns:
- tuple[list[np.ndarray]]: detected bboxes, gt bboxes, ignored gt bboxes
- """
- cls_dets = [img_res[class_id] for img_res in det_results]
- cls_gts = []
- cls_gts_ignore = []
- for ann in annotations:
- gt_inds = ann['labels'] == class_id
- cls_gts.append(ann['bboxes'][gt_inds, :])
-
- if ann.get('labels_ignore', None) is not None:
- ignore_inds = ann['labels_ignore'] == class_id
- cls_gts_ignore.append(ann['bboxes_ignore'][ignore_inds, :])
- else:
- cls_gts_ignore.append(np.empty((0, 4), dtype=np.float32))
-
- return cls_dets, cls_gts, cls_gts_ignore
-
-
-def eval_map(det_results,
- annotations,
- scale_ranges=None,
- iou_thr=0.5,
- dataset=None,
- logger=None,
- tpfp_fn=None,
- nproc=4):
- """Evaluate mAP of a dataset.
-
- Args:
- det_results (list[list]): [[cls1_det, cls2_det, ...], ...].
- The outer list indicates images, and the inner list indicates
- per-class detected bboxes.
- annotations (list[dict]): Ground truth annotations where each item of
- the list indicates an image. Keys of annotations are:
-
- - `bboxes`: numpy array of shape (n, 4)
- - `labels`: numpy array of shape (n, )
- - `bboxes_ignore` (optional): numpy array of shape (k, 4)
- - `labels_ignore` (optional): numpy array of shape (k, )
- scale_ranges (list[tuple] | None): Range of scales to be evaluated,
- in the format [(min1, max1), (min2, max2), ...]. A range of
- (32, 64) means the area range between (32**2, 64**2).
- Default: None.
- iou_thr (float): IoU threshold to be considered as matched.
- Default: 0.5.
- dataset (list[str] | str | None): Dataset name or dataset classes,
- there are minor differences in metrics for different datasets, e.g.
- "voc07", "imagenet_det", etc. Default: None.
- logger (logging.Logger | str | None): The way to print the mAP
- summary. See `mmcv.utils.print_log()` for details. Default: None.
- tpfp_fn (callable | None): The function used to determine true/
- false positives. If None, :func:`tpfp_default` is used as default
- unless dataset is 'det' or 'vid' (:func:`tpfp_imagenet` in this
- case). If it is given as a function, then this function is used
- to evaluate tp & fp. Default None.
- nproc (int): Processes used for computing TP and FP.
- Default: 4.
-
- Returns:
- tuple: (mAP, [dict, dict, ...])
- """
- assert len(det_results) == len(annotations)
-
- num_imgs = len(det_results)
- num_scales = len(scale_ranges) if scale_ranges is not None else 1
- num_classes = len(det_results[0]) # positive class num
- area_ranges = ([(rg[0]**2, rg[1]**2) for rg in scale_ranges]
- if scale_ranges is not None else None)
-
- pool = Pool(nproc)
- eval_results = []
- for i in range(num_classes):
- # get gt and det bboxes of this class
- cls_dets, cls_gts, cls_gts_ignore = get_cls_results(
- det_results, annotations, i)
- # choose proper function according to datasets to compute tp and fp
- if tpfp_fn is None:
- if dataset in ['det', 'vid']:
- tpfp_fn = tpfp_imagenet
- else:
- tpfp_fn = tpfp_default
- if not callable(tpfp_fn):
- raise ValueError(
- f'tpfp_fn has to be a function or None, but got {tpfp_fn}')
-
- # compute tp and fp for each image with multiple processes
- tpfp = pool.starmap(
- tpfp_fn,
- zip(cls_dets, cls_gts, cls_gts_ignore,
- [iou_thr for _ in range(num_imgs)],
- [area_ranges for _ in range(num_imgs)]))
- tp, fp = tuple(zip(*tpfp))
- # calculate gt number of each scale
- # ignored gts or gts beyond the specific scale are not counted
- num_gts = np.zeros(num_scales, dtype=int)
- for j, bbox in enumerate(cls_gts):
- if area_ranges is None:
- num_gts[0] += bbox.shape[0]
- else:
- gt_areas = (bbox[:, 2] - bbox[:, 0]) * (
- bbox[:, 3] - bbox[:, 1])
- for k, (min_area, max_area) in enumerate(area_ranges):
- num_gts[k] += np.sum((gt_areas >= min_area)
- & (gt_areas < max_area))
- # sort all det bboxes by score, also sort tp and fp
- cls_dets = np.vstack(cls_dets)
- num_dets = cls_dets.shape[0]
- sort_inds = np.argsort(-cls_dets[:, -1])
- tp = np.hstack(tp)[:, sort_inds]
- fp = np.hstack(fp)[:, sort_inds]
- # calculate recall and precision with tp and fp
- tp = np.cumsum(tp, axis=1)
- fp = np.cumsum(fp, axis=1)
- eps = np.finfo(np.float32).eps
- recalls = tp / np.maximum(num_gts[:, np.newaxis], eps)
- precisions = tp / np.maximum((tp + fp), eps)
- # calculate AP
- if scale_ranges is None:
- recalls = recalls[0, :]
- precisions = precisions[0, :]
- num_gts = num_gts.item()
- mode = 'area' if dataset != 'voc07' else '11points'
- ap = average_precision(recalls, precisions, mode)
- eval_results.append({
- 'num_gts': num_gts,
- 'num_dets': num_dets,
- 'recall': recalls,
- 'precision': precisions,
- 'ap': ap
- })
- pool.close()
- if scale_ranges is not None:
- # shape (num_classes, num_scales)
- all_ap = np.vstack([cls_result['ap'] for cls_result in eval_results])
- all_num_gts = np.vstack(
- [cls_result['num_gts'] for cls_result in eval_results])
- mean_ap = []
- for i in range(num_scales):
- if np.any(all_num_gts[:, i] > 0):
- mean_ap.append(all_ap[all_num_gts[:, i] > 0, i].mean())
- else:
- mean_ap.append(0.0)
- else:
- aps = []
- for cls_result in eval_results:
- if cls_result['num_gts'] > 0:
- aps.append(cls_result['ap'])
- mean_ap = np.array(aps).mean().item() if aps else 0.0
-
- print_map_summary(
- mean_ap, eval_results, dataset, area_ranges, logger=logger)
-
- return mean_ap, eval_results
-
-
-def print_map_summary(mean_ap,
- results,
- dataset=None,
- scale_ranges=None,
- logger=None):
- """Print mAP and results of each class.
-
- A table will be printed to show the gts/dets/recall/AP of each class and
- the mAP.
-
- Args:
- mean_ap (float): Calculated from `eval_map()`.
- results (list[dict]): Calculated from `eval_map()`.
- dataset (list[str] | str | None): Dataset name or dataset classes.
- scale_ranges (list[tuple] | None): Range of scales to be evaluated.
- logger (logging.Logger | str | None): The way to print the mAP
- summary. See `mmcv.utils.print_log()` for details. Default: None.
- """
-
- if logger == 'silent':
- return
-
- if isinstance(results[0]['ap'], np.ndarray):
- num_scales = len(results[0]['ap'])
- else:
- num_scales = 1
-
- if scale_ranges is not None:
- assert len(scale_ranges) == num_scales
-
- num_classes = len(results)
-
- recalls = np.zeros((num_scales, num_classes), dtype=np.float32)
- aps = np.zeros((num_scales, num_classes), dtype=np.float32)
- num_gts = np.zeros((num_scales, num_classes), dtype=int)
- for i, cls_result in enumerate(results):
- if cls_result['recall'].size > 0:
- recalls[:, i] = np.array(cls_result['recall'], ndmin=2)[:, -1]
- aps[:, i] = cls_result['ap']
- num_gts[:, i] = cls_result['num_gts']
-
- if dataset is None:
- label_names = [str(i) for i in range(num_classes)]
- elif mmcv.is_str(dataset):
- label_names = get_classes(dataset)
- else:
- label_names = dataset
-
- if not isinstance(mean_ap, list):
- mean_ap = [mean_ap]
-
- header = ['class', 'gts', 'dets', 'recall', 'ap']
- for i in range(num_scales):
- if scale_ranges is not None:
- print_log(f'Scale range {scale_ranges[i]}', logger=logger)
- table_data = [header]
- for j in range(num_classes):
- row_data = [
- label_names[j], num_gts[i, j], results[j]['num_dets'],
- f'{recalls[i, j]:.3f}', f'{aps[i, j]:.3f}'
- ]
- table_data.append(row_data)
- table_data.append(['mAP', '', '', '', f'{mean_ap[i]:.3f}'])
- table = AsciiTable(table_data)
- table.inner_footing_row_border = True
- print_log('\n' + table.table, logger=logger)
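# A minimal usage sketch for the eval_map()/print_map_summary() pair above, assuming the
# functions are importable from an mmdet-style install (the vendored module path differs).
# Detections are per-image lists of per-class (n, 5) arrays [x1, y1, x2, y2, score];
# annotations are dicts with 'bboxes' (m, 4) and 'labels' (m,).
import numpy as np
# from mmdet.core.evaluation import eval_map  # adjust to the actual module path

det_results = [[                                                       # one image, two classes
    np.array([[10., 10., 50., 50., 0.9], [12., 14., 48., 52., 0.4]]),  # class 0 detections
    np.empty((0, 5)),                                                  # class 1: no detections
]]
annotations = [dict(bboxes=np.array([[11., 11., 49., 49.]], dtype=np.float32),
                    labels=np.array([0]))]

# mean_ap, per_class_results = eval_map(det_results, annotations, iou_thr=0.5, nproc=1)
# print(mean_ap)  # a single float when scale_ranges is None; averaged per scale otherwise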
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/global_context_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/global_context_head.py
deleted file mode 100644
index d8e8cbca95d69e86ec7a2a1e7ed7f158be1b5753..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/global_context_head.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-from mmcv.runner import auto_fp16, force_fp32
-
-from mmdet.models.builder import HEADS
-from mmdet.models.utils import ResLayer, SimplifiedBasicBlock
-
-
-@HEADS.register_module()
-class GlobalContextHead(nn.Module):
- """Global context head used in `SCNet `_.
-
- Args:
-        num_convs (int, optional): number of convolutional layers in GlbCtxHead.
- Default: 4.
- in_channels (int, optional): number of input channels. Default: 256.
- conv_out_channels (int, optional): number of output channels before
- classification layer. Default: 256.
- num_classes (int, optional): number of classes. Default: 80.
- loss_weight (float, optional): global context loss weight. Default: 1.
- conv_cfg (dict, optional): config to init conv layer. Default: None.
- norm_cfg (dict, optional): config to init norm layer. Default: None.
- conv_to_res (bool, optional): if True, 2 convs will be grouped into
- 1 `SimplifiedBasicBlock` using a skip connection. Default: False.
- """
-
- def __init__(self,
- num_convs=4,
- in_channels=256,
- conv_out_channels=256,
- num_classes=80,
- loss_weight=1.0,
- conv_cfg=None,
- norm_cfg=None,
- conv_to_res=False):
- super(GlobalContextHead, self).__init__()
- self.num_convs = num_convs
- self.in_channels = in_channels
- self.conv_out_channels = conv_out_channels
- self.num_classes = num_classes
- self.loss_weight = loss_weight
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.conv_to_res = conv_to_res
- self.fp16_enabled = False
-
- if self.conv_to_res:
- num_res_blocks = num_convs // 2
- self.convs = ResLayer(
- SimplifiedBasicBlock,
- in_channels,
- self.conv_out_channels,
- num_res_blocks,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
- self.num_convs = num_res_blocks
- else:
- self.convs = nn.ModuleList()
- for i in range(self.num_convs):
- in_channels = self.in_channels if i == 0 else conv_out_channels
- self.convs.append(
- ConvModule(
- in_channels,
- conv_out_channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
-
- self.pool = nn.AdaptiveAvgPool2d(1)
- self.fc = nn.Linear(conv_out_channels, num_classes)
-
- self.criterion = nn.BCEWithLogitsLoss()
-
- def init_weights(self):
- """Init weights for the head."""
- nn.init.normal_(self.fc.weight, 0, 0.01)
- nn.init.constant_(self.fc.bias, 0)
-
- @auto_fp16()
- def forward(self, feats):
- """Forward function."""
- x = feats[-1]
- for i in range(self.num_convs):
- x = self.convs[i](x)
- x = self.pool(x)
-
- # multi-class prediction
- mc_pred = x.reshape(x.size(0), -1)
- mc_pred = self.fc(mc_pred)
-
- return mc_pred, x
-
- @force_fp32(apply_to=('pred', ))
- def loss(self, pred, labels):
- """Loss function."""
- labels = [lbl.unique() for lbl in labels]
- targets = pred.new_zeros(pred.size())
- for i, label in enumerate(labels):
- targets[i, label] = 1.0
- loss = self.loss_weight * self.criterion(pred, targets)
- return loss
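# A minimal usage sketch for GlobalContextHead above, assuming the class (and its mmcv/mmdet
# dependencies) is importable; the constructor arguments below are illustrative only.
import torch
# from mmdet.models.roi_heads.mask_heads import GlobalContextHead  # adjust to the actual path

# head = GlobalContextHead(num_convs=2, in_channels=8, conv_out_channels=8, num_classes=5)
# feats = [torch.randn(2, 8, 32, 32), torch.randn(2, 8, 16, 16)]  # only feats[-1] is used
# mc_pred, ctx_feat = head(feats)       # (2, 5) multi-label logits, (2, 8, 1, 1) pooled context
# labels = [torch.tensor([0, 2]), torch.tensor([1])]  # per-image ground-truth class labels
# loss = head.loss(mc_pred, labels)     # BCE-with-logits against a multi-hot target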
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/core/seg/sampler/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/core/seg/sampler/__init__.py
deleted file mode 100644
index 332b242c03d1c5e80d4577df442a9a037b1816e1..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/core/seg/sampler/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .base_pixel_sampler import BasePixelSampler
-from .ohem_pixel_sampler import OHEMPixelSampler
-
-__all__ = ['BasePixelSampler', 'OHEMPixelSampler']
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/necks/fpn.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/necks/fpn.py
deleted file mode 100644
index a53b2a69500f8c2edb835abc3ff0ccc2173d1fb1..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/necks/fpn.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, xavier_init
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class FPN(nn.Module):
- """Feature Pyramid Network.
-
-    This is an implementation of Feature Pyramid Networks for Object
- Detection (https://arxiv.org/abs/1612.03144)
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool | str): If bool, it decides whether to add conv
- layers on top of the original feature maps. Default to False.
- If True, its actual mode is specified by `extra_convs_on_inputs`.
- If str, it specifies the source feature map of the extra convs.
- Only the following options are allowed
-
- - 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- - 'on_lateral': Last feature map after lateral convs.
- - 'on_output': The last output feature map after fpn convs.
- extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs
- on the original feature from the backbone. If True,
- it is equivalent to `add_extra_convs='on_input'`. If False, it is
- equivalent to set `add_extra_convs='on_output'`. Default to True.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Default: False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (str): Config dict for activation layer in ConvModule.
- Default: None.
- upsample_cfg (dict): Config dict for interpolate layer.
- Default: `dict(mode='nearest')`
-
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = FPN(in_channels, 11, len(in_channels)).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- add_extra_convs=False,
- extra_convs_on_inputs=False,
- relu_before_extra_convs=False,
- no_norm_on_lateral=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None,
- upsample_cfg=dict(mode='nearest')):
- super(FPN, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.relu_before_extra_convs = relu_before_extra_convs
- self.no_norm_on_lateral = no_norm_on_lateral
- self.fp16_enabled = False
- self.upsample_cfg = upsample_cfg.copy()
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level < inputs, no extra level is allowed
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
- assert isinstance(add_extra_convs, (str, bool))
- if isinstance(add_extra_convs, str):
- # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
- assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
- elif add_extra_convs: # True
- if extra_convs_on_inputs:
- # For compatibility with previous release
- # TODO: deprecate `extra_convs_on_inputs`
- self.add_extra_convs = 'on_input'
- else:
- self.add_extra_convs = 'on_output'
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
- act_cfg=act_cfg,
- inplace=False)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_levels = num_outs - self.backbone_end_level + self.start_level
- if self.add_extra_convs and extra_levels >= 1:
- for i in range(extra_levels):
- if i == 0 and self.add_extra_convs == 'on_input':
- in_channels = self.in_channels[self.backbone_end_level - 1]
- else:
- in_channels = out_channels
- extra_fpn_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(extra_fpn_conv)
-
- # default init_weights for conv(msra) and norm in ConvModule
- def init_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- def forward(self, inputs):
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # In some cases, fixing `scale factor` (e.g. 2) is preferred, but
- # it cannot co-exist with `size` in `F.interpolate`.
- if 'scale_factor' in self.upsample_cfg:
- laterals[i - 1] += F.interpolate(laterals[i],
- **self.upsample_cfg)
- else:
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] += F.interpolate(
- laterals[i], size=prev_shape, **self.upsample_cfg)
-
- # build outputs
- # part 1: from original levels
- outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
- # part 2: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- extra_source = inputs[self.backbone_end_level - 1]
- elif self.add_extra_convs == 'on_lateral':
- extra_source = laterals[-1]
- elif self.add_extra_convs == 'on_output':
- extra_source = outs[-1]
- else:
- raise NotImplementedError
- outs.append(self.fpn_convs[used_backbone_levels](extra_source))
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
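# A minimal sketch of the extra-level behaviour described in the FPN docstring above,
# assuming the FPN class from the deleted file (and its ConvModule dependency) is in scope.
# With num_outs larger than the number of backbone levels and add_extra_convs=False, the
# extra outputs are produced by stride-2 max pooling of the topmost output.
import torch

in_channels = [8, 16, 32]
feats = [torch.randn(1, c, s, s) for c, s in zip(in_channels, [64, 32, 16])]
neck = FPN(in_channels, out_channels=8, num_outs=5).eval()
outs = neck(feats)
print([o.shape[-1] for o in outs])  # expected spatial sizes: [64, 32, 16, 8, 4]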
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/ops/encoding.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/ops/encoding.py
deleted file mode 100644
index 7eb3629a6426550b8e4c537ee1ff4341893e489e..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/ops/encoding.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-class Encoding(nn.Module):
- """Encoding Layer: a learnable residual encoder.
-
- Input is of shape (batch_size, channels, height, width).
- Output is of shape (batch_size, num_codes, channels).
-
- Args:
- channels: dimension of the features or feature channels
- num_codes: number of code words
- """
-
- def __init__(self, channels, num_codes):
- super(Encoding, self).__init__()
- # init codewords and smoothing factor
- self.channels, self.num_codes = channels, num_codes
- std = 1. / ((num_codes * channels)**0.5)
- # [num_codes, channels]
- self.codewords = nn.Parameter(
- torch.empty(num_codes, channels,
- dtype=torch.float).uniform_(-std, std),
- requires_grad=True)
- # [num_codes]
- self.scale = nn.Parameter(
- torch.empty(num_codes, dtype=torch.float).uniform_(-1, 0),
- requires_grad=True)
-
- @staticmethod
- def scaled_l2(x, codewords, scale):
- num_codes, channels = codewords.size()
- batch_size = x.size(0)
- reshaped_scale = scale.view((1, 1, num_codes))
- expanded_x = x.unsqueeze(2).expand(
- (batch_size, x.size(1), num_codes, channels))
- reshaped_codewords = codewords.view((1, 1, num_codes, channels))
-
- scaled_l2_norm = reshaped_scale * (
- expanded_x - reshaped_codewords).pow(2).sum(dim=3)
- return scaled_l2_norm
-
- @staticmethod
- def aggregate(assignment_weights, x, codewords):
- num_codes, channels = codewords.size()
- reshaped_codewords = codewords.view((1, 1, num_codes, channels))
- batch_size = x.size(0)
-
- expanded_x = x.unsqueeze(2).expand(
- (batch_size, x.size(1), num_codes, channels))
- encoded_feat = (assignment_weights.unsqueeze(3) *
- (expanded_x - reshaped_codewords)).sum(dim=1)
- return encoded_feat
-
- def forward(self, x):
- assert x.dim() == 4 and x.size(1) == self.channels
- # [batch_size, channels, height, width]
- batch_size = x.size(0)
- # [batch_size, height x width, channels]
- x = x.view(batch_size, self.channels, -1).transpose(1, 2).contiguous()
-        # assignment_weights: [batch_size, height x width, num_codes]
- assignment_weights = F.softmax(
- self.scaled_l2(x, self.codewords, self.scale), dim=2)
- # aggregate
- encoded_feat = self.aggregate(assignment_weights, x, self.codewords)
- return encoded_feat
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(Nx{self.channels}xHxW =>Nx{self.num_codes}' \
- f'x{self.channels})'
- return repr_str
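# A minimal shape-check sketch for the Encoding layer above, assuming the class is in scope.
# Input is (batch, channels, H, W); the layer aggregates soft-assigned residuals against
# num_codes learnable codewords into an output of shape (batch, num_codes, channels).
import torch

enc = Encoding(channels=16, num_codes=8)
x = torch.randn(2, 16, 4, 4)
out = enc(x)
print(out.shape)  # torch.Size([2, 8, 16])
print(enc)        # Encoding(Nx16xHxW =>Nx8x16)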
diff --git a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/emotion/inference.py b/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/emotion/inference.py
deleted file mode 100644
index 6def7f12b8d610147e2f3bb977b46c92eac91b42..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/emotion/inference.py
+++ /dev/null
@@ -1,177 +0,0 @@
-from data_gen.tts.emotion.params_data import *
-from data_gen.tts.emotion.model import EmotionEncoder
-from data_gen.tts.emotion.audio import preprocess_wav # We want to expose this function from here
-from matplotlib import cm
-from data_gen.tts.emotion import audio
-from pathlib import Path
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-_model = None # type: EmotionEncoder
-_device = None # type: torch.device
-
-
-def load_model(weights_fpath: Path, device=None):
- """
-    Loads the model in memory. If this function is not explicitly called, it will be run on the
- first call to embed_frames() with the default weights file.
-
- :param weights_fpath: the path to saved model weights.
- :param device: either a torch device or the name of a torch device (e.g. "cpu", "cuda"). The
- model will be loaded and will run on this device. Outputs will however always be on the cpu.
-    If None, will default to your GPU if it's available, otherwise your CPU.
- """
- # TODO: I think the slow loading of the encoder might have something to do with the device it
- # was saved on. Worth investigating.
- global _model, _device
- if device is None:
- _device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- elif isinstance(device, str):
- _device = torch.device(device)
- _model = EmotionEncoder(_device, torch.device("cpu"))
- checkpoint = torch.load(weights_fpath, map_location="cpu")
- _model.load_state_dict(checkpoint["model_state"])
- _model.eval()
- print("Loaded encoder trained to step %d" % (checkpoint["step"]))
-
-
-def is_loaded():
- return _model is not None
-
-
-def embed_frames_batch(frames_batch):
- """
- Computes embeddings for a batch of mel spectrogram.
-
-    :param frames_batch: a batch of mel spectrograms as a numpy array of float32 of shape
- (batch_size, n_frames, n_channels)
- :return: the embeddings as a numpy array of float32 of shape (batch_size, model_embedding_size)
- """
- if _model is None:
- raise Exception("Model was not loaded. Call load_model() before inference.")
-
- frames = torch.from_numpy(frames_batch).to(_device)
- embed = _model.inference(frames).detach().cpu().numpy()
- return embed
-
-
-def compute_partial_slices(n_samples, partial_utterance_n_frames=partials_n_frames,
- min_pad_coverage=0.75, overlap=0.5):
- """
- Computes where to split an utterance waveform and its corresponding mel spectrogram to obtain
- partial utterances of each. Both the waveform and the mel
- spectrogram slices are returned, so as to make each partial utterance waveform correspond to
- its spectrogram. This function assumes that the mel spectrogram parameters used are those
- defined in params_data.py.
-
-    The returned ranges may index further than the length of the waveform. It is
- recommended that you pad the waveform with zeros up to wave_slices[-1].stop.
-
- :param n_samples: the number of samples in the waveform
- :param partial_utterance_n_frames: the number of mel spectrogram frames in each partial
- utterance
- :param min_pad_coverage: when reaching the last partial utterance, it may or may not have
-    enough frames. If at least <min_pad_coverage> of <partial_utterance_n_frames> are present,
- then the last partial utterance will be considered, as if we padded the audio. Otherwise,
- it will be discarded, as if we trimmed the audio. If there aren't enough frames for 1 partial
- utterance, this parameter is ignored so that the function always returns at least 1 slice.
- :param overlap: by how much the partial utterance should overlap. If set to 0, the partial
- utterances are entirely disjoint.
- :return: the waveform slices and mel spectrogram slices as lists of array slices. Index
- respectively the waveform and the mel spectrogram with these slices to obtain the partial
- utterances.
- """
- assert 0 <= overlap < 1
- assert 0 < min_pad_coverage <= 1
-
- samples_per_frame = int((sampling_rate * mel_window_step / 1000))
- n_frames = int(np.ceil((n_samples + 1) / samples_per_frame))
- frame_step = max(int(np.round(partial_utterance_n_frames * (1 - overlap))), 1)
-
- # Compute the slices
- wav_slices, mel_slices = [], []
- steps = max(1, n_frames - partial_utterance_n_frames + frame_step + 1)
- for i in range(0, steps, frame_step):
- mel_range = np.array([i, i + partial_utterance_n_frames])
- wav_range = mel_range * samples_per_frame
- mel_slices.append(slice(*mel_range))
- wav_slices.append(slice(*wav_range))
-
- # Evaluate whether extra padding is warranted or not
- last_wav_range = wav_slices[-1]
- coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start)
- if coverage < min_pad_coverage and len(mel_slices) > 1:
- mel_slices = mel_slices[:-1]
- wav_slices = wav_slices[:-1]
-
- return wav_slices, mel_slices
-
-
-def embed_utterance(wav, using_partials=True, return_partials=False, **kwargs):
- """
- Computes an embedding for a single utterance.
-
- # TODO: handle multiple wavs to benefit from batching on GPU
- :param wav: a preprocessed (see audio.py) utterance waveform as a numpy array of float32
- :param using_partials: if True, then the utterance is split in partial utterances of
-    <partial_utterance_n_frames> frames and the utterance embedding is computed from their
- normalized average. If False, the utterance is instead computed from feeding the entire
-    spectrogram to the network.
- :param return_partials: if True, the partial embeddings will also be returned along with the
- wav slices that correspond to the partial embeddings.
- :param kwargs: additional arguments to compute_partial_splits()
- :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). If
-    <return_partials> is True, the partial utterances as a numpy array of float32 of shape
-    (n_partials, model_embedding_size) and the wav partials as a list of slices will also be
-    returned. If <using_partials> is simultaneously set to False, both these values will be None
- instead.
- """
- # Process the entire utterance if not using partials
- if not using_partials:
- frames = audio.wav_to_mel_spectrogram(wav)
- embed = embed_frames_batch(frames[None, ...])[0]
- if return_partials:
- return embed, None, None
- return embed
-
- # Compute where to split the utterance into partials and pad if necessary
- wave_slices, mel_slices = compute_partial_slices(len(wav), **kwargs)
- max_wave_length = wave_slices[-1].stop
- if max_wave_length >= len(wav):
- wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant")
-
- # Split the utterance into partials
- frames = audio.wav_to_mel_spectrogram(wav)
- frames_batch = np.array([frames[s] for s in mel_slices])
- partial_embeds = embed_frames_batch(frames_batch)
-
- # Compute the utterance embedding from the partial embeddings
- raw_embed = np.mean(partial_embeds, axis=0)
- embed = raw_embed / np.linalg.norm(raw_embed, 2)
-
- if return_partials:
- return embed, partial_embeds, wave_slices
- return embed
-
-
-def embed_speaker(wavs, **kwargs):
-    raise NotImplementedError()
-
-
-def plot_embedding_as_heatmap(embed, ax=None, title="", shape=None, color_range=(0, 0.30)):
- if ax is None:
- ax = plt.gca()
-
- if shape is None:
- height = int(np.sqrt(len(embed)))
- shape = (height, -1)
- embed = embed.reshape(shape)
-
- cmap = cm.get_cmap()
- mappable = ax.imshow(embed, cmap=cmap)
- cbar = plt.colorbar(mappable, ax=ax, fraction=0.046, pad=0.04)
- cbar.set_clim(*color_range)
-
- ax.set_xticks([]), ax.set_yticks([])
- ax.set_title(title)
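# A minimal usage sketch for the embedding API above, assuming this module is importable and
# a trained checkpoint exists; the file names are placeholders, and the illustrative numbers
# assume sampling_rate=16000, mel_window_step=10 and partials_n_frames=160 in params_data.
# wav = preprocess_wav("sample.wav")
# load_model(Path("emotion_encoder.ckpt"))
# embed = embed_utterance(wav, using_partials=True)
# print(embed.shape)                       # (model_embedding_size,), L2-normalised over partials
#
# Under those parameters, 2 s of audio (~200 mel frames) with overlap=0.5 yields partial
# windows starting every 80 frames:
# wav_slices, mel_slices = compute_partial_slices(2 * 16000, overlap=0.5)
# print(mel_slices)                        # [slice(0, 160, None), slice(80, 240, None)]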
diff --git a/spaces/Salesforce/BLIP2/app.py b/spaces/Salesforce/BLIP2/app.py
deleted file mode 100644
index a24724064ff0b7ae56436c077b8ca901b3e0291f..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/BLIP2/app.py
+++ /dev/null
@@ -1,285 +0,0 @@
-from io import BytesIO
-
-import string
-import gradio as gr
-import requests
-from utils import Endpoint, get_token
-
-
-def encode_image(image):
- buffered = BytesIO()
- image.save(buffered, format="JPEG")
- buffered.seek(0)
-
- return buffered
-
-
-def query_chat_api(
- image, prompt, decoding_method, temperature, len_penalty, repetition_penalty
-):
-
- url = endpoint.url
- url = url + "/api/generate"
-
- headers = {
- "User-Agent": "BLIP-2 HuggingFace Space",
- "Auth-Token": get_token(),
- }
-
- data = {
- "prompt": prompt,
- "use_nucleus_sampling": decoding_method == "Nucleus sampling",
- "temperature": temperature,
- "length_penalty": len_penalty,
- "repetition_penalty": repetition_penalty,
- }
-
- image = encode_image(image)
- files = {"image": image}
-
- response = requests.post(url, data=data, files=files, headers=headers)
-
- if response.status_code == 200:
- return response.json()
- else:
- return "Error: " + response.text
-
-
-def query_caption_api(
- image, decoding_method, temperature, len_penalty, repetition_penalty
-):
-
- url = endpoint.url
- url = url + "/api/caption"
-
- headers = {
- "User-Agent": "BLIP-2 HuggingFace Space",
- "Auth-Token": get_token(),
- }
-
- data = {
- "use_nucleus_sampling": decoding_method == "Nucleus sampling",
- "temperature": temperature,
- "length_penalty": len_penalty,
- "repetition_penalty": repetition_penalty,
- }
-
- image = encode_image(image)
- files = {"image": image}
-
- response = requests.post(url, data=data, files=files, headers=headers)
-
- if response.status_code == 200:
- return response.json()
- else:
- return "Error: " + response.text
-
-
-def postprocess_output(output):
-    # if the last character is not a punctuation mark, add a full stop
- if not output[0][-1] in string.punctuation:
- output[0] += "."
-
- return output
-
-
-def inference_chat(
- image,
- text_input,
- decoding_method,
- temperature,
- length_penalty,
- repetition_penalty,
- history=[],
-):
- text_input = text_input
- history.append(text_input)
-
- prompt = " ".join(history)
-
- output = query_chat_api(
- image, prompt, decoding_method, temperature, length_penalty, repetition_penalty
- )
- output = postprocess_output(output)
- history += output
-
- chat = [
- (history[i], history[i + 1]) for i in range(0, len(history) - 1, 2)
-    ]  # convert to a list of (user, bot) tuples
-
- return {chatbot: chat, state: history}
-
-
-def inference_caption(
- image,
- decoding_method,
- temperature,
- length_penalty,
- repetition_penalty,
-):
- output = query_caption_api(
- image, decoding_method, temperature, length_penalty, repetition_penalty
- )
-
- return output[0]
-
-
-title = """
BLIP-2
"""
-description = """Gradio demo for BLIP-2, image-to-text generation from Salesforce Research. To use it, simply upload your image, or click one of the examples to load them.
- Disclaimer: This is a research prototype and is not intended for production use. No data including but not restricted to text and images is collected."""
-article = """Paper: BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
- Code: BLIP2 is now integrated into GitHub repo: LAVIS: a One-stop Library for Language and Vision
- 🤗 `transformers` integration: You can now use `transformers` to use our BLIP-2 models! Check out the official docs
-
Project Page: BLIP2 on LAVIS
- Description: Captioning results from BLIP2_OPT_6.7B. Chat results from BLIP2_FlanT5xxl.
-
-
We have now suspended the official BLIP2 demo from March 23. 2023.
-
-Demo for Lomo Diffusion
-Stable Diffusion model by Wavymulder. Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU ⚡"}.
-
-Please use the prompt template below to achieve the desired result:
-
-Prompt:
-lomo style photograph of * subject * , (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, realistic, photo-realistic, full length frame, High detail RAW color art, piercing, diffused soft lighting, shallow depth of field, sharp focus, hyperrealism, cinematic lighting
-
-Example: lomo style photograph of Heath Ledger as Batman
-
-Important note: Lomo Diffusion works best at a 1:1 aspect ratio; it is also successful with tall aspect ratios.
-
-Negative Prompt:
-blender illustration hdr, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature
-
-Have Fun & Enjoy ⚡ //THAFX
-
-In this first image stemming is shown in the results. Even though the query is "event", the results contain instances with "events."
-
-The second image exemplifies stemming on a query. The query is for "events", but the results show resources containing "event" as well.
-
-### Urn Matching
-
-Previously queries were not properly parsing out and tokenizing the expected portions of Urn types. Changes have been made on the index mapping and query side to support various partial and full Urn matching.
-
-### Synonyms
-
-Synonyms are a static list of equivalent terms that are baked into the index at index creation time. This allows for efficient indexing of related terms. It is possible to add these on the query side as well to allow for dynamic synonyms, but this is unsupported at this time and has performance implications.
-
-### Autocomplete improvements
-
-Improvements were made to autocomplete handling around special characters like underscores and spaces.
-
I have been helping many people on LinkedIn find their first job in the data science field. I would like to make this experience more enriching and formal.

Interview notes

How did you hear about SM?
- One of my friends, who is not actively mentoring but knew about it.

Career journey
- Master's in mechanical engineering in India.
- Joined Rolls Royce, which gave him the opportunity to do some data science; his math was already pretty strong.
- Recently moved to Canada (without a job) and had the confidence to do it; it didn't take long.
- Tiger Analytics.

Mentorship experience?
- Lots of people approach me on LinkedIn (university juniors, and colleagues who work in different areas): they don't know where to start, what the right resources are, how to optimize their resume / LinkedIn profile, or how to prepare before and during interviews.
- Helped a lot of students choose their careers/courses after high school.

What are beginners lacking?
- Not focusing enough on statistics and math fundamentals.
- Learning software development (e.g. git and the command line).

And how can you add value as a mentor?
- Help them gain confidence / get a job.
- Show them how to approach people.

Questions about SM?
- What was your journey like?
- What are the next steps?
\ No newline at end of file
diff --git a/spaces/avivdm1/AutoGPT/autogpt/processing/text.py b/spaces/avivdm1/AutoGPT/autogpt/processing/text.py
deleted file mode 100644
index 52add81401775c1b111512d8149f86a175fd9acb..0000000000000000000000000000000000000000
--- a/spaces/avivdm1/AutoGPT/autogpt/processing/text.py
+++ /dev/null
@@ -1,132 +0,0 @@
-"""Text processing functions"""
-from typing import Dict, Generator, Optional
-
-from selenium.webdriver.remote.webdriver import WebDriver
-
-from autogpt.config import Config
-from autogpt.llm_utils import create_chat_completion
-from autogpt.memory import get_memory
-
-CFG = Config()
-MEMORY = get_memory(CFG)
-
-
-def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]:
- """Split text into chunks of a maximum length
-
- Args:
- text (str): The text to split
- max_length (int, optional): The maximum length of each chunk. Defaults to 8192.
-
- Yields:
- str: The next chunk of text
-
- Raises:
- ValueError: If the text is longer than the maximum length
- """
- paragraphs = text.split("\n")
- current_length = 0
- current_chunk = []
-
- for paragraph in paragraphs:
- if current_length + len(paragraph) + 1 <= max_length:
- current_chunk.append(paragraph)
- current_length += len(paragraph) + 1
- else:
- yield "\n".join(current_chunk)
- current_chunk = [paragraph]
- current_length = len(paragraph) + 1
-
- if current_chunk:
- yield "\n".join(current_chunk)
-
-
-def summarize_text(
- url: str, text: str, question: str, driver: Optional[WebDriver] = None
-) -> str:
- """Summarize text using the OpenAI API
-
- Args:
- url (str): The url of the text
- text (str): The text to summarize
- question (str): The question to ask the model
- driver (WebDriver): The webdriver to use to scroll the page
-
- Returns:
- str: The summary of the text
- """
- if not text:
- return "Error: No text to summarize"
-
- text_length = len(text)
- print(f"Text length: {text_length} characters")
-
- summaries = []
- chunks = list(split_text(text))
- scroll_ratio = 1 / len(chunks)
-
- for i, chunk in enumerate(chunks):
- if driver:
- scroll_to_percentage(driver, scroll_ratio * i)
- print(f"Adding chunk {i + 1} / {len(chunks)} to memory")
-
- memory_to_add = f"Source: {url}\n" f"Raw content part#{i + 1}: {chunk}"
-
- MEMORY.add(memory_to_add)
-
- print(f"Summarizing chunk {i + 1} / {len(chunks)}")
- messages = [create_message(chunk, question)]
-
- summary = create_chat_completion(
- model=CFG.fast_llm_model,
- messages=messages,
- )
- summaries.append(summary)
- print(f"Added chunk {i + 1} summary to memory")
-
- memory_to_add = f"Source: {url}\n" f"Content summary part#{i + 1}: {summary}"
-
- MEMORY.add(memory_to_add)
-
- print(f"Summarized {len(chunks)} chunks.")
-
- combined_summary = "\n".join(summaries)
- messages = [create_message(combined_summary, question)]
-
- return create_chat_completion(
- model=CFG.fast_llm_model,
- messages=messages,
- )
-
-
-def scroll_to_percentage(driver: WebDriver, ratio: float) -> None:
- """Scroll to a percentage of the page
-
- Args:
- driver (WebDriver): The webdriver to use
- ratio (float): The percentage to scroll to
-
- Raises:
- ValueError: If the ratio is not between 0 and 1
- """
- if ratio < 0 or ratio > 1:
- raise ValueError("Percentage should be between 0 and 1")
- driver.execute_script(f"window.scrollTo(0, document.body.scrollHeight * {ratio});")
-
-
-def create_message(chunk: str, question: str) -> Dict[str, str]:
- """Create a message for the chat completion
-
- Args:
- chunk (str): The chunk of text to summarize
- question (str): The question to answer
-
- Returns:
- Dict[str, str]: The message to send to the chat completion
- """
- return {
- "role": "user",
- "content": f'"""{chunk}""" Using the above text, answer the following'
- f' question: "{question}" -- if the question cannot be answered using the text,'
- " summarize the text.",
- }
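# A minimal sketch of the chunking behaviour of split_text() above, assuming the function is
# available in scope (it is pure Python apart from the module-level autogpt imports).
paragraphs = [("alpha " * 10).strip(), ("beta " * 10).strip(), ("gamma " * 10).strip()]
text = "\n".join(paragraphs)
chunks = list(split_text(text, max_length=80))
print(len(chunks))                         # 3: no two adjacent paragraphs fit within 80 chars
print(all(len(c) <= 80 for c in chunks))   # True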
diff --git a/spaces/ayaanzaveri/whisper-webui/src/whisperContainer.py b/spaces/ayaanzaveri/whisper-webui/src/whisperContainer.py
deleted file mode 100644
index a502678f9ae1a4a2e5daaa105f5fcf0797980a56..0000000000000000000000000000000000000000
--- a/spaces/ayaanzaveri/whisper-webui/src/whisperContainer.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# External programs
-import os
-import sys
-from typing import List
-
-import whisper
-from whisper import Whisper
-
-from src.config import ModelConfig
-from src.hooks.whisperProgressHook import ProgressListener, create_progress_listener_handle
-
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-
-class WhisperContainer:
- def __init__(self, model_name: str, device: str = None, download_root: str = None,
- cache: ModelCache = None, models: List[ModelConfig] = []):
- self.model_name = model_name
- self.device = device
- self.download_root = download_root
- self.cache = cache
-
- # Will be created on demand
- self.model = None
-
- # List of known models
- self.models = models
-
- def get_model(self):
- if self.model is None:
-
- if (self.cache is None):
- self.model = self._create_model()
- else:
- model_key = "WhisperContainer." + self.model_name + ":" + (self.device if self.device else '')
- self.model = self.cache.get(model_key, self._create_model)
- return self.model
-
- def ensure_downloaded(self):
- """
- Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before
- passing the container to a subprocess.
- """
- # Warning: Using private API here
- try:
- root_dir = self.download_root
- model_config = self.get_model_config()
-
- if root_dir is None:
- root_dir = os.path.join(os.path.expanduser("~"), ".cache", "whisper")
-
- if self.model_name in whisper._MODELS:
- whisper._download(whisper._MODELS[self.model_name], root_dir, False)
- else:
- # If the model is not in the official list, see if it needs to be downloaded
- model_config.download_url(root_dir)
- return True
-
- except Exception as e:
- # Given that the API is private, it could change at any time. We don't want to crash the program
- print("Error pre-downloading model: " + str(e))
- return False
-
- def get_model_config(self) -> ModelConfig:
- """
- Get the model configuration for the model.
- """
- for model in self.models:
- if model.name == self.model_name:
- return model
- return None
-
- def _create_model(self):
- print("Loading whisper model " + self.model_name)
-
- model_config = self.get_model_config()
- # Note that the model will not be downloaded in the case of an official Whisper model
- model_path = model_config.download_url(self.download_root)
-
- return whisper.load_model(model_path, device=self.device, download_root=self.download_root)
-
- def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, **decodeOptions: dict):
- """
- Create a WhisperCallback object that can be used to transcript audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- initial_prompt: str
- The initial prompt to use for the transcription.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- return WhisperCallback(self, language=language, task=task, initial_prompt=initial_prompt, **decodeOptions)
-
- # This is required for multiprocessing
- def __getstate__(self):
- return { "model_name": self.model_name, "device": self.device, "download_root": self.download_root, "models": self.models }
-
- def __setstate__(self, state):
- self.model_name = state["model_name"]
- self.device = state["device"]
- self.download_root = state["download_root"]
- self.models = state["models"]
- self.model = None
- # Depickled objects must use the global cache
- self.cache = GLOBAL_MODEL_CACHE
-
-
-class WhisperCallback:
- def __init__(self, model_container: WhisperContainer, language: str = None, task: str = None, initial_prompt: str = None, **decodeOptions: dict):
- self.model_container = model_container
- self.language = language
- self.task = task
- self.initial_prompt = initial_prompt
- self.decodeOptions = decodeOptions
-
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- """
-        Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
-        segment_index: int
-            The index of the audio segment being transcribed.
- prompt: str
- The prompt to use for the transcription.
- detected_language: str
- The detected language of the audio file.
-
- Returns
- -------
- The result of the Whisper call.
- """
- model = self.model_container.get_model()
-
- if progress_listener is not None:
- with create_progress_listener_handle(progress_listener):
- return self._transcribe(model, audio, segment_index, prompt, detected_language)
- else:
- return self._transcribe(model, audio, segment_index, prompt, detected_language)
-
- def _transcribe(self, model: Whisper, audio, segment_index: int, prompt: str, detected_language: str):
- return model.transcribe(audio, \
- language=self.language if self.language else detected_language, task=self.task, \
- initial_prompt=self._concat_prompt(self.initial_prompt, prompt) if segment_index == 0 else prompt, \
- **self.decodeOptions
- )
-
- def _concat_prompt(self, prompt1, prompt2):
- if (prompt1 is None):
- return prompt2
- elif (prompt2 is None):
- return prompt1
- else:
- return prompt1 + " " + prompt2
\ No newline at end of file
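# A minimal usage sketch for WhisperContainer/WhisperCallback above, assuming the openai
# whisper package is installed and a matching ModelConfig entry exists for the model name;
# the audio path and config values are placeholders.
# container = WhisperContainer("base", device="cpu", models=[ModelConfig(...)])
# container.ensure_downloaded()                       # optional: pre-download before forking
# callback = container.create_callback(language="en", task="transcribe")
# result = callback.invoke("audio.wav", segment_index=0, prompt=None, detected_language="en")
# print(result["text"])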
diff --git a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/feature_fusion.py b/spaces/badayvedat/AudioSep/models/CLAP/open_clip/feature_fusion.py
deleted file mode 100644
index dbe4e170e05894c12ebdc36ba1dc1de65e441b89..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/feature_fusion.py
+++ /dev/null
@@ -1,192 +0,0 @@
-"""
-Feature Fusion for Variable-Length Data Processing
-AFF/iAFF is adapted and modified from https://github.com/YimianDai/open-aff/blob/master/aff_pytorch/aff_net/fusion.py
-According to the paper: Yimian Dai et al, Attentional Feature Fusion, IEEE Winter Conference on Applications of Computer Vision, WACV 2021
-"""
-
-import torch
-import torch.nn as nn
-
-
-class DAF(nn.Module):
- """
-    Direct addition fusion (DirectAddFuse)
- """
-
- def __init__(self):
- super(DAF, self).__init__()
-
- def forward(self, x, residual):
- return x + residual
-
-
-class iAFF(nn.Module):
- """
-    Multi-feature fusion: iAFF (iterative attentional feature fusion)
- """
-
- def __init__(self, channels=64, r=4, type="2D"):
- super(iAFF, self).__init__()
- inter_channels = int(channels // r)
-
- if type == "1D":
-            # Local attention
- self.local_att = nn.Sequential(
- nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(channels),
- )
-
-            # Global attention
- self.global_att = nn.Sequential(
- nn.AdaptiveAvgPool1d(1),
- nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(channels),
- )
-
-            # Second local attention
- self.local_att2 = nn.Sequential(
- nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(channels),
- )
-            # Second global attention
- self.global_att2 = nn.Sequential(
- nn.AdaptiveAvgPool1d(1),
- nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(channels),
- )
- elif type == "2D":
-            # Local attention
- self.local_att = nn.Sequential(
- nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(channels),
- )
-
-            # Global attention
- self.global_att = nn.Sequential(
- nn.AdaptiveAvgPool2d(1),
- nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(channels),
- )
-
-            # Second local attention
- self.local_att2 = nn.Sequential(
- nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(channels),
- )
-            # Second global attention
- self.global_att2 = nn.Sequential(
- nn.AdaptiveAvgPool2d(1),
- nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(channels),
- )
- else:
- raise f"the type is not supported"
-
- self.sigmoid = nn.Sigmoid()
-
- def forward(self, x, residual):
- flag = False
- xa = x + residual
- if xa.size(0) == 1:
- xa = torch.cat([xa, xa], dim=0)
- flag = True
- xl = self.local_att(xa)
- xg = self.global_att(xa)
- xlg = xl + xg
- wei = self.sigmoid(xlg)
- xi = x * wei + residual * (1 - wei)
-
- xl2 = self.local_att2(xi)
- xg2 = self.global_att(xi)
- xlg2 = xl2 + xg2
- wei2 = self.sigmoid(xlg2)
- xo = x * wei2 + residual * (1 - wei2)
- if flag:
- xo = xo[0].unsqueeze(0)
- return xo
-
-
-class AFF(nn.Module):
- """
-    Multi-feature fusion: AFF (attentional feature fusion)
- """
-
- def __init__(self, channels=64, r=4, type="2D"):
- super(AFF, self).__init__()
- inter_channels = int(channels // r)
-
- if type == "1D":
- self.local_att = nn.Sequential(
- nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(channels),
- )
- self.global_att = nn.Sequential(
- nn.AdaptiveAvgPool1d(1),
- nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm1d(channels),
- )
- elif type == "2D":
- self.local_att = nn.Sequential(
- nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(channels),
- )
- self.global_att = nn.Sequential(
- nn.AdaptiveAvgPool2d(1),
- nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(inter_channels),
- nn.ReLU(inplace=True),
- nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
- nn.BatchNorm2d(channels),
- )
- else:
- raise f"the type is not supported."
-
- self.sigmoid = nn.Sigmoid()
-
- def forward(self, x, residual):
- flag = False
- xa = x + residual
- if xa.size(0) == 1:
- xa = torch.cat([xa, xa], dim=0)
- flag = True
- xl = self.local_att(xa)
- xg = self.global_att(xa)
- xlg = xl + xg
- wei = self.sigmoid(xlg)
- xo = 2 * x * wei + 2 * residual * (1 - wei)
- if flag:
- xo = xo[0].unsqueeze(0)
- return xo
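# A minimal shape-check sketch for the fusion modules above, assuming the classes are in
# scope. AFF/iAFF fuse a feature map with a residual of the same shape; note the batch-size-1
# workaround in forward() that duplicates the sample before the BatchNorm layers.
import torch

aff = AFF(channels=16, r=4, type="2D")
x = torch.randn(2, 16, 8, 8)
residual = torch.randn(2, 16, 8, 8)
print(aff(x, residual).shape)   # torch.Size([2, 16, 8, 8])

daf = DAF()
print(daf(x, residual).shape)   # plain element-wise addition, same shape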
diff --git a/spaces/badayvedat/AudioSep/models/CLAP/training/train.py b/spaces/badayvedat/AudioSep/models/CLAP/training/train.py
deleted file mode 100644
index f5759c4679d2ee9c0748444adf66b8453cf09728..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/AudioSep/models/CLAP/training/train.py
+++ /dev/null
@@ -1,838 +0,0 @@
-import json
-import logging
-import math
-import os
-import time
-from contextlib import suppress
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-try:
- import wandb
-except ImportError:
- wandb = None
-
-from open_clip import ClipLoss, gather_features
-from .distributed import is_master
-from .zero_shot import zero_shot_eval
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
-
-def unwrap_model(model):
- if hasattr(model, "module"):
- return model.module
- else:
- return model
-
-
-def train_one_epoch(
- model, data, epoch, optimizer, scaler, scheduler, args, tb_writer=None
-):
- device = torch.device(args.device)
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- model.train()
- loss = ClipLoss(
- local_loss=args.local_loss,
- gather_with_grad=args.gather_with_grad,
- cache_labels=True,
- rank=args.rank,
- world_size=args.world_size,
- use_horovod=args.horovod,
- mlp_loss=args.clap_mlploss,
- weight_loss_kappa=args.kappa,
- )
-
- dataloader, sampler = data["train"].dataloader, data["train"].sampler
- if args.distributed and sampler is not None:
- sampler.set_epoch(epoch)
- num_batches_per_epoch = dataloader.num_batches
- sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10))
-
- # for toy dataset
- if args.dataset_type == "toy":
- dataloader.dataset.generate_queue()
-
- loss_m = AverageMeter()
- batch_time_m = AverageMeter()
- data_time_m = AverageMeter()
- end = time.time()
-
- for i, batch in enumerate(dataloader):
- # logging.info(f"batch {i} of {num_batches_per_epoch}")
- step = num_batches_per_epoch * epoch + i
- if isinstance(scheduler, dict):
- for s in scheduler.values():
- s(step)
- else:
- scheduler(step)
-        audios = batch  # contains mel_spec, waveform, and longer list
- texts = batch["text"]
- # audios = audios.to(device=device, non_blocking=True)
- # texts = texts.to(device=device, non_blocking=True)
-
- data_time_m.update(time.time() - end)
- if isinstance(optimizer, dict):
- for o_ in optimizer.values():
- o_.zero_grad()
- else:
- optimizer.zero_grad()
-
- with autocast():
- (
- audio_features,
- text_features,
- audio_features_mlp,
- text_features_mlp,
- logit_scale_a,
- logit_scale_t,
- ) = model(audios, texts, device)
-
- if args.clap_mlploss:
- total_loss = loss(
- audio_features=audio_features,
- text_features=text_features,
- logit_scale_a=logit_scale_a,
- logit_scale_t=logit_scale_t,
- audio_features_mlp=audio_features_mlp,
- text_features_mlp=text_features_mlp,
- )
- else:
- total_loss = loss(
- audio_features=audio_features,
- text_features=text_features,
- logit_scale_a=logit_scale_a,
- )
- if isinstance(optimizer, dict):
- if scaler is not None:
- scaler.scale(total_loss).backward()
- for o_ in optimizer.values():
- if args.horovod:
- o_.synchronize()
- scaler.unscale_(o_)
- with o_.skip_synchronize():
- scaler.step(o_)
- else:
- scaler.step(o_)
- scaler.update()
- else:
- total_loss.backward()
- for o_ in optimizer.values():
- o_.step()
- else:
- if scaler is not None:
- scaler.scale(total_loss).backward()
- if args.horovod:
- optimizer.synchronize()
- scaler.unscale_(optimizer)
- with optimizer.skip_synchronize():
- scaler.step(optimizer)
- else:
- scaler.step(optimizer)
- scaler.update()
- else:
- total_loss.backward()
- optimizer.step()
-
- # Note: we clamp to 4.6052 = ln(100), as in the original paper.
- with torch.no_grad():
- unwrap_model(model).logit_scale_a.clamp_(0, math.log(100))
- if args.clap_mlploss:
- unwrap_model(model).logit_scale_t.clamp_(0, math.log(100))
-
- batch_time_m.update(time.time() - end)
- end = time.time()
- batch_count = i + 1
- if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch):
- if isinstance(audios, dict):
- batch_size = len(audios["waveform"])
- else:
- batch_size = len(audios)
- num_samples = batch_count * batch_size * args.world_size
- samples_per_epoch = dataloader.num_samples
- percent_complete = 100.0 * batch_count / num_batches_per_epoch
-
- # NOTE loss is coarsely sampled, just master node and per log update
- loss_m.update(total_loss.item(), batch_size)
- logit_scale_scalar_a = logit_scale_a.item()
- logit_scale_scalar_t = logit_scale_t.item()
- if isinstance(optimizer, dict):
- if args.clap_mlploss:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]} "
- f"Logit Scale Audio: {logit_scale_scalar_a:.3f}"
- f"Logit Scale Text: {logit_scale_scalar_t:.3f}"
- )
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "scale_audio": logit_scale_scalar_a,
- "scale_text": logit_scale_scalar_t,
- "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()],
- }
- else:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]} "
- f"Logit Scale Audio: {logit_scale_scalar_a:.3f}"
- )
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "scale_audio": logit_scale_scalar_a,
- "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()],
- }
-
- else:
- if args.clap_mlploss:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {optimizer.param_groups[0]['lr']:5f} "
- f"Logit Scale Audio: {logit_scale_scalar_a:.3f}"
- f"Logit Scale Text: {logit_scale_scalar_t:.3f}"
- )
-
- # Save train loss / etc. Using non avg meter values as loggers have their own smoothing
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "scale_audio": logit_scale_scalar_a,
- "scale_text": logit_scale_scalar_t,
- "lr": optimizer.param_groups[0]["lr"],
- }
- else:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {optimizer.param_groups[0]['lr']:5f} "
- f"Logit Scale Audio: {logit_scale_scalar_a:.3f}"
- )
-
- # Save train loss / etc. Using non avg meter values as loggers have their own smoothing
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "scale_audio": logit_scale_scalar_a,
- "lr": optimizer.param_groups[0]["lr"],
- }
- for name, val in log_data.items():
- name = "train/" + name
- if tb_writer is not None:
- tb_writer.add_scalar(name, val, step)
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- wandb.log({name: val, "step": step})
-
- # resetting batch / data time meters per log window
- batch_time_m.reset()
- data_time_m.reset()
- # end for
-
-
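# A minimal sketch of the AverageMeter bookkeeping used throughout train_one_epoch above,
# assuming the class is in scope. update(val, n) accumulates a running sum and count, so
# avg is the sample-weighted mean across batches of different sizes.
meter = AverageMeter()
meter.update(2.0, n=4)        # e.g. a batch of 4 samples with mean loss 2.0
meter.update(1.0, n=1)        # a smaller batch with mean loss 1.0
print(meter.val, meter.avg)   # 1.0 1.8  -> (2.0*4 + 1.0*1) / 5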
-def evaluate(model, data, epoch, args, tb_writer=None):
- metrics = {}
- if not args.parallel_eval:
- if not is_master(args):
- return metrics
- device = torch.device(args.device)
- model.eval()
-
- # CHANGE
- # zero_shot_metrics = zero_shot_eval(model, data, epoch, args)
- # metrics.update(zero_shot_metrics)
- if is_master(args):
- print("Evaluating...")
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- if args.val_dataset_names == ["Clotho", "audiocaps"]:
- # if only clotho and audiocaps are used, then we will use a different evaluation function.
-        # This is because in the Clotho and audiocaps valid and test sets, there are 5 texts for 1 audio.
- if args.parallel_eval:
- # (yusong): just a hack here. Don't use parallel eval when evaluating only clotho and audiocaps.
- raise NotImplementedError(
- "Parallel evaluation not supported for eval only Clotho and audiocaps."
- )
- val_metrics_per_dataset = evaluate_clotho_audiocaps(
- model, data, epoch, args, autocast, device, tb_writer
- )
- for m in val_metrics_per_dataset.values():
- metrics.update(m)
- if "epoch" not in metrics.keys():
- metrics.update({"epoch": epoch})
- metrics = select_top_metric_clotho_audiocaps(
- metrics, val_metrics_per_dataset, args
- )
- elif "val" in data and (
- args.val_frequency
- and ((epoch % args.val_frequency) == 0 or epoch == args.epochs)
- ):
- dataloader = data["val"].dataloader
- num_samples = 0
- samples_per_val = dataloader.num_samples
-
- # FIXME this does not scale past small eval datasets
- # all_audio_features @ all_text_features will blow up memory and compute very quickly
- eval_info = {}
- if args.clap_mlploss:
- eval_info["all"] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- "all_audio_features_mlp": [],
- "all_text_features_mlp": [],
- } # cumulative_loss = 0.0
- else:
- eval_info["all"] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- } # cumu
- # all_audio_features, all_text_features, all_audio_features_mlp, all_text_features_mlp = [], [], [], []
- with torch.no_grad():
- for i, batch in enumerate(dataloader):
-                audios = batch  # contains mel_spec, waveform, and longer list
- texts = batch["text"]
- # audios = audios.to(device=device, non_blocking=True)
-
- all_names = list(
- set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]])
- )
- for name in all_names:
- if name not in eval_info.keys():
- if args.clap_mlploss:
- eval_info[name] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- "all_audio_features_mlp": [],
- "all_text_features_mlp": [],
- }
- else:
- eval_info[name] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- }
- with autocast():
- (
- audio_features,
- text_features,
- audio_features_mlp,
- text_features_mlp,
- logit_scale_a,
- logit_scale_t,
- ) = model(audios, texts, device)
-
- if args.parallel_eval:
- # multi-GPU eval
- if args.clap_mlploss:
- (
- audio_features,
- text_features,
- audio_features_mlp,
- text_features_mlp,
- ) = gather_features(
- audio_features=audio_features,
- text_features=text_features,
- audio_features_mlp=audio_features_mlp,
- text_features_mlp=text_features_mlp,
- local_loss=False,
- gather_with_grad=False,
- rank=args.rank,
- world_size=args.world_size,
- use_horovod=args.horovod,
- mlp_loss=args.clap_mlploss,
- )
- else:
- (audio_features, text_features,) = gather_features(
- audio_features=audio_features,
- text_features=text_features,
- local_loss=False,
- gather_with_grad=False,
- rank=args.rank,
- world_size=args.world_size,
- use_horovod=args.horovod,
- mlp_loss=args.clap_mlploss,
- )
-
- if is_master(args):
- num_samples += audio_features.shape[0]
- for n in [*all_names, "all"]:
- if n == "all":
- eval_info[n]["all_audio_features"].append(
- audio_features.cpu()
- )
- eval_info[n]["all_text_features"].append(
- text_features.cpu()
- )
- if args.clap_mlploss:
- eval_info[n]["all_audio_features_mlp"].append(
- audio_features_mlp.cpu()
- )
- eval_info[n]["all_text_features_mlp"].append(
- text_features_mlp.cpu()
- )
- else:
- idx = np.where(
- np.array(
- [
- "-".join(b.split("/")[-3:-1])
- for b in batch["__url__"]
- ]
- )
- == n
- )[0]
- eval_info[n]["all_audio_features"].append(
- audio_features.cpu().index_select(
- 0, torch.tensor(idx).long()
- )
- )
- eval_info[n]["all_text_features"].append(
- text_features.cpu().index_select(
- 0, torch.tensor(idx).long()
- )
- )
- if args.clap_mlploss:
- eval_info[n]["all_audio_features_mlp"].append(
- audio_features_mlp.cpu().index_select(
- 0, torch.tensor(idx).long()
- )
- )
- eval_info[n]["all_text_features_mlp"].append(
- text_features_mlp.cpu().index_select(
- 0, torch.tensor(idx).long()
- )
- )
- # print(f'eval step {i}') # (yusong): for debug
-
- # cumulative_loss += total_loss * batch_size
- # num_samples += batch_size
- if is_master(args) and (i % 100) == 0: # and i != 0:
- logging.info(
- f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]"
- )
- if is_master(args):
- val_metrics_per_dataset = {}
- for n in eval_info.keys():
- if args.clap_mlploss:
- metrics_single_dataset = get_metrics(
- audio_features=torch.cat(
- eval_info[n]["all_audio_features"]
- ),
- text_features=torch.cat(eval_info[n]["all_text_features"]),
- logit_scale_a=logit_scale_a.cpu(),
- audio_features_mlp=torch.cat(
- eval_info[n]["all_audio_features_mlp"]
- ),
- text_features_mlp=torch.cat(
- eval_info[n]["all_text_features_mlp"]
- ),
- logit_scale_t=logit_scale_t.cpu(),
- mlp_loss=args.clap_mlploss,
- )
- else:
- metrics_single_dataset = get_metrics(
- audio_features=torch.cat(
- eval_info[n]["all_audio_features"]
- ),
- text_features=torch.cat(eval_info[n]["all_text_features"]),
- logit_scale_a=logit_scale_a.cpu(),
- mlp_loss=args.clap_mlploss,
- )
- val_metrics_per_dataset[n] = {
- n + "/" + k: v for k, v in metrics_single_dataset.items()
- }
- metrics.update(val_metrics_per_dataset[n])
- if "epoch" not in metrics.keys():
- metrics.update({"epoch": epoch})
- if is_master(args):
- if not metrics:
- return metrics
-
- logging.info(
- f"Eval Epoch: {epoch} "
- + "\n".join(
- [
- "\t".join([f"{k}: {round(v, 4):.4f}" for k, v in m.items()])
- for m in val_metrics_per_dataset.values()
- ]
- )
- )
-
- if args.save_logs:
- for name, val in metrics.items():
- if tb_writer is not None:
- tb_writer.add_scalar(f"val/{name}", val, epoch)
-
- with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f:
- f.write(json.dumps(metrics))
- f.write("\n")
-
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- for name, val in metrics.items():
- wandb.log({f"val/{name}": val, "epoch": epoch})
-
- return metrics
- else:
- return metrics
-
-
-def get_metrics(
- audio_features,
- text_features,
- logit_scale_a,
- audio_features_mlp=None,
- text_features_mlp=None,
- logit_scale_t=None,
- mlp_loss=False,
-):
- metrics = {}
- if mlp_loss:
-        # Set up the audio-to-text & text-to-audio similarity matrices
- a_logits_per_audio = (
- (logit_scale_a * audio_features @ text_features_mlp.t()).detach().cpu()
- )
- a_logits_per_text = a_logits_per_audio.t().detach().cpu()
- t_logits_per_audio = (
- (logit_scale_t * audio_features_mlp @ text_features.t()).detach().cpu()
- )
- t_logits_per_text = t_logits_per_audio.t().detach().cpu()
-
- labels = torch.arange(audio_features.shape[0]).long()
- # Change the loss from two terms into four terms with 2x2 combined CE loss
- total_loss = (
- F.cross_entropy(a_logits_per_audio, labels)
- + F.cross_entropy(a_logits_per_text, labels)
- + F.cross_entropy(t_logits_per_audio, labels)
- + F.cross_entropy(t_logits_per_text, labels)
- ) / 4
-
- metrics[f"cumulative_loss"] = total_loss.item()
- metrics[f"num_samples"] = audio_features.shape[0]
-
- logits = {
- "audio_to_text": (a_logits_per_audio + t_logits_per_audio) / 2,
- "text_to_audio": (a_logits_per_text + t_logits_per_text) / 2,
- }
- ground_truth = torch.arange(len(text_features)).view(-1, 1)
-
- else:
- # print("text_features", text_features)
- # print("text_features.shape", text_features.shape)
- logits_per_audio = (
- (logit_scale_a * audio_features @ text_features.t()).detach().cpu()
- )
- logits_per_text = logits_per_audio.t().detach().cpu()
-
- labels = torch.arange(audio_features.shape[0]).long()
-        # Standard two-term CE loss: audio-to-text and text-to-audio
- total_loss = (
- F.cross_entropy(logits_per_audio, labels)
- + F.cross_entropy(logits_per_text, labels)
- ) / 2
-
- metrics[f"cumulative_loss"] = total_loss.item()
- metrics[f"num_samples"] = audio_features.shape[0]
-
- logits = {"audio_to_text": logits_per_audio, "text_to_audio": logits_per_text}
-
- ground_truth = torch.arange(len(text_features)).view(-1, 1)
-
- for name, logit in logits.items():
- ranking = torch.argsort(logit, descending=True)
- preds = torch.where(ranking == ground_truth)[
- 1
- ] # (yusong) this line is slow because it uses single thread
- preds = preds.detach().cpu().numpy()
- metrics[f"{name}_mean_rank"] = preds.mean() + 1
- metrics[f"{name}_median_rank"] = np.floor(np.median(preds)) + 1
- for k in [1, 5, 10]:
- metrics[f"{name}_R@{k}"] = np.mean(preds < k)
- # map@10
- metrics[f"{name}_mAP@10"] = np.mean(np.where(preds < 10, 1 / (preds + 1), 0.0))
-
- return metrics
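As a sanity check of the ranking metrics computed above, here is a small self-contained sketch that reproduces mean rank, R@k and mAP@10 from a hypothetical `preds` array of zero-based ranks of the correct item:

import numpy as np

# hypothetical zero-based ranks of the correct item for 5 queries
preds = np.array([0, 3, 12, 1, 7])

mean_rank = preds.mean() + 1                             # 5.6 (1-based)
median_rank = np.floor(np.median(preds)) + 1             # 4.0
recall = {k: np.mean(preds < k) for k in (1, 5, 10)}     # R@1=0.2, R@5=0.6, R@10=0.8
map_at_10 = np.mean(np.where(preds < 10, 1 / (preds + 1), 0.0))  # 0.375
print(mean_rank, median_rank, recall, map_at_10)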
-
-
-def evaluate_clotho_audiocaps(
- model, data, epoch, args, autocast, device, tb_writer=None
-):
- """
- Adapted from https://github.com/XinhaoMei/audio-text_retrieval/blob/main/tools/utils.py.
-    1. for text-to-audio retrieval, run retrieval 5 times (once per caption) and average the results
-    2. for R@1, R@5, R@10 in audio-to-text retrieval, take the best rank among the 5 texts
-    3. for mAP@10 in audio-to-text retrieval:
-        3.1: sort the ranks of the 5 texts
-        3.2: exclude ranks >= 10 (0-indexed)
-        3.3: compute the mAP over the remaining ranks: np.mean(np.arange(1, len(ranks)+1) / ranks)
-        (the implementation below divides the sum by 5, so texts ranked >= 10 contribute 0)
-        (3.3) That is, take the ranks of the 5 texts that are < 10 and assign ascending integers as the ground truth,
-        (3.3) e.g. the ground truth for the best-ranked of the 5 texts is 1, for the second-best it is 2, etc.
- """
- # TODO: (yusong) only support single GPU evaluation and only support non-mlp case for now.
- dataloader = data["val"].dataloader
- with torch.no_grad():
- eval_info = {}
- for i, batch in enumerate(dataloader):
-            audios = batch  # contains mel_spec, waveform, and longer list
-
- # each item in the list has 5 texts
- if args.tmodel == "transformer":
- from open_clip import tokenize
-
- texts = [tokenize(t) for t in batch["full_text"]]
- texts = torch.cat(texts)
- else:
- from .data import tokenizer
-
- texts = [
- tokenizer(t) for t in batch["full_text"]
- ] # 5 texts for each audio
- texts = {
- k: torch.cat([t[k] for t in texts]) for k in texts[0].keys()
- } # 5 x batch
-
- # audios = audios.to(device=device, non_blocking=True)
-
- all_names = list(
- set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]])
- )
- for name in all_names:
- if name not in eval_info.keys():
- # we will not use mlp outputs even if args.clap_mlploss=True
- eval_info[name] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- }
- with autocast():
- audio_features = model(audios, None, device)
- text_features = model(None, texts, device)
- audio_features = F.normalize(audio_features, dim=-1)
- text_features = F.normalize(text_features, dim=-1)
-
- all_names = list(
- set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]])
- )
- for n in all_names:
- idx = np.where(
- np.array(
- ["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]]
- )
- == n
- )[0]
- eval_info[n]["all_audio_features"].append(
- audio_features.cpu().index_select(0, torch.tensor(idx).long())
- )
-                # (yusong) please double-check. This selects the 5 text features of each audio at once:
-                # idx is a list of sample indices of size num_samples,
-                # and text_features is a tensor of size (5*num_samples, dim),
-                # so we need to select the 5 consecutive rows belonging to each index in idx.
- eval_info[n]["all_text_features"].append(
- text_features.cpu()
- .reshape([-1, 5, text_features.shape[1]])
- .index_select(0, torch.tensor(idx).long())
- .reshape([-1, text_features.shape[1]])
- )
-
- val_metrics_all = {}
-
- for n in eval_info.keys():
- logit_scale_a, logit_scale_t = model(None, None, device)
- logit_scale_a = logit_scale_a.cpu()
-
- audio_features = torch.cat(eval_info[n]["all_audio_features"], dim=0)
- text_features = torch.cat(eval_info[n]["all_text_features"], dim=0)
-
- logits_per_audio = (
- (logit_scale_a * audio_features @ text_features.t()).detach().cpu()
- )
- logits_per_text = logits_per_audio.t().detach().cpu()
-
- # logits_per_audio shape: [num_samples, num_samples*5]
- # logits_per_text shape: [num_samples*5, num_samples]
-
- logging.info(
- f"dataset {n}, logits_per_audio shape: {logits_per_audio.shape}, "
- f"logits_per_text shape: {logits_per_text.shape}"
- )
-
- metrics = {}
- num_samples = audio_features.shape[0]
- metrics[f"num_samples"] = num_samples
-
-            # (yusong) the following code is very important, please double-check:
-            # logits_per_audio.reshape(num_samples, num_samples, 5)[:, :, d]
-            # logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :]
-            # Each of these retrieves one of the 5 texts for every audio.
- labels = torch.arange(audio_features.shape[0]).long()
- audio_to_text_loss = [
- F.cross_entropy(
- logits_per_audio.reshape(num_samples, num_samples, 5)[:, :, d],
- labels,
- )
- for d in range(5)
- ]
- text_to_audio_loss = [
- F.cross_entropy(
- logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :],
- labels,
- )
- for d in range(5)
- ]
- total_loss = (np.mean(audio_to_text_loss) + np.mean(text_to_audio_loss)) / 2
-
- metrics[f"cumulative_loss"] = total_loss.item()
-
- # text to audio: do 5 times
- pred_text = []
- for d in range(5):
- logit = logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :]
- ground_truth = torch.arange(len(logit)).view(-1, 1)
- ranking = torch.argsort(
- logit, descending=True
- ) # [num_samples, num_samples]
- preds = torch.where(ranking == ground_truth)[1]
- pred_text.append(preds.detach().cpu().numpy())
- pred_text_concat = np.concatenate(pred_text, axis=0) # [5*num_samples]
- metrics[f"text_to_audio_mean_rank"] = pred_text_concat.mean() + 1
- metrics[f"text_to_audio_median_rank"] = (
- np.floor(np.median(pred_text_concat)) + 1
- )
- for k in [1, 5, 10]:
- metrics[f"text_to_audio_R@{k}"] = np.mean(pred_text_concat < k)
- # map@10
- metrics[f"text_to_audio_mAP@10"] = np.mean(
- np.where(pred_text_concat < 10, 1 / (pred_text_concat + 1), 0.0)
- )
-
- # audio to text: take the best result
- # for audio to text map 10, sort and assign descending ground truth.
- # see https://github.com/XinhaoMei/audio-text_retrieval/blob/main/tools/utils.py#L103
- # map@10
- map_all = []
- pred_audio_all = []
- for d in range(num_samples):
- # logits_per_audio: [num_samples, num_samples*5]
- logit_single = logits_per_audio[d, :] # [5*num_samples]
- # Ground-truth index: [d*5, d*5+1, d*5+2, d*5+3, d*5+4]
- ranking = torch.argsort(
- logit_single, descending=True
- ) # [5*num_samples]
- # ranking: the index of first match, second match, ...
- ground_truth = torch.arange(d * 5, d * 5 + 5)[None]
- all_pred = torch.where(
- torch.stack([ranking] * 5) == ground_truth.view(-1, 1)
- )[1]
- min_pred = torch.min(all_pred)
- pred_audio_all.append(min_pred.detach().cpu().numpy())
- all_pred_filter = all_pred[all_pred < 10].detach().cpu().numpy()
-                # divide by 5 because there are 5 texts per audio, so texts ranked >= 10 count as 0.
- map_single = (
- np.sum(
- (np.arange(1, len(all_pred_filter) + 1) / (all_pred_filter + 1))
- )
- / 5
- )
- map_all.append(map_single)
- metrics[f"audio_to_text_mAP@10"] = np.mean(map_all)
- for k in [1, 5, 10]:
- metrics[f"audio_to_text_R@{k}"] = np.mean(np.array(pred_audio_all) < k)
-
- val_metrics_all[n] = {n + "/" + k: v for k, v in metrics.items()}
- return val_metrics_all
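The `reshape(num_samples, 5, num_samples)[:, d, :]` indexing used above assumes the text rows are laid out caption-major per audio (audio 0 captions 0-4, then audio 1, and so on). A tiny sketch of that assumption on dummy labels:

import numpy as np

num_samples = 2
# assumed row order of text_features / logits_per_text: a0c0..a0c4, a1c0..a1c4
rows = np.array([f"a{a}c{c}" for a in range(num_samples) for c in range(5)])
per_caption = rows.reshape(num_samples, 5)
print(per_caption[:, 1])   # ['a0c1' 'a1c1']: caption d=1 of every audio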
-
-
-def calculate_selection_performance_clotho_audiocaps(val_metrics_per_dataset):
- """
- Calculate performance for Clotho+AudioCaps for model selection.
- """
- selection_performance_all = []
- for n in val_metrics_per_dataset.keys():
- selection_performance = (
- val_metrics_per_dataset[n][f"{n}/audio_to_text_mAP@10"]
- + val_metrics_per_dataset[n][f"{n}/text_to_audio_mAP@10"]
- ) / 2
- selection_performance_all.append(selection_performance)
- return np.mean(selection_performance_all)
-
-
-def select_top_metric_clotho_audiocaps(metrics, val_metrics_per_dataset, args):
- # val_metrics_per_dataset: dict, key: dataset name, value: dict, key: metric name, value: metric value
- # metrics: dict, key: metric name, value: metric value
- # Hack: use args to save the top performance
- if not hasattr(args, "top_selection_performance"):
- selection_performance = calculate_selection_performance_clotho_audiocaps(
- val_metrics_per_dataset
- )
- # TODO: write the if and else together
- metric_update = {}
- for n in val_metrics_per_dataset.keys():
- for k in val_metrics_per_dataset[n].keys():
- metric_update[
- k.split("/")[0] + "-top" + "/" + k.split("/")[1]
- ] = val_metrics_per_dataset[n][k]
- metric_update["top_selection_performance"] = selection_performance
- metric_update["top-selection-epoch"] = metrics["epoch"]
- metrics.update(metric_update)
- args.top_metric = metric_update
- args.top_selection_performance = selection_performance
- else:
- selection_performance_new = calculate_selection_performance_clotho_audiocaps(
- val_metrics_per_dataset
- )
- selection_performance_old = args.top_selection_performance
- if selection_performance_new > selection_performance_old:
- metric_update = {}
- for n in val_metrics_per_dataset.keys():
- for k in val_metrics_per_dataset[n].keys():
- metric_update[
- k.split("/")[0] + "-top" + "/" + k.split("/")[1]
- ] = val_metrics_per_dataset[n][k]
- metric_update["top_selection_performance"] = selection_performance_new
- metric_update["top-selection-epoch"] = metrics["epoch"]
- metrics.update(metric_update)
- args.top_metric = metric_update
- args.top_selection_performance = selection_performance_new
- else:
- metrics.update(args.top_metric)
- return metrics
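A brief illustration of the two selection helpers above, with hypothetical mAP@10 values for two datasets: the selection score is the mean over datasets of the averaged audio-to-text and text-to-audio mAP@10, and the best-epoch metrics are re-keyed with a `-top` suffix on the dataset name.

import numpy as np

# hypothetical per-dataset validation metrics
val_metrics_per_dataset = {
    "Clotho": {"Clotho/audio_to_text_mAP@10": 0.20, "Clotho/text_to_audio_mAP@10": 0.30},
    "audiocaps": {"audiocaps/audio_to_text_mAP@10": 0.40, "audiocaps/text_to_audio_mAP@10": 0.50},
}
per_dataset = [
    (m[f"{n}/audio_to_text_mAP@10"] + m[f"{n}/text_to_audio_mAP@10"]) / 2
    for n, m in val_metrics_per_dataset.items()
]
print(np.mean(per_dataset))   # (0.25 + 0.45) / 2 = 0.35

# re-keying applied when the current epoch beats the previous best
key = "Clotho/audio_to_text_mAP@10"
print(key.split("/")[0] + "-top" + "/" + key.split("/")[1])   # Clotho-top/audio_to_text_mAP@10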
diff --git a/spaces/better57/CHATGPT/modules/models.py b/spaces/better57/CHATGPT/modules/models.py
deleted file mode 100644
index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000
--- a/spaces/better57/CHATGPT/modules/models.py
+++ /dev/null
@@ -1,625 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import commentjson as cjson
-import os
-import sys
-import requests
-import urllib3
-import platform
-import base64
-from io import BytesIO
-from PIL import Image
-
-from tqdm import tqdm
-import colorama
-from duckduckgo_search import ddg
-import asyncio
-import aiohttp
-from enum import Enum
-import uuid
-
-from .presets import *
-from .llama_func import *
-from .utils import *
-from . import shared
-from .config import retrieve_proxy
-from modules import config
-from .base_model import BaseLLMModel, ModelType
-
-
-class OpenAIClient(BaseLLMModel):
- def __init__(
- self,
- model_name,
- api_key,
- system_prompt=INITIAL_SYSTEM_PROMPT,
- temperature=1.0,
- top_p=1.0,
- ) -> None:
- super().__init__(
- model_name=model_name,
- temperature=temperature,
- top_p=top_p,
- system_prompt=system_prompt,
- )
- self.api_key = api_key
- self.need_api_key = True
- self._refresh_header()
-
- def get_answer_stream_iter(self):
- response = self._get_response(stream=True)
- if response is not None:
- iter = self._decode_chat_response(response)
- partial_text = ""
- for i in iter:
- partial_text += i
- yield partial_text
- else:
- yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
-
- def get_answer_at_once(self):
- response = self._get_response()
- response = json.loads(response.text)
- content = response["choices"][0]["message"]["content"]
- total_token_count = response["usage"]["total_tokens"]
- return content, total_token_count
-
- def count_token(self, user_input):
- input_token_count = count_token(construct_user(user_input))
- if self.system_prompt is not None and len(self.all_token_counts) == 0:
- system_prompt_token_count = count_token(
- construct_system(self.system_prompt)
- )
- return input_token_count + system_prompt_token_count
- return input_token_count
-
- def billing_info(self):
- try:
- curr_time = datetime.datetime.now()
- last_day_of_month = get_last_day_of_month(
- curr_time).strftime("%Y-%m-%d")
- first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d")
- usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}"
- try:
- usage_data = self._get_billing_data(usage_url)
- except Exception as e:
- logging.error(f"获取API使用情况失败:" + str(e))
- return i18n("**获取API使用情况失败**")
- rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100)
- return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}"
- except requests.exceptions.ConnectTimeout:
- status_text = (
- STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- )
- return status_text
- except requests.exceptions.ReadTimeout:
- status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- return status_text
- except Exception as e:
- import traceback
- traceback.print_exc()
- logging.error(i18n("获取API使用情况失败:") + str(e))
- return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG
-
- def set_token_upper_limit(self, new_upper_limit):
- pass
-
-    @shared.state.switching_api_key  # this decorator has no effect unless multi-account mode is enabled
- def _get_response(self, stream=False):
- openai_api_key = self.api_key
- system_prompt = self.system_prompt
- history = self.history
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}",
- }
-
- if system_prompt is not None:
- history = [construct_system(system_prompt), *history]
-
- payload = {
- "model": self.model_name,
- "messages": history,
- "temperature": self.temperature,
- "top_p": self.top_p,
- "n": self.n_choices,
- "stream": stream,
- "presence_penalty": self.presence_penalty,
- "frequency_penalty": self.frequency_penalty,
- }
-
- if self.max_generation_token is not None:
- payload["max_tokens"] = self.max_generation_token
- if self.stop_sequence is not None:
- payload["stop"] = self.stop_sequence
- if self.logit_bias is not None:
- payload["logit_bias"] = self.logit_bias
- if self.user_identifier is not None:
- payload["user"] = self.user_identifier
-
- if stream:
- timeout = TIMEOUT_STREAMING
- else:
- timeout = TIMEOUT_ALL
-
-        # If a custom api-host is configured, send the request there; otherwise use the default settings
- if shared.state.completion_url != COMPLETION_URL:
- logging.info(f"使用自定义API URL: {shared.state.completion_url}")
-
- with retrieve_proxy():
- try:
- response = requests.post(
- shared.state.completion_url,
- headers=headers,
- json=payload,
- stream=stream,
- timeout=timeout,
- )
- except:
- return None
- return response
-
- def _refresh_header(self):
- self.headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {self.api_key}",
- }
-
- def _get_billing_data(self, billing_url):
- with retrieve_proxy():
- response = requests.get(
- billing_url,
- headers=self.headers,
- timeout=TIMEOUT_ALL,
- )
-
- if response.status_code == 200:
- data = response.json()
- return data
- else:
- raise Exception(
- f"API request failed with status code {response.status_code}: {response.text}"
- )
-
- def _decode_chat_response(self, response):
- error_msg = ""
- for chunk in response.iter_lines():
- if chunk:
- chunk = chunk.decode()
- chunk_length = len(chunk)
- try:
- chunk = json.loads(chunk[6:])
- except json.JSONDecodeError:
- print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
- error_msg += chunk
- continue
- if chunk_length > 6 and "delta" in chunk["choices"][0]:
- if chunk["choices"][0]["finish_reason"] == "stop":
- break
- try:
- yield chunk["choices"][0]["delta"]["content"]
- except Exception as e:
- # logging.error(f"Error: {e}")
- continue
- if error_msg:
- raise Exception(error_msg)
-
- def set_key(self, new_access_key):
- ret = super().set_key(new_access_key)
- self._refresh_header()
- return ret
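`_decode_chat_response` above drops the first six characters of each non-empty line before JSON-decoding it, which assumes the usual `data: {...}` server-sent-events framing of the streaming chat completions API. A minimal sketch of that parsing step on a hypothetical line:

import json

line = b'data: {"choices": [{"delta": {"content": "Hi"}, "finish_reason": null}]}'
chunk = line.decode()
payload = json.loads(chunk[6:])            # drop the leading 'data: '
delta = payload["choices"][0]["delta"]
print(delta.get("content", ""))            # Hi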
-
-
-class ChatGLM_Client(BaseLLMModel):
- def __init__(self, model_name) -> None:
- super().__init__(model_name=model_name)
- from transformers import AutoTokenizer, AutoModel
- import torch
- global CHATGLM_TOKENIZER, CHATGLM_MODEL
- if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None:
- system_name = platform.system()
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"THUDM/{model_name}"
- CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained(
- model_source, trust_remote_code=True
- )
- quantified = False
- if "int4" in model_name:
- quantified = True
- model = AutoModel.from_pretrained(
- model_source, trust_remote_code=True
- )
- if torch.cuda.is_available():
- # run on CUDA
- logging.info("CUDA is available, using CUDA")
- model = model.half().cuda()
-            # MPS acceleration still has some issues, so it is used only in the limited case below
- elif system_name == "Darwin" and model_path is not None and not quantified:
- logging.info("Running on macOS, using MPS")
- # running on macOS and model already downloaded
- model = model.half().to("mps")
- else:
- logging.info("GPU is not available, using CPU")
- model = model.float()
- model = model.eval()
- CHATGLM_MODEL = model
-
- def _get_glm_style_input(self):
- history = [x["content"] for x in self.history]
- query = history.pop()
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- assert (
- len(history) % 2 == 0
- ), f"History should be even length. current history is: {history}"
- history = [[history[i], history[i + 1]]
- for i in range(0, len(history), 2)]
- return history, query
-
- def get_answer_at_once(self):
- history, query = self._get_glm_style_input()
- response, _ = CHATGLM_MODEL.chat(
- CHATGLM_TOKENIZER, query, history=history)
- return response, len(response)
-
- def get_answer_stream_iter(self):
- history, query = self._get_glm_style_input()
- for response, history in CHATGLM_MODEL.stream_chat(
- CHATGLM_TOKENIZER,
- query,
- history,
- max_length=self.token_upper_limit,
- top_p=self.top_p,
- temperature=self.temperature,
- ):
- yield response
-
-
-class LLaMA_Client(BaseLLMModel):
- def __init__(
- self,
- model_name,
- lora_path=None,
- ) -> None:
- super().__init__(model_name=model_name)
- from lmflow.datasets.dataset import Dataset
- from lmflow.pipeline.auto_pipeline import AutoPipeline
- from lmflow.models.auto_model import AutoModel
- from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments
-
- self.max_generation_token = 1000
- self.end_string = "\n\n"
- # We don't need input data
- data_args = DatasetArguments(dataset_path=None)
- self.dataset = Dataset(data_args)
- self.system_prompt = ""
-
- global LLAMA_MODEL, LLAMA_INFERENCER
- if LLAMA_MODEL is None or LLAMA_INFERENCER is None:
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"decapoda-research/{model_name}"
- # raise Exception(f"models目录下没有这个模型: {model_name}")
- if lora_path is not None:
- lora_path = f"lora/{lora_path}"
- model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None,
- use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True)
- pipeline_args = InferencerArguments(
- local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16')
-
- with open(pipeline_args.deepspeed, "r") as f:
- ds_config = json.load(f)
- LLAMA_MODEL = AutoModel.get_model(
- model_args,
- tune_strategy="none",
- ds_config=ds_config,
- )
- LLAMA_INFERENCER = AutoPipeline.get_pipeline(
- pipeline_name="inferencer",
- model_args=model_args,
- data_args=data_args,
- pipeline_args=pipeline_args,
- )
-
- def _get_llama_style_input(self):
- history = []
- instruction = ""
- if self.system_prompt:
- instruction = (f"Instruction: {self.system_prompt}\n")
- for x in self.history:
- if x["role"] == "user":
- history.append(f"{instruction}Input: {x['content']}")
- else:
- history.append(f"Output: {x['content']}")
- context = "\n\n".join(history)
- context += "\n\nOutput: "
- return context
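For reference, `_get_llama_style_input` above flattens the chat history into an Instruction/Input/Output prompt. A small sketch with a hypothetical system prompt and history, mirroring the same string construction:

# hypothetical state, for illustration only
system_prompt = "Answer briefly."
history = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4"},
    {"role": "user", "content": "And 3 + 3?"},
]
instruction = f"Instruction: {system_prompt}\n"
blocks = [
    f"{instruction}Input: {x['content']}" if x["role"] == "user" else f"Output: {x['content']}"
    for x in history
]
# ends with an open "Output: " for the model to complete
print("\n\n".join(blocks) + "\n\nOutput: ")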
-
- def get_answer_at_once(self):
- context = self._get_llama_style_input()
-
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [{"text": context}]}
- )
-
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=self.max_generation_token,
- temperature=self.temperature,
- )
-
- response = output_dataset.to_dict()["instances"][0]["text"]
- return response, len(response)
-
- def get_answer_stream_iter(self):
- context = self._get_llama_style_input()
- partial_text = ""
- step = 1
- for _ in range(0, self.max_generation_token, step):
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [
- {"text": context + partial_text}]}
- )
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=step,
- temperature=self.temperature,
- )
- response = output_dataset.to_dict()["instances"][0]["text"]
- if response == "" or response == self.end_string:
- break
- partial_text += response
- yield partial_text
-
-
-class XMChat(BaseLLMModel):
- def __init__(self, api_key):
- super().__init__(model_name="xmchat")
- self.api_key = api_key
- self.session_id = None
- self.reset()
- self.image_bytes = None
- self.image_path = None
- self.xm_history = []
- self.url = "https://xmbot.net/web"
- self.last_conv_id = None
-
- def reset(self):
- self.session_id = str(uuid.uuid4())
- self.last_conv_id = None
- return [], "已重置"
-
- def image_to_base64(self, image_path):
-        # Open and load the image
- img = Image.open(image_path)
-
-        # Get the image width and height
- width, height = img.size
-
-        # Compute the scale ratio so that the longest side stays within max_dimension pixels
- max_dimension = 2048
- scale_ratio = min(max_dimension / width, max_dimension / height)
-
- if scale_ratio < 1:
-            # Resize the image by the scale ratio
- new_width = int(width * scale_ratio)
- new_height = int(height * scale_ratio)
- img = img.resize((new_width, new_height), Image.ANTIALIAS)
-
-        # Convert the image to JPEG-encoded binary data
- buffer = BytesIO()
- if img.mode == "RGBA":
- img = img.convert("RGB")
- img.save(buffer, format='JPEG')
- binary_image = buffer.getvalue()
-
-        # Base64-encode the binary data
- base64_image = base64.b64encode(binary_image).decode('utf-8')
-
- return base64_image
-
- def try_read_image(self, filepath):
- def is_image_file(filepath):
-            # Check whether the file is an image
- valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"]
- file_extension = os.path.splitext(filepath)[1].lower()
- return file_extension in valid_image_extensions
-
- if is_image_file(filepath):
- logging.info(f"读取图片文件: {filepath}")
- self.image_bytes = self.image_to_base64(filepath)
- self.image_path = filepath
- else:
- self.image_bytes = None
- self.image_path = None
-
- def like(self):
- if self.last_conv_id is None:
- return "点赞失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "good"
- }
- response = requests.post(self.url, json=data)
- return "👍点赞成功,,感谢反馈~"
-
- def dislike(self):
- if self.last_conv_id is None:
- return "点踩失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "bad"
- }
- response = requests.post(self.url, json=data)
- return "👎点踩成功,感谢反馈~"
-
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
- fake_inputs = real_inputs
- display_append = ""
- limited_context = False
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
-
- def handle_file_upload(self, files, chatbot):
- """if the model accepts multi modal input, implement this function"""
- if files:
- for file in files:
- if file.name:
- logging.info(f"尝试读取图像: {file.name}")
- self.try_read_image(file.name)
- if self.image_path is not None:
- chatbot = chatbot + [((self.image_path,), None)]
- if self.image_bytes is not None:
- logging.info("使用图片作为输入")
- # XMChat的一轮对话中实际上只能处理一张图片
- self.reset()
- conv_id = str(uuid.uuid4())
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "imgbase64",
- "data": self.image_bytes
- }
- response = requests.post(self.url, json=data)
- response = json.loads(response.text)
- logging.info(f"图片回复: {response['data']}")
- return None, chatbot, None
-
- def get_answer_at_once(self):
- question = self.history[-1]["content"]
- conv_id = str(uuid.uuid4())
- self.last_conv_id = conv_id
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "text",
- "data": question
- }
- response = requests.post(self.url, json=data)
- try:
- response = json.loads(response.text)
- return response["data"], len(response["data"])
- except Exception as e:
- return response.text, len(response.text)
-
-
-
-
-def get_model(
- model_name,
- lora_model_path=None,
- access_key=None,
- temperature=None,
- top_p=None,
- system_prompt=None,
-) -> BaseLLMModel:
- msg = i18n("模型设置为了:") + f" {model_name}"
- model_type = ModelType.get_type(model_name)
- lora_selector_visibility = False
- lora_choices = []
- dont_change_lora_selector = False
- if model_type != ModelType.OpenAI:
- config.local_embedding = True
- # del current_model.model
- model = None
- try:
- if model_type == ModelType.OpenAI:
- logging.info(f"正在加载OpenAI模型: {model_name}")
- model = OpenAIClient(
- model_name=model_name,
- api_key=access_key,
- system_prompt=system_prompt,
- temperature=temperature,
- top_p=top_p,
- )
- elif model_type == ModelType.ChatGLM:
- logging.info(f"正在加载ChatGLM模型: {model_name}")
- model = ChatGLM_Client(model_name)
- elif model_type == ModelType.LLaMA and lora_model_path == "":
- msg = f"现在请为 {model_name} 选择LoRA模型"
- logging.info(msg)
- lora_selector_visibility = True
- if os.path.isdir("lora"):
- lora_choices = get_file_names(
- "lora", plain=True, filetypes=[""])
- lora_choices = ["No LoRA"] + lora_choices
- elif model_type == ModelType.LLaMA and lora_model_path != "":
- logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}")
- dont_change_lora_selector = True
- if lora_model_path == "No LoRA":
- lora_model_path = None
- msg += " + No LoRA"
- else:
- msg += f" + {lora_model_path}"
- model = LLaMA_Client(model_name, lora_model_path)
- elif model_type == ModelType.XMChat:
- if os.environ.get("XMCHAT_API_KEY") != "":
- access_key = os.environ.get("XMCHAT_API_KEY")
- model = XMChat(api_key=access_key)
- elif model_type == ModelType.Unknown:
- raise ValueError(f"未知模型: {model_name}")
- logging.info(msg)
- except Exception as e:
- logging.error(e)
- msg = f"{STANDARD_ERROR_MSG}: {e}"
- if dont_change_lora_selector:
- return model, msg
- else:
- return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility)
-
-
-if __name__ == "__main__":
- with open("config.json", "r") as f:
- openai_api_key = cjson.load(f)["openai_api_key"]
- # set logging level to debug
- logging.basicConfig(level=logging.DEBUG)
- # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key)
- client = get_model(model_name="chatglm-6b-int4")
- chatbot = []
- stream = False
-    # test the billing feature
-    logging.info(colorama.Back.GREEN + "Testing billing info" + colorama.Back.RESET)
-    logging.info(client.billing_info())
-    # test Q&A
-    logging.info(colorama.Back.GREEN + "Testing Q&A" + colorama.Back.RESET)
-    question = "Is Paris the capital of China?"
-    for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"history after the Q&A test: {client.history}")
-    # test memory
-    logging.info(colorama.Back.GREEN + "Testing memory" + colorama.Back.RESET)
-    question = "What question did I just ask you?"
-    for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"history after the memory test: {client.history}")
-    # test the retry feature
-    logging.info(colorama.Back.GREEN + "Testing retry" + colorama.Back.RESET)
-    for i in client.retry(chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"history after retry: {client.history}")
-    # # test the summarization feature
-    # print(colorama.Back.GREEN + "Testing summarization" + colorama.Back.RESET)
-    # chatbot, msg = client.reduce_token_size(chatbot=chatbot)
-    # print(chatbot, msg)
-    # print(f"history after summarization: {client.history}")
diff --git a/spaces/bguberfain/Detic/detic/data/datasets/cc.py b/spaces/bguberfain/Detic/detic/data/datasets/cc.py
deleted file mode 100644
index 7c3e50726f781dba4c72d4e18f4922e503218af8..0000000000000000000000000000000000000000
--- a/spaces/bguberfain/Detic/detic/data/datasets/cc.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import os
-
-from detectron2.data.datasets.builtin_meta import _get_builtin_metadata
-from detectron2.data.datasets.lvis import get_lvis_instances_meta
-from .lvis_v1 import custom_register_lvis_instances
-
-_CUSTOM_SPLITS = {
- "cc3m_v1_val": ("cc3m/validation/", "cc3m/val_image_info.json"),
- "cc3m_v1_train": ("cc3m/training/", "cc3m/train_image_info.json"),
- "cc3m_v1_train_tags": ("cc3m/training/", "cc3m/train_image_info_tags.json"),
-
-}
-
-for key, (image_root, json_file) in _CUSTOM_SPLITS.items():
- custom_register_lvis_instances(
- key,
- get_lvis_instances_meta('lvis_v1'),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
-
diff --git a/spaces/bigscience/bloom-book/utils/deprecated.py b/spaces/bigscience/bloom-book/utils/deprecated.py
deleted file mode 100644
index b023f31ec94e98579c77d1356c8b41fbc9f25077..0000000000000000000000000000000000000000
--- a/spaces/bigscience/bloom-book/utils/deprecated.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import streamlit as st
-
-# https://discuss.streamlit.io/t/how-do-i-use-a-background-image-on-streamlit/5067/5
-def set_png_as_page_bg(main_bg):
- '''
- A function to unpack an image from root folder and set as bg.
- Returns
- -------
- The background.
- '''
- # set bg name
- main_bg_ext = "png"
-
- st.markdown(
- f"""
-
- """,
- unsafe_allow_html=True
- )
-
-def sidebar_bg(side_bg):
-
- side_bg_ext = 'png'
-
- st.markdown(
- f"""
-
- """,
- unsafe_allow_html=True,
- )
-
-def render_chapter_from_chapter_number(date, suffix):
- template_final_html = """
-
-
-
-
- """
- template_card = """
-
-
-
-
-
-
-
-
- {}
-
-
-
- """
- json_data = get_json_from_date(date, suffix)
- nb_prompts = len(json_data['inputs'])
- for i in range(nb_prompts):
- input_text = json_data["inputs"][i]
- output_text = json_data["outputs"][i]
-
- input_text = preprocess_raw_text_to_html(input_text)
- output_text = preprocess_raw_text_to_html(output_text)
-
- output_text = output_text.replace(input_text, """{}""".format(input_text))
- template_final_html += template_card.format(i, i, i, input_text, i, i, output_text)
- template_final_html += "
"
- return template_final_html
\ No newline at end of file
diff --git a/spaces/brainblow/beat_remixer/beat_manipulator/io.py b/spaces/brainblow/beat_remixer/beat_manipulator/io.py
deleted file mode 100644
index 0aff8b9a0a581539447d4fa4b010083d2ed95ab6..0000000000000000000000000000000000000000
--- a/spaces/brainblow/beat_remixer/beat_manipulator/io.py
+++ /dev/null
@@ -1,178 +0,0 @@
-
-import numpy as np
-from . import main
-
-def open_audio(path:str = None, lib:str = 'auto', normalize = True) -> tuple:
- """Opens audio from path, returns (audio, samplerate) tuple.
-
- Audio is returned as an array with normal volume range between -1, 1.
-
- Example of returned audio:
-
- [
- [0.35, -0.25, ... -0.15, -0.15],
-
- [0.31, -0.21, ... -0.11, -0.07]
- ]"""
-
- if path is None:
- from tkinter.filedialog import askopenfilename
- path = askopenfilename(title='select song', filetypes=[("mp3", ".mp3"),("wav", ".wav"),("flac", ".flac"),("ogg", ".ogg"),("wma", ".wma")])
-
- path=path.replace('\\', '/')
-
- if lib=='pedalboard.io':
- import pedalboard.io
- with pedalboard.io.AudioFile(path) as f:
- audio = f.read(f.frames)
- sr = f.samplerate
-
- elif lib=='librosa':
- import librosa
- audio, sr = librosa.load(path, sr=None, mono=False)
-
- elif lib=='soundfile':
- import soundfile
- audio, sr = soundfile.read(path)
- audio=audio.T
-
- elif lib=='madmom':
- import madmom
- audio, sr = madmom.io.audio.load_audio_file(path, dtype=float)
- audio=audio.T
-
- # elif lib=='pydub':
- # from pydub import AudioSegment
- # song=AudioSegment.from_file(filename)
- # audio = song.get_array_of_samples()
- # samplerate=song.frame_rate
- # print(audio)
- # print(filename)
-
- elif lib=='auto':
- for i in ('madmom', 'soundfile', 'librosa', 'pedalboard.io'):
- try:
- audio,sr=open_audio(path, i)
- break
- except Exception as e:
- print(f'open_audio with {i}: {e}')
-
- if len(audio)>16: audio=np.array([audio, audio], copy=False)
- if normalize is True:
- audio = np.clip(audio, -1, 1)
- audio = audio*(1/np.max(np.abs(audio)))
- return audio.astype(np.float32),sr
-
-def _sr(sr):
- try: return int(sr)
- except (ValueError, TypeError): assert False, f"Audio is an array, but `sr` argument is not valid. If audio is an array, you have to provide samplerate as an integer in the `sr` argument. Currently sr = {sr} of type {type(sr)}"
-
-def write_audio(audio:np.ndarray, sr:int, output:str, lib:str='auto', libs=('pedalboard.io', 'soundfile'), log = True):
- """"writes audio to path specified by output. Path should end with file extension, for example `folder/audio.mp3`"""
- if log is True: print(f'Writing {output}...', end=' ')
- assert _iterable(audio), f"audio should be an array/iterable object, but it is {type(audio)}"
- sr = _sr(sr)
- if not isinstance(audio, np.ndarray): audio = np.array(audio, copy=False)
- if lib=='pedalboard.io':
- #print(audio)
- import pedalboard.io
- with pedalboard.io.AudioFile(output, 'w', sr, audio.shape[0]) as f:
- f.write(audio)
- elif lib=='soundfile':
- audio=audio.T
- import soundfile
- soundfile.write(output, audio, sr)
- del audio
- elif lib=='auto':
- for i in libs:
- try:
- write_audio(audio=audio, sr=sr, output=output, lib=i, log = False)
- break
- except Exception as e:
- print(e)
- else: assert False, 'Failed to write audio, chances are there is something wrong with it...'
- if log is True: print(f'Done!')
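A hedged usage sketch of the two helpers above, assuming the package is importable as `beat_manipulator` and that the input path exists (both the import path and the file names here are illustrative):

from beat_manipulator.io import open_audio, write_audio  # assumed package layout

# open with automatic backend selection and peak normalization
audio, sr = open_audio("input/song.mp3")        # hypothetical path
print(audio.shape, sr)                          # (2, num_samples), e.g. 44100

# write back, trying pedalboard.io first and soundfile as a fallback
write_audio(audio, sr, "output/song_copy.mp3")  # hypothetical path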
-
-def _iterable(a):
- try:
- _ = iter(a)
- return True
- except TypeError: return False
-
-def _load(audio, sr:int = None, lib:str = 'auto', channels:int = 2, transpose3D:bool = False) -> tuple:
- """Automatically converts audio from path or any format to [[...],[...]] array. Returns (audio, samplerate) tuple."""
- # path
- if isinstance(audio, str): return(open_audio(path=audio, lib=lib))
- # array
- if _iterable(audio):
- if isinstance(audio, main.song):
- if sr is None: sr = audio.sr
- audio = audio.audio
- # sr is provided in a tuple
- if sr is None and len(audio) == 2:
- if not _iterable(audio[0]):
- sr = audio[0]
- audio = audio[1]
- elif not _iterable(audio[1]):
- sr = audio[1]
- audio = audio[0]
- if not isinstance(audio, np.ndarray): audio = np.array(audio, copy=False)
- sr = _sr(sr)
- if _iterable(audio[0]):
- # image
- if _iterable(audio[0][0]):
- audio2 = []
- if transpose3D is True: audio = audio.T
- for i in audio:
- audio2.extend(_load(audio=i, sr=sr, lib=lib, channels=channels, transpose3D=transpose3D)[0])
- return audio2, sr
- # transposed
- if len(audio) > 16:
- audio = audio.T
- return _load(audio=audio, sr=sr, lib=lib, channels=channels, transpose3D=transpose3D)
- # multi channel
- elif isinstance(channels, int):
- if len(audio) >= channels:
- return audio[:channels], sr
- # masked mono
- else: return np.array([audio[0] for _ in range(channels)], copy=False), sr
- else: return audio, sr
- else:
- # mono
- return (np.array([audio for _ in range(channels)], copy=False) if channels is not None else audio), sr
- # unknown
- else: assert False, f"Audio should be either a string with path, an array/iterable object, or a song object, but it is {type(audio)}"
-
-def _tosong(audio, sr=None):
- if isinstance(audio, main.song): return audio
- else:
- audio, sr = _load(audio = audio, sr = sr)
- return main.song(audio=audio, sr = sr)
-
-def _outputfilename(path:str = None, filename:str = None, suffix:str = None, ext:str = None):
- """If path has file extension, returns `path + suffix + ext`. Else returns `path + filename + suffix + .ext`. If nothing is specified, returns `output.mp3`"""
- if ext is not None:
- if not ext.startswith('.'): ext = '.'+ext
- if path is None: path = ''
- if path.endswith('/') or path.endswith('\\'): path=path[:-1]
- if '.' in path:
- path = path.split('.')
- if path[-1].lower() in ['mp3', 'wav', 'flac', 'ogg', 'wma', 'aac', 'ac3', 'aiff']:
- if ext is not None:
- path[-1] = ext
- if suffix is not None: path[len(path)-2]+=suffix
- return ''.join(path)
- else: path = ''.join(path)
- if filename is not None:
- filename = filename.replace('\\','/').split('/')[-1]
- if '.' in filename:
- filename = filename.split('.')
- if filename[-1].lower() in ['mp3', 'wav', 'flac', 'ogg', 'wma', 'aac', 'ac3', 'aiff']:
- if ext is not None:
- filename[-1] = ext
- if suffix is not None: filename.insert(len(filename)-1, suffix)
- else: filename += [ext]
- filename = ''.join(filename)
- return f'{path}/{filename}' if path != '' else filename
- return f'{(path + "/") * (path != "")}{filename}{suffix if suffix is not None else ""}.{ext if ext is not None else "mp3"}'
- else: return f'{path}/output.mp3'
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/pascal_voc.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/pascal_voc.py
deleted file mode 100644
index dbbf82cb96442bfa0cf05ed0f4dddf3645434b7e..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/pascal_voc.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import numpy as np
-import os
-import xml.etree.ElementTree as ET
-from typing import List, Tuple, Union
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.structures import BoxMode
-from detectron2.utils.file_io import PathManager
-
-__all__ = ["load_voc_instances", "register_pascal_voc"]
-
-
-# fmt: off
-CLASS_NAMES = (
- "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
- "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
- "pottedplant", "sheep", "sofa", "train", "tvmonitor"
-)
-# fmt: on
-
-
-def load_voc_instances(dirname: str, split: str, class_names: Union[List[str], Tuple[str, ...]]):
- """
- Load Pascal VOC detection annotations to Detectron2 format.
-
- Args:
- dirname: Contain "Annotations", "ImageSets", "JPEGImages"
- split (str): one of "train", "test", "val", "trainval"
- class_names: list or tuple of class names
- """
- with PathManager.open(os.path.join(dirname, "ImageSets", "Main", split + ".txt")) as f:
-        fileids = np.loadtxt(f, dtype=str)  # np.str was removed from recent NumPy; the builtin str is equivalent
-
-    # Needs to read many small annotation files; it makes sense to keep them on local storage
- annotation_dirname = PathManager.get_local_path(os.path.join(dirname, "Annotations/"))
- dicts = []
- for fileid in fileids:
- anno_file = os.path.join(annotation_dirname, fileid + ".xml")
- jpeg_file = os.path.join(dirname, "JPEGImages", fileid + ".jpg")
-
- with PathManager.open(anno_file) as f:
- tree = ET.parse(f)
-
- r = {
- "file_name": jpeg_file,
- "image_id": fileid,
- "height": int(tree.findall("./size/height")[0].text),
- "width": int(tree.findall("./size/width")[0].text),
- }
- instances = []
-
- for obj in tree.findall("object"):
- cls = obj.find("name").text
- # We include "difficult" samples in training.
- # Based on limited experiments, they don't hurt accuracy.
- # difficult = int(obj.find("difficult").text)
- # if difficult == 1:
- # continue
- bbox = obj.find("bndbox")
- bbox = [float(bbox.find(x).text) for x in ["xmin", "ymin", "xmax", "ymax"]]
- # Original annotations are integers in the range [1, W or H]
- # Assuming they mean 1-based pixel indices (inclusive),
- # a box with annotation (xmin=1, xmax=W) covers the whole image.
- # In coordinate space this is represented by (xmin=0, xmax=W)
- bbox[0] -= 1.0
- bbox[1] -= 1.0
- instances.append(
- {"category_id": class_names.index(cls), "bbox": bbox, "bbox_mode": BoxMode.XYXY_ABS}
- )
- r["annotations"] = instances
- dicts.append(r)
- return dicts
-
-
-def register_pascal_voc(name, dirname, split, year, class_names=CLASS_NAMES):
- DatasetCatalog.register(name, lambda: load_voc_instances(dirname, split, class_names))
- MetadataCatalog.get(name).set(
- thing_classes=list(class_names), dirname=dirname, year=year, split=split
- )
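A short usage sketch of the loader above, assuming a VOC-style directory at a hypothetical `datasets/VOC2007` containing `Annotations`, `ImageSets` and `JPEGImages`, and with `register_pascal_voc` imported from this module:

from detectron2.data import DatasetCatalog

register_pascal_voc("my_voc_2007_trainval", "datasets/VOC2007", "trainval", 2007)
dicts = DatasetCatalog.get("my_voc_2007_trainval")   # materializes the detectron2-format dicts
print(len(dicts), dicts[0]["file_name"], len(dicts[0]["annotations"]))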
diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/downloads.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/downloads.py
deleted file mode 100644
index ebe5bd36e8ff87c85252eaa38bc9125f3c8c1e2b..0000000000000000000000000000000000000000
--- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/downloads.py
+++ /dev/null
@@ -1,178 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Download utils
-"""
-
-import logging
-import os
-import platform
-import subprocess
-import time
-import urllib
-from pathlib import Path
-from zipfile import ZipFile
-
-import requests
-import torch
-
-
-def is_url(url):
- # Check if online file exists
- try:
- r = urllib.request.urlopen(url) # response
- return r.getcode() == 200
- except urllib.request.HTTPError:
- return False
-
-
-def gsutil_getsize(url=''):
- # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
- s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
- return eval(s.split(' ')[0]) if len(s) else 0 # bytes
-
-
-def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):
- # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes
- from utils.general import LOGGER
-
- file = Path(file)
- assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}"
- try: # url1
- LOGGER.info(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, str(file), progress=LOGGER.level <= logging.INFO)
- assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check
- except Exception as e: # url2
- file.unlink(missing_ok=True) # remove partial downloads
- LOGGER.info(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...')
- os.system(f"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail
- finally:
- if not file.exists() or file.stat().st_size < min_bytes: # check
- file.unlink(missing_ok=True) # remove partial downloads
- LOGGER.info(f"ERROR: {assert_msg}\n{error_msg}")
- LOGGER.info('')
-
-
-def attempt_download(file, repo='ultralytics/yolov5', release='v6.1'):
- # Attempt file download from GitHub release assets if not found locally. release = 'latest', 'v6.1', etc.
- from utils.general import LOGGER
-
- def github_assets(repository, version='latest'):
- # Return GitHub repo tag (i.e. 'v6.1') and assets (i.e. ['yolov5s.pt', 'yolov5m.pt', ...])
- if version != 'latest':
- version = f'tags/{version}' # i.e. tags/v6.1
- response = requests.get(f'https://api.github.com/repos/{repository}/releases/{version}').json() # github api
- return response['tag_name'], [x['name'] for x in response['assets']] # tag, assets
-
- file = Path(str(file).strip().replace("'", ''))
- if not file.exists():
- # URL specified
- name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc.
- if str(file).startswith(('http:/', 'https:/')): # download
- url = str(file).replace(':/', '://') # Pathlib turns :// -> :/
- file = name.split('?')[0] # parse authentication https://url.com/file.txt?auth...
- if Path(file).is_file():
- LOGGER.info(f'Found {url} locally at {file}') # file already exists
- else:
- safe_download(file=file, url=url, min_bytes=1E5)
- return file
-
- # GitHub assets
- assets = [
- 'yolov5n.pt', 'yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt', 'yolov5n6.pt', 'yolov5s6.pt',
- 'yolov5m6.pt', 'yolov5l6.pt', 'yolov5x6.pt']
- try:
- tag, assets = github_assets(repo, release)
- except Exception:
- try:
- tag, assets = github_assets(repo) # latest release
- except Exception:
- try:
- tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1]
- except Exception:
- tag = release
-
- file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required)
- if name in assets:
- url3 = 'https://drive.google.com/drive/folders/1EFQTEUeXWSFww0luse2jB9M1QNZQGwNl' # backup gdrive mirror
- safe_download(
- file,
- url=f'https://github.com/{repo}/releases/download/{tag}/{name}',
- url2=f'https://storage.googleapis.com/{repo}/{tag}/{name}', # backup url (optional)
- min_bytes=1E5,
- error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/{tag} or {url3}')
-
- return str(file)
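Typical use of `attempt_download` above: pass a known weights filename and, if it is not present locally, it is fetched from the matching GitHub release assets (a sketch assuming network access):

# fetches yolov5s.pt from the ultralytics/yolov5 v6.1 release assets if it is missing locally
weights_path = attempt_download("yolov5s.pt")
print(weights_path)   # 'yolov5s.pt'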
-
-
-def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'):
- # Downloads a file from Google Drive. from yolov5.utils.downloads import *; gdrive_download()
- t = time.time()
- file = Path(file)
- cookie = Path('cookie') # gdrive cookie
- print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')
- file.unlink(missing_ok=True) # remove existing file
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Attempt file download
- out = "NUL" if platform.system() == "Windows" else "/dev/null"
- os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}')
- if os.path.exists('cookie'): # large file
- s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}'
- else: # small file
- s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"'
- r = os.system(s) # execute, capture return
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Error check
- if r != 0:
- file.unlink(missing_ok=True) # remove partial
- print('Download error ') # raise Exception('Download error')
- return r
-
- # Unzip if archive
- if file.suffix == '.zip':
- print('unzipping... ', end='')
- ZipFile(file).extractall(path=file.parent) # unzip
- file.unlink() # remove zip
-
- print(f'Done ({time.time() - t:.1f}s)')
- return r
-
-
-def get_token(cookie="./cookie"):
- with open(cookie) as f:
- for line in f:
- if "download" in line:
- return line.split()[-1]
- return ""
-
-
-# Google utils: https://cloud.google.com/storage/docs/reference/libraries ----------------------------------------------
-#
-#
-# def upload_blob(bucket_name, source_file_name, destination_blob_name):
-# # Uploads a file to a bucket
-# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
-#
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(destination_blob_name)
-#
-# blob.upload_from_filename(source_file_name)
-#
-# print('File {} uploaded to {}.'.format(
-# source_file_name,
-# destination_blob_name))
-#
-#
-# def download_blob(bucket_name, source_blob_name, destination_file_name):
-# # Uploads a blob from a bucket
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(source_blob_name)
-#
-# blob.download_to_filename(destination_file_name)
-#
-# print('Blob {} downloaded to {}.'.format(
-# source_blob_name,
-# destination_file_name))
diff --git a/spaces/camel-ai/camel-agents/sync.sh b/spaces/camel-ai/camel-agents/sync.sh
deleted file mode 100644
index 2cb17c7dd53ee1e3ee5bc59f5ee2836fcf880d3f..0000000000000000000000000000000000000000
--- a/spaces/camel-ai/camel-agents/sync.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-TMP_DIR=/tmp/camel_hf_tmp
-echo $TMP_DIR
-HF_REPO_DIR=`realpath .`
-echo $HF_REPO_DIR
-
-mkdir -p $TMP_DIR
-git clone -b hf_spaces_3 https://github.com/lightaime/camel.git $TMP_DIR
-cd $TMP_DIR
-
-find apps/agents -name "*.py" | grep -v test | xargs -n 1 -I {} rsync -R {} $HF_REPO_DIR
-find apps/common -name "*.py" | grep -v test | xargs -n 1 -I {} rsync -R {} $HF_REPO_DIR
-find data -name "*.txt" | grep -v test | xargs -n 1 -I {} rsync -R {} $HF_REPO_DIR
-
-rm -rf $TMP_DIR
diff --git a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/train_val_divide.py b/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/train_val_divide.py
deleted file mode 100644
index 183abb58d6054a98dd92fcffa107ea0571502130..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/train_val_divide.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import os
-import numpy as np
-filename = 'E:/uma_voice/output.txt'
-split ='|'
-with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
-
-train_filename = filename.split('.')[0] + '_train' + '.txt'
-val_filename = filename.split('.')[0] + '_val' + '.txt'
-
-train_split_ratio = 0.99
-train_f = open(train_filename, 'w', encoding='utf-8')
-val_f = open(val_filename, 'w', encoding='utf-8')
-for i in range(len(filepaths_and_text)):
- if np.random.rand() < train_split_ratio:
- train_f.writelines('|'.join(filepaths_and_text[i]) + '\n')
- else:
- val_f.writelines('|'.join(filepaths_and_text[i]) + '\n')
\ No newline at end of file
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Jpeg2KImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Jpeg2KImagePlugin.py
deleted file mode 100644
index 9309768bacffcf071dcc3db764285db911d38323..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Jpeg2KImagePlugin.py
+++ /dev/null
@@ -1,399 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# JPEG2000 file handling
-#
-# History:
-# 2014-03-12 ajh Created
-# 2021-06-30 rogermb Extract dpi information from the 'resc' header box
-#
-# Copyright (c) 2014 Coriolis Systems Limited
-# Copyright (c) 2014 Alastair Houghton
-#
-# See the README file for information on usage and redistribution.
-#
-import io
-import os
-import struct
-
-from . import Image, ImageFile, _binary
-
-
-class BoxReader:
- """
- A small helper class to read fields stored in JPEG2000 header boxes
- and to easily step into and read sub-boxes.
- """
-
- def __init__(self, fp, length=-1):
- self.fp = fp
- self.has_length = length >= 0
- self.length = length
- self.remaining_in_box = -1
-
- def _can_read(self, num_bytes):
- if self.has_length and self.fp.tell() + num_bytes > self.length:
- # Outside box: ensure we don't read past the known file length
- return False
- if self.remaining_in_box >= 0:
- # Inside box contents: ensure read does not go past box boundaries
- return num_bytes <= self.remaining_in_box
- else:
- return True # No length known, just read
-
- def _read_bytes(self, num_bytes):
- if not self._can_read(num_bytes):
- msg = "Not enough data in header"
- raise SyntaxError(msg)
-
- data = self.fp.read(num_bytes)
- if len(data) < num_bytes:
- msg = f"Expected to read {num_bytes} bytes but only got {len(data)}."
- raise OSError(msg)
-
- if self.remaining_in_box > 0:
- self.remaining_in_box -= num_bytes
- return data
-
- def read_fields(self, field_format):
- size = struct.calcsize(field_format)
- data = self._read_bytes(size)
- return struct.unpack(field_format, data)
-
- def read_boxes(self):
- size = self.remaining_in_box
- data = self._read_bytes(size)
- return BoxReader(io.BytesIO(data), size)
-
- def has_next_box(self):
- if self.has_length:
- return self.fp.tell() + self.remaining_in_box < self.length
- else:
- return True
-
- def next_box_type(self):
- # Skip the rest of the box if it has not been read
- if self.remaining_in_box > 0:
- self.fp.seek(self.remaining_in_box, os.SEEK_CUR)
- self.remaining_in_box = -1
-
- # Read the length and type of the next box
- lbox, tbox = self.read_fields(">I4s")
- if lbox == 1:
- lbox = self.read_fields(">Q")[0]
- hlen = 16
- else:
- hlen = 8
-
- if lbox < hlen or not self._can_read(lbox - hlen):
- msg = "Invalid header length"
- raise SyntaxError(msg)
-
- self.remaining_in_box = lbox - hlen
- return tbox
-
-
-def _parse_codestream(fp):
- """Parse the JPEG 2000 codestream to extract the size and component
- count from the SIZ marker segment, returning a PIL (size, mode) tuple."""
-
- hdr = fp.read(2)
- lsiz = _binary.i16be(hdr)
- siz = hdr + fp.read(lsiz - 2)
- lsiz, rsiz, xsiz, ysiz, xosiz, yosiz, _, _, _, _, csiz = struct.unpack_from(
- ">HHIIIIIIIIH", siz
- )
- ssiz = [None] * csiz
- xrsiz = [None] * csiz
- yrsiz = [None] * csiz
- for i in range(csiz):
- ssiz[i], xrsiz[i], yrsiz[i] = struct.unpack_from(">BBB", siz, 36 + 3 * i)
-
- size = (xsiz - xosiz, ysiz - yosiz)
- if csiz == 1:
- if (yrsiz[0] & 0x7F) > 8:
- mode = "I;16"
- else:
- mode = "L"
- elif csiz == 2:
- mode = "LA"
- elif csiz == 3:
- mode = "RGB"
- elif csiz == 4:
- mode = "RGBA"
- else:
- mode = None
-
- return size, mode
-
-
-def _res_to_dpi(num, denom, exp):
- """Convert JPEG2000's (numerator, denominator, exponent-base-10) resolution,
- calculated as (num / denom) * 10^exp and stored in dots per meter,
- to floating-point dots per inch."""
- if denom != 0:
- return (254 * num * (10**exp)) / (10000 * denom)
-
-
-def _parse_jp2_header(fp):
- """Parse the JP2 header box to extract size, component count,
- color space information, and optionally DPI information,
- returning a (size, mode, mimetype, dpi) tuple."""
-
- # Find the JP2 header box
- reader = BoxReader(fp)
- header = None
- mimetype = None
- while reader.has_next_box():
- tbox = reader.next_box_type()
-
- if tbox == b"jp2h":
- header = reader.read_boxes()
- break
- elif tbox == b"ftyp":
- if reader.read_fields(">4s")[0] == b"jpx ":
- mimetype = "image/jpx"
-
- size = None
- mode = None
- bpc = None
- nc = None
- dpi = None # 2-tuple of DPI info, or None
-
- while header.has_next_box():
- tbox = header.next_box_type()
-
- if tbox == b"ihdr":
- height, width, nc, bpc = header.read_fields(">IIHB")
- size = (width, height)
- if nc == 1 and (bpc & 0x7F) > 8:
- mode = "I;16"
- elif nc == 1:
- mode = "L"
- elif nc == 2:
- mode = "LA"
- elif nc == 3:
- mode = "RGB"
- elif nc == 4:
- mode = "RGBA"
- elif tbox == b"res ":
- res = header.read_boxes()
- while res.has_next_box():
- tres = res.next_box_type()
- if tres == b"resc":
- vrcn, vrcd, hrcn, hrcd, vrce, hrce = res.read_fields(">HHHHBB")
- hres = _res_to_dpi(hrcn, hrcd, hrce)
- vres = _res_to_dpi(vrcn, vrcd, vrce)
- if hres is not None and vres is not None:
- dpi = (hres, vres)
- break
-
- if size is None or mode is None:
- msg = "Malformed JP2 header"
- raise SyntaxError(msg)
-
- return size, mode, mimetype, dpi
-
-
-##
-# Image plugin for JPEG2000 images.
-
-
-class Jpeg2KImageFile(ImageFile.ImageFile):
- format = "JPEG2000"
- format_description = "JPEG 2000 (ISO 15444)"
-
- def _open(self):
- sig = self.fp.read(4)
- if sig == b"\xff\x4f\xff\x51":
- self.codec = "j2k"
- self._size, self.mode = _parse_codestream(self.fp)
- else:
- sig = sig + self.fp.read(8)
-
- if sig == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a":
- self.codec = "jp2"
- header = _parse_jp2_header(self.fp)
- self._size, self.mode, self.custom_mimetype, dpi = header
- if dpi is not None:
- self.info["dpi"] = dpi
- if self.fp.read(12).endswith(b"jp2c\xff\x4f\xff\x51"):
- self._parse_comment()
- else:
- msg = "not a JPEG 2000 file"
- raise SyntaxError(msg)
-
- if self.size is None or self.mode is None:
- msg = "unable to determine size/mode"
- raise SyntaxError(msg)
-
- self._reduce = 0
- self.layers = 0
-
- fd = -1
- length = -1
-
- try:
- fd = self.fp.fileno()
- length = os.fstat(fd).st_size
- except Exception:
- fd = -1
- try:
- pos = self.fp.tell()
- self.fp.seek(0, io.SEEK_END)
- length = self.fp.tell()
- self.fp.seek(pos)
- except Exception:
- length = -1
-
- self.tile = [
- (
- "jpeg2k",
- (0, 0) + self.size,
- 0,
- (self.codec, self._reduce, self.layers, fd, length),
- )
- ]
-
- def _parse_comment(self):
- hdr = self.fp.read(2)
- length = _binary.i16be(hdr)
- self.fp.seek(length - 2, os.SEEK_CUR)
-
- while True:
- marker = self.fp.read(2)
- if not marker:
- break
- typ = marker[1]
- if typ in (0x90, 0xD9):
- # Start of tile or end of codestream
- break
- hdr = self.fp.read(2)
- length = _binary.i16be(hdr)
- if typ == 0x64:
- # Comment
- self.info["comment"] = self.fp.read(length - 2)[2:]
- break
- else:
- self.fp.seek(length - 2, os.SEEK_CUR)
-
- @property
- def reduce(self):
- # https://github.com/python-pillow/Pillow/issues/4343 found that the
- # new Image 'reduce' method was shadowed by this plugin's 'reduce'
- # property. This attempts to allow for both scenarios
- return self._reduce or super().reduce
-
- @reduce.setter
- def reduce(self, value):
- self._reduce = value
-
- def load(self):
- if self.tile and self._reduce:
- power = 1 << self._reduce
- adjust = power >> 1
- self._size = (
- int((self.size[0] + adjust) / power),
- int((self.size[1] + adjust) / power),
- )
-
- # Update the reduce and layers settings
- t = self.tile[0]
- t3 = (t[3][0], self._reduce, self.layers, t[3][3], t[3][4])
- self.tile = [(t[0], (0, 0) + self.size, t[2], t3)]
-
- return ImageFile.ImageFile.load(self)
-
-
-def _accept(prefix):
- return (
- prefix[:4] == b"\xff\x4f\xff\x51"
- or prefix[:12] == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a"
- )
-
-
-# ------------------------------------------------------------
-# Save support
-
-
-def _save(im, fp, filename):
- # Get the keyword arguments
- info = im.encoderinfo
-
- if filename.endswith(".j2k") or info.get("no_jp2", False):
- kind = "j2k"
- else:
- kind = "jp2"
-
- offset = info.get("offset", None)
- tile_offset = info.get("tile_offset", None)
- tile_size = info.get("tile_size", None)
- quality_mode = info.get("quality_mode", "rates")
- quality_layers = info.get("quality_layers", None)
- if quality_layers is not None and not (
- isinstance(quality_layers, (list, tuple))
- and all(
- [
- isinstance(quality_layer, (int, float))
- for quality_layer in quality_layers
- ]
- )
- ):
- msg = "quality_layers must be a sequence of numbers"
- raise ValueError(msg)
-
- num_resolutions = info.get("num_resolutions", 0)
- cblk_size = info.get("codeblock_size", None)
- precinct_size = info.get("precinct_size", None)
- irreversible = info.get("irreversible", False)
- progression = info.get("progression", "LRCP")
- cinema_mode = info.get("cinema_mode", "no")
- mct = info.get("mct", 0)
- signed = info.get("signed", False)
- comment = info.get("comment")
- if isinstance(comment, str):
- comment = comment.encode()
- plt = info.get("plt", False)
-
- fd = -1
- if hasattr(fp, "fileno"):
- try:
- fd = fp.fileno()
- except Exception:
- fd = -1
-
- im.encoderconfig = (
- offset,
- tile_offset,
- tile_size,
- quality_mode,
- quality_layers,
- num_resolutions,
- cblk_size,
- precinct_size,
- irreversible,
- progression,
- cinema_mode,
- mct,
- signed,
- fd,
- comment,
- plt,
- )
-
- ImageFile._save(im, fp, [("jpeg2k", (0, 0) + im.size, 0, kind)])
-
-
-# ------------------------------------------------------------
-# Registry stuff
-
-
-Image.register_open(Jpeg2KImageFile.format, Jpeg2KImageFile, _accept)
-Image.register_save(Jpeg2KImageFile.format, _save)
-
-Image.register_extensions(
- Jpeg2KImageFile.format, [".jp2", ".j2k", ".jpc", ".jpf", ".jpx", ".j2c"]
-)
-
-Image.register_mime(Jpeg2KImageFile.format, "image/jp2")
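
The plugin above registers JPEG 2000 support with Pillow and, for JP2 files, derives `info["dpi"]` from the `resc` resolution box: the box stores (num / denom) * 10^exp dots per metre, which `_res_to_dpi` converts to inches via * 254 / 10000. A hedged usage sketch, assuming a Pillow build with OpenJPEG support and a placeholder file name:

```python
# Usage sketch only: open a JPEG 2000 file through Pillow and read the fields
# this plugin populates. "example.jp2" is a placeholder path.
from PIL import Image

with Image.open("example.jp2") as im:
    print(im.format)            # "JPEG2000"
    print(im.size, im.mode)     # parsed from the ihdr box
    print(im.info.get("dpi"))   # (hres, vres) from the resc box, or absent

# Worked example of the resc conversion above: num=300, denom=254, exp=4 encodes
# (300 / 254) * 10**4 dots per metre, and 254 * 300 * 10**4 / (10000 * 254) = 300.0 dpi.
```
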
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/tracking/bbox_iou_tracker.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/tracking/bbox_iou_tracker.py
deleted file mode 100644
index 598081cb542ce64dd1d100c0d3e12a59f57b8e0e..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/tracking/bbox_iou_tracker.py
+++ /dev/null
@@ -1,276 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2004-present Facebook. All Rights Reserved.
-import copy
-import numpy as np
-from typing import List
-import torch
-
-from detectron2.config import configurable
-from detectron2.structures import Boxes, Instances
-from detectron2.structures.boxes import pairwise_iou
-
-from ..config.config import CfgNode as CfgNode_
-from .base_tracker import TRACKER_HEADS_REGISTRY, BaseTracker
-
-
-@TRACKER_HEADS_REGISTRY.register()
-class BBoxIOUTracker(BaseTracker):
- """
- A bounding box tracker to assign ID based on IoU between current and previous instances
- """
-
- @configurable
- def __init__(
- self,
- *,
- video_height: int,
- video_width: int,
- max_num_instances: int = 200,
- max_lost_frame_count: int = 0,
- min_box_rel_dim: float = 0.02,
- min_instance_period: int = 1,
- track_iou_threshold: float = 0.5,
- **kwargs,
- ):
- """
- Args:
- video_height: height of the video frame
- video_width: width of the video frame
- max_num_instances: maximum number of IDs allowed to be tracked
- max_lost_frame_count: maximum number of frames an ID can remain lost;
- once this number is exceeded, the ID is considered lost forever
- min_box_rel_dim: a fraction of the frame size; a bbox with a smaller
- relative dimension is removed from tracking
- min_instance_period: an instance is only shown after it has been tracked
- for this number of frames since it first appears in the video
- track_iou_threshold: IoU threshold; a bbox pair below this value is removed
- from tracking
- """
- super().__init__(**kwargs)
- self._video_height = video_height
- self._video_width = video_width
- self._max_num_instances = max_num_instances
- self._max_lost_frame_count = max_lost_frame_count
- self._min_box_rel_dim = min_box_rel_dim
- self._min_instance_period = min_instance_period
- self._track_iou_threshold = track_iou_threshold
-
- @classmethod
- def from_config(cls, cfg: CfgNode_):
- """
- Old style initialization using CfgNode
-
- Args:
- cfg: D2 CfgNode, config file
- Return:
- dictionary storing arguments for __init__ method
- """
- assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS
- assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS
- video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT")
- video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH")
- max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200)
- max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0)
- min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02)
- min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1)
- track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5)
- return {
- "_target_": "detectron2.tracking.bbox_iou_tracker.BBoxIOUTracker",
- "video_height": video_height,
- "video_width": video_width,
- "max_num_instances": max_num_instances,
- "max_lost_frame_count": max_lost_frame_count,
- "min_box_rel_dim": min_box_rel_dim,
- "min_instance_period": min_instance_period,
- "track_iou_threshold": track_iou_threshold,
- }
-
- def update(self, instances: Instances) -> Instances:
- """
- See BaseTracker description
- """
- instances = self._initialize_extra_fields(instances)
- if self._prev_instances is not None:
- # calculate IoU of all bbox pairs
- iou_all = pairwise_iou(
- boxes1=instances.pred_boxes,
- boxes2=self._prev_instances.pred_boxes,
- )
- # sort IoU in descending order
- bbox_pairs = self._create_prediction_pairs(instances, iou_all)
- # assign previous ID to current bbox if IoU > track_iou_threshold
- self._reset_fields()
- for bbox_pair in bbox_pairs:
- idx = bbox_pair["idx"]
- prev_id = bbox_pair["prev_id"]
- if (
- idx in self._matched_idx
- or prev_id in self._matched_ID
- or bbox_pair["IoU"] < self._track_iou_threshold
- ):
- continue
- instances.ID[idx] = prev_id
- instances.ID_period[idx] = bbox_pair["prev_period"] + 1
- instances.lost_frame_count[idx] = 0
- self._matched_idx.add(idx)
- self._matched_ID.add(prev_id)
- self._untracked_prev_idx.remove(bbox_pair["prev_idx"])
- instances = self._assign_new_id(instances)
- instances = self._merge_untracked_instances(instances)
- self._prev_instances = copy.deepcopy(instances)
- return instances
-
- def _create_prediction_pairs(self, instances: Instances, iou_all: np.ndarray) -> List:
- """
- For all instances in previous and current frames, create pairs. For each
- pair, store the index of the instance in the current frame's predictions, its index
- in the previous predictions, its ID in the previous predictions, the IoU of the
- bboxes in this pair, and its period in the previous predictions.
-
- Args:
- instances: D2 Instances, for predictions of the current frame
- iou_all: IoU for all bboxes pairs
- Return:
- A list of dicts, one per (current, previous) pair, with the indices, previous ID, IoU, and previous period
- """
- bbox_pairs = []
- for i in range(len(instances)):
- for j in range(len(self._prev_instances)):
- bbox_pairs.append(
- {
- "idx": i,
- "prev_idx": j,
- "prev_id": self._prev_instances.ID[j],
- "IoU": iou_all[i, j],
- "prev_period": self._prev_instances.ID_period[j],
- }
- )
- return bbox_pairs
-
- def _initialize_extra_fields(self, instances: Instances) -> Instances:
- """
- If input instances don't have ID, ID_period, lost_frame_count fields,
- this method is used to initialize these fields.
-
- Args:
- instances: D2 Instances, for predictions of the current frame
- Return:
- D2 Instances with extra fields added
- """
- if not instances.has("ID"):
- instances.set("ID", [None] * len(instances))
- if not instances.has("ID_period"):
- instances.set("ID_period", [None] * len(instances))
- if not instances.has("lost_frame_count"):
- instances.set("lost_frame_count", [None] * len(instances))
- if self._prev_instances is None:
- instances.ID = list(range(len(instances)))
- self._id_count += len(instances)
- instances.ID_period = [1] * len(instances)
- instances.lost_frame_count = [0] * len(instances)
- return instances
-
- def _reset_fields(self):
- """
- Before each update call, reset the matching bookkeeping fields first
- """
- self._matched_idx = set()
- self._matched_ID = set()
- self._untracked_prev_idx = set(range(len(self._prev_instances)))
-
- def _assign_new_id(self, instances: Instances) -> Instances:
- """
- For each untracked instance, assign a new id
-
- Args:
- instances: D2 Instances, for predictions of the current frame
- Return:
- D2 Instances with new ID assigned
- """
- untracked_idx = set(range(len(instances))).difference(self._matched_idx)
- for idx in untracked_idx:
- instances.ID[idx] = self._id_count
- self._id_count += 1
- instances.ID_period[idx] = 1
- instances.lost_frame_count[idx] = 0
- return instances
-
- def _merge_untracked_instances(self, instances: Instances) -> Instances:
- """
- For untracked previous instances that still meet certain conditions, keep them
- in tracking and merge them with the current instances.
-
- Args:
- instances: D2 Instances, for predictions of the current frame
- Return:
- D2 Instances merging current instances and instances from previous
- frame decided to keep tracking
- """
- untracked_instances = Instances(
- image_size=instances.image_size,
- pred_boxes=[],
- pred_classes=[],
- scores=[],
- ID=[],
- ID_period=[],
- lost_frame_count=[],
- )
- prev_bboxes = list(self._prev_instances.pred_boxes)
- prev_classes = list(self._prev_instances.pred_classes)
- prev_scores = list(self._prev_instances.scores)
- prev_ID_period = self._prev_instances.ID_period
- if instances.has("pred_masks"):
- untracked_instances.set("pred_masks", [])
- prev_masks = list(self._prev_instances.pred_masks)
- if instances.has("pred_keypoints"):
- untracked_instances.set("pred_keypoints", [])
- prev_keypoints = list(self._prev_instances.pred_keypoints)
- if instances.has("pred_keypoint_heatmaps"):
- untracked_instances.set("pred_keypoint_heatmaps", [])
- prev_keypoint_heatmaps = list(self._prev_instances.pred_keypoint_heatmaps)
- for idx in self._untracked_prev_idx:
- x_left, y_top, x_right, y_bot = prev_bboxes[idx]
- if (
- (1.0 * (x_right - x_left) / self._video_width < self._min_box_rel_dim)
- or (1.0 * (y_bot - y_top) / self._video_height < self._min_box_rel_dim)
- or self._prev_instances.lost_frame_count[idx] >= self._max_lost_frame_count
- or prev_ID_period[idx] <= self._min_instance_period
- ):
- continue
- untracked_instances.pred_boxes.append(list(prev_bboxes[idx].numpy()))
- untracked_instances.pred_classes.append(int(prev_classes[idx]))
- untracked_instances.scores.append(float(prev_scores[idx]))
- untracked_instances.ID.append(self._prev_instances.ID[idx])
- untracked_instances.ID_period.append(self._prev_instances.ID_period[idx])
- untracked_instances.lost_frame_count.append(
- self._prev_instances.lost_frame_count[idx] + 1
- )
- if instances.has("pred_masks"):
- untracked_instances.pred_masks.append(prev_masks[idx].numpy().astype(np.uint8))
- if instances.has("pred_keypoints"):
- untracked_instances.pred_keypoints.append(
- prev_keypoints[idx].numpy().astype(np.uint8)
- )
- if instances.has("pred_keypoint_heatmaps"):
- untracked_instances.pred_keypoint_heatmaps.append(
- prev_keypoint_heatmaps[idx].numpy().astype(np.float32)
- )
- untracked_instances.pred_boxes = Boxes(torch.FloatTensor(untracked_instances.pred_boxes))
- untracked_instances.pred_classes = torch.IntTensor(untracked_instances.pred_classes)
- untracked_instances.scores = torch.FloatTensor(untracked_instances.scores)
- if instances.has("pred_masks"):
- untracked_instances.pred_masks = torch.IntTensor(untracked_instances.pred_masks)
- if instances.has("pred_keypoints"):
- untracked_instances.pred_keypoints = torch.IntTensor(untracked_instances.pred_keypoints)
- if instances.has("pred_keypoint_heatmaps"):
- untracked_instances.pred_keypoint_heatmaps = torch.FloatTensor(
- untracked_instances.pred_keypoint_heatmaps
- )
-
- return Instances.cat(
- [
- instances,
- untracked_instances,
- ]
- )
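
The tracker above reuses a previous frame's ID for a current detection when the pair's IoU clears `track_iou_threshold` and neither side has already been matched; anything left unmatched gets a fresh ID. A minimal, framework-free sketch of that greedy matching idea (pure NumPy; the helper names are illustrative, not detectron2 API):

```python
# Minimal sketch of greedy IoU-based ID assignment, independent of detectron2.
# Boxes are (x1, y1, x2, y2) rows of an (N, 4) array.
import numpy as np

def pairwise_iou(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """IoU between every box in a (N,4) and every box in b (M,4)."""
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def match_ids(curr, prev, prev_ids, iou_threshold=0.5, next_id=0):
    """Greedily reuse previous IDs for current boxes whose IoU clears the threshold."""
    iou = pairwise_iou(curr, prev)
    ids = [None] * len(curr)
    used_prev = set()
    # visit pairs from highest IoU to lowest, never reusing a matched side
    for i, j in sorted(np.ndindex(iou.shape), key=lambda p: -iou[p]):
        if ids[i] is None and j not in used_prev and iou[i, j] >= iou_threshold:
            ids[i] = prev_ids[j]
            used_prev.add(j)
    for i in range(len(curr)):      # unmatched detections get fresh IDs
        if ids[i] is None:
            ids[i], next_id = next_id, next_id + 1
    return ids, next_id
```
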
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_anchor_generator.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_anchor_generator.py
deleted file mode 100644
index 13a808e587382216da6fe7ee957603f448172657..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_anchor_generator.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import unittest
-import torch
-
-from detectron2.config import get_cfg
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.anchor_generator import DefaultAnchorGenerator, RotatedAnchorGenerator
-
-logger = logging.getLogger(__name__)
-
-
-class TestAnchorGenerator(unittest.TestCase):
- def test_default_anchor_generator(self):
- cfg = get_cfg()
- cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]]
- cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]]
-
- anchor_generator = DefaultAnchorGenerator(cfg, [ShapeSpec(stride=4)])
-
- # only the last two dimensions of features matter here
- num_images = 2
- features = {"stage3": torch.rand(num_images, 96, 1, 2)}
- anchors = anchor_generator([features["stage3"]])
- expected_anchor_tensor = torch.tensor(
- [
- [-32.0, -8.0, 32.0, 8.0],
- [-16.0, -16.0, 16.0, 16.0],
- [-8.0, -32.0, 8.0, 32.0],
- [-64.0, -16.0, 64.0, 16.0],
- [-32.0, -32.0, 32.0, 32.0],
- [-16.0, -64.0, 16.0, 64.0],
- [-28.0, -8.0, 36.0, 8.0], # -28.0 == -32.0 + STRIDE (4)
- [-12.0, -16.0, 20.0, 16.0],
- [-4.0, -32.0, 12.0, 32.0],
- [-60.0, -16.0, 68.0, 16.0],
- [-28.0, -32.0, 36.0, 32.0],
- [-12.0, -64.0, 20.0, 64.0],
- ]
- )
-
- self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor))
-
- def test_default_anchor_generator_centered(self):
- # test explicit args
- anchor_generator = DefaultAnchorGenerator(
- sizes=[32, 64], aspect_ratios=[0.25, 1, 4], strides=[4]
- )
-
- # only the last two dimensions of features matter here
- num_images = 2
- features = {"stage3": torch.rand(num_images, 96, 1, 2)}
- expected_anchor_tensor = torch.tensor(
- [
- [-30.0, -6.0, 34.0, 10.0],
- [-14.0, -14.0, 18.0, 18.0],
- [-6.0, -30.0, 10.0, 34.0],
- [-62.0, -14.0, 66.0, 18.0],
- [-30.0, -30.0, 34.0, 34.0],
- [-14.0, -62.0, 18.0, 66.0],
- [-26.0, -6.0, 38.0, 10.0],
- [-10.0, -14.0, 22.0, 18.0],
- [-2.0, -30.0, 14.0, 34.0],
- [-58.0, -14.0, 70.0, 18.0],
- [-26.0, -30.0, 38.0, 34.0],
- [-10.0, -62.0, 22.0, 66.0],
- ]
- )
-
- anchors = anchor_generator([features["stage3"]])
- self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor))
-
- anchors = torch.jit.script(anchor_generator)([features["stage3"]])
- self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor))
-
- def test_rrpn_anchor_generator(self):
- cfg = get_cfg()
- cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]]
- cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]]
- cfg.MODEL.ANCHOR_GENERATOR.ANGLES = [0, 45] # test single list[float]
- anchor_generator = RotatedAnchorGenerator(cfg, [ShapeSpec(stride=4)])
-
- # only the last two dimensions of features matter here
- num_images = 2
- features = {"stage3": torch.rand(num_images, 96, 1, 2)}
- anchors = anchor_generator([features["stage3"]])
- expected_anchor_tensor = torch.tensor(
- [
- [0.0, 0.0, 64.0, 16.0, 0.0],
- [0.0, 0.0, 64.0, 16.0, 45.0],
- [0.0, 0.0, 32.0, 32.0, 0.0],
- [0.0, 0.0, 32.0, 32.0, 45.0],
- [0.0, 0.0, 16.0, 64.0, 0.0],
- [0.0, 0.0, 16.0, 64.0, 45.0],
- [0.0, 0.0, 128.0, 32.0, 0.0],
- [0.0, 0.0, 128.0, 32.0, 45.0],
- [0.0, 0.0, 64.0, 64.0, 0.0],
- [0.0, 0.0, 64.0, 64.0, 45.0],
- [0.0, 0.0, 32.0, 128.0, 0.0],
- [0.0, 0.0, 32.0, 128.0, 45.0],
- [4.0, 0.0, 64.0, 16.0, 0.0], # 4.0 == 0.0 + STRIDE (4)
- [4.0, 0.0, 64.0, 16.0, 45.0],
- [4.0, 0.0, 32.0, 32.0, 0.0],
- [4.0, 0.0, 32.0, 32.0, 45.0],
- [4.0, 0.0, 16.0, 64.0, 0.0],
- [4.0, 0.0, 16.0, 64.0, 45.0],
- [4.0, 0.0, 128.0, 32.0, 0.0],
- [4.0, 0.0, 128.0, 32.0, 45.0],
- [4.0, 0.0, 64.0, 64.0, 0.0],
- [4.0, 0.0, 64.0, 64.0, 45.0],
- [4.0, 0.0, 32.0, 128.0, 0.0],
- [4.0, 0.0, 32.0, 128.0, 45.0],
- ]
- )
-
- self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor))
-
-
-if __name__ == "__main__":
- unittest.main()
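
The expected tensors in the test above follow from the usual cell-anchor construction: for each size s and aspect ratio r, area = s^2, w = sqrt(area / r), h = r * w, with the box centred at the origin (or offset by half a stride, here (2, 2), in the centred variant). A small sketch that reproduces the first expected rows, assuming that formula:

```python
# Sketch: reproduce the first expected anchors from the test above under the
# assumed cell-anchor formula (area = size**2, w = sqrt(area / r), h = r * w).
import math

def cell_anchor(size: float, aspect_ratio: float, cx: float = 0.0, cy: float = 0.0):
    area = size ** 2
    w = math.sqrt(area / aspect_ratio)
    h = aspect_ratio * w
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

print(cell_anchor(32, 0.25))            # (-32.0, -8.0, 32.0, 8.0)  -> first expected row
print(cell_anchor(32, 0.25, 2.0, 2.0))  # (-30.0, -6.0, 34.0, 10.0) -> first row of the centered test
```
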
diff --git a/spaces/chaocai/superbot/maxlang_prompt.py b/spaces/chaocai/superbot/maxlang_prompt.py
deleted file mode 100644
index 43b64f89d2815e1aa44140327b3852002640a373..0000000000000000000000000000000000000000
--- a/spaces/chaocai/superbot/maxlang_prompt.py
+++ /dev/null
@@ -1,3033 +0,0 @@
-prompt_maxlang = """
-
-You can learn MaxLang programming from the following doc. It covers the data types and syntax of MaxLang,
-and when implementing a program you may only use the data types, structs, and statements described in the doc.
-Attention: when writing a program in MaxLang, do not use any statement that is not listed in the following contents (such as the "for" statement).
---- MaxLang Document
-# MaxLang Quick Start
-
-## -- All New Design, All For DevOps
-MaxLang is the dedicated programming language for automating the DevOps tasks, which is implemented and maintained by SpotMax team.
-## Primitive Data Types
-
-### Number
-
-In MaxLang, it is not necessary to distinguish numeric types such as integer and float. MaxLang can recognize and convert values automatically.
-a = 11
-
-b = 100
-
-a + b
-a = 10
-
-b = 1.2
-
-a + b
-### String
-
-A string value is quoted by " or ` .
-a = "Hello World"
-
-a
-a = `Hello World!
-
- I love China!`
-
-a
-a = "Hello"
-
-b = `World`
-
-a+" "+ b
-### Boolean
-
-The boolean value is either "true" or "false".
-a = true;
-
-a
-2 > 3
-### Function
-
-Functions are treated as a basic data type and first-class citizens in MaxLang. If you are familiar with Lisp, you can treat them like functions in Lisp.
-
-
-
-In MaxLang, a function can be used as an argument or as the return value of another function.
-
-add = fn(x,y){x+y}
-
-add(1,1)
-sub = fn(x,y){
-
- return x-y;
-
- }
-
-sub(2,1)
-The following example shows how you can implement the "Command Pattern" (GoF) very easily by leveraging MaxLang's functions.
-cmdPattern = fn(x, y){y(x)}
-
-cmdPattern(10, fn(x){x*10})
-
-cmdPattern(10, fn(x){x/10})
-## Statements
-
-### Conditional statement
-
-The typical conditional statements, "if" and "if-else" are both supported in Maxlang.
-if (true) {
-
- "TRUE"
-
-}else{
-
- "FALSE"
-
-}
-### Loop statement
-For loop structures, currently only the "while" statement is supported. The "break" and "continue" statements are also supported and can be used inside a loop to change the execution flow.
-i = 1;
-
-sum = 0;
-
-while (i < 10) {
-
- sum = sum + i;
-
- i = i + 1
-
-}
-
-sum
-i = 1
-
-sum = 0
-
-while (i<10) {
-
- if (i/2 * 2 != i) { /* 2 + 4 + 6 + 8 */
-
- i = i + 1
-
- continue
-
- }
-
- sum = sum + i
-
- i = i+1
-
-}
-
-sum
-## Collections
-
-### Array
-
-The elements of an array can be objects of any primitive type, even functions, and elements of different types can be mixed in the same array.
-
-
-
-The index of an array starts from 0.
-myArray = [1,2, fn(x,y){x+y}, "Hello ", "World"]
-myArray[2](myArray[0],myArray[1])
-myArray[2](myArray[3],myArray[4])
-#### Built-in functions for array
-##### len ( < array > )
-
-Get the number of the elements in the array.
-arr = [1,2,3,4,5]
-len(arr)
-##### first ( < array > )
-
-Get the first element of the array
-first(arr)
-##### last ( < array > )
-
-Get the last element of the array
-last(arr)
-##### rest (< array >)
-
-Return a new array with the first element removed
-rest(arr)
-The following example traverses an array with the built-in functions.
-nums = [1,2,3,4,5,6,7,8,9,10]
-
-sum = 0
-
-while(len(nums)>0){
-
- sum = sum + first(nums)
-
- nums = rest(nums)
-
-}
-
-sum
-### HashTable
-
-As in mainstream programming languages, a HashTable is a collection of key-value pairs.
-
-
-
-In a key-value pair :
-
-
-
-The key can be any primitive object except a function.
-
-
-
-The value can be a primitive object, a collection, or a hashtable.
-dic = {1:"one","one":1,"inc":fn(x){x+1}, "arr":[1,2,3,5], "table":{2:"two","sub":fn(x){x-1}}}
-dic[1]
-dic["inc"](dic["one"])
-#### Built-in functions for Hashtable
-
-##### keys (< hashtable >)
-
-Get the keys of a hashtable.
-keys(dic)
-##### values (< hashtable >)
-
-Get the values of a hashtable
-values(dic)
-### Built-in functions
-
-#### Conversion
-
-##### ntos (< number >)
-
-Convert a numeric value to the related string value.
-"result = " + ntos(100)
-"result = " + ntos(10.19)
-##### ston (< string >)
-
-Convert a string value to the related numeric value.
-1 + ston("10")
-1 + ston("11.11")
-#### JSON query
-
-Since one simple example is more expressive than a thousand words, I'd like to use several simple examples to explain how the built-in functions work.
-json=`{"name":"sam",
-
- "skills": {
-
- "coding":["C","Java","php"],
-
- "habits":["eating","sleeping"],
-
- "scores":[80,90,100]
-
- }
-
- }`
-
-jpath("skills.coding.[1]", json)
-jpath("skills",json)["coding"][2]
-## Programming Introduction
-1. "for" statement and for-loop is not supported by Maxlang. So, please, you use while-loop
-## MaxLang is growing fast, still more to come!
-"""
-
-prompt_exp_maxlang = """
-Also, here are some Maxlang examples about using some of Maxlang built-in functions. You can learn from it.
----
-# Example1 :全链路状态检查工具-Bundle版
-### 登录MaxCloud
-```maxlang
-loginMaxcloud("maxcloud")
-```
-### 展示登录用户可以访问的所有团队
-```maxlang
-print_teams()
-```
-### 获取团队下的所有BundleGroup
-```maxlang
-print_bundle_group(3)
-```
-### 根据团队ID和BundleGroupId打印链路监控状态
-```maxlang
-print_bundle_group_health({"team_id":3,"bundle_group_id":7})
-```
-# Example2:全链路锁机器-bundle版
-### 登录MaxCloud
-```maxlang
-loginMaxcloud("maxcloud")
-```
-### 展示登录用户可以访问的所有团队
-```maxlang
-print_teams()
-```
-### 获取团队下的所有BundleGroup
-```maxlang
-print_bundle_group(3)
-```
-### 构建bundle_group的伸缩计划
-```maxlang
-plan=build_bundle_group_scale_plan({
- "team_id":3,
- "bundle_group_id":8,
- "scare_source":"same_time_last_week",
- "scare_ratio":1.1
-})
-print_bundle_group_by_plan(plan)
-```
-### 根据bundle伸缩计划进行bundle的伸缩
-```maxlang
-bundle_group_scare(plan)
-```
-# Example3:ASG 扩缩容
-### 定义目标节点数
-```maxlang
-env_des=5
-```
-### 扩缩容
-```maxlang
-asginfo=getAwsASG("aws_credential","eu-central-1","fk-asg-nexttracking-deliver-asg-test") // 获取asg当前状态信息
-min=asginfo["MinSize"]
-max=asginfo["MaxSize"]
-current = asginfo["DesiredSize"]
-des=env_des
-if(des < current){
- "修改值小于当前capacity值,无法修改,请修正,当前capacity值为:"+ntos(current)
-}else{
- if(des < max){
- updateAwsASG("aws_credential","eu-central-1", "fk-asg-nexttracking-deliver-asg-test", min, max, des) // 更改ASG的最小容量、最大容量、所需容量
- }else{
- updateAwsASG("aws_credential","eu-central-1", "fk-asg-nexttracking-deliver-asg-test", min, des, des)
- }
- asginfo=getAwsASG("aws_credential","eu-central-1","fk-asg-nexttracking-deliver-asg-test")
- asginfo
-}
-```
----
-"""
-
-prompt_maxlang_builtin_fns = """
-The following content is about the built-in functions of MaxLang.
-You can use them to get/set the resource configurations of Kubernetes or MaxCloud.
-----
-## addOrUpdateRepo
-#### 方法描述:
-```
-Helm 安装添加Repo
- addOrUpdateRepo(bj_demo_crazywolf, "bitnami", "https://charts.bitnami.com/bitnami")
-
- 参数1:集群环境变量
- example bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-
- 参数2:repo name
-
- 参数3:repo URL
-```
-
-#### 示例:
-```
-addOrUpdateRepo(参数1, 参数2, 参数3)
-```
-
-## applyYaml
-#### 方法描述:
-```
-应用Yaml到指定集群
- applyYaml(bj_demo_crazywolf, yaml)
- 可以与fillTemp(configStr, data)配合使用
-
- 参数0:指定集群环境
- example bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-
- 参数1:资源类型
- "apiVersion: v1
- kind: ConfigMap
- metadata:
- name: myconfigmap-b64ded
- namespace: crazywolf
- labels:
- app: myapplication
- data:
- data: b64ded"
-```
-
-#### 示例:
-```
-applyYaml(参数1, 参数2)
-```
-
-## build_bundle_group_scale_plan
-#### 方法描述:
-```
-构建bundle_group的伸缩计划
-```
-
-#### 示例:
-```
-build_bundle_group_scale_plan({
- "team_id":3,
- "bundle_group_id":8,
- "scare_source":"same_time_yesterday",
- "scare_ratio":"1.3"
-})
-```
-
-## bundle_group_scale
-#### 方法描述:
-```
-根据bundle伸缩计划进行bundle的伸缩
-```
-
-#### 示例:
-```
-bundle_group_scare(plan)
-```
-
-## createCluster
-#### 方法描述:
-```
-创建K8S集群
-
- 参数会替换模版中相应的字段。 如name参数会替换template中的name字段
- subnetIDs:
- 字符串数组, 指定集群所在的subnets 例如
- credential = "aliCredit"
- provider = "aliyun"// aliyun, aws, huawei
- region = "cn-beijing"
- vpcID = "xxxdfsdfsdf"
- setCredential(credential, "keyid", <>)
- subnetIDs = ["vsw-2zen61tp041lskzzzhukq","vsw-2ze9gtpk7rgmugk2rphli"]
- createCluster(credential, provider, region, name, vpcId, subnetIds, template string)
-```
-
-#### 示例:
-```
-createCluster(credential, provider, region, name, vpcId, subnetIds, template string)
-```
-
-## createNamespace
-#### 方法描述:
-```
-创建命名空间
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- createNamespace(bj_demo_crazywolf, "gjw-0910")
-
- 参数1:集群环境变量
-
- 参数2:新命名空间名称
-```
-
-#### 示例:
-```
-createNamespace(参数1, 参数2)
-```
-
-## createNodegroup
-#### 方法描述:
-```
-创建nodegoup
-
- provider: aliyun, aws, huawei
- region: example cn-beijing
- clusterID: 要创建节点组的集群ID
- name: 节点组名字
- odOrSpot: od, spot
- instanceCount:新节点组的节点数量
- instanceTypes: 数组["ecs.n1.medium","ecs.sn1.medium"]
- subnetIDs: 数组["vsw-2zen61tp041lskzzzhukq","vsw-2ze9gtpk7rgmugk2rphli"]
-
- 参数会替换模版中相应的字段。 如name参数会替换template中的name字段
-```
-
-#### 示例:
-```
-createNodegroup(provider, region, clusterID, name, odOrSpot, instanceCount, instanceTypes, subnetIDs, template)
-```
-
-## deleteCluster
-#### 方法描述:
-```
-删除K8S集群
- credential = "aliCredit"
- provider = "aliyun"// aliyun, aws, huawei
- region = "cn-beijing"
- clusterID = "xxxxx"
- setCredential(credential, "keyid", <>)
- deleteCluster(credential, provider, region, clusterID)
-```
-
-#### 示例:
-```
-deleteCluster(credential, provider, region, clusterID)
-```
-
-## deleteNodegroup
-#### 方法描述:
-```
-删除K8S集群的节点组
-```
-
-#### 示例:
-```
-deleteNodegroup(credential, provider, region, clusterID, nodegroupID)
-```
-
-## deleteResource
-#### 方法描述:
-```
-删除资源
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- deleteResource(bj_demo_crazywolf, "deployment","ngxin-dep-0")
-
- 参数1:集群环境变量
-
- 参数2:资源类型
- deployment
- job
- cronjob
- daemonset
- statefulset
- service
- ingress
- persistentvolumeclaim
- configmap
- secret
- gateway
- namespace
- pod
- horizontalpodautoscaler
- serviceaccount
- replicaset
- poddisruptionbudget
- node
- storageclass
-
- 参数3:资源名称
-```
-
-#### 示例:
-```
-deleteResource(参数1, 参数2, 参数3)
-```
-
-## describeResource
-#### 方法描述:
-```
-Describe 资源
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- describeResource(bj_demo_crazywolf, "deployment","ngxin-dep-0")
-
- 参数1:集群环境变量
-
- 参数2:资源类型
- deployment
- job
- cronjob
- daemonset
- statefulset
- service
- ingress
- persistentvolumeclaim
- configmap
- secret
- gateway
- namespace
- pod
- horizontalpodautoscaler
- serviceaccount
- replicaset
- poddisruptionbudget
- node
- storageclass
-
- 参数3:资源名称
-```
-
-#### 示例:
-```
-describeResource(参数1, 参数2, 参数3)
-```
-
-## detailBundle
-#### 方法描述:
-```
-获取bundle详情
-```
-
-#### 示例:
-```
-detailBundle(env,bundleId)
-```
-
-## detailBundleGroup
-#### 方法描述:
-```
-获取bundleGroup详情
-```
-
-#### 示例:
-```
-detailBundleGroup(teamId,bundleGroupId)
-```
-
-## detailPod
-#### 方法描述:
-```
-获取Pod详情
- detailPod(bj_demo_crazywolf, "my-wordpress-mariadb-0")
- 相当于调用detailResource(bj_demo_crazywolf, "deployment", "ngxin-dep-0")
-
- 参数0:指定集群环境
- example bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-
- 参数1:资源类型
- 资源为:
- deployment
- job
- cronjob
- daemonset
- statefulset
- service
- ingress
- persistentvolumeclaim
- configmap
- secret
- gateway
- namespace
- pod
- horizontalpodautoscaler
- serviceaccount
- replicaset
- poddisruptionbudget
- node
- storageclass
-```
-
-#### 示例:
-```
-detailPod(参数1, 参数2)
-```
-
-## detailResource
-#### 方法描述:
-```
-获取资源详情
- detailResource(bj_demo_crazywolf, "deployment", "ngxin-dep-0")
-
- 参数0:指定集群环境
- example bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-
- 参数1:资源类型
- 资源为:
- deployment
- job
- cronjob
- daemonset
- statefulset
- service
- ingress
- persistentvolumeclaim
- configmap
- secret
- gateway
- namespace
- pod
- horizontalpodautoscaler
- serviceaccount
- replicaset
- poddisruptionbudget
- node
- storageclass
-
- 参数2:资源名称
-```
-
-#### 示例:
-```
-detailResource(参数1, 参数2,参数3)
-```
-
-## exec
-#### 方法描述:
-```
-execute shell cmd and return the output
-
- arg1:
- string object, executable shell cmds
- example
- exec("ls | grep 'build'")
-
- return:
- output of shell cmd arg1
-```
-
-#### 示例:
-```
-exec( arg1 )
-```
-
-## execSql
-#### 方法描述:
-```
-执行SQL语句
- example
- host="spotmaxxxxxx.mazonaws.com"
- user="admin"
- password=<>
- port=3306
- dbName="maxcloud_group"
- openMysql(host, port, dbName, user, password)
- sql="select * from db.table where limit 1";
- execSql(sql)
-```
-
-#### 示例:
-```
-execSql(参数1)
-```
-
-## fillTemp
-#### 方法描述:
-```
-填充模版
-
- 参数1:
- 模版字符串,待填充字符占位语法同golang
- example
- configStr = “apiVersion: v1
- kind: ConfigMap
- metadata:
- name: myconfigmap-{{.nameSuffix}}
- namespace: crazywolf
- labels:
- app: myapplication
- data:
- data: {{.randData}}”
-
- 参数2:
- HashTable example {"nameSuffix":randStr(6), "randData":randStr(6)}
-
- 返回值:
- 变量被替换之后的字符串
-```
-
-#### 示例:
-```
-fillTemp( arg1, arg2 )
-```
-
-## first
-#### 方法描述:
-```
-return first element of array
-
- arg1: Array
-
- return null if arg1 is empty array
-```
-
-#### 示例:
-```
-first(arg1)
-```
-
-## fromBase64
-#### 方法描述:
-```
-对string进行Base64解码
- fromBase64("bW9idmlzdGE")
- 输出 mobvista
-```
-
-#### 示例:
-```
-fromBase64(string)
-```
-
-## getASG
-#### 方法描述:
-```
-获取asg
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:云商 aliyun, aws, huawei
-
- 参数3:region
-
- 参数4:asgName
-
- getASG("credential", "aws", "us-west-2", "asgName")
-```
-
-#### 示例:
-```
-getASG(参数1,参数2,参数3,参数4)
-```
-
-## getAliASG
-#### 方法描述:
-```
-获取asg
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName
-
- getAliASG("credential", "us-west-2", "asgName")
-```
-
-#### 示例:
-```
-getAliASG(参数1,参数2,参数3)
-```
-
-## getAwsASG
-#### 方法描述:
-```
-获取asg
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName
-
- getAwsASG("credential", "us-west-2", "asgName")
-```
-
-#### 示例:
-```
-getAwsASG(参数1,参数2,参数3)
-```
-
-## getClusterKubeConf
-#### 方法描述:
-```
-获取集群的kubeConfig
-```
-
-#### 示例:
-```
-getClusterKubeConf(credential, provider, region, clusterID)
-```
-
-## getCreateClusterTemplate
-#### 方法描述:
-```
-获取创建集群的模版字符串
- terwayPlugin:
- bool类型, 只对aliyun ACK起作用的参数
- true: 使用terway 网络插件
- false: 使用默认网络插件(Flannel)
-```
-
-#### 示例:
-```
-getCreateClusterTemplate(credential, provider)
-getCreateClusterTemplate(credential, provider, terwayPlugin)
-```
-
-## getCreateNodegroupTemplate
-#### 方法描述:
-```
-获取创建集群节点组的模版字符串
-```
-
-#### 示例:
-```
-getCreateNodegroupTemplate(credential, provider)
-```
-
-## getHpaCurrent
-#### 方法描述:
-```
-获取HPA的 currentReplicas
- 参数1:集群环境
-
- 参数2:HPA名称
-
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- getHpaCurrent(bj_demo_crazywolf, "gjw-test")
-```
-
-#### 示例:
-```
-getHpaCurrent(参数1,参数2)
-```
-
-## getHpaMax
-#### 方法描述:
-```
-获取HPA的maxReplicas
- 参数1:集群环境
-
- 参数2:HPA名称
-
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- getHpaMax(bj_demo_crazywolf, "gjw-test")
-```
-
-#### 示例:
-```
-getHpaMax(参数1,参数2)
-```
-
-## getHpaMin
-#### 方法描述:
-```
-获取HPA的minReplicas
- 参数1:集群环境
-
- 参数2:HPA名称
-
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- getHpaMin(bj_demo_crazywolf, "gjw-test")
-```
-
-#### 示例:
-```
-getHpaMin(参数1,参数2)
-```
-
-## getHwASG
-#### 方法描述:
-```
-获取asg
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName
-
- getHwASG("credential", "us-west-2", "asgName")
-```
-
-#### 示例:
-```
-getHwASG(参数1,参数2,参数3)
-```
-
-## getUserSecret
-#### 方法描述:
-```
-获取签名需要的秘钥
-```
-
-#### 示例:
-```
-getUserSecret()
-```
-
-## getYaml
-#### 方法描述:
-```
-创建命名空间
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- getYaml(bj_demo_crazywolf, "deployment","ngxin-dep-0")
-
- 参数1:集群环境变量
-
- 参数2:资源类型
- deployment
- job
- cronjob
- daemonset
- statefulset
- service
- ingress
- persistentvolumeclaim
- configmap
- secret
- gateway
- namespace
- pod
- horizontalpodautoscaler
- serviceaccount
- replicaset
- poddisruptionbudget
- node
- storageclass
-
- 参数3:资源名称
-```
-
-#### 示例:
-```
-getYaml(参数1, 参数2, 参数3)
-```
-
-## helmValues
-#### 方法描述:
-```
-Helm 查循已安装的Helm列表
- helmValues(bj_demo_crazywolf, "my-wordpress", false)
-
- 参数1:集群环境变量
-
- 参数2:releaseName
-
- 参数3:是否展示所有values
-```
-
-#### 示例:
-```
-helmValues(参数1, 参数2, 参数3)
-```
-
-## importCluster
-#### 方法描述:
-```
-导入K8S集群到MaxCloud
-
- teamID:
- 数字, 可以使用listTeam()进行查询
-
- name:
- 字符串, 该集群在MaxCloud中的名字
-
- privider:
- 字符串, 可以是
- - ali
- - aws
- - tencent
- - huawei
-
- region:
- K8S集群所在的地区, example cn-beijing
-
- k8sConfig:
- K8S集群的链接字符串, 可以使用getClusterKubeConf(credential, provider, region, clusterID)获取
-
- 返回值:
- 成功/错误信息
-```
-
-#### 示例:
-```
-importCluster(teamID, name, provider, region, k8sConfig)
-```
-
-## installOrUpgradeChart
-#### 方法描述:
-```
-Helm 安装Chart
- sets = {
- "wordpressBlogName" : "CrazyWolf3453456"
- }
- installOrUpgradeChart(bj_demo_crazywolf, "my-wordpress", "bitnami/wordpress", "15.2.5", sets)
-
- 参数1:集群环境变量
- example bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-
- 参数2:releaseName
-
- 参数3:chart name
-
- 参数4:chart版本
-
- 参数5:sets参数 (可选)
-```
-
-#### 示例:
-```
-installOrUpgradeChart(参数1, 参数2, 参数3, 参数4, 参数5)
-```
-
-## isErr
-#### 方法描述:
-```
-判断value是不是maxlang的 error对象
-```
-
-#### 示例:
-```
-isErr(value)
-```
-
-## jpath
-#### 方法描述:
-```
-query the json string and return target value
-
- arg1:
- string type path e.g "." to match the root element
-
- arg2:
- the valid json string to query
-
- return:
- element(s) in the specified path
-```
-
-#### 示例:
-```
-jpath( arg1, arg2 )
-```
-
-## keys
-#### 方法描述:
-```
-get keys of HashTable as array
-
- arg1:
- HashTable object
- example
- dic = {1:"one","one":1,"inc":fn(x){x+1}, "arr":[1,2,3,5], "table":{2:"two","sub":fn(x){x-1}}}
-
- return:
- keys as array of the HashTable
- example [1,"one", "inc","arr","table"]
-```
-
-#### 示例:
-```
-keys( arg1 )
-```
-
-## last
-#### 方法描述:
-```
-return last element of array
-
- arg1: Array
-
- return null if arg1 is empty array
-```
-
-#### 示例:
-```
-last(arg1)
-```
-
-## len
-#### 方法描述:
-```
-return the length of arg for the following data types
- Array:
- element count of the array
-
- String:
- the length of string
-
- Default:
- return argument error
-```
-
-#### 示例:
-```
-len( arg1 )
-```
-
-## listASGs
-#### 方法描述:
-```
-查询ASG(s)
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:云商 aliyun, aws, huawei
-
- 参数3:region
-
- 参数4:asgName (可选)
-
- setCredential("credential", "key_xxxxxx", <>)
- listASGs("credential", "us-west-2")
- listASGs("credential", "us-west-2", "asgName")
-```
-
-#### 示例:
-```
-listASGs(参数1,参数2,参数3)
-listASGs(参数1,参数2,参数3,参数4)
-```
-
-## listAliASGs
-#### 方法描述:
-```
-查询ASG(s)
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName (可选)
-
- setCredential("credential", "key_xxxxxx", <>)
- listAliASGs("credential", "us-west-2")
- listAliASGs("credential", "us-west-2", "asgName")
-```
-
-#### 示例:
-```
-listAliASGs(参数1,参数2)
-listAliASGs(参数1,参数2,参数3)
-```
-
-## listAwsASGs
-#### 方法描述:
-```
-查询ASG(s)
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName (可选)
-
- setCredential("credential", "key_xxxxxx", <>)
- listAwsASGs("credential", "us-west-2")
- listAwsASGs("credential", "us-west-2", "asgName")
-```
-
-#### 示例:
-```
-listAwsASGs(参数1,参数2)
-listAwsASGs(参数1,参数2,参数3)
-```
-
-## listBucket
-#### 方法描述:
-```
-列出Buckets
- 参数1:credential 请先试用setCredential(name, key, value)设置
- 参数2:云商provider aliyun、aws、huawei
- 参数3:endpoint(aliyun的时候必传)
-```
-
-#### 示例:
-```
-listBucket(参数1,参数2,参数3)
-```
-
-## listBucketFile
-#### 方法描述:
-```
-列出Bucket的文件
- 参数1:credential 请先试用setCredential(name, key, value)设置
- 参数2:云商provider aliyun、aws、huawei
- 参数3:bucketName
- 参数4:前缀 Prefix
- 参数5:endpoint(aliyun的时候必传)
-```
-
-#### 示例:
-```
-listBucketFile(参数1,参数2,参数3,参数4,参数5)
-```
-
-## listBundleGroup
-#### 方法描述:
-```
-获取BundleGroup列表
-```
-
-#### 示例:
-```
-listBundleGroup(teamId,page,page_size)
-```
-
-## listCluster
-#### 方法描述:
-```
-列出MaxCloud项目组的项目
-
- 参数1:
- 数字,项目组ID, 登陆后使用listTeam()进行查询
-
- 参数2:
- 数字,项目ID, 登陆后使用listProject(teamID)进行查询
-
- 返回值:
- 项目相关的K8S集群列表
-```
-
-#### 示例:
-```
-listCluster(arg1, arg2)
-```
-
-## listClusters
-#### 方法描述:
-```
-查寻K8S集群
- credential = "aliCredit"
- provider = "aliyun"// aliyun, aws, huawei
- region = "cn-beijing"
- setCredential(credential, "keyid", <>)
- clusters = listClusters(credential, provider, region)
-```
-
-#### 示例:
-```
-listClusters(credential, provider, region)
-```
-
-## listECSInstanceTypes
-#### 方法描述:
-```
-查询该region可创建的机型
-
- nodeTypes是类似的查询CPU、内存的结构
- [{"type":"general","configuration":{"cpu":2,"mem":4}},{"type":"general","configuration":{"cpu":4,"mem":8}}]
-```
-
-#### 示例:
-```
-listECSInstanceTypes(credential, provider, region, subnetZones, nodeTypeQuery)
-```
-
-## listHelmReleases
-#### 方法描述:
-```
-Helm 查循已安装的Helm列表
- listHelmReleases(bj_demo_crazywolf)
-
- 参数1:集群环境变量
-```
-
-#### 示例:
-```
-listHelmReleases(参数1)
-```
-
-## listHwASGs
-#### 方法描述:
-```
-查询ASG(s)
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName (可选)
-
- setCredential("credential", "key_xxxxxx", <>)
- listHwASGs("credential", "us-west-2")
- listHwASGs("credential", "us-west-2", "asgName")
-```
-
-#### 示例:
-```
-listHwASGs(参数1,参数2)
-listHwASGs(参数1,参数2,参数3)
-```
-
-## listNodeGroupNodes
-#### 方法描述:
-```
-查询集群的Nodegroup
-```
-
-#### 示例:
-```
-listNodegroups(credential, provider, region, clusterID)
-```
-
-## listNodegroups
-#### 方法描述:
-```
-查询K8S集群的节点组
-```
-
-#### 示例:
-```
-listNodegroups(credential, provider, region, clusterID)
-```
-
-## listPod
-#### 方法描述:
-```
-指定资源类型后可以获取上面指定集群命名空间下的所有资源
- 可以只传入一个类型参数, 相当于listResource(bj_demo_crazywolf, "pod")
- example
- ListPod(bj_demo_crazywolf)
-
- 参数1:
- 集群环境变量
- example bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-```
-
-#### 示例:
-```
-ListPod(参数1, 参数2)
-```
-
-## listProject
-#### 方法描述:
-```
-列出MaxCloud项目组的项目
-
- arg1:
- 项目组ID, 登陆后使用listTeam()进行查询
-
- 返回值:
- 项目名字、用户组和项目ID列表
-```
-
-#### 示例:
-```
-listProject(arg1)
-```
-
-## listResource
-#### 方法描述:
-```
-指定资源类型后可以获取上面指定集群命名空间下的所有资源
- 可以只传入一个类型参数,也可以传入第二个参数作为临时命名空间(不会覆盖之前useCluster设置的命名空间)
- example
- listResource(bj_demo_crazywolf, "deployment")
-
- 参数1:
- 集群环境变量
- example bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-
- 参数2:
- 资源类型
- 目前支持的资源为:
-
- deployment
- job
- cronjob
- daemonset
- statefulset
- service
- ingress
- persistentvolumeclaim
- configmap
- secret
- gateway
- namespace
- pod
- horizontalpodautoscaler
- serviceaccount
- replicaset
- poddisruptionbudget
- node
- storageclass
-```
-
-#### 示例:
-```
-listResource(参数1, 参数2)
-```
-
-## listSubnets
-#### 方法描述:
-```
-查询已有的subnets
- credential = "aliCredit"
- provider = "aliyun"// aliyun, aws, huawei
- region = "cn-beijing"
- vpcID = "xxxdfsdfsdf"
- setCredential(credential, "keyid", <>)
- subnets = listSubnets(credential, provider, region, vpcID)
- subnets[0]
- +-----------------------------------------------+
- | Summary |
- +----------+---------------------------+--------+
- | NAME | ID | TYPE |
- +----------+---------------------------+--------+
- | vswitch2 | vsw-2zeiz27mngyi7jxnotu5f | subnet |
- | vswitch1 | vsw-2zeu4oq1vx9ffjso35sde | subnet |
- +----------+---------------------------+--------+
-```
-
-#### 示例:
-```
-listSubnets(credential, provider, region, vpcID)
-```
-
-## listTeam
-#### 方法描述:
-```
-列出用户有权限的MaxCloud 组
-
- 返回值:
- 项目组名字和ID列表
-```
-
-#### 示例:
-```
-listTeam()
-```
-
-## listVPCs
-#### 方法描述:
-```
-查看现有VPCs
- credential = "aliCredit"
- provider = "aliyun"// aliyun, aws, huawei
- region = "cn-beijing"
- vpcName = "test"
- setCredential(credential, "keyid", <>)
- listVPCs(credential, provider, region) // 查询region里所有的VPC
- listVPCs(credential, provider, region, vpcName) // 查询名字是vpcName, 在region的VPC
-
- vpcs = listVPCs(credential, provider, region)
- vpcs[0]
-
- testVPC = listVPCs(credential, provider, region, vpcName)
- vpcID = testVPC[1]["ID"]
- vpcID
-
- 返回数组类型,
-
- 数组第一个元素是Summary列表
- 数组第2个元素开始是相应的Object, 可以进行操作
-```
-
-#### 示例:
-```
-listVPCs(credential, provider, region)
- listVPCs(credential, provider, region, vpcName)
-```
-
-## listZones
-#### 方法描述:
-```
-查看现有Zones
- credential = "aliCredit"
- provider = "aliyun"// aliyun, aws, huawei
- region = "cn-beijing"
- vpcName = "test"
- setCredential(credential, "keyid", <>)
- listZones(credential, provider, region)
-
- 返回数组类型,
-
- 数组第一个元素是Summary列表
- 数组第2个元素开始是相应的Object, 可以进行操作
-```
-
-#### 示例:
-```
-listZones(credential, provider, region)
-```
-
-## lockASG
-#### 方法描述:
-```
-锁定ASG容量
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:云商 aliyun, aws, huawei
-
- 参数3:region
-
- 参数4:asgName
-
- 参数5:锁定值
-
- 如果参数5锁定值小于1,则最小值会被修改为当前的期望值
- 如果参数5大于最大值,则最大值会被修改为锁定值
- 如果期望值小于参数5锁定值,则期望值修改为锁定值
-
- lockASG("credential", "aws", "us-west-2", "asgName", 10)
-```
-
-#### 示例:
-```
-lockASG(参数1,参数2,参数3,参数4,参数5)
-```
-
-## lockAliASG
-#### 方法描述:
-```
-锁定ASG容量
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName
-
- 参数4:锁定值
-
- 如果参数4锁定值小于1,则最小值会被修改为当前的期望值
- 如果参数4大于最大值,则最大值会被修改为锁定值
- 如果期望值小于参数4锁定值,则期望值修改为所定值
- lockAliASG("credential", "us-west-2", "asgName", 10)
-```
-
-#### 示例:
-```
-lockAliASG(参数1,参数2,参数3,参数4)
-```
-
-## lockAwsASG
-#### 方法描述:
-```
-锁定ASG容量
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName
-
- 参数4:锁定值
-
- 如果参数4锁定值小于1,则最小值会被修改为当前的期望值 如果参数4大于最大值,则最大值会被修改为锁定值 如果期望值小于参数4锁定值,则期望值修改为所定值
- lockAwsASG("credential", "us-west-2", "asgName", 10)
-```
-
-#### 示例:
-```
-lockAwsASG(参数1,参数2,参数3,参数4)
-```
-
-## lockHpa
-#### 方法描述:
-```
-锁定HPA
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- lockHpa(bj_demo_crazywolf, "gjw-test", 1)
-
- 参数1:当前集群环境
- 参数2:HPA 名称
- 参数3:minReplicas
-
- 设置minReplicas值,默认MaxReplicas不变,
- 如果当前MaxReplicas小于要设置的minReplicas则修改 MaxReplicas为minReplicas一样
-
- 例如:
- demo-hap
- minReplicas: 2
- maxReplicas: 5
- 调用 localHpa(env, "demo-hpa", 3),锁定为3则执行后结果为
-
- demo-hap
- minReplicas: 3
- maxReplicas: 5
- 如果再次调用调用 localHpa(env, "demo-hpa", 10),锁定为10则执行后结果为
-
- demo-hap
- minReplicas: 10
- maxReplicas: 10
-```
-
-#### 示例:
-```
-lockHpa(参数1, 参数2, 参数3)
-```
-
-## lockHwASG
-#### 方法描述:
-```
-锁定ASG容量
- 参数1:云商credential ,请先试用setCredential(name, key, value 设置)
-
- 参数2:region
-
- 参数3:asgName
-
- 参数4:锁定值
-
- 如果参数4锁定值小于1,则最小值会被修改为当前的期望值
- 如果参数4大于最大值,则最大值会被修改为锁定值
- 如果期望值小于参数4锁定值,则期望值修改为所定值
-
- lockHwASG("credential", "us-west-2", "asgName", 10)
-```
-
-#### 示例:
-```
-lockHwASG(参数1,参数2,参数3,参数4)
-```
-
-## loginMaxcloud
-#### 方法描述:
-```
-登陆MaxCloud
-
- 参数1:
- 配置名字, 搭配以下函数使用
- setCredential("maxcloudab", "username", <>)
- updateCredential("awsCredential", "key_xxxxxx", <>)
- example
- loginMaxcloud("maxcloudab")
-
- 返回值:
- TRUE/ error Object
-```
-
-#### 示例:
-```
-loginMaxcloud( arg1 )
-```
-
-## mock_print
-#### 方法描述:
-```
-测试表格打印
-```
-
-#### 示例:
-```
-mock_print()
-```
-
-## newBucketDir
-#### 方法描述:
-```
-新建Bucket的文件“
- 参数1:credential 请先试用setCredential(name, key, value)设置
- 参数2:云商provider aliyun、aws、huawei
- 参数3:bucketName
- 参数4:dirName
- 参数5:endpoint(aliyun的时候必传)
-```
-
-#### 示例:
-```
-newBucketDir(参数1,参数2,参数3,参数4,参数5)
-```
-
-## nslookup
-#### 方法描述:
-```
-解析域名
- 根据给定域名解析出IP或CName, 同时查找Aws Route53 对 A 类型的别名进行查找
-
- 使用方式:
- nslookup(credential, "data.mintegral.com.")
- example
- Route53 中 的一条记录如下, 正常的nslookup无法返回dualstack.....这个域名, 因为Route53并不是标准的DNS服务器
-
- de01.mintegral.com A 简单 - dualstack.adn-tktracking-frankfurt-13341082.eu-central-1.elb.amazonaws.com.
- detailroi.mintegral.com A 简单 - 47.93.30.190
- 使用Playbook中的nslookup会返回如下结果
-
- [
- de01.mintegral.com. in A NAME: 3.67.205.211,
- de01.mintegral.com. in A NAME: 18.198.96.205,
- de01.mintegral.com. in A NAME: 3.126.117.230,
- de01.mintegral.com. in A NAME: 3.65.53.117,
- de01.mintegral.com. in A NAME: 18.184.234.38,
- de01.mintegral.com. in A NAME: 3.125.187.240,
- de01.mintegral.com. in A NAME: 3.120.59.220,
- de01.mintegral.com. in A NAME: 3.120.47.234,
- ======records from aws route53=====,
- de01.mintegral.com. A dualstack.adn-tktracking-frankfurt-13341082.eu-central-1.elb.amazonaws.com.
- ]
-```
-
-#### 示例:
-```
-nslookup(credential, "data.mintegral.com.")
-```
-
-## ntos
-#### 方法描述:
-```
-convert number to string
-
- arg1:
- number object
-
- return:
- string format of the number object
-```
-
-#### 示例:
-```
-ntos( arg1 )
-```
-
-## openMysql
-#### 方法描述:
-```
-打开数据库连接
- example
- host="spotmaxxxxxx.mazonaws.com"
- user="admin"
- password=<>
- port=3306
- dbName="maxcloud_group"
- openMysql(host, port, dbName, user, password)
- sql="select * from db.table where limit 1";
- execSql(sql)
-```
-
-#### 示例:
-```
-openMysql(host, port, dbName, user, password)
-```
-
-## podExecShell
-#### 方法描述:
-```
-在pod里面执行shell
-```
-
-#### 示例:
-```
-podExecShell(env,"podName","container",["sh","-c","ls -al"])
-```
-
-## print
-#### 方法描述:
-```
-output the args to Stdout
-
- arg1, .....:
- 0 or more element to output
-
- return:
- the print output string
-```
-
-#### 示例:
-```
-print(arg1, ....)
-```
-
-## print_bundle_group
-#### 方法描述:
-```
-根据团队ID打印bundleGroup列表
-```
-
-#### 示例:
-```
-print_bundle_group(teamId)
-```
-
-## print_bundle_group_by_plan
-#### 方法描述:
-```
-基于伸缩计划,进行伸缩
-```
-
-#### 示例:
-```
-print_bundle_group_by_plan(plan)
-```
-
-## print_bundle_group_health
-#### 方法描述:
-```
-根据团队ID和BundleGroupId打印链路监控状态
-```
-
-#### 示例:
-```
-print_bundle_group_health({"team_id":3,"bundle_group_id":7})
-```
-
-## print_teams
-#### 方法描述:
-```
-打印团队列表
-```
-
-#### 示例:
-```
-print_teams()
-```
-
-## println
-#### 方法描述:
-```
-output the args to Stdout with newline at the end
-
- arg1, .....:
- 0 or more element to output
-
- return:
- the print output string with new line at the end
-```
-
-#### 示例:
-```
-println(arg1, ....)
-```
-
-## push
-#### 方法描述:
-```
-append arg2 to array arg1 and return the new array
-
- arg1:
- Array to append element
-
- arg2:
- element to append to the array
-
- return:
- the new array
-```
-
-#### 示例:
-```
-push(arg1, arg2)
-```
-
-## put
-#### 方法描述:
-```
-
-```
-
-#### 示例:
-```
-
-```
-
-## randStr
-#### 方法描述:
-```
-generate random string with length = arg1
-
- arg1:
- number object, length of random string
- example
- randStr(6)
-
- return:
- output the generated string
-```
-
-#### 示例:
-```
-randStr( arg1 )
-```
-
-## rest
-#### 方法描述:
-```
-return a new array with the first element removed
-
- arg1: Array
- return null if arg1 element count <= 1
-```
-
-#### 示例:
-```
-rest(arg1)
-```
-
-## scaleDeployment
-#### 方法描述:
-```
-Scale Deployment
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- scaleDeployment(bj_demo_crazywolf, "ngxin-dep-0", 1)
-
- 参数1:集群环境变量
-
- 参数2:Deployment 名称
-
- 参数3: Replicas 数量
-```
-
-#### 示例:
-```
-scaleDeployment(参数1, 参数2, 参数3)
-```
-
-## scaleNodeGroup
-#### 方法描述:
-```
-Scale nodegroup机器数量
-
- scaleNodeGroup(credential, provider, region, clusterID, nodeGroupID, newSize)
-
- 也可以使用一下ASG参数进行操作
-
- listAliASGs(credential, region)
-
- updateAliASG(credential, "us-west-2", "kmax-demo-asg-small", 2, 2, 2)
-
- lockAliASG(credential, "us-west-2","asgName",10)
-
- 具体使用方法见 MaxLang MaxCloud.ipynb
-```
-
-#### 示例:
-```
-scaleNodeGroup(credential, provider, region, clusterID, nodegroupID, newSize)
-```
-
-## scaleStatefulset
-#### 方法描述:
-```
-Scale Statefulset
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- scaleStatefulset(bj_demo_crazywolf, "xxxx", 2)
-
- 参数1:集群环境变量
-
- 参数2:Deployment 名称
-
- 参数3: Replicas 数量
-```
-
-#### 示例:
-```
-scaleStatefulset(参数1, 参数2, 参数3)
-```
-
-## setCredential
-#### 方法描述:
-```
-新增Credential
- setCredential(name, key, value)
-
- 参数1:credential 唯一名称
-
- 参数2:credential key,如果是Aws credential则为 aws_access_key_id
-
- 参数3:credential value,如果是Aws credential则为 aws_secret_access_key
-
- PS:不能重复设置,如需修改请使用updateCredential
-```
-
-#### 示例:
-```
-setCredential(name, key, value)
-```
-
-## setHpaReplicas
-#### 方法描述:
-```
-锁定HPA
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
- setHpaReplicas(bj_demo_crazywolf, "gjw-test", 1, 2)
-
- 参数1:当前集群环境
- 参数2:HPA名称
- 参数3:minReplicas
- 参数4:maxReplicas
-```
-
-#### 示例:
-```
-setHpaReplicas(参数1, 参数2, 参数3, 参数4)
-```
-
-## sharePlaybook
-#### 方法描述:
-```
-分享Playbook 文件给其他人
- sharePlaybook("filename") && sharePlaybook("filename","user@mobvista.com")
- 参数1:文件名
- 参数2:用户 (可选,如果不填则公开给所有MaxCloud用户)
-```
-
-#### 示例:
-```
-sharePlaybook("filename") && sharePlaybook("filename","user@mobvista.com")
-```
-
-## sleep
-#### 方法描述:
-```
-等待N秒
- sleep(second)
-
- 参数1:休眠秒数
-```
-
-#### 示例:
-```
-sleep(second)
-```
-
-## ston
-#### 方法描述:
-```
-convert string to number
-
- arg1:
- string object
-
- return:
- number object of the input string
- or
- error object
-```
-
-#### 示例:
-```
-ston( arg1 )
-```
-
-## syncPlaybook
-#### 方法描述:
-```
-同步所有公共PlayBook 和 分享个给我的PlayBook
-```
-
-#### 示例:
-```
-syncPlaybook()
-```
-
-## toBase64
-#### 方法描述:
-```
-对string进行Base64编码
- toBase64("mobvista")
- 输出 bW9idmlzdGE
-```
-
-#### 示例:
-```
-toBase64(string)
-```
-
-## toJson
-#### 方法描述:
-```
-把Maxlang Map/Array 对象转换成Json
-
- mapObj = {
- "name" : "CrazyWolf",
- "age" : 18,
- "address" : "beiijng"
- }
- toJson(mapObj)
-
- arrayObj = [1, true, "stringObj"]
- toJson(arrayObj)
-```
-
-#### 示例:
-```
-toJson(参数1)
-```
-
-## trigger
-#### 方法描述:
-```
-如果您需要外部触发执行某个PlayBook脚本,可以执行 trigger 方法把PlayBook文件公布为可外部触发的,然后按照输出提示触发执行
-```
-
-#### 示例:
-```
-trigger("filename")
-```
-
-## uninstallReleaseByName
-#### 方法描述:
-```
-Helm 卸载Release
-
- 参数1:指定集群环境
- example bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-
- 参数2:ReleaseName
-```
-
-#### 示例:
-```
-uninstallReleaseByName(参数1, 参数2)
-```
-
-## updateASG
-#### 方法描述:
-```
-更改ASG的最小容量、最大容量、所需容量
- 参数1:云商credential,请先使用 setCredential(name, key, value) 设置
-
- 参数2:云商 aliyun, aws, huawei
-
- 参数3:region
-
- 参数4:asgName
-
- 参数5:最小
-
- 参数6:最大
-
- 参数7:期望值
-
- updateASG("credential", "us-west-2", "asgName" ,1, 100, 50)
-```
-
-#### 示例:
-```
-updateASG(参数1,参数2,参数3,参数4,参数5,参数6,参数7)
-```
-
-## updateAliASG
-#### 方法描述:
-```
-更改ASG的最小容量、最大容量、所需容量
- 参数1:云商credential,请先使用 setCredential(name, key, value) 设置
-
- 参数2:region
-
- 参数3:asgName
-
- 参数4:最小
-
- 参数5:最大
-
- 参数6:期望值
-
- updateAliASG("credential", "us-west-2", "asgName", 1, 100, 50)
-```
-
-#### 示例:
-```
-updateAliASG(参数1,参数2,参数3,参数4,参数5,参数6)
-```
-
-## updateAwsASG
-#### 方法描述:
-```
-更改ASG的最小容量、最大容量、所需容量
- 参数1:云商credential,请先使用 setCredential(name, key, value) 设置
-
- 参数2:region
-
- 参数3:asgName
-
- 参数4:最小
-
- 参数5:最大
-
- 参数6:期望值
-
- updateAwsASG("credential", "us-west-2", "asgName", 1, 100, 50)
-```
-
-#### 示例:
-```
-updateAwsASG(参数1,参数2,参数3,参数4,参数5,参数6)
-```
-
-## updateCredential
-#### 方法描述:
-```
-修改Credential
- updateCredential(name, key, value)
-
- 参数1:credential 唯一名称
-
- 参数2:credential key,如果是Aws credential则为 aws_access_key_id
-
- 参数3:credential value,如果是Aws credential则为 aws_secret_access_key
-```
-
-#### 示例:
-```
-updateCredential(name, key, value)
-```
-
-## updateHwASG
-#### 方法描述:
-```
-更改ASG的最小容量、最大容量、所需容量
- 参数1:云商credential,请先使用 setCredential(name, key, value) 设置
-
- 参数2:region
-
- 参数3:asgName
-
- 参数4:最小
-
- 参数5:最大
-
- 参数6:期望值
-
- updateHwASG("credential", "us-west-2", "asgName" ,1, 100, 50)
-```
-
-#### 示例:
-```
-updateHwASG(参数1,参数2,参数3,参数4,参数5,参数6)
-```
-
-## useCluster
-#### 方法描述:
-```
-切换操作上下文到MaxCloud集群
-
- 参数1:
- HashTable,可以从MaxCloud项目管理页面复制
- bj_demo_crazywolf = {
- "teamId":1,
- "projectId":79,
- "clusterId":69,
- "namespace":"crazywolf"
- }
-
- 返回值:
- TRUE/错误信息
-```
-
-#### 示例:
-```
-useCluster(arg1)
-```
-
-## values
-#### 方法描述:
-```
-get values of HashTable as array
-
- arg1:
- HashTable object
- example
- dic = {1:"one","one":1,"inc":fn(x){x+1}, "arr":[1,2,3,5], "table":{2:"two","sub":fn(x){x-1}}}
-
- return:
- values as array of the HashTable
- example ["one", 1, fn(x){x+1}, [1,2,3,5], {2:"two","sub":fn(x){x-1}}]
-```
-
-#### 示例:
-```
-values( arg1 )
-```
-
-## 登录MaxCloud
-loginMaxCloud("用户名","密码")
-
-参数1:用户名(登录MaxCloud账户)
-
-参数2:密码(登录MaxCloud密码)
-```
-loginMaxcloud("jianwen.gao@mobvista.com", <>)
-```
-
-## 使用Credential 方式登录MaxCloud
-setCredential("maxcloud", "jianwen.gao@mobvista.com", <\>)
-
-loginMaxcloud("maxcloud")
-```
-setCredential("maxcloudab", "jianwen.gao@mobvista.com", <>)
-
-loginMaxcloud("maxcloudab")
-```
-
-## 获取Team列表
-获取登录账号可以使用的所有Team列表
-```
-listTeam()
-```
-
-## 获取Team下的所有项目列表
-listProject(teamId)
-
-参数1:团队ID
-```
-listProject(1)
-```
-
-## 获取项目下的所有集群列表
-listCluster(teamId, projectId)
-
-参数1:团队ID
-
-参数2:项目ID
-```
-listCluster(1, 79)
-```
-
-## 初始化集群参数变量
-使用一个map变量保存teamId、projectId,clusterId、namespace 等参数,使用一个方便识别的名称,后续集群相关方法调用第一个参数都传入这个变量,用于指定集群环境。
-
-teamId,projectId,clusterId 等参数可以直接从MaxCloud项目管理页面复制,如下图所示
-
-
-
-
-从网页中复制的以下内容,只有 `teamId`、`projectId`、`clusterId`、`namespace` 几个参数是必填的,Name相关参数只是为了方便您识别
-```
-Crazywolf_test_ack_maxcloud_bj_demo = {
- "teamId": 1,
- "teamName": "MaxCloud",
- "projectId": 62,
- "projectName": "Crazywolf-test",
- "clusterId": 22,
- "clusterName": "ack-maxcloud-bj-demo",
- "namespace": "crazywolf"
-}
-```
-
-## 设置当前环境的kubeconfig
-参数1:集群环境变量
-
-此函数执行后会把当前集群环境变量所指定集群的kubeconfig写入当前环境的 ~/.kube/config 文件中,后续可以使用 `exec(xxx)` 执行shell命令,例如使用 `exec("kubectl get pod -n namespaceName")` 即可获取指定namespace下的pod列表
-
-如果需要切换kubeconfig需要重新执行useCluster,该操作会用新集群kubeconfig覆盖之前的文件
-```
-useCluster(bj_demo_crazywolf)
-```
-
-### 展示设置集群kubeconfig后使用shell命令操作集群
-```
-exec("kubectl get deployment -n crazywolf")
-```
-
-## 创建命名空间
-参数1:集群环境变量
-
-参数2:新命名空间名称
-```
-createNamespace(bj_demo_crazywolf, "gjw-0910")
-```
-
-## 获取资源Yaml
-参数1:集群环境变量
-
-参数2:资源类型
-
-参数3:资源名称
-```
-getYaml(bj_demo_crazywolf, "deployment","ngxin-dep-0")
-```
-
-## Describe 资源
-参数1:集群环境变量
-
-参数2: 资源类型
-
-参数3:资源名称
-```
-describeResource(bj_demo_crazywolf, "deployment", "ngxin-dep-0")
-```
-
-## Scale Deployment
-参数1:集群环境变量
-
-参数2:Deployment 名称
-
-参数3: Replicas 数量
-```
-scaleDeployment(bj_demo_crazywolf, "ngxin-dep-0", 1)
-```
-
-## Scale Statefulset
-参数1:集群环境变量
-
-参数2:Statefulset 名称
-
-参数3:Replicas 数量
-```
-scaleStatefulset(bj_demo_crazywolf, "xxxx", 2)
-```
-
-## 列举所有资源
-参数1:集群环境变量
-
-参数2:资源类型
-
-listResource 指定资源类型后可以获取上面指定集群命名空间下的所有资源
-listResource 可以只传入一个类型参数,也可以再传入一个临时命名空间参数(不会覆盖之前useCluster设置的命名空间),见下方第二个示例
-
-目前支持的资源为:
-
-- deployment
-- job
-- cronjob
-- daemonset
-- statefulset
-- service
-- ingress
-- persistentvolumeclaim
-- configmap
-- secret
-- gateway
-- namespace
-- pod
-- horizontalpodautoscaler
-- serviceaccount
-- replicaset
-- poddisruptionbudget
-- node
-- storageclass
-```
-listResource(bj_demo_crazywolf, "deployment")
-```
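-
-若需要临时指定命名空间,按上文描述推测可以在资源类型后再追加一个命名空间参数(参数位置为推测,命名空间名称沿用前文 createNamespace 示例):
-```
-listResource(bj_demo_crazywolf, "deployment", "gjw-0910")
-```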
-
-## 获取随机字符串
-randStr参数为需要获取随机字符串长度,如果不传参数默认为6
-```
-randStr(6)
-```
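-
-按上文描述,不传参数时默认长度为6(推测示意):
-```
-randStr()
-```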
-
-## 把Maxlang对象转换成Json
-
-### 测试把Map转换为json
-```
-/*
-声明一个MAP 对象
-*/
-mapObj = {
- "name" : "CrazyWolf",
- "age" : 18,
- "address" : "beiijng"
-}
-/* 把map 对象转换成json */
-toJson(mapObj)
-```
-
-### 测试把数组对象转换成json
-```
-/*
-声明一个Array 对象
-*/
-
-arrayObj = [1, true, "stringObj"]
-
-/*
-把array 对象转换成json
-*/
-toJson(arrayObj)
-```
-
-### 测试把函数返回对象转为json
-```
-toJson(listTeam())
-```
-
-## 等待N秒
-sleep(second)
-
-参数1:休眠秒数
-```
-sleep(5)
-```
-
-## 使用模版Apply Yaml
-
-### 声明字符串模版
-模版语法同Golang 语法
-```
-configStr = `apiVersion: v1
-kind: ConfigMap
-metadata:
- name: myconfigmap-{{.nameSuffix}}
- namespace: crazywolf
- labels:
- app: myapplication
-data:
- data: {{.randData}}`
-```
-
-### 声明一个Map数据用于替换上面模版的占位
-Map 数据key、value 都必须是字符串,暂不支持其他类型
-```
-randstr = randStr(6)
-data = {"nameSuffix":randstr, "randData":randstr}
-```
-
-### 使用fillTemp 方法用map 替换模版中的占位符,获取最终可执行的yaml字符串
-```
-yaml = fillTemp(configStr, data)
-yaml
-```
-
-### 调用applyYaml 方法在集群中部署yaml
-```
-applyYaml(bj_demo_crazywolf, yaml)
-```
-
-# HPA
-
-## 锁定HPA
-参数1:当前集群环境
-参数2:HPA 名称
-参数3:minReplicas
-
-设置minReplicas值,默认maxReplicas不变;如果当前maxReplicas小于要设置的minReplicas,则将maxReplicas修改为与minReplicas相同
-例如:
-
-```yaml
-demo-hpa
- minReplicas: 2
- maxReplicas: 5
-```
-
-调用 lockHpa(env, "demo-hpa", 3),锁定为3,则执行后结果为
-```yaml
-demo-hpa
- minReplicas: 3
- maxReplicas: 5
-```
-
-如果再次调用 lockHpa(env, "demo-hpa", 10),锁定为10,则执行后结果为
-```yaml
-demo-hpa
- minReplicas: 10
- maxReplicas: 10
-```
-```
-lockHpa(bj_demo_crazywolf, "gjw-test", 1)
-```
-
-## 设置HPA 的Replicas
-参数1:集群环境变量
-
-参数2:HPA名称
-
-参数3:minReplicas
-
-参数4:maxReplicas
-```
-setHpaReplicas(bj_demo_crazywolf, "gjw-test", 1, 2)
-```
-
-### 获取HPA的minReplicas
-参数1:集群环境
-
-参数2:HPA名称
-```
-getHpaMin(bj_demo_crazywolf, "gjw-test")
-```
-
-### 获取HPA的maxReplicas
-参数1:集群环境
-
-参数2:HPA名称
-```
-getHpaMax(bj_demo_crazywolf, "gjw-test")
-```
-
-### 获取HPA当前Replicas
-参数1:集群环境
-
-参数2:HPA名称
-```
-getHpaCurrent(bj_demo_crazywolf, "gjw-test")
-```
-
-## Helm 安装
-
-### 添加Repo
-
-参数1:集群环境变量
-
-参数2:repo name
-
-参数3:repo URL
-```
-addOrUpdateRepo(bj_demo_crazywolf, "bitnami", "https://charts.bitnami.com/bitnami")
-```
-
-### 设置sets
-```
-sets = {
-"wordpressBlogName" : "CrazyWolf3453456"
-}
-```
-
-### 安装Chart
-参数1:集群环境变量
-
-参数2:releaseName
-
-参数3:chart name
-
-参数4:chart版本
-
-参数5:sets参数 (可选)
-```
-installOrUpgradeChart(bj_demo_crazywolf, "my-wordpress", "bitnami/wordpress", "15.2.5", sets)
-```
-
-## 查询已安装的Helm列表
-参…-stage-old.detailroi.mintegral.com")
-
-
-## AWS ASG Size查询和更改
-
-ASG 相关方法都支持通过公共方法并在参数中指定云商,或者直接使用各云商专属的方法名,以获取asg为例
-
-- 参数中指定云商(aws、aliyun、huawei)
-```
-getASG("credential", "aws", "us-west-2", "kmax-demo-asg-small")
-```
-
-- 直接用云商的方法
-```
- getAwsASG("credential", "us-west-2", "kmax-demo-asg-small")
-```
-
-listASGs、getASG、updateASG、lockASG 都支持上述使用方式
-
-### usage:
-
-#### 列出region的ASG, 如果asgName是空, 列出所有的ASG
-```
-listASGs("credential", "aws", region, asgName)
-```
-
-#### 更新ASG的最小容量、最大容量、所需容量
-```
-updateASG("credential","aws", region, asgName, miniSize, maxSize, desiredSize)
-```
-#### 获取ASG的最大、最小、所需容量
-```
-getASG("credential","aws", region, asgName)
-```
-#### 锁定ASG到lockSize
-```
-lockASG("credential","aws", region, asgName, lockSize)
-```
-e.g.
-```
- listASGs("credential", "aws", "us-west-2", "kmax-demo-asg-small")
- getASG("credential", "aws","us-west-2", "kmax-demo-asg-small")
- updateASG("credential", "aws", "us-west-2", "kmax-demo-asg-small", 2, 2, 2)
- lockASG("credential", "aws", "us-west-2", "kmax-demo-asg-small", 2)
-
- listAwsASGs("credential", "us-west-2", "kmax-demo-asg-small")
- getAwsASG("credential", "us-west-2", "kmax-demo-asg-small")
- updateAwsASG("credential", "us-west-2", "kmax-demo-asg-small", 2, 2, 2)
- lockAwsASG("credential", "us-west-2", "kmax-demo-asg-small", 2)
-```
-
-### 查询ASG(s)
-
-参数1:云商credential,请先使用 setCredential(name, key, value) 设置
-
-参数2:region
-
-参数3:asgName (可选)
-
-- 获取Aws asg列表
-```
-listAwsASGs("credential", "us-west-2")
-
-listAwsASGs("credential", "us-west-2", "asgName")
-```
-- 获取阿里云 asg 列表
-```
-listAliASGs("credential", "us-west-2")
-
-listAliASGs("credential", "us-west-2", "asgName")
-```
-- 获取huawei asg 列表
-```
-listHwASGs("credential", "us-west-2")
-
-listHwASGs("credential", "us-west-2", "asgName")
-```
-**使用参数指定云商**
-```
-listHwASGs("credential", "aws", "us-west-2")
-
-listHwASGs("credential", "aws", "us-west-2", "asgName")
-
-listAwsASGs("credential", "us-west-2")
-listAliASGs("credential", "us-west-2")
-listHwASGs("credential", "us-west-2")
-```
-
-### 更改ASG的最小容量、最大容量、所需容量
-
-参数1:云商credential,请先使用 setCredential(name, key, value) 设置
-
-参数2:region
-
-参数3:asgName
-
-参数4:最小
-
-参数5:最大
-
-参数6:期望值
-
-- 修改Aws asg
-```
-updateAwsASG("credential", "us-west-2", "asgName", 1, 100, 50)
-```
-- 修改阿里云 asg
-```
-updateAliASG("credential", "us-west-2", "asgName", 1, 100, 50)
-```
-- 修改 huawei asg
-```
-updateHwASG("credential", "us-west-2", "asgName" ,1, 100, 50)
-```
-**使用参数指定云商**
-```
-updateHwASG("credential", "aws", "us-west-2", "asgName" ,1, 100, 50)
-updateAwsASG("credential", "us-west-2", "kmax-demo-asg-small", 2, 2, 2)
-updateAliASG("credential", "us-west-2", "kmax-demo-asg-small", 2, 2, 2)
-updateHwASG("credential", "us-west-2", "kmax-demo-asg-small", 2, 2, 2)
-```
-
-### 锁定ASG容量
-
-参数1:云商credential,请先使用 setCredential(name, key, value) 设置
-
-参数2:region
-
-参数3:asgName
-
-参数4:锁定值
-
-如果参数4锁定值小于1,则最小值会被修改为当前的期望值
-如果参数4大于最大值,则最大值会被修改为锁定值
-如果期望值小于参数4锁定值,则期望值修改为锁定值
-
-- 锁定Aws asg
-```
-lockAwsASG("credential", "us-west-2", "asgName", 10)
-```
-- 锁定aliyun asg
-```
-lockAliASG("credential", "us-west-2", "asgName", 10)
-```
-- 锁定huawei asg
-```
-lockHwASG("credential", "us-west-2", "asgName", 10)
-
-lockAwsASG("credential", "us-west-2", "asgName", 10)
-lockAliASG("credential", "us-west-2", "asgName", 10)
-lockHwASG("credential", "us-west-2", "asgName", 10)
-```
-
-## 获取asg
-参数1:云商credential,请先使用 setCredential(name, key, value) 设置
-
-参数2:region
-
-参数3:asgName
-
-- 获取Aws asg
-```
-getAwsASG("credential", "us-west-2", "asgName")
-```
-- 获取阿里云 asg
-```
-getAliASG("credential", "us-west-2", "asgName")
-```
-- 获取huawei asg
-```
-getHwASG("credential", "us-west-2", "asgName")
-
-getAwsASG("credential", "us-west-2", "asgName")
-getAliASG("credential", "us-west-2", "asgName")
-getHwASG("credential", "us-west-2", "asgName")
-```
-
-# Bucket 管理
-
-## 列出Bucket
-```
-listBucket
-```
-
-参数1:credential
-
-参数2:云商provider
-
-参数3:endpoint(aliyun的时候必传)
-
-- 列出Aws bucket
-```
-listBucket("credential", "aws")
-```
-- 列出Aliyun bucket
-```
-listBucket("credential", "aliyun", "https://oss-cn-beijing.aliyuncs.com")
-
-listBucket("credential", "aws")
-listBucket("credential", "aliyun", "https://oss-cn-beijing.aliyuncs.com")
-```
-
-## 创建文件夹
-```
-newBucketDir
-```
-
-参数1:credential
-
-参数2:云商provider (aws,aliyun)
-
-参数3:bucketName
-
-参数4:dirName
-
-参数5:endpoint(aliyun的时候必传)
-
-- aws 指定bucket下创建文件夹
-```
-newBucketDir("credential", "aws", "bucketName", "test_dirname")
-```
-- aliyun 指定bucket下创建文件夹
-```
-newBucketDir("credential", "aliyun", "bucketName", "test_dirname", "https://oss-cn-beijing.aliyuncs.com")
-
-newBucketDir("credential", "aws", "bucketName", "test_dirname")
-newBucketDir("credential", "aliyun", "bucketName", "test_dirname", "https://oss-cn-beijing.aliyuncs.com")
-```
-
-## 列出Bucket 中的文件
-```
-listBucketFile
-```
-
-参数1:credential
-
-参数2:云商provider
-
-参数3:bucketName
-
-参数4:前缀 Prefix
-
-参数5:endpoint(aliyun的时候必传)
-
-- 列出aws 指定bucket 下的文件
-```
-listBucketFile("credential", "aws", "bucketName", "test_dirname/")
-```
-- 列出aliyun 指定bucket 下的文件
-```
-listBucketFile("credential", "aliyun", "bucketName", "test_dirname/", "https://oss-cn-beijing.aliyuncs.com")
-
-listBucketFile("credential", "aws", "bucketName", "test_dirname/")
-listBucketFile("credential", "aliyun", "bucketName", "test_dirname/", "https://oss-cn-beijing.aliyuncs.com")
-```
-
-"""
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/multimodal/tools/make_mmc4_global_table.py b/spaces/chendl/compositional_test/multimodal/tools/make_mmc4_global_table.py
deleted file mode 100644
index 9701f099d7e0d7c7653bac18eefce0068c40088c..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/tools/make_mmc4_global_table.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import webdataset as wds
-import glob
-import os
-from tqdm import tqdm
-from tqdm.contrib.concurrent import process_map
-import pickle as pkl
-
-
-def single_thread(filename):
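-    # Build a per-shard mapping from image id (prefix of the caption field) to [tar file name, sample key]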
- id_table = {}
- dataset = wds.WebDataset(filename).decode().to_tuple("json")
- for data in dataset:
- data = data[0]
- image_id = data["caption"].split(".")[0]
- image_key = data["key"]
- tarfile = os.path.basename(filename)
- if image_id not in id_table:
- id_table[image_id] = [tarfile, image_key]
- return id_table
-
-if __name__ == "__main__":
- filenames = sorted(glob.glob("/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/mmc4/images/*.tar"))[:16000]
- print("start from", filenames[0])
- print("to", filenames[-1])
- id_tables = process_map(single_thread, filenames, max_workers=64)
- id_table = {}
- for table in tqdm(id_tables):
- id_table.update(table)
- print("total unique image:", len(id_table))
- pkl.dump(id_table, open("mmc4_id_table.pkl", "wb"))
- print("DONE")
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/visual_bert/utils.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/visual_bert/utils.py
deleted file mode 100644
index 2fc6ea2062efd2412dbd121f2f72c8aec75d36cf..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/visual_bert/utils.py
+++ /dev/null
@@ -1,554 +0,0 @@
-"""
- coding=utf-8
- Copyright 2018, Antonio Mendoza Hao Tan, Mohit Bansal, Huggingface team :)
- Adapted From Facebook Inc, Detectron2
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.import copy
- """
-
-import copy
-import fnmatch
-import json
-import os
-import pickle as pkl
-import shutil
-import sys
-import tarfile
-import tempfile
-from collections import OrderedDict
-from contextlib import contextmanager
-from functools import partial
-from hashlib import sha256
-from io import BytesIO
-from pathlib import Path
-from urllib.parse import urlparse
-from zipfile import ZipFile, is_zipfile
-
-import cv2
-import numpy as np
-import requests
-import wget
-from filelock import FileLock
-from PIL import Image
-from tqdm.auto import tqdm
-from yaml import Loader, dump, load
-
-
-try:
- import torch
-
- _torch_available = True
-except ImportError:
- _torch_available = False
-
-
-try:
- from torch.hub import _get_torch_home
-
- torch_cache_home = _get_torch_home()
-except ImportError:
- torch_cache_home = os.path.expanduser(
- os.getenv("TORCH_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "torch"))
- )
-
-default_cache_path = os.path.join(torch_cache_home, "transformers")
-
-CLOUDFRONT_DISTRIB_PREFIX = "https://cdn.huggingface.co"
-S3_BUCKET_PREFIX = "https://s3.amazonaws.com/models.huggingface.co/bert"
-PATH = "/".join(str(Path(__file__).resolve()).split("/")[:-1])
-CONFIG = os.path.join(PATH, "config.yaml")
-ATTRIBUTES = os.path.join(PATH, "attributes.txt")
-OBJECTS = os.path.join(PATH, "objects.txt")
-PYTORCH_PRETRAINED_BERT_CACHE = os.getenv("PYTORCH_PRETRAINED_BERT_CACHE", default_cache_path)
-PYTORCH_TRANSFORMERS_CACHE = os.getenv("PYTORCH_TRANSFORMERS_CACHE", PYTORCH_PRETRAINED_BERT_CACHE)
-TRANSFORMERS_CACHE = os.getenv("TRANSFORMERS_CACHE", PYTORCH_TRANSFORMERS_CACHE)
-WEIGHTS_NAME = "pytorch_model.bin"
-CONFIG_NAME = "config.yaml"
-
-
-def load_labels(objs=OBJECTS, attrs=ATTRIBUTES):
- vg_classes = []
- with open(objs) as f:
- for object in f.readlines():
- vg_classes.append(object.split(",")[0].lower().strip())
-
- vg_attrs = []
- with open(attrs) as f:
- for object in f.readlines():
- vg_attrs.append(object.split(",")[0].lower().strip())
- return vg_classes, vg_attrs
-
-
-def load_checkpoint(ckp):
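-    # Load a pickled checkpoint (expects a dict with a 'model' key) and convert numpy weights to torch tensors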
- r = OrderedDict()
- with open(ckp, "rb") as f:
- ckp = pkl.load(f)["model"]
- for k in copy.deepcopy(list(ckp.keys())):
- v = ckp.pop(k)
- if isinstance(v, np.ndarray):
- v = torch.tensor(v)
- else:
-            assert isinstance(v, torch.Tensor), type(v)
- r[k] = v
- return r
-
-
-class Config:
- _pointer = {}
-
- def __init__(self, dictionary: dict, name: str = "root", level=0):
- self._name = name
- self._level = level
- d = {}
- for k, v in dictionary.items():
- if v is None:
- raise ValueError()
- k = copy.deepcopy(k)
- v = copy.deepcopy(v)
- if isinstance(v, dict):
- v = Config(v, name=k, level=level + 1)
- d[k] = v
- setattr(self, k, v)
-
- self._pointer = d
-
- def __repr__(self):
- return str(list((self._pointer.keys())))
-
- def __setattr__(self, key, val):
- self.__dict__[key] = val
- self.__dict__[key.upper()] = val
- levels = key.split(".")
- last_level = len(levels) - 1
- pointer = self._pointer
- if len(levels) > 1:
- for i, l in enumerate(levels):
- if hasattr(self, l) and isinstance(getattr(self, l), Config):
- setattr(getattr(self, l), ".".join(levels[i:]), val)
- if l == last_level:
- pointer[l] = val
- else:
- pointer = pointer[l]
-
- def to_dict(self):
- return self._pointer
-
- def dump_yaml(self, data, file_name):
- with open(f"{file_name}", "w") as stream:
- dump(data, stream)
-
- def dump_json(self, data, file_name):
- with open(f"{file_name}", "w") as stream:
- json.dump(data, stream)
-
- @staticmethod
- def load_yaml(config):
- with open(config) as stream:
- data = load(stream, Loader=Loader)
- return data
-
- def __str__(self):
- t = " "
- if self._name != "root":
- r = f"{t * (self._level-1)}{self._name}:\n"
- else:
- r = ""
- level = self._level
- for i, (k, v) in enumerate(self._pointer.items()):
- if isinstance(v, Config):
- r += f"{t * (self._level)}{v}\n"
- self._level += 1
- else:
- r += f"{t * (self._level)}{k}: {v} ({type(v).__name__})\n"
- self._level = level
- return r[:-1]
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: str, **kwargs):
- config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
- return cls(config_dict)
-
- @classmethod
- def get_config_dict(cls, pretrained_model_name_or_path: str, **kwargs):
- cache_dir = kwargs.pop("cache_dir", None)
- force_download = kwargs.pop("force_download", False)
- resume_download = kwargs.pop("resume_download", False)
- proxies = kwargs.pop("proxies", None)
- local_files_only = kwargs.pop("local_files_only", False)
-
- if os.path.isdir(pretrained_model_name_or_path):
- config_file = os.path.join(pretrained_model_name_or_path, CONFIG_NAME)
- elif os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
- config_file = pretrained_model_name_or_path
- else:
- config_file = hf_bucket_url(pretrained_model_name_or_path, filename=CONFIG_NAME, use_cdn=False)
-
- try:
- # Load from URL or cache if already cached
- resolved_config_file = cached_path(
- config_file,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- local_files_only=local_files_only,
- )
- # Load config dict
- if resolved_config_file is None:
- raise EnvironmentError
-
- config_file = Config.load_yaml(resolved_config_file)
-
- except EnvironmentError:
- msg = "Can't load config for"
- raise EnvironmentError(msg)
-
- if resolved_config_file == config_file:
- print("loading configuration file from path")
- else:
- print("loading configuration file cache")
-
- return Config.load_yaml(resolved_config_file), kwargs
-
-
-# quick compare tensors
-def compare(in_tensor):
- out_tensor = torch.load("dump.pt", map_location=in_tensor.device)
- n1 = in_tensor.numpy()
- n2 = out_tensor.numpy()[0]
- print(n1.shape, n1[0, 0, :5])
- print(n2.shape, n2[0, 0, :5])
- assert np.allclose(n1, n2, rtol=0.01, atol=0.1), (
- f"{sum([1 for x in np.isclose(n1, n2, rtol=0.01, atol=0.1).flatten() if x is False])/len(n1.flatten())*100:.4f} %"
- " element-wise mismatch"
- )
- raise Exception("tensors are all good")
-
- # Hugging face functions below
-
-
-def is_remote_url(url_or_filename):
- parsed = urlparse(url_or_filename)
- return parsed.scheme in ("http", "https")
-
-
-def hf_bucket_url(model_id: str, filename: str, use_cdn=True) -> str:
- endpoint = CLOUDFRONT_DISTRIB_PREFIX if use_cdn else S3_BUCKET_PREFIX
- legacy_format = "/" not in model_id
- if legacy_format:
- return f"{endpoint}/{model_id}-{filename}"
- else:
- return f"{endpoint}/{model_id}/{filename}"
-
-
-def http_get(
- url,
- temp_file,
- proxies=None,
- resume_size=0,
- user_agent=None,
-):
- ua = "python/{}".format(sys.version.split()[0])
- if _torch_available:
- ua += "; torch/{}".format(torch.__version__)
- if isinstance(user_agent, dict):
- ua += "; " + "; ".join("{}/{}".format(k, v) for k, v in user_agent.items())
- elif isinstance(user_agent, str):
- ua += "; " + user_agent
- headers = {"user-agent": ua}
- if resume_size > 0:
- headers["Range"] = "bytes=%d-" % (resume_size,)
- response = requests.get(url, stream=True, proxies=proxies, headers=headers)
- if response.status_code == 416: # Range not satisfiable
- return
- content_length = response.headers.get("Content-Length")
- total = resume_size + int(content_length) if content_length is not None else None
- progress = tqdm(
- unit="B",
- unit_scale=True,
- total=total,
- initial=resume_size,
- desc="Downloading",
- )
- for chunk in response.iter_content(chunk_size=1024):
- if chunk: # filter out keep-alive new chunks
- progress.update(len(chunk))
- temp_file.write(chunk)
- progress.close()
-
-
-def get_from_cache(
- url,
- cache_dir=None,
- force_download=False,
- proxies=None,
- etag_timeout=10,
- resume_download=False,
- user_agent=None,
- local_files_only=False,
-):
- if cache_dir is None:
- cache_dir = TRANSFORMERS_CACHE
- if isinstance(cache_dir, Path):
- cache_dir = str(cache_dir)
-
- os.makedirs(cache_dir, exist_ok=True)
-
- etag = None
- if not local_files_only:
- try:
- response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
- if response.status_code == 200:
- etag = response.headers.get("ETag")
- except (EnvironmentError, requests.exceptions.Timeout):
- # etag is already None
- pass
-
- filename = url_to_filename(url, etag)
-
- # get cache path to put the file
- cache_path = os.path.join(cache_dir, filename)
-
- # etag is None = we don't have a connection, or url doesn't exist, or is otherwise inaccessible.
- # try to get the last downloaded one
- if etag is None:
- if os.path.exists(cache_path):
- return cache_path
- else:
- matching_files = [
- file
- for file in fnmatch.filter(os.listdir(cache_dir), filename + ".*")
- if not file.endswith(".json") and not file.endswith(".lock")
- ]
- if len(matching_files) > 0:
- return os.path.join(cache_dir, matching_files[-1])
- else:
- # If files cannot be found and local_files_only=True,
- # the models might've been found if local_files_only=False
- # Notify the user about that
- if local_files_only:
- raise ValueError(
- "Cannot find the requested files in the cached path and outgoing traffic has been"
- " disabled. To enable model look-ups and downloads online, set 'local_files_only'"
- " to False."
- )
- return None
-
- # From now on, etag is not None.
- if os.path.exists(cache_path) and not force_download:
- return cache_path
-
- # Prevent parallel downloads of the same file with a lock.
- lock_path = cache_path + ".lock"
- with FileLock(lock_path):
- # If the download just completed while the lock was activated.
- if os.path.exists(cache_path) and not force_download:
- # Even if returning early like here, the lock will be released.
- return cache_path
-
- if resume_download:
- incomplete_path = cache_path + ".incomplete"
-
- @contextmanager
- def _resumable_file_manager():
- with open(incomplete_path, "a+b") as f:
- yield f
-
- temp_file_manager = _resumable_file_manager
- if os.path.exists(incomplete_path):
- resume_size = os.stat(incomplete_path).st_size
- else:
- resume_size = 0
- else:
- temp_file_manager = partial(tempfile.NamedTemporaryFile, dir=cache_dir, delete=False)
- resume_size = 0
-
- # Download to temporary file, then copy to cache dir once finished.
- # Otherwise you get corrupt cache entries if the download gets interrupted.
- with temp_file_manager() as temp_file:
-            print(
-                f"{url} not found in cache or force_download set to True,"
-                f" downloading to {temp_file.name}"
-            )
-
- http_get(
- url,
- temp_file,
- proxies=proxies,
- resume_size=resume_size,
- user_agent=user_agent,
- )
-
- os.replace(temp_file.name, cache_path)
-
- meta = {"url": url, "etag": etag}
- meta_path = cache_path + ".json"
- with open(meta_path, "w") as meta_file:
- json.dump(meta, meta_file)
-
- return cache_path
-
-
-def url_to_filename(url, etag=None):
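-    # Build a stable cache filename by hashing the URL (and the ETag, if given) with sha256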
- url_bytes = url.encode("utf-8")
- url_hash = sha256(url_bytes)
- filename = url_hash.hexdigest()
-
- if etag:
- etag_bytes = etag.encode("utf-8")
- etag_hash = sha256(etag_bytes)
- filename += "." + etag_hash.hexdigest()
-
- if url.endswith(".h5"):
- filename += ".h5"
-
- return filename
-
-
-def cached_path(
- url_or_filename,
- cache_dir=None,
- force_download=False,
- proxies=None,
- resume_download=False,
- user_agent=None,
- extract_compressed_file=False,
- force_extract=False,
- local_files_only=False,
-):
- if cache_dir is None:
- cache_dir = TRANSFORMERS_CACHE
- if isinstance(url_or_filename, Path):
- url_or_filename = str(url_or_filename)
- if isinstance(cache_dir, Path):
- cache_dir = str(cache_dir)
-
- if is_remote_url(url_or_filename):
- # URL, so get it from the cache (downloading if necessary)
- output_path = get_from_cache(
- url_or_filename,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- user_agent=user_agent,
- local_files_only=local_files_only,
- )
- elif os.path.exists(url_or_filename):
- # File, and it exists.
- output_path = url_or_filename
- elif urlparse(url_or_filename).scheme == "":
- # File, but it doesn't exist.
- raise EnvironmentError("file {} not found".format(url_or_filename))
- else:
- # Something unknown
- raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename))
-
- if extract_compressed_file:
- if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
- return output_path
-
- # Path where we extract compressed archives
- # We avoid '.' in dir name and add "-extracted" at the end: "./model.zip" => "./model-zip-extracted/"
- output_dir, output_file = os.path.split(output_path)
- output_extract_dir_name = output_file.replace(".", "-") + "-extracted"
- output_path_extracted = os.path.join(output_dir, output_extract_dir_name)
-
- if os.path.isdir(output_path_extracted) and os.listdir(output_path_extracted) and not force_extract:
- return output_path_extracted
-
- # Prevent parallel extractions
- lock_path = output_path + ".lock"
- with FileLock(lock_path):
- shutil.rmtree(output_path_extracted, ignore_errors=True)
- os.makedirs(output_path_extracted)
- if is_zipfile(output_path):
- with ZipFile(output_path, "r") as zip_file:
- zip_file.extractall(output_path_extracted)
- zip_file.close()
- elif tarfile.is_tarfile(output_path):
- tar_file = tarfile.open(output_path)
- tar_file.extractall(output_path_extracted)
- tar_file.close()
- else:
- raise EnvironmentError("Archive format of {} could not be identified".format(output_path))
-
- return output_path_extracted
-
- return output_path
-
-
-def get_data(query, delim=","):
- assert isinstance(query, str)
- if os.path.isfile(query):
- with open(query) as f:
- data = eval(f.read())
- else:
- req = requests.get(query)
- try:
-            data = req.json()
- except Exception:
- data = req.content.decode()
- assert data is not None, "could not connect"
- try:
- data = eval(data)
- except Exception:
- data = data.split("\n")
- req.close()
- return data
-
-
-def get_image_from_url(url):
- response = requests.get(url)
- img = np.array(Image.open(BytesIO(response.content)))
- return img
-
-
-# to load legacy frcnn checkpoint from detectron
-def load_frcnn_pkl_from_url(url):
- fn = url.split("/")[-1]
- if fn not in os.listdir(os.getcwd()):
- wget.download(url)
- with open(fn, "rb") as stream:
- weights = pkl.load(stream)
- model = weights.pop("model")
- new = {}
- for k, v in model.items():
- new[k] = torch.from_numpy(v)
- if "running_var" in k:
- zero = torch.tensor([0])
- k2 = k.replace("running_var", "num_batches_tracked")
- new[k2] = zero
- return new
-
-
-def get_demo_path():
- print(f"{os.path.abspath(os.path.join(PATH, os.pardir))}/demo.ipynb")
-
-
-def img_tensorize(im, input_format="RGB"):
- assert isinstance(im, str)
- if os.path.isfile(im):
- img = cv2.imread(im)
- else:
- img = get_image_from_url(im)
- assert img is not None, f"could not connect to: {im}"
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- if input_format == "RGB":
- img = img[:, :, ::-1]
- return img
-
-
-def chunk(images, batch=1):
- return (images[i : i + batch] for i in range(0, len(images), batch))
diff --git a/spaces/chilge/taoli/README.md b/spaces/chilge/taoli/README.md
deleted file mode 100644
index 31b33c52f82a9841c3a31247d4558f3d92da5105..0000000000000000000000000000000000000000
--- a/spaces/chilge/taoli/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 姬宫桃李
-emoji: 🌍
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-duplicated_from: chilge/m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/expr/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/expr/__init__.py
deleted file mode 100644
index 6ba7f8b8b96e28e4f0f7f143f29023d1bc0e58ba..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/expr/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-"""Tools for creating transform & filter expressions with a python syntax"""
-# ruff: noqa
-from typing import Any
-
-from .core import datum, Expression
-from .funcs import *
-from .consts import *
-from ..vegalite.v5.schema.core import ExprRef as _ExprRef
-
-
-class _ExprType:
- def __init__(self, expr):
- vars(self).update(expr)
-
- def __call__(self, expr, **kwargs):
- return _ExprRef(expr, **kwargs)
-
-
-expr: Any = _ExprType(globals())
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/shell_completion.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/shell_completion.py
deleted file mode 100644
index 5de124702ec711c7fc7e8244d95812aee41747a0..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/shell_completion.py
+++ /dev/null
@@ -1,593 +0,0 @@
-import os
-import re
-import typing as t
-from gettext import gettext as _
-
-from .core import Argument
-from .core import BaseCommand
-from .core import Context
-from .core import MultiCommand
-from .core import Option
-from .core import Parameter
-from .core import ParameterSource
-from .parser import split_arg_string
-from .utils import echo
-
-
-def shell_complete(
- cli: BaseCommand,
- ctx_args: t.MutableMapping[str, t.Any],
- prog_name: str,
- complete_var: str,
- instruction: str,
-) -> int:
- """Perform shell completion for the given CLI program.
-
- :param cli: Command being called.
- :param ctx_args: Extra arguments to pass to
- ``cli.make_context``.
- :param prog_name: Name of the executable in the shell.
- :param complete_var: Name of the environment variable that holds
- the completion instruction.
- :param instruction: Value of ``complete_var`` with the completion
- instruction and shell, in the form ``instruction_shell``.
- :return: Status code to exit with.
- """
- shell, _, instruction = instruction.partition("_")
- comp_cls = get_completion_class(shell)
-
- if comp_cls is None:
- return 1
-
- comp = comp_cls(cli, ctx_args, prog_name, complete_var)
-
- if instruction == "source":
- echo(comp.source())
- return 0
-
- if instruction == "complete":
- echo(comp.complete())
- return 0
-
- return 1
-
-
-class CompletionItem:
- """Represents a completion value and metadata about the value. The
- default metadata is ``type`` to indicate special shell handling,
- and ``help`` if a shell supports showing a help string next to the
- value.
-
- Arbitrary parameters can be passed when creating the object, and
- accessed using ``item.attr``. If an attribute wasn't passed,
- accessing it returns ``None``.
-
- :param value: The completion suggestion.
- :param type: Tells the shell script to provide special completion
- support for the type. Click uses ``"dir"`` and ``"file"``.
- :param help: String shown next to the value if supported.
- :param kwargs: Arbitrary metadata. The built-in implementations
- don't use this, but custom type completions paired with custom
- shell support could use it.
- """
-
- __slots__ = ("value", "type", "help", "_info")
-
- def __init__(
- self,
- value: t.Any,
- type: str = "plain",
- help: t.Optional[str] = None,
- **kwargs: t.Any,
- ) -> None:
- self.value: t.Any = value
- self.type: str = type
- self.help: t.Optional[str] = help
- self._info = kwargs
-
- def __getattr__(self, name: str) -> t.Any:
- return self._info.get(name)
-
-
-# Only Bash >= 4.4 has the nosort option.
-_SOURCE_BASH = """\
-%(complete_func)s() {
- local IFS=$'\\n'
- local response
-
- response=$(env COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \
-%(complete_var)s=bash_complete $1)
-
- for completion in $response; do
- IFS=',' read type value <<< "$completion"
-
- if [[ $type == 'dir' ]]; then
- COMPREPLY=()
- compopt -o dirnames
- elif [[ $type == 'file' ]]; then
- COMPREPLY=()
- compopt -o default
- elif [[ $type == 'plain' ]]; then
- COMPREPLY+=($value)
- fi
- done
-
- return 0
-}
-
-%(complete_func)s_setup() {
- complete -o nosort -F %(complete_func)s %(prog_name)s
-}
-
-%(complete_func)s_setup;
-"""
-
-_SOURCE_ZSH = """\
-#compdef %(prog_name)s
-
-%(complete_func)s() {
- local -a completions
- local -a completions_with_descriptions
- local -a response
- (( ! $+commands[%(prog_name)s] )) && return 1
-
- response=("${(@f)$(env COMP_WORDS="${words[*]}" COMP_CWORD=$((CURRENT-1)) \
-%(complete_var)s=zsh_complete %(prog_name)s)}")
-
- for type key descr in ${response}; do
- if [[ "$type" == "plain" ]]; then
- if [[ "$descr" == "_" ]]; then
- completions+=("$key")
- else
- completions_with_descriptions+=("$key":"$descr")
- fi
- elif [[ "$type" == "dir" ]]; then
- _path_files -/
- elif [[ "$type" == "file" ]]; then
- _path_files -f
- fi
- done
-
- if [ -n "$completions_with_descriptions" ]; then
- _describe -V unsorted completions_with_descriptions -U
- fi
-
- if [ -n "$completions" ]; then
- compadd -U -V unsorted -a completions
- fi
-}
-
-if [[ $zsh_eval_context[-1] == loadautofunc ]]; then
- # autoload from fpath, call function directly
- %(complete_func)s "$@"
-else
- # eval/source/. command, register function for later
- compdef %(complete_func)s %(prog_name)s
-fi
-"""
-
-_SOURCE_FISH = """\
-function %(complete_func)s
- set -l response (env %(complete_var)s=fish_complete COMP_WORDS=(commandline -cp) \
-COMP_CWORD=(commandline -t) %(prog_name)s)
-
- for completion in $response
- set -l metadata (string split "," $completion)
-
- if test $metadata[1] = "dir"
- __fish_complete_directories $metadata[2]
- else if test $metadata[1] = "file"
- __fish_complete_path $metadata[2]
- else if test $metadata[1] = "plain"
- echo $metadata[2]
- end
- end
-end
-
-complete --no-files --command %(prog_name)s --arguments \
-"(%(complete_func)s)"
-"""
-
-
-class ShellComplete:
- """Base class for providing shell completion support. A subclass for
- a given shell will override attributes and methods to implement the
- completion instructions (``source`` and ``complete``).
-
- :param cli: Command being called.
- :param prog_name: Name of the executable in the shell.
- :param complete_var: Name of the environment variable that holds
- the completion instruction.
-
- .. versionadded:: 8.0
- """
-
- name: t.ClassVar[str]
- """Name to register the shell as with :func:`add_completion_class`.
- This is used in completion instructions (``{name}_source`` and
- ``{name}_complete``).
- """
-
- source_template: t.ClassVar[str]
- """Completion script template formatted by :meth:`source`. This must
- be provided by subclasses.
- """
-
- def __init__(
- self,
- cli: BaseCommand,
- ctx_args: t.MutableMapping[str, t.Any],
- prog_name: str,
- complete_var: str,
- ) -> None:
- self.cli = cli
- self.ctx_args = ctx_args
- self.prog_name = prog_name
- self.complete_var = complete_var
-
- @property
- def func_name(self) -> str:
- """The name of the shell function defined by the completion
- script.
- """
-        safe_name = re.sub(r"\W*", "", self.prog_name.replace("-", "_"), flags=re.ASCII)
- return f"_{safe_name}_completion"
-
- def source_vars(self) -> t.Dict[str, t.Any]:
- """Vars for formatting :attr:`source_template`.
-
- By default this provides ``complete_func``, ``complete_var``,
- and ``prog_name``.
- """
- return {
- "complete_func": self.func_name,
- "complete_var": self.complete_var,
- "prog_name": self.prog_name,
- }
-
- def source(self) -> str:
- """Produce the shell script that defines the completion
- function. By default this ``%``-style formats
- :attr:`source_template` with the dict returned by
- :meth:`source_vars`.
- """
- return self.source_template % self.source_vars()
-
- def get_completion_args(self) -> t.Tuple[t.List[str], str]:
- """Use the env vars defined by the shell script to return a
- tuple of ``args, incomplete``. This must be implemented by
- subclasses.
- """
- raise NotImplementedError
-
- def get_completions(
- self, args: t.List[str], incomplete: str
- ) -> t.List[CompletionItem]:
- """Determine the context and last complete command or parameter
- from the complete args. Call that object's ``shell_complete``
- method to get the completions for the incomplete value.
-
- :param args: List of complete args before the incomplete value.
- :param incomplete: Value being completed. May be empty.
- """
- ctx = _resolve_context(self.cli, self.ctx_args, self.prog_name, args)
- obj, incomplete = _resolve_incomplete(ctx, args, incomplete)
- return obj.shell_complete(ctx, incomplete)
-
- def format_completion(self, item: CompletionItem) -> str:
- """Format a completion item into the form recognized by the
- shell script. This must be implemented by subclasses.
-
- :param item: Completion item to format.
- """
- raise NotImplementedError
-
- def complete(self) -> str:
- """Produce the completion data to send back to the shell.
-
- By default this calls :meth:`get_completion_args`, gets the
- completions, then calls :meth:`format_completion` for each
- completion.
- """
- args, incomplete = self.get_completion_args()
- completions = self.get_completions(args, incomplete)
- out = [self.format_completion(item) for item in completions]
- return "\n".join(out)
-
-
-class BashComplete(ShellComplete):
- """Shell completion for Bash."""
-
- name = "bash"
- source_template = _SOURCE_BASH
-
- def _check_version(self) -> None:
- import subprocess
-
- output = subprocess.run(
- ["bash", "-c", 'echo "${BASH_VERSION}"'], stdout=subprocess.PIPE
- )
- match = re.search(r"^(\d+)\.(\d+)\.\d+", output.stdout.decode())
-
- if match is not None:
- major, minor = match.groups()
-
- if major < "4" or major == "4" and minor < "4":
- raise RuntimeError(
- _(
- "Shell completion is not supported for Bash"
- " versions older than 4.4."
- )
- )
- else:
- raise RuntimeError(
- _("Couldn't detect Bash version, shell completion is not supported.")
- )
-
- def source(self) -> str:
- self._check_version()
- return super().source()
-
- def get_completion_args(self) -> t.Tuple[t.List[str], str]:
- cwords = split_arg_string(os.environ["COMP_WORDS"])
- cword = int(os.environ["COMP_CWORD"])
- args = cwords[1:cword]
-
- try:
- incomplete = cwords[cword]
- except IndexError:
- incomplete = ""
-
- return args, incomplete
-
- def format_completion(self, item: CompletionItem) -> str:
- return f"{item.type},{item.value}"
-
-
-class ZshComplete(ShellComplete):
- """Shell completion for Zsh."""
-
- name = "zsh"
- source_template = _SOURCE_ZSH
-
- def get_completion_args(self) -> t.Tuple[t.List[str], str]:
- cwords = split_arg_string(os.environ["COMP_WORDS"])
- cword = int(os.environ["COMP_CWORD"])
- args = cwords[1:cword]
-
- try:
- incomplete = cwords[cword]
- except IndexError:
- incomplete = ""
-
- return args, incomplete
-
- def format_completion(self, item: CompletionItem) -> str:
- return f"{item.type}\n{item.value}\n{item.help if item.help else '_'}"
-
-
-class FishComplete(ShellComplete):
- """Shell completion for Fish."""
-
- name = "fish"
- source_template = _SOURCE_FISH
-
- def get_completion_args(self) -> t.Tuple[t.List[str], str]:
- cwords = split_arg_string(os.environ["COMP_WORDS"])
- incomplete = os.environ["COMP_CWORD"]
- args = cwords[1:]
-
- # Fish stores the partial word in both COMP_WORDS and
- # COMP_CWORD, remove it from complete args.
- if incomplete and args and args[-1] == incomplete:
- args.pop()
-
- return args, incomplete
-
- def format_completion(self, item: CompletionItem) -> str:
- if item.help:
- return f"{item.type},{item.value}\t{item.help}"
-
- return f"{item.type},{item.value}"
-
-
-ShellCompleteType = t.TypeVar("ShellCompleteType", bound=t.Type[ShellComplete])
-
-
-_available_shells: t.Dict[str, t.Type[ShellComplete]] = {
- "bash": BashComplete,
- "fish": FishComplete,
- "zsh": ZshComplete,
-}
-
-
-def add_completion_class(
- cls: ShellCompleteType, name: t.Optional[str] = None
-) -> ShellCompleteType:
- """Register a :class:`ShellComplete` subclass under the given name.
- The name will be provided by the completion instruction environment
- variable during completion.
-
- :param cls: The completion class that will handle completion for the
- shell.
- :param name: Name to register the class under. Defaults to the
- class's ``name`` attribute.
- """
- if name is None:
- name = cls.name
-
- _available_shells[name] = cls
-
- return cls
-
-
-def get_completion_class(shell: str) -> t.Optional[t.Type[ShellComplete]]:
- """Look up a registered :class:`ShellComplete` subclass by the name
- provided by the completion instruction environment variable. If the
- name isn't registered, returns ``None``.
-
- :param shell: Name the class is registered under.
- """
- return _available_shells.get(shell)
-
-
-def _is_incomplete_argument(ctx: Context, param: Parameter) -> bool:
- """Determine if the given parameter is an argument that can still
- accept values.
-
- :param ctx: Invocation context for the command represented by the
- parsed complete args.
- :param param: Argument object being checked.
- """
- if not isinstance(param, Argument):
- return False
-
- assert param.name is not None
- # Will be None if expose_value is False.
- value = ctx.params.get(param.name)
- return (
- param.nargs == -1
- or ctx.get_parameter_source(param.name) is not ParameterSource.COMMANDLINE
- or (
- param.nargs > 1
- and isinstance(value, (tuple, list))
- and len(value) < param.nargs
- )
- )
-
-
-def _start_of_option(ctx: Context, value: str) -> bool:
- """Check if the value looks like the start of an option."""
- if not value:
- return False
-
- c = value[0]
- return c in ctx._opt_prefixes
-
-
-def _is_incomplete_option(ctx: Context, args: t.List[str], param: Parameter) -> bool:
- """Determine if the given parameter is an option that needs a value.
-
- :param args: List of complete args before the incomplete value.
- :param param: Option object being checked.
- """
- if not isinstance(param, Option):
- return False
-
- if param.is_flag or param.count:
- return False
-
- last_option = None
-
- for index, arg in enumerate(reversed(args)):
- if index + 1 > param.nargs:
- break
-
- if _start_of_option(ctx, arg):
- last_option = arg
-
- return last_option is not None and last_option in param.opts
-
-
-def _resolve_context(
- cli: BaseCommand,
- ctx_args: t.MutableMapping[str, t.Any],
- prog_name: str,
- args: t.List[str],
-) -> Context:
- """Produce the context hierarchy starting with the command and
- traversing the complete arguments. This only follows the commands,
- it doesn't trigger input prompts or callbacks.
-
- :param cli: Command being called.
- :param prog_name: Name of the executable in the shell.
- :param args: List of complete args before the incomplete value.
- """
- ctx_args["resilient_parsing"] = True
- ctx = cli.make_context(prog_name, args.copy(), **ctx_args)
- args = ctx.protected_args + ctx.args
-
- while args:
- command = ctx.command
-
- if isinstance(command, MultiCommand):
- if not command.chain:
- name, cmd, args = command.resolve_command(ctx, args)
-
- if cmd is None:
- return ctx
-
- ctx = cmd.make_context(name, args, parent=ctx, resilient_parsing=True)
- args = ctx.protected_args + ctx.args
- else:
- sub_ctx = ctx
-
- while args:
- name, cmd, args = command.resolve_command(ctx, args)
-
- if cmd is None:
- return ctx
-
- sub_ctx = cmd.make_context(
- name,
- args,
- parent=ctx,
- allow_extra_args=True,
- allow_interspersed_args=False,
- resilient_parsing=True,
- )
- args = sub_ctx.args
-
- ctx = sub_ctx
- args = [*sub_ctx.protected_args, *sub_ctx.args]
- else:
- break
-
- return ctx
-
-
-def _resolve_incomplete(
- ctx: Context, args: t.List[str], incomplete: str
-) -> t.Tuple[t.Union[BaseCommand, Parameter], str]:
- """Find the Click object that will handle the completion of the
- incomplete value. Return the object and the incomplete value.
-
- :param ctx: Invocation context for the command represented by
- the parsed complete args.
- :param args: List of complete args before the incomplete value.
- :param incomplete: Value being completed. May be empty.
- """
- # Different shells treat an "=" between a long option name and
- # value differently. Might keep the value joined, return the "="
- # as a separate item, or return the split name and value. Always
- # split and discard the "=" to make completion easier.
- if incomplete == "=":
- incomplete = ""
- elif "=" in incomplete and _start_of_option(ctx, incomplete):
- name, _, incomplete = incomplete.partition("=")
- args.append(name)
-
- # The "--" marker tells Click to stop treating values as options
- # even if they start with the option character. If it hasn't been
- # given and the incomplete arg looks like an option, the current
- # command will provide option name completions.
- if "--" not in args and _start_of_option(ctx, incomplete):
- return ctx.command, incomplete
-
- params = ctx.command.get_params(ctx)
-
- # If the last complete arg is an option name with an incomplete
- # value, the option will provide value completions.
- for param in params:
- if _is_incomplete_option(ctx, args, param):
- return param, incomplete
-
- # It's not an option name or value. The first argument without a
- # parsed value will provide value completions.
- for param in params:
- if _is_incomplete_argument(ctx, param):
- return param, incomplete
-
- # There were no unparsed arguments, the command may be a group that
- # will provide command name completions.
- return ctx.command, incomplete
diff --git a/spaces/cihyFjudo/fairness-paper-search/Introduction To Probability Models 10th Pdf 11.md b/spaces/cihyFjudo/fairness-paper-search/Introduction To Probability Models 10th Pdf 11.md
deleted file mode 100644
index 3858f1ade7819875c86b2797492f8237632e0561..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Introduction To Probability Models 10th Pdf 11.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
Our aim was to investigate in a joint cohort how important a role pre-pregnancy obesity of the mother plays in the likelihood of developing isolated gestational hypertension (GH) and preeclampsia (PE), compared to the role of other risk factors. Our analysis was based on the measurement of several statistical indicators, in the evaluation of multivariate probability models. In order to broaden the assessment of the quality of the prediction of potential GH and PE risk markers, in addition to AUC, we used two newer coefficients: Integrated Discrimination Improvement (IDI) and Net Reclassification Improvement (NRI). The IDI measures the mean change in disease probability when a new marker is added to the model. The NRI, on the other hand, provides a clinically very favorable interpretation by calculating the percentage of persons in whom the addition of the marker under examination improves or worsens the prediction (classification). Missing data were treated as an additional category, and each analysis was based on the same dataset [23,24,25]. We have not found a similar study in the literature.
-
Subsequently, multi-factor predictive regression models (separately for GH and PE) were built. A small basic regression model was built, which included age and primiparity. Subsequent models were extended, and one additional (tested) variable was added to the base model. Three prediction indexes were used to assess the improvement in prediction (change in disease probability) in the subsequent extended multivariate models (compared to the base model): Integrated Discrimination Improvement (IDI), Net Reclassification Improvement (NRI), and area under receiver operating characteristic curve (AUC under ROC curve) of the basic and extended model. For each of the three indicators, 95% confidence intervals were calculated, and their statistical significance (p-value) was checked. High and statistically significant values obtained for the difference of AUC and for IDI and NRI prove good predictive ability of the variable added to the basic regression model [23,24,25].
AUC is a known prediction factor in regression models; the greater the difference between the AUC of the extended model and the AUC of the base model, the greater the improvement in the prediction when a new variable is added to the model. The IDI index shows the difference between the value of the mean change in the predicted probability between the group of sick women and the group of healthy women. The NRI index focuses on the reclassification table describing the number of women in whom an upward or downward shift in the disease probability value occurred after a new factor had been added to the model.
-
Importantly, we assessed multivariate prediction models to investigate the importance of pre-pregnancy obesity/overweight, and not to confirm that the greater number of predictors increases the prediction. Our analysis has several advantages over the widely used odds ratio calculations or AUC analysis. IDI determines the mean change in disease probability due to the addition of a new potential marker to the model. NRI gives a clinically very favorable interpretation by providing the percentage of people in whom the addition of the marker under study improves or worsens the prediction (classification). By summing up the marker hierarchy obtained in the AUC, IDI, and NRI examination, we established the final predictor hierarchy taking into consideration many mathematical results.
-
-
\ No newline at end of file
diff --git a/spaces/cmudrc/microstructure-data-explorer/README.md b/spaces/cmudrc/microstructure-data-explorer/README.md
deleted file mode 100644
index 6ba5e3c2054654f7a87e4e90b13588d9f4eeaf7d..0000000000000000000000000000000000000000
--- a/spaces/cmudrc/microstructure-data-explorer/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Microstructure Data Explorer
-emoji: 🔢 📏 🧭
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-license: mit
----
\ No newline at end of file
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/sbrdsp_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/sbrdsp_init_arm.c
deleted file mode 100644
index 4fb69f922b07b5fc61334a05e229f23ad47beed6..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/sbrdsp_init_arm.c
+++ /dev/null
@@ -1,73 +0,0 @@
-/*
- * Copyright (c) 2012 Mans Rullgard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "config.h"
-#include "libavutil/arm/cpu.h"
-#include "libavutil/attributes.h"
-#include "libavcodec/sbrdsp.h"
-
-void ff_sbr_sum64x5_neon(float *z);
-float ff_sbr_sum_square_neon(float (*x)[2], int n);
-void ff_sbr_neg_odd_64_neon(float *x);
-void ff_sbr_qmf_pre_shuffle_neon(float *z);
-void ff_sbr_qmf_post_shuffle_neon(float W[32][2], const float *z);
-void ff_sbr_qmf_deint_neg_neon(float *v, const float *src);
-void ff_sbr_qmf_deint_bfly_neon(float *v, const float *src0, const float *src1);
-void ff_sbr_hf_g_filt_neon(float (*Y)[2], const float (*X_high)[40][2],
- const float *g_filt, int m_max, intptr_t ixh);
-void ff_sbr_hf_gen_neon(float (*X_high)[2], const float (*X_low)[2],
- const float alpha0[2], const float alpha1[2],
- float bw, int start, int end);
-void ff_sbr_autocorrelate_neon(const float x[40][2], float phi[3][2][2]);
-
-void ff_sbr_hf_apply_noise_0_neon(float Y[64][2], const float *s_m,
- const float *q_filt, int noise,
- int kx, int m_max);
-void ff_sbr_hf_apply_noise_1_neon(float Y[64][2], const float *s_m,
- const float *q_filt, int noise,
- int kx, int m_max);
-void ff_sbr_hf_apply_noise_2_neon(float Y[64][2], const float *s_m,
- const float *q_filt, int noise,
- int kx, int m_max);
-void ff_sbr_hf_apply_noise_3_neon(float Y[64][2], const float *s_m,
- const float *q_filt, int noise,
- int kx, int m_max);
-
-av_cold void ff_sbrdsp_init_arm(SBRDSPContext *s)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_neon(cpu_flags)) {
- s->sum64x5 = ff_sbr_sum64x5_neon;
- s->sum_square = ff_sbr_sum_square_neon;
- s->neg_odd_64 = ff_sbr_neg_odd_64_neon;
- s->qmf_pre_shuffle = ff_sbr_qmf_pre_shuffle_neon;
- s->qmf_post_shuffle = ff_sbr_qmf_post_shuffle_neon;
- s->qmf_deint_neg = ff_sbr_qmf_deint_neg_neon;
- s->qmf_deint_bfly = ff_sbr_qmf_deint_bfly_neon;
- s->hf_g_filt = ff_sbr_hf_g_filt_neon;
- s->hf_gen = ff_sbr_hf_gen_neon;
- s->autocorrelate = ff_sbr_autocorrelate_neon;
- s->hf_apply_noise[0] = ff_sbr_hf_apply_noise_0_neon;
- s->hf_apply_noise[1] = ff_sbr_hf_apply_noise_1_neon;
- s->hf_apply_noise[2] = ff_sbr_hf_apply_noise_2_neon;
- s->hf_apply_noise[3] = ff_sbr_hf_apply_noise_3_neon;
- }
-}
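The deleted sbrdsp_init_arm.c above follows the usual runtime-dispatch pattern: query the CPU flags once, then overwrite function pointers in a context struct with NEON implementations only when the hardware reports support. Below is a minimal, self-contained sketch of that pattern; all names are illustrative stand-ins (not FFmpeg's API), and it assumes the generic init installs portable C fallbacks before an arch hook overrides them.

```c
#include <stdio.h>

/* Illustrative context with one function pointer, loosely modeled on SBRDSPContext. */
typedef struct DSPContext {
    void (*neg_odd_64)(float *x);
} DSPContext;

/* Portable C fallback: negate the odd-indexed floats of a 64-element buffer. */
static void neg_odd_64_c(float *x)
{
    for (int i = 1; i < 64; i += 2)
        x[i] = -x[i];
}

/* Stand-in for the NEON routine; a real build would point at assembly. */
static void neg_odd_64_neon(float *x)
{
    neg_odd_64_c(x);
}

/* Stand-in for the av_get_cpu_flags()/have_neon() check; hardwired here. */
static int have_neon_flag(void)
{
    return 1;
}

/* Generic init installs the C version, then the "arch init" overrides it. */
static void dsp_init(DSPContext *s)
{
    s->neg_odd_64 = neg_odd_64_c;
    if (have_neon_flag())
        s->neg_odd_64 = neg_odd_64_neon;
}

int main(void)
{
    DSPContext ctx;
    float buf[64] = {0};
    dsp_init(&ctx);
    ctx.neg_odd_64(buf);
    printf("dispatch sketch ran\n");
    return 0;
}
```

The benefit of the pattern is that callers only ever invoke the function pointers, so the fast path is chosen once at init time instead of being re-checked on every call.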
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbrt_tablegen.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbrt_tablegen.c
deleted file mode 100644
index 8c2235e9876b733ddee04072db483e665afd512d..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbrt_tablegen.c
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * Generate a header file for hardcoded AAC cube-root table
- *
- * Copyright (c) 2010 Reimar Döffinger
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#define USE_FIXED 0
-#include "cbrt_tablegen_template.c"
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideoencdsp_init_mips.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideoencdsp_init_mips.c
deleted file mode 100644
index 3efbeec34aaa038137ae92906d2dd10026469c58..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideoencdsp_init_mips.c
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Copyright (c) 2015 Manojkumar Bhosale (Manojkumar.Bhosale@imgtec.com)
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/attributes.h"
-#include "libavutil/mips/cpu.h"
-#include "libavcodec/bit_depth_template.c"
-#include "h263dsp_mips.h"
-
-av_cold void ff_mpegvideoencdsp_init_mips(MpegvideoEncDSPContext *c,
- AVCodecContext *avctx)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_msa(cpu_flags)) {
-#if BIT_DEPTH == 8
- c->pix_sum = ff_pix_sum_msa;
-#endif
- }
-}
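The MIPS init above includes bit_depth_template.c and guards the pix_sum hookup with BIT_DEPTH == 8. The usual idea behind such templates is that a single function body is compiled once per bit depth, with the pixel type and a name suffix derived from the BIT_DEPTH macro. Here is a minimal standalone sketch of that convention; the macro and function names are illustrative, not the actual FFmpeg template.

```c
#include <stdint.h>
#include <stdio.h>

#define BIT_DEPTH 8

#if BIT_DEPTH == 8
typedef uint8_t pixel;
#define FUNC(name) name ## _8
#else
typedef uint16_t pixel;
#define FUNC(name) name ## _16
#endif

/* One body, compiled per depth: sums n pixels of the selected width. */
static int FUNC(pix_sum)(const pixel *pix, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += pix[i];
    return sum;
}

int main(void)
{
    pixel line[4] = {1, 2, 3, 4};
    printf("sum = %d\n", pix_sum_8(line, 4));
    return 0;
}
```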
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Blow Out Candles with Blower - The Best Air Blowing App for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Blow Out Candles with Blower - The Best Air Blowing App for Android.md
deleted file mode 100644
index 38029677319d6fea6f06ad7ca8a1d454e1e896cb..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Blow Out Candles with Blower - The Best Air Blowing App for Android.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
Blower the Air APK: A Fun App to Blow Out Candles
-
Have you ever wanted to blow out candles on your phone? Well, now you can with Blower the Air APK, a free app that lets you use your smartphone's microphone to blow out virtual candles. In this article, we will review Blower the Air APK and show you how to download and use it.
-
What is Blower the Air APK?
-
Blower the Air APK is an app that simulates blowing out candles on your phone screen. You can choose from different types of candles, such as birthday candles, romantic candles, or festive candles. You can also adjust the number and size of the candles, as well as the background color and music.
Blower the Air APK uses your phone's microphone to detect your breath and make the candles flicker and go out. You can also use a boost option to blow out all the candles at once. The app also shows you how many candles you have blown out and how long it took you.
-
How to download and install Blower the Air APK?
-
To download and install Blower the Air APK, follow these steps:
If prompted, allow the installation of apps from unknown sources.
-
Follow the instructions on the screen to complete the installation.
-
Launch the app and enjoy blowing out candles.
-
-
Why should you try Blower the Air APK?
-
Blower the Air APK is a fun and simple app that can help you relax and have fun. You can use it to:
-
-
Celebrate your birthday or someone else's birthday with virtual candles.
-
Create a romantic atmosphere with candlelight and music.
-
Enjoy some festive fun with holiday-themed candles.
-
Challenge yourself or your friends to see who can blow out more candles in less time.
-
Prank your friends by pretending to blow out their real candles.
-
-
What are some features of Blower the Air APK?
-
Blower the Air APK has some features that make it more enjoyable and customizable. Some of these features are:
-
-
-
| Feature | Description |
| --- | --- |
| Different types of candles | You can choose from various shapes, colors, and designs of candles, such as stars, hearts, flowers, numbers, letters, etc. |
| Different backgrounds and music | You can change the background color and music of the app to suit your mood or occasion. You can also mute the music if you prefer silence. |
| Boost option | You can use a boost button to blow out all the candles at once. This is useful if you have many candles or if you want to save time. |
| Candle counter and timer | You can see how many candles you have blown out and how long it took you. You can also reset the counter and timer if you want to start over. |
| Screenshot option | You can take a screenshot of your candle display and share it with your friends or family via social media or messaging apps. |
-
Frequently Asked Questions (FAQs)
-
Is Blower the Air APK safe to use?
-
Yes, Blower the Air APK is safe to use. It does not contain any viruses or malware. It also does not require any special permissions or access to your personal data.
-
Does Blower the Air APK work on all devices?
-
Blower the Air APK works on most Android devices that have a microphone and a speaker. However, some older devices or devices with low-quality microphones may not work well with the app. You can test your device's compatibility by downloading the app and trying it out.
-
How can I get more candles for Blower the Air APK?
-
You can get more candles for Blower the Air APK by watching ads or by purchasing them with real money. You can also unlock more candles by completing achievements or by sharing the app with your friends.
-
-
Can I use Blower the Air APK offline?
-
Yes, you can use Blower the Air APK offline. However, you will not be able to watch ads or make purchases without an internet connection. You will also not be able to share your screenshots or access some of the online features.
-
What are some alternatives to Blower the Air APK?
-
If you are looking for other apps that let you blow out candles on your phone, you can try these alternatives:
-
-
Candle Cake - Blow Out Candles: This app lets you blow out candles on a cake and make a wish. You can also customize your cake and candles.
-
Blow Out Candles Simulator: This app lets you blow out candles on various objects, such as a pumpkin, a skull, or a pizza. You can also create your own objects and candles.
-
Candle Light - Blow Out Candles: This app lets you blow out candles on a realistic candle holder. You can also change the color and shape of the candles and the holder.
-
-
Conclusion
-
Blower the Air APK is a fun and simple app that lets you blow out candles on your phone screen. You can choose from different types of candles, backgrounds, and music, and use your phone's microphone to blow out the candles. You can also take screenshots and share them with your friends or family. Blower the Air APK is free to download and use, but you can also buy more candles or watch ads to get them. Blower the Air APK is compatible with most Android devices, but you can also try other similar apps if you want more options.
-
We hope this article has helped you learn more about Blower the Air APK and how to use it. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Blue Lock The Ultimate Soccer Survival Game - Download Now on Otakudesu.md b/spaces/congsaPfin/Manga-OCR/logs/Blue Lock The Ultimate Soccer Survival Game - Download Now on Otakudesu.md
deleted file mode 100644
index 2418b5c738c358a10391f8e464c9160d3bde7170..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Blue Lock The Ultimate Soccer Survival Game - Download Now on Otakudesu.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
Download Blue Lock Otakudesu: How to Watch the Hottest Soccer Anime of 2022
-
If you are a fan of sports anime, you have probably heard of Blue Lock, the latest sensation that has taken the anime world by storm. Blue Lock is a soccer anime that follows a group of talented but egoistic strikers who are selected for a controversial project that aims to create the best striker in Japan. The anime is based on a popular manga by Muneyuki Kaneshiro and Yusuke Nomura, which has won several awards and accolades for its thrilling and unique story.
Blue Lock is not your typical sports anime that focuses on teamwork and friendship. Instead, it explores the dark and ruthless side of soccer, where only the strongest and most selfish can survive. The anime features intense matches, dynamic animation, and charismatic characters that will keep you hooked from start to finish.
-
But where can you watch this amazing anime? And how can you download it for offline viewing? The answer is simple: Otakudesu. Otakudesu is a website that offers free streaming and downloading of anime with Indonesian subtitles. In this article, we will show you how to download Blue Lock from Otakudesu in 3 easy steps, as well as the benefits and tips of doing so.
-
What is Otakudesu and why should you use it to watch Blue Lock?
-
Otakudesu is a website that provides free streaming and downloading of anime with Indonesian subtitles. It is one of the best sources for watching anime online, especially if you are looking for the latest and ongoing series like Blue Lock.
-
Otakudesu has several advantages over other websites, such as:
-
-
It has a large collection of anime, ranging from classics to new releases, in various genres and categories.
-
It has fast servers that ensure smooth streaming and downloading without buffering or lagging.
-
It has high-quality videos that offer clear audio and visuals for an optimal viewing experience.
-
It has a user-friendly interface that makes it easy to navigate and find your favorite anime.
-
It has regular updates that add new episodes and series as soon as they are available.
-
-
With Otakudesu, you can watch Blue Lock online without any hassle or cost. But if you want to download it for offline viewing, you can also do that with just a few clicks.
-
How to download Blue Lock from Otakudesu in 3 easy steps?
-
Downloading Blue Lock from Otakudesu is very simple and convenient. You just need to follow these 3 easy steps:
-
-
Go to the official website of Otakudesu and search for Blue Lock. You can use the search bar on the top right corner or browse the anime list by alphabetical order or genre.
-
Choose the episode you want to download and click on the download button below the video player. You will see a list of servers and quality options to choose from.
-
Select the server and quality you prefer and wait for the download to start. Depending on your internet speed and the size of the file, it may take a few minutes to finish.
-
-
That's it! You have successfully downloaded Blue Lock from Otakudesu. You can now enjoy watching it offline anytime and anywhere you want.
-
-
What are the benefits of downloading Blue Lock from Otakudesu?
-
There are many benefits of downloading Blue Lock from Otakudesu, such as:
-
-
You can watch Blue Lock offline anytime and anywhere. You don't need to worry about internet connection, data usage, or battery life. You can watch it on your laptop, tablet, smartphone, or any other device that supports video playback.
-
You can save data and bandwidth by downloading instead of streaming. Streaming consumes more data and bandwidth than downloading, especially if you watch in high quality. By downloading, you can save your data plan and avoid extra charges.
-
You can avoid ads and pop-ups that may interrupt your viewing experience. Streaming websites often have annoying ads and pop-ups that may distract you from the anime or even harm your device with malware. By downloading, you can watch Blue Lock without any interruption or risk.
-
-
Downloading Blue Lock from Otakudesu is a smart and convenient way to enjoy this awesome anime. You can watch it at your own pace and convenience, without any compromise on quality or safety.
-
What are some tips and tricks to enjoy Blue Lock more?
-
Blue Lock is an anime that will keep you on the edge of your seat with its thrilling and innovative plot. But if you want to enjoy it more, here are some tips and tricks that you can try:
-
-
Read the manga that inspired the anime to get more background and details on the characters and story. The manga is still ongoing and has more chapters than the anime, so you can get ahead of the anime and see what happens next. You can also compare the differences and similarities between the manga and the anime.
-
Join online communities and forums where you can discuss and share your opinions on Blue Lock with other fans. You can find many websites and social media platforms where you can interact with other people who love Blue Lock as much as you do. You can also get updates, news, fan art, memes, theories, and more.
-
Check out other sports anime that are similar to Blue Lock, such as Haikyu!!, Kuroko no Basket, and Captain Tsubasa. These are some of the most popular and acclaimed sports anime that feature exciting matches, realistic animation, and memorable characters. They will also inspire you to love sports more and maybe even try them yourself.
-
-
Blue Lock is an anime that will make you fall in love with soccer and anime more than ever before. It is a must-watch for any sports anime fan or anyone who loves a good story with action, drama, and comedy.
-
Conclusion
-
In conclusion, Blue Lock is an exciting and innovative soccer anime that you can watch and download from Otakudesu. Otakudesu is a website that offers free streaming and downloading of anime with Indonesian subtitles. It has many advantages over other websites, such as fast servers, high-quality videos, user-friendly interface, large collection of anime, and regular updates.
-
To download Blue Lock from Otakudesu, you just need to follow 3 easy steps: go to the website, choose the episode, and select the server and quality. By downloading Blue Lock from Otakudesu, you can enjoy watching it offline anytime and anywhere, save data and bandwidth, and avoid ads and pop-ups.
-
If you want to enjoy Blue Lock more, you can also read the manga that inspired the anime, join online communities and forums where you can discuss it with other fans, and check out other sports anime that are similar to it.
-
Don't miss out on this amazing anime and start downloading it today from Otakudesu. You will not regret it!
-
Frequently Asked Questions
-
-
How many episodes are there in Blue Lock?
-
Blue Lock has 12 episodes in its first season, which aired from January to March 2022. The second season has been announced and is expected to air in 2023.
-
Is Blue Lock based on a true story?
-
No, Blue Lock is a fictional story that is inspired by the manga of the same name. However, some of the characters and events may be loosely based on real-life soccer players and situations.
-
What is the meaning of the title Blue Lock?
-
Blue Lock is the name of the project that aims to create the best striker in Japan. It is also the name of the facility where the selected strikers are isolated and trained. The name implies that the strikers are locked in a blue prison, where they have to compete and survive.
-
Who are the main characters of Blue Lock?
-
The main characters of Blue Lock are:
-
-
Yoichi Isagi: The protagonist of the story, who is a striker with a high sense of teamwork and vision. He is chosen for the Blue Lock project because he lacks selfishness and killer instinct.
-
Jinpachi Ego: The creator and director of the Blue Lock project, who is a former soccer player and coach. He is a ruthless and eccentric man who believes that soccer is a game of individualism and ego.
-
Rin Itoshi: The ace striker of the Blue Lock project, who is a genius with exceptional skills and charisma. He is Isagi's rival and friend, who challenges him to become better.
-
Bachira Meguru: A striker who is a former track and field athlete, who has incredible speed and stamina. He is Isagi's teammate and friend, who supports him throughout the project.
-
Shidou Nagi: A striker who is a former basketball player, who has amazing dribbling and passing abilities. He is Isagi's teammate and friend, who helps him develop his skills.
-
-
Where can I read the manga of Blue Lock?
-
You can read the manga of Blue Lock online on various websites, such as MangaPlus, MangaDex, and MangaRock. You can also buy the physical volumes or digital copies from official sources, such as Amazon, BookWalker, and Comixology.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download FB Lite 2019 APK and Enjoy Facebook on Any Android Phone.md b/spaces/congsaPfin/Manga-OCR/logs/Download FB Lite 2019 APK and Enjoy Facebook on Any Android Phone.md
deleted file mode 100644
index 43f07e9f4d57a4a304d957b556be3b8ed28a0bfc..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download FB Lite 2019 APK and Enjoy Facebook on Any Android Phone.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Download FB Lite 2019 APK: A Faster Facebook Experience for Android
-
Do you want to use Facebook on your Android device without consuming too much data or storage space? Do you want to access your profile and communicate with your friends even on slow or unstable internet connections? Do you want to enjoy a faster and lighter Facebook experience? If you answered yes to any of these questions, then you should download FB Lite 2019 APK.
FB Lite is an official Facebook client that lets you use this popular social network through a much lighter app that's better suited for low-power Android devices or ones with limited internet connections. In this article, we will show you how to download FB Lite 2019 APK from Uptodown, how to use it on your Android device, and what are the benefits and drawbacks of using it. We will also compare it with other Facebook apps and answer some frequently asked questions about it. Let's get started!
-
How to Download FB Lite 2019 APK from Uptodown
-
Uptodown is a website and an app store that offers a huge catalog of apps for Android, Windows, Mac, and iOS. You can download any app you want without any restrictions or limitations. You can also get the latest updates and versions of your favorite apps, as well as older or discontinued ones. Uptodown is a safe and reliable source for downloading APK files, which are the installation files for Android apps.
-
To download FB Lite 2019 APK from Uptodown, follow these simple steps:
-
-
Go to Uptodown website or app. You can access Uptodown from any browser or device, or you can download the Uptodown app from Google Play Store or Apple App Store.
-
Search for Facebook Lite. You can use the search bar on the top of the page or app, or you can browse the categories and find Facebook Lite under Communication.
-
Choose the latest version and tap on download. You will see a list of different versions of Facebook Lite, along with their release dates and sizes. Choose the one that says 2019 and has the smallest size (around 1 MB). Tap on the green download button and wait for the download to finish.
-
Install the APK file on your device. Once the download is complete, you will need to install the APK file on your device. You may need to enable unknown sources in your settings to allow installation from third-party sources. To do this, go to Settings > Security > Unknown sources and toggle it on. Then, locate the APK file in your downloads folder or notification bar and tap on it. Follow the instructions on the screen and wait for the installation to finish.
-
-
Congratulations! You have successfully downloaded FB Lite 2019 APK from Uptodown. Now you can use it on your Android device.
-
How to Use FB Lite 2019 APK on Your Android Device
-
Using FB Lite 2019 APK on your Android device is very easy and intuitive. Here are some steps to help you get started:
-
-
Log in with your email, phone number, or username. When you open FB Lite for the first time, you will see a login screen where you can enter your email address, phone number, or username and password. If you don't have a Facebook account yet, you can create one by tapping on Create New Account.
-
Explore the features of FB Lite. FB Lite has all the basic features of Facebook that you need to stay connected with your friends and family. You can post status updates, photos, videos, and stories; like, comment, and share posts; chat with your contacts; join groups; follow pages; watch videos; play games; and more. You can also access other Facebook services like Marketplace, Dating, Watch, Gaming, News, and more by tapping on the menu icon on the top right corner of the screen.
-
Adjust your settings and preferences. You can customize your FB Lite experience by changing your settings and preferences according to your needs and preferences. You can edit your profile, change your privacy settings, manage your notifications, control your data usage, switch languages, and more. To access your settings and preferences, tap on the menu icon on the top right corner of the screen and scroll down to Settings & Privacy.
-
Enjoy a faster and lighter Facebook experience. FB Lite is designed to work faster and consume less data and storage space than the regular Facebook app. You can use it even on slow or unstable internet connections or old and low-end devices. You can also save battery life by using FB Lite instead of the Facebook app.
-
-
That's it! You are now ready to use FB Lite 2019 APK on your Android device.
-
-
Benefits of Using FB Lite 2019 APK
-
FB Lite 2019 APK has many benefits that make it a great alternative to the regular Facebook app. Here are some of them:
-
-
Saves data and storage space. FB Lite is only around 1 MB in size, which means it takes up very little space on your device's memory. It also uses less data than Facebook app, which means it saves you money on your data plan. You can also control how much data you want to use by adjusting your data saver mode in settings.
-
Works on old and low-end devices. FB Lite is compatible with almost any Android device, even those that run on older versions of Android or have low RAM or CPU. It runs smoothly and efficiently on any device, without crashing or freezing.
-
Loads quickly and works on all networks. FB Lite is optimized to load faster and work better on any network condition, even 2G or EDGE. It also works well in areas with poor or no signal, as it can store some data offline and sync it when the connection is restored. You can also use FB Lite on Wi-Fi networks without any problem.
-
-
These are some of the benefits of using FB Lite 2019 APK. As you can see, FB Lite is a great app for anyone who wants to use Facebook on their Android device without compromising on speed, performance, or quality.
-
Drawbacks of Using FB Lite 2019 APK
-
However, FB Lite 2019 APK is not perfect. It also has some drawbacks that you should be aware of before downloading it. Here are some of them:
-
-
May have some bugs and glitches. FB Lite is still a relatively new app, and it may not be as stable or bug-free as the regular Facebook app. You may encounter some errors, crashes, or malfunctions while using it. You may also notice some lagging or slow loading times occasionally.
-
May not support some features and functions of the regular Facebook app. FB Lite is a stripped-down version of Facebook, which means it does not have all the features and functions that the regular Facebook app has. For example, you may not be able to use some stickers, GIFs, filters, live videos, stories, reactions, or other advanced options on FB Lite. You may also miss out on some updates or notifications from your friends or pages.
-
May not have the same quality and resolution of images and videos. FB Lite compresses the images and videos that you upload or view on the app to save data and space. This means that the quality and resolution of the images and videos may be lower than what you see on the regular Facebook app. You may also notice some pixelation or blurriness on some images and videos.
-
-
These are some of the drawbacks of using FB Lite 2019 APK. As you can see, FB Lite is not a perfect app, and it may not suit everyone's needs or preferences.
-
Comparison of FB Lite 2019 APK with Other Facebook Apps
-
FB Lite 2019 APK is not the only Facebook app that you can use on your Android device. There are other Facebook apps that you can choose from, depending on your needs and preferences. Here are some of them:
-
-
-
| App | Description | Pros | Cons |
| --- | --- | --- | --- |
| Facebook app | The original and official Facebook app that lets you access all the features and functions of Facebook on your Android device. | Has all the features and functions of Facebook; high quality and resolution of images and videos; regular updates and improvements | Consumes a lot of data and storage space; may not work well on old or low-end devices; may not work well on slow or unstable internet connections |
| Facebook Messenger Lite | A lighter version of Facebook Messenger that lets you chat with your Facebook contacts without using the main Facebook app. | Saves data and storage space; works on old and low-end devices; works on all networks; has basic chat features like text, voice, video, photos, stickers, etc. | May have some bugs and glitches; may not support some features and functions of Facebook Messenger; may not have the same quality and resolution of images and videos |
| Facebook Web | The web version of Facebook that lets you access Facebook from any browser on your Android device. | Does not require any installation or download; works on any device or browser; has most of the features and functions of Facebook | Consumes a lot of data; may not work well on slow or unstable internet connections; may not have the same quality and resolution of images and videos; may not have some features and functions of the Facebook app |
-
-
-
These are some of the other Facebook apps that you can use on your Android device. As you can see, each app has its own advantages and disadvantages, and you should choose the one that best suits your needs and preferences.
-
Conclusion: Is FB Lite 2019 APK Worth Downloading?
-
In conclusion, FB Lite 2019 APK is a great app for anyone who wants to use Facebook on their Android device without consuming too much data or storage space. It is a lighter and faster version of Facebook that works on any Android device or network condition. It has all the basic features and functions of Facebook that you need to stay connected with your friends and family. It also saves you money on your data plan and battery life.

However, FB Lite 2019 APK also has some drawbacks that you should be aware of before downloading it. It may not have all the features and functions of the regular Facebook app, and it may not have the same quality and resolution of images and videos. It may also have some bugs and glitches that may affect your user experience.

Therefore, whether FB Lite 2019 APK is worth downloading or not depends on your needs and preferences. If you value speed, performance, and efficiency over quality, functionality, and aesthetics, then FB Lite 2019 APK is a good choice for you. But if you want to enjoy the full Facebook experience with all its bells and whistles, then you may want to stick with the regular Facebook app or try other alternatives.

We hope this article has helped you learn more about FB Lite 2019 APK and how to download and use it. If you have any questions or feedback, feel free to leave a comment below. And if you liked this article, please share it with your friends and family who may find it useful. Thank you for reading!
FAQs about FB Lite 2019 APK
-
Here are some frequently asked questions about FB Lite 2019 APK that you may find helpful:
-
-
How do I update FB Lite?
-To update FB Lite, you can either go to Uptodown website or app and download the latest version of FB Lite 2019 APK, or you can go to Google Play Store and update it from there. You can also enable automatic updates in your settings to get the latest updates automatically.
-
How do I delete FB Lite?
-To delete FB Lite, you can either go to your device's settings and uninstall it from there, or you can go to Uptodown website or app and tap on the uninstall button next to FB Lite 2019 APK. You can also delete the APK file from your downloads folder or notification bar.
-
How do I switch between FB Lite and Facebook app?
-To switch between FB Lite and Facebook app, you can either tap on the menu icon on the top right corner of the screen and choose Switch App, or you can go to your device's settings and choose the default app for opening Facebook links.
-
How do I contact FB Lite support?
-To contact FB Lite support, you can either go to the menu icon on the top right corner of the screen and choose Help & Support, or you can go to this link: https://www.facebook.com/help/lite/
-
How do I report a problem with FB Lite?
-To report a problem with FB Lite, you can either go to the menu icon on the top right corner of the screen and choose Report a Problem, or you can go to this link: https://www.facebook.com/help/contact/268228883256323
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Grand City Street Mafia Gangster Mod APK and Rule the Crime World.md b/spaces/congsaPfin/Manga-OCR/logs/Download Grand City Street Mafia Gangster Mod APK and Rule the Crime World.md
deleted file mode 100644
index 7bcbaa7f0bad19974deb840d53b462f7081adef7..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Grand City Street Mafia Gangster Mod APK and Rule the Crime World.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
Grand City Street Mafia Gangster Mod APK: A Review
-
If you are a fan of action-packed games that let you explore the dark and ruthless world of crime, then you might want to check out Grand City Street Mafia Gangster. This game is a 3D open-world adventure that puts you in the shoes of a gangster who has to complete various missions, evade the cops, and fight against rival gangs. But what if you want to enjoy the game without any limitations or restrictions? That's where Grand City Street Mafia Gangster mod apk comes in. In this article, we will review this mod apk and tell you why you should download it, what features it offers, and how to install it on your device.
Grand City Street Mafia Gangster is a game developed by Fun World Games Studio. It is available for Android devices and has over 10 million downloads on Google Play Store. The game is rated 4.1 out of 5 stars by more than 50 thousand users. The game is set in a fictional city called Grand City, where you can roam freely and interact with various characters and objects. You can also drive different vehicles, such as cars, bikes, helicopters, and tanks. The game has a storyline that involves you working for a crime boss named Mr. K, who assigns you different tasks, such as stealing cars, robbing banks, kidnapping people, and more. You can also choose to ignore the story and just cause chaos in the city. The game has a realistic physics engine that makes the gameplay more immersive and fun.
-
What is a mod apk?
-
A mod apk is a modified version of an original app or game that has been altered by someone to add or remove some features. A mod apk can also be called a hacked or cracked app or game. A mod apk can give you access to premium features, unlimited resources, unlocked items, or other advantages that are not available in the original app or game. However, not all mod apks are safe or legal to use. Some mod apks may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some mod apks may also violate the terms and conditions of the original app or game developer and result in your account being banned or suspended.
-
Why download Grand City Street Mafia Gangster mod apk?
-
Grand City Street Mafia Gangster mod apk is one of the best mod apks for this game because it offers many benefits that can enhance your gaming experience. Some of the reasons why you should download Grand City Street Mafia Gangster mod apk are:
-
-
-
You can get unlimited money and weapons that can help you buy anything you want and equip yourself with the best guns and explosives.
-
You can enjoy realistic graphics and sound effects that make the game more lifelike and exciting.
-
You can complete various missions and challenges that test your skills and abilities as a gangster.
-
You can download and install the mod apk for free and easily without any hassle or complications.
-
-
Features of Grand City Street Mafia Gangster mod apk
-
Unlimited money and weapons
-
One of the main features of Grand City Street Mafia Gangster mod apk is that it gives you unlimited money and weapons. Money is the currency of the game that you can use to buy different items, such as clothes, accessories, vehicles , and weapons. Weapons are the tools that you can use to fight against your enemies and complete your missions. With unlimited money and weapons, you can buy anything you want and equip yourself with the best guns and explosives. You can also upgrade your weapons and customize them according to your preferences. You can choose from a variety of weapons, such as pistols, rifles, shotguns, snipers, rocket launchers, grenades, and more. You can also use melee weapons, such as knives, bats, hammers, and chainsaws. With unlimited money and weapons, you can dominate the city and become the most powerful gangster.
-
Realistic graphics and sound effects
-
Another feature of Grand City Street Mafia Gangster mod apk is that it has realistic graphics and sound effects that make the game more lifelike and exciting. The game has 3D graphics that show the details of the city, the characters, and the vehicles. The game also has realistic physics that make the movements and actions of the game more natural and smooth. The game has sound effects that match the scenes and events of the game. You can hear the sounds of gunshots, explosions, car engines, sirens, screams, and more. The game also has a background music that suits the mood and atmosphere of the game. The game has a voice-over that narrates the story and dialogues of the game. The game has a realistic graphics and sound effects that make you feel like you are in the middle of a gangster movie.
-
Various missions and challenges
-
A third feature of Grand City Street Mafia Gangster mod apk is that it has various missions and challenges that test your skills and abilities as a gangster. The game has a storyline that involves you working for a crime boss named Mr. K, who assigns you different tasks, such as stealing cars, robbing banks, kidnapping people, and more. You can also choose to ignore the story and just cause chaos in the city. The game has different types of missions, such as shooting, driving, racing, fighting, stealth, and more. The game also has different levels of difficulty, from easy to hard. The game has different rewards for completing missions, such as money, weapons, reputation points, and more. The game also has different challenges that you can complete to earn extra money or unlock new items. The game has various missions and challenges that keep you entertained and engaged.
-
Free and easy to install
-
A fourth feature of Grand City Street Mafia Gangster mod apk is that it is free and easy to install on your device. You do not need to pay any money to download or play this mod apk. You also do not need to root or jailbreak your device to install this mod apk. You just need to follow some simple steps that we will explain later in this article. You also do not need to worry about any viruses, malware, or spyware that can harm your device or steal your personal information. This mod apk is safe and secure to use. You also do not need to worry about any updates or patches that can affect the performance or compatibility of this mod apk. This mod apk is always updated and compatible with the latest version of the original game. This mod apk is free and easy to install on your device.
-
How to download and install Grand City Street Mafia Gangster mod apk
-
Step 1: Enable unknown sources on your device
-
The first step to download and install Grand City Street Mafia Gangster mod apk on your device is to enable unknown sources on your device. This will allow you to install apps or games that are not from the official Google Play Store or App Store. To enable unknown sources on your device, you need to follow these steps:
-
-
Go to your device settings.
-
Find the security or privacy option.
-
Look for the unknown sources option.
-
Turn it on or check it.
-
A warning message may appear on your screen.
-
Tap OK or confirm.
-
-
You have now enabled unknown sources on your device.
-
Step 2: Download the mod apk file from a trusted source
-
The second step to download and install Grand City Street Mafia Gangster mod apk on your device is to download the mod apk file from a trusted source. You need to be careful when downloading any mod apk file from the internet because some sources may contain viruses, malware, or spyware that can harm your device or steal your personal information. You need to find a reliable source that offers a safe and secure download link for this mod apk file. One of the trusted sources that we recommend is [text]. This source provides a direct download link for this mod apk file without any surveys or ads. To download the mod apk file from this source, you need to follow these steps:
-
-
Go to [text] on your device browser.
-
Scroll down and find the download button.
-
Tap on the download button and wait for the download to start.
-
The mod apk file will be downloaded to your device storage.
-
-
You have now downloaded the mod apk file from a trusted source.
-
Step 3: Locate and install the mod apk file
-
The third step to download and install Grand City Street Mafia Gangster mod apk on your device is to locate and install the mod apk file. You need to find the mod apk file that you have downloaded in your device storage and install it on your device. To locate and install the mod apk file, you need to follow these steps:
-
-
Go to your device file manager or explorer.
-
Find the download folder or the folder where you have saved the mod apk file.
-
Look for the mod apk file with the name Grand City Street Mafia Gangster mod apk or something similar.
-
Tap on the mod apk file and a pop-up window will appear on your screen.
-
Tap on the install button and wait for the installation to finish.
-
A confirmation message will appear on your screen when the installation is done.
-
-
You have now located and installed the mod apk file on your device.
-
Step 4: Enjoy the game
-
The fourth and final step to download and install Grand City Street Mafia Gangster mod apk on your device is to enjoy the game. You can now launch the game from your device app drawer or home screen and start playing it with all the features and benefits that this mod apk offers. You can explore the city, complete missions, buy items, use weapons, and have fun as a gangster. You can also invite your friends to play with you online or offline and compete with them in different modes. You can also share your achievements and screenshots with other players on social media platforms. You can now enjoy the game with Grand City Street Mafia Gangster mod apk.
-
Conclusion
-
In conclusion, Grand City Street Mafia Gangster mod apk is a great mod apk for this game that offers many advantages and improvements that can make your gaming experience more enjoyable and satisfying. You can get unlimited money and weapons, realistic graphics and sound effects, various missions and challenges, and free and easy installation. You can also download and install this mod apk safely and securely from a trusted source that we have provided in this article. You can also follow our simple steps to install this mod apk on your device without any hassle or complications. If you are looking for a fun and thrilling game that lets you live the life of a gangster, then you should try Grand City Street Mafia Gangster mod apk.
-
FAQs
-
Here are some of the frequently asked questions about Grand City Street Mafia Gangster mod apk:
-
-
Q: Is Grand City Street Mafia Gangster mod apk safe to use?
-
A: Yes, Grand City Street Mafia Gangster mod apk is safe to use as long as you download it from a trusted source that we have provided in this article. This source does not contain any viruses, malware, or spyware that can harm your device or steal your personal information. However, you should always be careful when downloading any mod apk from the internet and scan it with an antivirus app before installing it on your device.
-
Q: Is Grand City Street Mafia Gangster mod apk legal to use?
-
A: No, Grand City Street Mafia Gangster mod apk is not legal to use because it violates the terms and conditions of the original game developer. By using this mod apk, you are modifying or altering the original game without the permission or consent of the developer. This can result in your account being banned or suspended by the developer or Google Play Store. Therefore, you should use this mod apk at your own risk and responsibility.
-
Q: Does Grand City Street Mafia Gangster mod apk require root or jailbreak?
-
A: No, Grand City Street Mafia Gangster mod apk does not require root or jailbreak to install or run on your device. You just need to enable unknown sources on your device settings and follow our simple steps to install this mod apk on your device. However, some features of this mod apk may not work properly on some devices or versions of Android or iOS. Therefore, you should check the compatibility of this mod apk with your device before installing it.
-
Q: Can I play Grand City Street Mafia Gangster online or offline?
-
A: Yes , you can play Grand City Street Mafia Gangster online or offline depending on your preference and internet connection. You can play online with your friends or other players from around the world and compete with them in different modes, such as deathmatch, team deathmatch, capture the flag, and more. You can also chat with them and send them messages or emojis. You can also play offline without any internet connection and enjoy the game without any interruptions or ads. You can also save your progress and resume it later when you go online.
-
Q: How can I contact the developer of Grand City Street Mafia Gangster?
-
A: If you have any questions, feedback, suggestions, or complaints about Grand City Street Mafia Gangster, you can contact the developer of this game by using the following methods:
You can also rate and review this game on Google Play Store or App Store and share your opinions and experiences with other users.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Helper Cars and Learn with Street Vehicles for Kids.md b/spaces/congsaPfin/Manga-OCR/logs/Download Helper Cars and Learn with Street Vehicles for Kids.md
deleted file mode 100644
index d579eec6b1523c54dca19c20ca54cf2823e32a83..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Helper Cars and Learn with Street Vehicles for Kids.md
+++ /dev/null
@@ -1,165 +0,0 @@
-
-
Download Helper Cars: A Fun and Educational App for Kids
-
If your kids love cars and trucks, they will love Download Helper Cars. This is an amazing app that lets them watch car cartoons for kids, play with cars and trucks for kids, and learn vehicles for kids with Helper Cars. Download Helper Cars is more than just an entertainment app. It is also a learning app that helps your kids to develop their imagination, creativity, cognitive skills, motor skills, and values. In this article, we will tell you everything you need to know about Download Helper Cars, including how to download it, what you can do with it, why it is good for your kids, what parents and experts say about it, and how to get more out of it.
Downloading Helper Cars is very easy. You can download it from different platforms depending on your device. Here are the steps to follow:
-
-
If you have an Android device, go to Google Play Store and search for "Download Helper Cars". Tap on the app icon and then tap on "Install". Wait for the app to download and install on your device.
-
If you have an iOS device, go to App Store and search for "Download Helper Cars". Tap on the app icon and then tap on "Get". Wait for the app to download and install on your device.
-
If you have a Windows device, go to Microsoft Store and search for "Download Helper Cars". Tap on the app icon and then tap on "Get". Wait for the app to download and install on your device.
-
-
Once you have downloaded the app, you can open it and start enjoying it.
-
What You Can Do with Download Helper Cars
-
Watch Car Cartoons for Kids
-
One of the best things you can do with Download Helper Cars is to watch car cartoons for kids. These are fun and educational cartoons that feature different cars and trucks for kids. You can watch various episodes of Helper Cars cartoons, such as:
-
-
Helper cars cartoons full episodes & street vehicles - Baby cartoon & cars for kids
-
Helper Cars cartoons full episodes & street vehicles - Baby cartoon & cars for kids
-
Helper Cars cartoons full episodes & street vehicles - Baby cartoon & cars for kids
-
-
These cartoons are not only entertaining, but also educational. They teach your kids the names and functions of different vehicles, such as fire truck, tow truck, crane, ambulance, police car, etc. They also show your kids how these vehicles work together to help others and solve problems. They also introduce your kids to colors, shapes, numbers, letters, and other concepts.
-
Play with Cars and Trucks for Kids
-
Another thing you can do with Download Helper Cars is to play with cars and trucks for kids. This is a fun and interactive feature that lets your kids interact with different cars and trucks on the screen. You can choose from different modes of play, such as:
-
-
-
Free mode: In this mode, you can move the cars and trucks around the screen and make them do different actions, such as honking, flashing lights, spraying water, lifting objects, etc.
Puzzle mode: In this mode, you can solve different puzzles with the cars and trucks, such as matching them with their shadows, finding the missing parts, sorting them by color or size, etc.
Game mode: In this mode, you can play different games with the cars and trucks, such as racing them, parking them, avoiding obstacles, collecting coins, etc.
-
-
This feature is not only fun, but also interactive. It helps your kids develop their hand-eye coordination, fine motor skills, spatial awareness, and reaction time, and it stimulates their imagination and creativity as they create their own stories and scenarios with the cars and trucks.
-
Learn Vehicles for Kids with Helper Cars
-
The third thing you can do with Download Helper Cars is learn about vehicles with the Helper Cars. This is an educational feature that helps your kids learn more about different vehicles and their characteristics. You can choose from different categories of vehicles, such as:
Cars: In this category, you can learn about different types of cars, such as sedan, hatchback, SUV, and sports car.
Trucks: In this category, you can learn about different types of trucks, such as pickup truck, dump truck, garbage truck, and delivery truck.
Emergency vehicles: In this category, you can learn about different types of emergency vehicles, such as fire truck, ambulance, police car, and helicopter.
Construction vehicles: In this category, you can learn about different types of construction vehicles, such as crane, bulldozer, excavator, and forklift.
Public transport vehicles: In this category, you can learn about different types of public transport vehicles, such as bus, taxi, train, and subway.
For each vehicle, you can learn its name, function, sound, and appearance, see a picture of it, and hear how it is pronounced. You can also test your knowledge by taking a quiz or playing a memory game.
This feature is not only educational, but also engaging. It helps your kids expand their vocabulary, improve their pronunciation, and strengthen their memory and attention, and it fosters their curiosity about different vehicles and their roles in society.
-
Why Download Helper Cars is Good for Kids
-
It Stimulates Their Imagination and Creativity
One of the reasons why Download Helper Cars is good for kids is that it stimulates their imagination and creativity. The app allows your kids to explore different worlds and situations with the cars and trucks. They can create their own stories and scenarios with the vehicles, such as saving someone from a fire, building a house, or delivering a package; customize the vehicles by changing their colors, accessories, and stickers; and use their own voice to narrate the stories or add sound effects.
This helps your kids express themselves and their ideas in a fun and creative way. It enhances their storytelling and communication skills, and it boosts their confidence and self-esteem as they see their creations come to life on the screen.
It Develops Their Cognitive and Motor Skills
Another reason why Download Helper Cars is good for kids is that it develops their cognitive and motor skills. The app challenges your kids to think logically, solve problems, make decisions, and follow instructions, and it teaches concepts and skills such as colors, shapes, numbers, letters, and vehicles. Controlling the vehicles on the screen also trains their fingers and hands, improving their hand-eye coordination, fine motor skills, spatial awareness, and reaction time.
This helps your kids sharpen their minds and improve their abilities in a fun and interactive way. It prepares them for school and life by giving them a solid foundation of knowledge and skills, and it supports their brain development by stimulating different areas of the brain.
It Teaches Them Valuable Lessons and Values
The third reason why Download Helper Cars is good for kids is that it teaches them valuable lessons and values. The app shows your kids how the cars and trucks work together to help others and solve problems, how the vehicles have different personalities and characteristics that make them unique and special, and how they face challenges and difficulties that they overcome with courage and perseverance.
This helps your kids learn important lessons and values that they can apply in their own lives, such as teamwork, friendship, kindness, and responsibility. It inspires them to be helpful, caring, respectful, and brave in their own ways, and it cultivates their moral and ethical values by showing them the difference between right and wrong.
-
What Parents and Experts Say about Download Helper Cars
-
Download Helper Cars is loved not only by kids but also by parents and experts who have used or evaluated the app. Here are some of their testimonials and reviews:
"My son is obsessed with cars and trucks, and he loves this app. He watches the cartoons every day and plays with the vehicles on the screen. He also learns a lot from the app, such as the names and functions of different vehicles, colors, shapes, numbers, etc. He also enjoys creating his own stories and scenarios with the vehicles. He has improved his imagination, creativity, vocabulary, and communication skills a lot since he started using this app. I highly recommend this app to any parent who has a kid who loves cars and trucks."
-- Jennifer, mother of a 4-year-old boy
"This app is amazing. It is not only fun and entertaining, but also educational and developmental. It stimulates different areas of the brain and helps kids to develop their cognitive and motor skills. It also teaches kids valuable lessons and values that they can apply in their own lives. It is a great app for kids who love cars and trucks, as well as for kids who want to learn more about vehicles and their roles in society. It is one of the best apps for kids I have ever seen."
-- Dr. Smith, child psychologist and expert reviewer
-
-
How to Get More Out of Download Helper Cars
-
Subscribe to the Helper Cars YouTube Channel
If you want to get more out of Download Helper Cars, you can subscribe to the Helper Cars YouTube channel. This is where you can watch more videos and cartoons featuring the cars and trucks, as well as behind-the-scenes videos, bloopers, trailers, and previews. You can also leave comments and likes on the videos and interact with other fans of Helper Cars.
To subscribe to the Helper Cars YouTube channel, follow these steps:
Go to YouTube and search for "Helper Cars".
Click on the channel icon that says "Helper Cars - Car Cartoons & Games for Kids".
Click on the red button that says "Subscribe".
Click on the bell icon next to the subscribe button to turn on notifications.
Once you have subscribed, you can enjoy more content from Helper Cars anytime and anywhere.
-
Join Helper Cars Facebook Community
-
Another way to get more out of Download Helper Cars is to join the Helper Cars Facebook community. This is where you can connect with other fans of Helper Cars, share your feedback and suggestions on the app, and post your own pictures and videos of your creations with the cars and trucks. You can also participate in polls, quizzes, contests, and giveaways, and get updates on the latest news and events from Helper Cars.
To join the Helper Cars Facebook community, follow these steps:
Go to Facebook and search for "Helper Cars".
Click on the page icon that says "Helper Cars - Car Cartoons & Games for Kids".
Click on the blue button that says "Like".
Click on the three dots next to the like button and select "Follow".
Once you have joined the community, you can interact with other fans of Helper Cars and get more involved with the app.
-
Buy Helper Cars Toys and Merchandise
-
The third way to get more out of Download Helper Cars is to buy Helper Cars toys and merchandise that you can play with and wear. You can choose from different products, such as:
Helper Cars plush toys: soft and cuddly toys that look like the cars and trucks. You can hug them, squeeze them, and take them anywhere.
Helper Cars action figures: plastic toys that look like the cars and trucks. You can move them, pose them, and make them do different actions.
Helper Cars puzzles: cardboard puzzles that feature the cars and trucks. You can put them together, take them apart, and challenge your brain.
Helper Cars t-shirts: cotton t-shirts that feature the cars and trucks. You can wear them, wash them, and show your love for Helper Cars.
Helper Cars stickers: vinyl stickers that feature the cars and trucks. You can stick them, peel them, and decorate your stuff with them.
To buy Helper Cars toys and merchandise, follow these steps:
Go to the official website of Helper Cars at www.helpercars.com.
Click on the tab that says "Shop".
Browse through the different products and add the ones you want to your cart.
Click on the cart icon and proceed to checkout.
Enter your shipping and payment details and confirm your order.
Once you have bought Helper Cars toys and merchandise, you can enjoy more fun and excitement with Helper Cars in real life.
-
Conclusion
-
Download Helper Cars is a fun and educational app for kids who love cars and trucks. It lets them watch car cartoons, play with cars and trucks on the screen, and learn about vehicles with the Helper Cars, and it helps them develop their imagination, creativity, cognitive skills, motor skills, and values. The app is loved by parents and experts, it is easy to download on any device, and you can get even more out of it by subscribing to the Helper Cars YouTube channel, joining the Helper Cars Facebook community, and buying Helper Cars toys and merchandise. If you want to give your kids a great app that they will love and learn from, download Download Helper Cars today!
-
Frequently Asked Questions
-
-
What is Download Helper Cars?
Download Helper Cars is an app that lets kids watch car cartoons, play with cars and trucks on the screen, and learn about vehicles with the Helper Cars.
How do I download Download Helper Cars?
You can download it from the Google Play Store, the App Store, or the Microsoft Store, depending on your device.
What are the benefits of Download Helper Cars?
Download Helper Cars helps kids develop their imagination, creativity, cognitive skills, motor skills, and values, and it teaches them valuable lessons that they can apply in their own lives.
What are some of the features of Download Helper Cars?
Its main features are watching car cartoons, playing with cars and trucks, and learning about vehicles. You can also subscribe to the Helper Cars YouTube channel, join the Helper Cars Facebook community, and buy Helper Cars toys and merchandise.
Is Download Helper Cars safe for kids?
Yes, Download Helper Cars is safe for kids. It has no ads, no in-app purchases, no violence, no inappropriate content, and no personal data collection, and it does not require an internet connection. It is also approved by parents and experts who have used or evaluated the app.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Internet Cafe Simulator 2 Tips and Tricks to Survive the Challenges of Running a Cafe.md b/spaces/congsaPfin/Manga-OCR/logs/Internet Cafe Simulator 2 Tips and Tricks to Survive the Challenges of Running a Cafe.md
deleted file mode 100644
index 513a3b325acef24dab3e5c3f847aefb1df979055..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Internet Cafe Simulator 2 Tips and Tricks to Survive the Challenges of Running a Cafe.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
How to Download Internet Cafe 2 Simulator: A Guide for Gamers
- If you are a fan of business simulation games, you might have heard of Internet Cafe 2 Simulator, a popular game that lets you build and manage your own internet cafe. In this article, we will tell you everything you need to know about this game, including what it is, why you should play it, and how to download it for PC and Android. Let's get started!
What is Internet Cafe 2 Simulator?
-
A brief introduction to the game and its features
Internet Cafe 2 Simulator is a game developed by Cheesecake Dev, a studio that specializes in creating realistic and immersive simulation games. It is the sequel to Internet Cafe Simulator, which was released in 2019. In this game, you will have to restore, manage, and expand your internet cafe business across the city and make profits. You will have to deal with various aspects of running an internet cafe, such as:
Installing and upgrading computers, software, and gaming consoles
Purchasing game licenses and attracting customers with popular games
Preparing meals and drinks for your customers
Hiring and training employees and assigning them tasks
Paying rent, bills, taxes, and loans
Keeping your cafe clean and secure from thieves, vandals, and hackers
Competing with other internet cafes and making alliances or enemies
The difference between Internet Cafe 2 Simulator and Internet Cafe Simulator
Internet Cafe 2 Simulator is not just a simple update of Internet Cafe Simulator. It is a completely new game that contains much more detailed and different mechanics. Some of the new features that Internet Cafe 2 Simulator offers are:
A larger map with more locations to explore and rent
A tech tree that allows you to unlock new skills and abilities
A weather system that affects customer behavior and demand
A reputation system that affects your relationship with customers, employees, rivals, and authorities
A crime system that allows you to engage in illegal activities or fight against them
Why Should You Play Internet Cafe 2 Simulator?
-
The benefits of playing a business simulation game
Playing a business simulation game like Internet Cafe 2 Simulator can be very beneficial for your personal and professional development. Some of the benefits are:
You can learn about business management, finance, marketing, customer service, and more.
You can improve your problem-solving, decision-making, planning, and strategic thinking skills.
You can unleash your creativity and imagination by designing your own internet cafe.
You can have fun and relax by playing games within the game.
The challenges and opportunities of running an internet cafe
Running an internet cafe is not an easy task. You will have to face many challenges and risks in this game, such as:
Keeping up with the changing trends and demands of the gaming industry
Balancing your budget and cash flow
Handling customer complaints and feedback
Managing your staff and their morale
Avoiding legal troubles and fines
However, running an internet cafe also comes with many opportunities and rewards, such as:
Expanding your business and opening new branches
Earning loyal customers and fans
Making partnerships and deals with other businesses
Discovering new games and genres
Becoming the best internet cafe in the city
The fun and realism of interacting with customers, employees, and rivals
One of the most fun and realistic aspects of Internet Cafe 2 Simulator is the interaction with different characters in the game. You will have to communicate with your customers, employees, rivals, and other NPCs, each with their own personality, preferences, and behavior. You will have to:
Satisfy your customers' needs and wants by offering them games, food, drinks, and services
Motivate your employees by paying them well, giving them bonuses, and training them
Negotiate with your rivals by making offers, threats, or bribes
Deal with other NPCs such as street thugs, mobsters, hackers, police officers, journalists, and more
How to Download Internet Cafe 2 Simulator for PC?
-
The system requirements and the price of the game
If you want to play Internet Cafe 2 Simulator on your PC, you will need to meet the following system requirements:
OS: Windows 7 or higher (64-bit)
Processor: Intel Core i5 or equivalent
Memory: 8 GB RAM
Graphics: NVIDIA GeForce GTX 750 Ti or equivalent
DirectX: Version 11
Storage: 20 GB available space
The game is currently available on Steam for $19.99. You can also get a 10% discount if you buy the game before June 30, 2023.
The steps to download the game from Steam
To download the game from Steam, you will need to follow these steps:
Create a Steam account or log in to your existing one
Search for Internet Cafe 2 Simulator in the Steam store or click [here]
Click on the "Add to Cart" button and proceed to checkout
Choose your payment method and confirm your purchase
Wait for the game to download and install on your PC
Launch the game from your Steam library and enjoy!
The alternative ways to download the game for free
If you don't want to pay for the game or you don't have a Steam account, you can also try some alternative ways to download the game for free. However, we do not recommend these methods as they may be illegal, unsafe, or unethical. Some of these methods are:
Downloading the game from torrent sites or file-sharing platforms
Using a cracked version of the game or a key generator
Using a VPN or a proxy to bypass regional restrictions or geo-blocking
Please note that these methods may expose you to viruses, malware, spyware, or other cyber threats. They may also violate the terms of service and the intellectual property rights of the developers. You may also face legal consequences or penalties for piracy or fraud.
How to Download Internet Cafe 2 Simulator for Android?
-
The compatibility and the size of the game for mobile devices
If you want to play Internet Cafe 2 Simulator on your Android device, you will need to have a device that meets the following requirements:
OS: Android 5.0 or higher
CPU: Quad-core 1.5 GHz or higher
RAM: 2 GB or higher
Storage: 5 GB available space
The game is currently available on GameLoop for free. GameLoop is an Android emulator that allows you to play mobile games on your PC.
The steps to download the game from GameLoop
To download the game from GameLoop, you will need to follow these steps:
Download and install GameLoop on your PC from [here]
Launch GameLoop and log in with your Google account or create a new one
Search for Internet Cafe 2 Simulator in the Game Center or click [here]
Click on the "Install" button and wait for the game to download and install on your PC
Launch the game from GameLoop and enjoy!
The tips and tricks to optimize the game performance on Android
To optimize the game performance on Android, you can try some tips and tricks such as:
Adjusting the graphics settings to low or medium
Closing other apps or background processes that may consume memory or CPU
Clearing cache or data of the game regularly
Updating your device software or drivers
Q3: How can I deal with the thugs, mobsters, and other hostile NPCs in the game?
A3: You can deal with them by:
Fighting them off with your fists, weapons, or security systems
Paying them off with money or goods
Making friends with them by doing favors or missions for them
Avoiding them by hiding or running away
You can also report them to the police or the media, but be careful as they may retaliate or blackmail you.
Q4: How can I earn more money in the game?
A4: You can earn more money in the game by:
Increasing your customer satisfaction and loyalty by offering them quality games, food, drinks, and services
Setting your prices and fees according to the market and your competitors
Reducing your expenses and overheads by optimizing your energy consumption, staff wages, and taxes
Expanding your business and opening new branches in different locations
Engaging in illegal activities such as hacking, gambling, or selling drugs (at your own risk)
Q5: How can I contact the developers of the game?
A5: You can contact the developers of the game by:
Visiting their official website at [here]
Following them on their social media accounts on Facebook, Twitter, Instagram, and YouTube
Sending them an email at cheesecakedev@gmail.com
Leaving a review or a comment on Steam or GameLoop
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Red Alert 2 Yuris Revenge How to Install and Play on Windows 10.md b/spaces/congsaPfin/Manga-OCR/logs/Red Alert 2 Yuris Revenge How to Install and Play on Windows 10.md
deleted file mode 100644
index b99c8470e8e0bbcc1c209d5c66e3746e35f6e0d5..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Red Alert 2 Yuris Revenge How to Install and Play on Windows 10.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
How to Download Red Alert Yuri Revenge
-
Red Alert Yuri Revenge is one of the most iconic real-time strategy games ever made. It is a standalone expansion pack for Red Alert 2, released in 2001 by Westwood Studios. It features a new storyline, new factions, new units, new maps, and new modes. In this article, we will show you how to download Red Alert Yuri Revenge and enjoy this classic game on your Windows 10 PC.
Red Alert Yuri Revenge is a real-time strategy game set in an alternate history where the Soviet Union and the Allies are locked in a global war. The game introduces a third faction, led by Yuri, a former Soviet psychic who has betrayed his former allies and created his own army of mind-controlled soldiers and futuristic weapons. The game has two campaigns, one for each side, where you have to stop Yuri's plans of world domination. The game also has a skirmish mode where you can play against the computer or other players on various maps. The game has three difficulty levels and four speed settings.
-
Why Play Red Alert Yuri Revenge?
Red Alert Yuri Revenge is a game that offers many benefits for gamers of all ages and preferences. Here are some of the reasons why you should play this game:
It is fun. The game has a fast-paced and addictive gameplay that will keep you entertained for hours. You can build your base, train your army, research new technologies, and unleash devastating superweapons on your enemies.
It is challenging. The game has a balanced and diverse gameplay that will test your skills and strategies. You have to manage your resources, defend your base, attack your enemies, and adapt to different situations.
It is nostalgic. The game has a retro style and charm that will bring back memories of the golden age of real-time strategy games. You can relive the epic battles and memorable moments of the Red Alert series.
It is online. The game has a multiplayer mode that allows you to play with or against other players from around the world. You can join or host games using CnCNet or other services. You can also chat with other players and make new friends.
-
-
How to Get Red Alert Yuri Revenge?
-
There are different ways to get Red Alert Yuri Revenge for your PC. You can buy it, download it, or stream it. Here are some of the options you have:
Buying the Game
One of the easiest ways to get Red Alert Yuri Revenge is to buy it from the EA Origin Store. You can get the game as part of the Command & Conquer Ultimate Collection, which includes 17 games from the series for $19.99. You can also buy the game separately for $4.99. You will need to create an Origin account and download the Origin client to play the game.
Another way to buy the game is to look for physical copies of the game on online stores or local shops. You can find the game as part of the Red Alert 2: Yuri's Revenge expansion pack or the Command & Conquer: The First Decade compilation. You will need a CD-ROM drive and a CD key to install the game.
-
-
Downloading the Game
-
If you don't want to buy the game, you can also download it for free from various websites. One of the most popular and reliable sources is CnCNet, a community project that provides free downloads and online services for classic Command & Conquer games. You can download Red Alert Yuri Revenge from CnCNet's website and play it online or offline. You don't need a CD key or an Origin account to play the game.
Another website that offers free downloads of Red Alert Yuri Revenge is OldGamesDownload. You can download the game as a ZIP file and extract it to your desired location. You will need software like WinRAR or 7-Zip to extract the file. You don't need a CD key or an Origin account to play the game.
Streaming the Game
If you don't want to buy or download the game, you can also stream it from various platforms. One of the most popular and convenient platforms is YouTube, where you can watch gameplay videos and live streams of Red Alert Yuri Revenge. You can also interact with other viewers and streamers through comments and chats. You don't need to install or download anything to watch the game.
Another platform that allows you to stream Red Alert Yuri Revenge is Twitch, where you can watch live broadcasts and replays of the game. You can also follow your favorite streamers and join their communities. You don't need to install or download anything to watch the game.
-
How to Install and Play Red Alert Yuri Revenge?
-
Once you have obtained Red Alert Yuri Revenge, you will need to install and play it on your Windows 10 PC. Here are the steps you need to follow:
Installing the Game
The installation process will depend on how you got the game. If you bought the game from Origin, you will need to launch the Origin client and go to your library. Then, you will need to find Red Alert 2: Yuri's Revenge and click on Download. The game will be downloaded and installed automatically.
If you bought a physical copy of the game, you will need to insert the disc into your CD-ROM drive and run the setup.exe file. Then, you will need to follow the instructions on the screen and enter your CD key when prompted. The game will be installed on your PC.
If you downloaded the game from CnCNet, you will need to run the CnCNetYRLauncher.exe file that you downloaded. Then, you will need to follow the instructions on the screen and choose your settings and preferences. The game will be installed on your PC.
If you downloaded the game from OldGamesDownload, you will need to extract the ZIP file that you downloaded using WinRAR or 7-Zip. Then, you will need to run the Game.exe file that you extracted. The game will be installed on your PC.
-
Applying Fixes for Windows 10
-
Red Alert Yuri Revenge is an old game that may not run smoothly on Windows 10 without some fixes and patches. Here are some of the fixes that you may need to apply:
Run the game as administrator. Right-click on the game's shortcut or executable file and choose Run as administrator.
Run the game in compatibility mode. Right-click on the game's shortcut or executable file and choose Properties. Then, go to the Compatibility tab and check Run this program in compatibility mode for: Windows XP (Service Pack 3).
Disable fullscreen optimizations. Right-click on the game's shortcut or executable file and choose Properties. Then, go to the Compatibility tab and check Disable fullscreen optimizations.
Change screen resolution. Right-click on your desktop and choose Display settings. Then, change your resolution to 800 x 600 or lower.
Install patches and updates. Go to CnCNet's website and download their patches and updates for Red Alert Yuri Revenge. These will fix some of the common issues and bugs, as well as add new features and enhancements to the game.
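If you end up applying the same Compatibility-tab settings on several machines, Windows also stores them as per-user registry flags. The snippet below is a minimal sketch of that idea in Python; the executable path is a hypothetical example, and the exact layer names can differ between Windows releases, so verify the result afterwards in the game's Properties dialog.

```python
# Sketch: set "Run as administrator", Windows XP SP3 compatibility mode, and
# "Disable fullscreen optimizations" as per-user compatibility flags.
# EXE_PATH is a hypothetical example - point it at your own installation.
import winreg

EXE_PATH = r"C:\Games\RedAlert2\gamemd.exe"  # hypothetical install path
FLAGS = "~ RUNASADMIN WINXPSP3 DISABLEDXMAXIMIZEDWINDOWEDMODE"

KEY = r"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY) as key:
    # The value name is the full path of the executable; the data lists the layers.
    winreg.SetValueEx(key, EXE_PATH, 0, winreg.REG_SZ, FLAGS)

print(f"Compatibility flags set for {EXE_PATH}")
```

These are the same per-user settings that the Compatibility tab writes, so unticking the boxes there (or deleting the registry value) undoes the change.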
Playing Singleplayer
-
To play the singleplayer mode of Red Alert Yuri Revenge, you will need to launch the game and choose Single Player from the main menu. Then, you will need to choose your faction, either Allies or Soviets, and start the campaign. You will have to complete 14 missions for each side, each with different objectives and challenges. You can also play the skirmish mode, where you can choose your faction, map, opponents, and settings, and play against the computer or other players on the same PC.
Playing Multiplayer
To play the multiplayer mode of Red Alert Yuri Revenge, you will need to launch the game and choose Multiplayer from the main menu. Then, you will need to choose your service, either CnCNet or LAN. If you choose CnCNet, you will need to create or join a game online using CnCNet's client. You can also chat with other players and customize your settings. If you choose LAN, you will need to create or join a game on a local network using your IP address. You can also use a VPN service like Hamachi to play with your friends online.
-
How to Enhance Your Red Alert Yuri Revenge Experience?
-
Red Alert Yuri Revenge is a game that can be enhanced and improved in many ways. Here are some of the tips and tricks that you can use to make your gameplay more enjoyable and exciting:
Use mods. Mods are modifications that add new content or change existing content in the game. You can find many mods for Red Alert Yuri Revenge on websites like ModDB or CnCNet. Some of the most popular mods are Mental Omega, Yuri's Revenge: CnCD2K, and Red Resurrection.
Use cheats. Cheats are codes that give you an advantage or unlock hidden features in the game. You can enter cheats by pressing Enter during the game and typing the cheat code. Some of the most useful cheats are show me the money (gives you $10,000), instant build (builds everything instantly), and chrono shift (teleports selected units anywhere on the map).
Use strategies. Strategies are plans or tactics that help you win the game. You can learn strategies by watching gameplay videos or reading guides on websites like GameFAQs or CnCNet. Some of the basic strategies are scouting (exploring the map and finding your enemies), expanding (building more bases and resources), and rushing (attacking your enemies early and fast).
Use guides. Guides are resources that provide information or tips on various aspects of the game. You can find guides on websites like IGN or CnCNet. Some of the topics that guides cover are units, buildings, technologies, maps, missions, and secrets.
-
-
Conclusion
-
Red Alert Yuri Revenge is a game that deserves to be played by every real-time strategy fan. It has a great story, gameplay, graphics, sound, and multiplayer; it can be easily obtained, installed, and played on Windows 10; it can be enhanced and improved in many ways; and it will give you hours of fun and challenge.
-
If you want to download Red Alert Yuri Revenge and enjoy this classic game on your PC, follow the steps and tips that we have provided in this article. You will not regret it.
-
FAQs
-
Here are some of the frequently asked questions and answers about Red Alert Yuri Revenge:
Q: Is Red Alert Yuri Revenge compatible with Windows 10? A: Yes, it is compatible with Windows 10, but you may need to apply some fixes and patches to make it run smoothly.
Q: Is Red Alert Yuri Revenge free? A: Yes, it is free if you download it from CnCNet or OldGamesDownload. However, if you want to support the developers, you can buy it from Origin or other sources.
Q: Is Red Alert Yuri Revenge multiplayer? A: Yes, it is multiplayer. You can play online or LAN mode using CnCNet or other services.
Q: Is Red Alert Yuri Revenge moddable? A: Yes, it is moddable. You can find many mods for Red Alert Yuri Revenge on websites like ModDB or CnCNet.
Q: Is Red Alert Yuri Revenge fun? A: Yes, it is fun. It is one of the best real-time strategy games ever made and it will give you hours of fun and challenge.
-
-
I hope this article has helped you learn how to download Red Alert Yuri Revenge and enjoy this classic game on your PC. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy gaming!
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Agustin Lara Granada Partitura Pdf Free EXCLUSIVE.md b/spaces/contluForse/HuggingGPT/assets/Agustin Lara Granada Partitura Pdf Free EXCLUSIVE.md
deleted file mode 100644
index 499e0638c894afc62b7105164369034775746600..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Agustin Lara Granada Partitura Pdf Free EXCLUSIVE.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Control Engineering By Ganesh Rao Pdf Free 11.md b/spaces/diacanFperku/AutoGPT/Control Engineering By Ganesh Rao Pdf Free 11.md
deleted file mode 100644
index 934ece58a5d8b874fdfa660b661a8928a1d5e496..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Control Engineering By Ganesh Rao Pdf Free 11.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Control Engineering By Ganesh Rao Pdf Free 11: A Useful Resource for Students and Professionals
-
-
Control engineering is a branch of engineering that deals with the design, analysis, and implementation of systems that control the behavior of other systems. Control engineering is essential for various applications, such as robotics, aerospace, industrial automation, biomedical engineering, and more.
One of the popular textbooks for learning control engineering is Control Engineering by Ganesh Rao. This book covers the fundamental concepts and techniques of control engineering in a comprehensive and clear manner. It also provides numerous examples, problems, and case studies to illustrate the practical applications of control engineering.
-
-
However, buying a hard copy of this book can be expensive and inconvenient for some students and professionals. That is why many people are looking for a way to download Control Engineering by Ganesh Rao Pdf Free 11 online. This is a pdf version of the 11th edition of the book, which is the latest and most updated one.
-
-
How to download Control Engineering by Ganesh Rao Pdf Free 11?
-
-
There are several websites that claim to offer Control Engineering by Ganesh Rao Pdf Free 11 for download. However, not all of them are reliable or safe. Some of them may contain viruses, malware, or spyware that could harm your device or data. Some of them may also require you to pay money or provide your personal information to access the pdf file.
-
-
Therefore, you should be careful and cautious when downloading Control Engineering by Ganesh Rao Pdf Free 11 online. You should only use trusted and reputable websites that have positive reviews and feedback from other users. You should also scan the pdf file with an antivirus software before opening it.
-
-
One of the websites that you can use to download Control Engineering by Ganesh Rao Pdf Free 11 is thebookee.net. This website provides free pdf ebooks for various topics and subjects. You can search for Control Engineering by Ganesh Rao Pdf Free 11 on this website and download it easily. You do not need to register or pay anything to use this website.
-
-
What are the benefits of downloading Control Engineering by Ganesh Rao Pdf Free 11?
-
-
Downloading Control Engineering by Ganesh Rao Pdf Free 11 online has several benefits, such as:
You can save money and time by not buying a hard copy of the book.
You can access the pdf file anytime and anywhere on your device.
You can zoom in and out, highlight, bookmark, and annotate the pdf file as per your convenience.
You can print out specific pages or chapters of the book as needed.
You can share the pdf file with your friends or colleagues easily.
-
-
-
Conclusion
-
-
Control Engineering by Ganesh Rao Pdf Free 11 is a useful resource for students and professionals who want to learn control engineering. It covers the fundamental concepts and techniques of control engineering in a comprehensive and clear manner. It also provides numerous examples, problems, and case studies to illustrate the practical applications of control engineering.
-
-
If you want to download Control Engineering by Ganesh Rao Pdf Free 11 online, you should use a trusted and reputable website like thebookee.net. This website provides free pdf ebooks for various topics and subjects. You can search for Control Engineering by Ganesh Rao Pdf Free 11 on this website and download it easily. You do not need to register or pay anything to use this website.
-
-
By downloading Control Engineering by Ganesh Rao Pdf Free 11 online, you can enjoy several benefits, such as saving money and time, accessing the pdf file anytime and anywhere, zooming in and out, highlighting, bookmarking, and annotating the pdf file as per your convenience, printing out specific pages or chapters of the book as needed, and sharing the pdf file with your friends or colleagues easily.
-
What are the topics covered in Control Engineering by Ganesh Rao Pdf Free 11?
-
-
Control Engineering by Ganesh Rao Pdf Free 11 covers the following topics in 12 chapters:
Chapter 1: Prerequisite. This chapter reviews the basic concepts of Laplace transform, transfer function, and block diagram.
Chapter 2: Introduction to Control Systems. This chapter introduces the definition, classification, and components of control systems.
Chapter 3: Block Diagrams and Signal Flow Graphs. This chapter explains how to represent control systems using block diagrams and signal flow graphs, and how to simplify them using reduction techniques.
Chapter 4: Time-Domain Analysis of Control Systems. This chapter discusses how to analyze the transient and steady-state responses of control systems using standard test signals and performance specifications.
Chapter 5: Stability of Linear Control Systems. This chapter covers the concept, criteria, and methods of determining the stability of linear control systems.
Chapter 6: Root Locus. This chapter describes how to plot and use the root locus technique to design and analyze control systems.
Chapter 7: Frequency-Domain Analysis. This chapter explains how to analyze the frequency response of control systems using polar plots, Nyquist plots, and Nichols charts.
Chapter 8: Bode Plots or Logarithmic Plots. This chapter shows how to construct and use Bode plots to design and analyze control systems.
Chapter 9: Compensation Techniques. This chapter introduces the concept and methods of compensation techniques to improve the performance and stability of control systems.
Chapter 10: Nonlinear Control Systems. This chapter covers the basic features, analysis, and design of nonlinear control systems using phase plane method and describing function method.
Chapter 11: State Space Theory. This chapter presents the state space representation, analysis, and design of control systems using state variables.
Chapter 12: Appendices. This chapter provides some additional topics and information related to control engineering, such as MATLAB commands, Z-transforms, discrete-time control systems, optimal control theory, fuzzy logic control, neural network control, adaptive control, robust control, etc.
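To give a flavor of the time-domain analysis material that Chapter 4 deals with, here is a small, self-contained Python sketch that computes the step response of a standard second-order system. It is only an illustration of the general concept, not an excerpt from the book, and the damping ratio and natural frequency used are arbitrary example values.

```python
# Illustrative example (not from the book): step response of a second-order system
#   G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
import numpy as np
from scipy import signal

zeta, wn = 0.5, 2.0  # example damping ratio and natural frequency (rad/s)
system = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])

# Simulate the unit step response over 10 seconds
t, y = signal.step(system, T=np.linspace(0, 10, 500))

# For a unity-DC-gain system the final value is 1, so this is the percent overshoot
overshoot = (y.max() - 1.0) * 100
print(f"Peak overshoot: {overshoot:.1f}%")
```

Quantities like the percent overshoot printed here, together with rise time and settling time, are exactly the performance specifications that time-domain analysis is concerned with.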
-
-
-
Control Engineering by Ganesh Rao Pdf Free 11 is a comprehensive and clear textbook that covers the fundamental concepts and techniques of control engineering in a systematic and logical manner. It also provides numerous examples, problems, and case studies to illustrate the practical applications of control engineering.
-
Who is the author of Control Engineering by Ganesh Rao Pdf Free 11?
-
-
The author of Control Engineering by Ganesh Rao Pdf Free 11 is Dr. D. Ganesh Rao. He is a professor and head of the Department of Electronics and Communication Engineering at PES Institute of Technology, Bangalore. He has over 30 years of teaching and research experience in the fields of control engineering, signal processing, communication engineering, and neural networks. He has authored several books and papers on these topics. He has also received several awards and honors for his academic excellence and contributions.
-
-
What are the advantages of Control Engineering by Ganesh Rao Pdf Free 11?
-
-
Control Engineering by Ganesh Rao Pdf Free 11 has several advantages over other textbooks on control engineering, such as:
-
-
-
It is written in a simple and lucid language that is easy to understand and follow.
-
It covers the syllabus of various universities and competitive examinations on control engineering.
-
It provides a physical and intuitive approach to control engineering that helps the readers to grasp the concepts and techniques easily.
-
It includes a large number of solved examples, reinforcement problems, and exercise problems that help the readers to practice and test their knowledge and skills.
-
It incorporates the latest developments and trends in control engineering, such as state space theory, optimal control theory, fuzzy logic control, neural network control, adaptive control, robust control, etc.
-
It offers a free pdf version that can be downloaded online and accessed anytime and anywhere on any device.
-
-
-
Control Engineering by Ganesh Rao Pdf Free 11 is a useful resource for students and professionals who want to learn control engineering. It is a comprehensive and clear textbook that covers the fundamental concepts and techniques of control engineering in a systematic and logical manner. It also provides numerous examples, problems, and case studies to illustrate the practical applications of control engineering.
-
How to use Control Engineering by Ganesh Rao Pdf Free 11?
-
-
Control Engineering by Ganesh Rao Pdf Free 11 can be used for various purposes, such as:
-
-
-
Learning: You can use this book as a textbook for your courses on control engineering or as a reference book for your self-study. You can read the chapters in the order they are presented or according to your preference. You can also use the appendices to learn some additional topics and information related to control engineering.
-
Practicing: You can use this book as a source of practice problems for your assignments, quizzes, tests, or exams. You can solve the examples, reinforcement problems, and exercise problems given in each chapter. You can also check your answers with the solutions provided at the end of the book.
-
Applying: You can use this book as a guide for your projects, research, or work on control engineering. You can apply the concepts and techniques learned from this book to design, analyze, and implement control systems for various applications. You can also use the case studies given in each chapter to get some inspiration and ideas for your own projects.
-
-
-
Control Engineering by Ganesh Rao Pdf Free 11 is a versatile and valuable resource for students and professionals who want to learn control engineering. It is a comprehensive and clear textbook that covers the fundamental concepts and techniques of control engineering in a systematic and logical manner. It also provides numerous examples, problems, and case studies to illustrate the practical applications of control engineering.
-
Conclusion
-
-
Control Engineering by Ganesh Rao Pdf Free 11 is a useful resource for students and professionals who want to learn control engineering. It covers the fundamental concepts and techniques of control engineering in a comprehensive and clear manner. It also provides numerous examples, problems, and case studies to illustrate the practical applications of control engineering.
-
-
If you want to download Control Engineering by Ganesh Rao Pdf Free 11 online, you should use a trusted and reputable website like thebookee.net. This website provides free pdf ebooks for various topics and subjects. You can search for Control Engineering by Ganesh Rao Pdf Free 11 on this website and download it easily. You do not need to register or pay anything to use this website.
-
-
By downloading Control Engineering by Ganesh Rao Pdf Free 11 online, you can enjoy several benefits, such as saving money and time, accessing the pdf file anytime and anywhere, zooming in and out, highlighting, bookmarking, and annotating the pdf file as per your convenience, printing out specific pages or chapters of the book as needed, and sharing the pdf file with your friends or colleagues easily.
-
-
You can also use Control Engineering by Ganesh Rao Pdf Free 11 for various purposes, such as learning, practicing, and applying control engineering. You can use this book as a textbook for your courses on control engineering or as a reference book for your self-study. You can also use this book as a source of practice problems for your assignments, quizzes, tests, or exams. You can also use this book as a guide for your projects, research, or work on control engineering.
-
-
Control Engineering by Ganesh Rao Pdf Free 11 is a comprehensive and clear textbook that covers the fundamental concepts and techniques of control engineering in a systematic and logical manner. It also provides numerous examples, problems, and case studies to illustrate the practical applications of control engineering. It is a useful resource for students and professionals who want to learn control engineering.
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Descargar Saint Administrativo Con Crack [UPD].md b/spaces/diacanFperku/AutoGPT/Descargar Saint Administrativo Con Crack [UPD].md
deleted file mode 100644
index 84bc46c881249e46c85aa4e45454ea0d988681f2..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Descargar Saint Administrativo Con Crack [UPD].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Giveaway Bitwig 2.5 Studio 8-Track For FREE Free.md b/spaces/diacanFperku/AutoGPT/Giveaway Bitwig 2.5 Studio 8-Track For FREE Free.md
deleted file mode 100644
index 2ce53546d63ae4a1d4ece6124b8fe6c345b3e4d2..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Giveaway Bitwig 2.5 Studio 8-Track For FREE Free.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Thanks for the invite to the Bitwig users' Facebook page. I have been using Bitwig for a few months now and loving it. I was wondering if you could send me a free license of the latest version - an 8-track would be awesome! Thanks again.
Hey, Bootsy. It seems you've become a rock star of sorts with your popularity from your great VSTs. Lots of love from the masses. I am also saying thank you for your plugs. I have a request that might hit you the wrong way, but I hope you can relate. I have been using SynthMaker for a while now but have had no real good outcomes with developing my own compressor. I would like to know if you would be willing to send me a copy of the .osm for Density. I use it on all of my mixes and would like to create my own GUI for it to make using it simpler for me. I understand if you say no, but it would be an honor if you would consider it. I am an expert graphics person as well as a musician, engineer, producer and singer/songwriter. Please let me know how you feel about this. I would love to be a part of any future projects as well. I also give freely of my work and time to benefit others. Thanks in advance. Koto
This is a great product, but the only issue is the lack of info about the price and shipping. I would be happy to pay 100 dollars for that product and add it to my music studio's arsenal. I use a lot of UAD-2 on my PC and I would love to do the same on the Mac. Please add this to your future products.
I've been using Bitwig Studio for about a year and I really love it. I really love the simplicity and the way you guys have made it. I have only one issue: I can't find any plugins that are actually compatible with the latest version of iTunes. I'm using iTunes 11.2.3. Would be cool if you add a few old plugins and make them compatible with iTunes 11.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent.md b/spaces/diacanFperku/AutoGPT/Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent.md
deleted file mode 100644
index 8055cc583ef4b8ef49c8c4bdb39fd83847781ca6..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent - The Ultimate Windows Password Recovery Solution
-
-
Have you ever forgotten your Windows password and got locked out of your computer? If so, you are not alone. Many people face this problem and look for ways to recover their password without losing any data or settings. One of the most popular and effective solutions is to use Lazesoft Recover My Password, a software that can help you reset your Windows password in minutes. In this article, we will show you how to download Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent and use it to create a bootable disk and reset your password with ease.
-
Lazesoft Recover My Password is software that can help you reset your Windows password in case you forget it, lose it, or get locked out of your account. It supports all Windows versions, including XP, Vista, 7, 8, 10, and Server editions. It can also reset passwords for domain accounts and Microsoft accounts.
-
-
Lazesoft Recover My Password has two editions: Home Edition and Unlimited Edition. The Home Edition is free for personal use and can reset passwords for local accounts only. The Unlimited Edition is a paid version that can reset passwords for any type of account, including domain and Microsoft accounts. It also comes with serials that you can use to activate the software on multiple computers.
-
-
How to download Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent?
-
-
To download Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent, you can use various torrent sites or the official website of Lazesoft. Here are some of the torrent sites that offer this software:
-
-
-
-
LimeTorrents: This site has a large collection of torrents for various categories, including applications, games, movies, music, TV shows, and more. You can find Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent by searching for the keyword or browsing the applications section.
-
YolaSite: This site has a PDF file that contains the link to download Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent from a file hosting service. You can also find some information about the software and its features in the PDF file.
-
CCH2: This site has a checklist that contains the link to download Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent from a file hosting service. You can also find some information about the software and its features in the checklist.
-
-
-
You can also download Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent from the official website of Lazesoft by purchasing a license or requesting a free trial.
-
-
How to use Lazesoft Recover My Password?
-
-
Using Lazesoft Recover My Password is very easy and straightforward. Here are the steps you need to follow:
-
-
-
Download Lazesoft Recover My Password from one of the sources mentioned above.
-
Install the software on another computer that you can access.
-
Launch the software and choose the option to create a bootable CD or USB flash drive.
-
Follow the instructions on the screen to burn the disk or write the image to the USB drive.
-
Insert the disk or USB drive into the computer that you want to reset the password for.
-
Boot from the disk or USB drive by changing the boot order in the BIOS settings.
-
Once the Lazesoft Recover My Password interface appears, select the Windows installation that you want to reset the password for.
-
Select the user account that you want to reset the password for.
-
Click on Reset/Unlock button to remove the password or set a new one.
-
Reboot your computer and log in with your new password.
-
-
-
Why choose Lazesoft Recover My Password?
-
-
Lazesoft Recover My Password has many advantages over other password recovery tools. Here are some of them:
-
-
-
It has a 100% recovery rate and works with any Windows version.
-
It has a user-friendly and clear interface that guides you through the process.
-
It does not require any installation or registration on the target computer.
-
It does not damage or delete any data or settings on your computer.
-
It can reset passwords for any type of account, including domain and Microsoft accounts (Unlimited Edition only).
-
It can also recover product keys, registry keys, and other system information (Unlimited Edition only).
-
-
-
Conclusion
-
-
Lazesoft Recover My Password is a reliable and effective software that can help you reset your Windows password in minutes. You don't need to worry about forgetting your password or getting locked out of your account anymore. With Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent, you can regain access to your computer without losing any data or settings.
-
-
We hope this article has helped you understand how to use Lazesoft Recover My Password and why it is a great choice for password recovery. If you have any questions or feedback, feel free to leave a comment below.
-
FAQs about Lazesoft Recover My Password
-
-
In this section, we will answer some of the frequently asked questions about Lazesoft Recover My Password and its features.
-
-
Q: Is Lazesoft Recover My Password safe to use?
-
-
A: Yes, Lazesoft Recover My Password is safe to use and does not contain any viruses, malware, or spyware. It also does not harm your computer or data in any way. However, you should always download the software from a trusted source and scan it with an antivirus program before using it.
-
-
Q: How long does it take to reset the password with Lazesoft Recover My Password?
-
-
A: It depends on the speed of your computer and the type of password you want to reset. Generally, it takes only a few minutes to reset a local account password and a bit longer to reset a domain or Microsoft account password. However, you should always back up your data before resetting the password in case of any unexpected errors.
-
-
Q: Can I use Lazesoft Recover My Password to reset passwords for other operating systems?
-
-
A: No, Lazesoft Recover My Password only works with Windows operating systems. If you want to reset passwords for other operating systems, such as Linux or Mac OS, you need to use other tools that are compatible with them.
-
-
Q: Can I use Lazesoft Recover My Password to recover passwords for other applications or websites?
-
-
A: No, Lazesoft Recover My Password only works with Windows passwords. If you want to recover passwords for other applications or websites, such as email accounts, social media accounts, or online banking accounts, you need to use other tools that are designed for them.
-
-
Q: What if I forget the serials for Lazesoft Recover My Password Unlimited Edition?
-
-
A: If you forget the serials for Lazesoft Recover My Password Unlimited Edition, you can contact the customer support of Lazesoft and provide them with your purchase information. They will help you retrieve your serials and activate your software.
-
How to get Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent for free?
-
-
If you want to try Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent for free, you can request a free trial from the official website of Lazesoft. You will need to provide your name and email address and agree to the terms and conditions. You will then receive a download link and a serial number for the trial version.
-
-
The trial version of Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent has the same features as the full version, but it will expire after 30 days. You can use it to reset passwords for any type of account, including domain and Microsoft accounts. You can also use it to recover product keys, registry keys, and other system information.
-
-
If you want to continue using Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent after the trial period, you will need to purchase a license from Lazesoft or from one of their authorized resellers. The license will give you lifetime access to the software and free updates and support.
-
-
Conclusion
-
-
Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent is the ultimate Windows password recovery solution that can help you reset your password in minutes. Whether you have forgotten your password, lost it, or got locked out of your account, you can use this software to regain access to your computer without losing any data or settings. It works with all Windows versions and can reset passwords for any type of account, including domain and Microsoft accounts.
-
-
In this article, we have shown you how to download Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent from various sources and how to use it to create a bootable disk and reset your password with ease. We have also answered some of the frequently asked questions about Lazesoft Recover My Password and its features.
-
-
If you want to try Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent for free, you can request a free trial from the official website of Lazesoft. If you want to continue using it after the trial period, you can purchase a license from Lazesoft or from one of their authorized resellers.
-
-
Don't let a forgotten password stop you from using your computer. Download Lazesoft Recover My Password 5.3.3.1 Unlimited Edition Serials Utorrent today and enjoy the peace of mind that comes with knowing that you can always access your computer.
-
-
\ No newline at end of file
diff --git a/spaces/diego2554/RemBG_super/rembg/sessions/u2net_human_seg.py b/spaces/diego2554/RemBG_super/rembg/sessions/u2net_human_seg.py
deleted file mode 100644
index 166c195302c2530b63e79b4884ffb7681388c902..0000000000000000000000000000000000000000
--- a/spaces/diego2554/RemBG_super/rembg/sessions/u2net_human_seg.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os
-from typing import List
-
-import numpy as np
-import pooch
-from PIL import Image
-from PIL.Image import Image as PILImage
-
-from .base import BaseSession
-
-
-class U2netHumanSegSession(BaseSession):
- def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]:
- ort_outs = self.inner_session.run(
- None,
- self.normalize(
- img, (0.485, 0.456, 0.406), (0.229, 0.224, 0.225), (320, 320)
- ),
- )
-
- pred = ort_outs[0][:, 0, :, :]
-
- ma = np.max(pred)
- mi = np.min(pred)
-
- pred = (pred - mi) / (ma - mi)
- pred = np.squeeze(pred)
-
- mask = Image.fromarray((pred * 255).astype("uint8"), mode="L")
- mask = mask.resize(img.size, Image.LANCZOS)
-
- return [mask]
-
- @classmethod
- def download_models(cls, *args, **kwargs):
- fname = f"{cls.name()}.onnx"
- pooch.retrieve(
- "https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net_human_seg.onnx",
- None
- if cls.checksum_disabled(*args, **kwargs)
- else "md5:c09ddc2e0104f800e3e1bb4652583d1f",
- fname=fname,
- path=cls.u2net_home(*args, **kwargs),
- progressbar=True,
- )
-
- return os.path.join(cls.u2net_home(), fname)
-
- @classmethod
- def name(cls, *args, **kwargs):
- return "u2net_human_seg"
diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/preprocess_text.py b/spaces/digitalxingtong/Eileen-Bert-Vits2/preprocess_text.py
deleted file mode 100644
index 44c35fecd9b7f21016e80e9597d6055254cba3f7..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Eileen-Bert-Vits2/preprocess_text.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import json
-from random import shuffle
-
-import tqdm
-from text.cleaner import clean_text
-from collections import defaultdict
-import shutil
-stage = [1,2,3]
-
-transcription_path = 'filelists/short_character_anno.list'
-train_path = 'filelists/train.list'
-val_path = 'filelists/val.list'
-config_path = "configs/config.json"
-val_per_spk = 4
-max_val_total = 8
-
-if 1 in stage:
- with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f:
- for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()):
- try:
- utt, spk, language, text = line.strip().split('|')
- #language = "ZH"
- norm_text, phones, tones, word2ph = clean_text(text, language)
- f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones),
- " ".join([str(i) for i in tones]),
- " ".join([str(i) for i in word2ph])))
- except:
- print("err!", utt)
-
-if 2 in stage:
- spk_utt_map = defaultdict(list)
- spk_id_map = {}
- current_sid = 0
-
- with open( transcription_path+'.cleaned', encoding='utf-8') as f:
- for line in f.readlines():
- utt, spk, language, text, phones, tones, word2ph = line.strip().split('|')
- spk_utt_map[spk].append(line)
- if spk not in spk_id_map.keys():
- spk_id_map[spk] = current_sid
- current_sid += 1
- train_list = []
- val_list = []
- for spk, utts in spk_utt_map.items():
- shuffle(utts)
- val_list+=utts[:val_per_spk]
- train_list+=utts[val_per_spk:]
- if len(val_list) > max_val_total:
- train_list+=val_list[max_val_total:]
- val_list = val_list[:max_val_total]
-
- with open( train_path,"w", encoding='utf-8') as f:
- for line in train_list:
- f.write(line)
-
- file_path = transcription_path+'.cleaned'
- shutil.copy(file_path,'./filelists/train.list')
-
- with open(val_path, "w", encoding='utf-8') as f:
- for line in val_list:
- f.write(line)
-
-if 3 in stage:
- assert 2 in stage
- config = json.load(open(config_path))
- config['data']["n_speakers"] = current_sid #
- config["data"]['spk2id'] = spk_id_map
- with open(config_path, 'w', encoding='utf-8') as f:
- json.dump(config, f, indent=2, ensure_ascii=False)
diff --git a/spaces/digitalxingtong/Jiaran-Bert-VITS2/mel_processing.py b/spaces/digitalxingtong/Jiaran-Bert-VITS2/mel_processing.py
deleted file mode 100644
index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Jiaran-Bert-VITS2/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/setup_ffmpeg.py b/spaces/digitalxingtong/Jiuxia-Bert-Vits2/setup_ffmpeg.py
deleted file mode 100644
index 7137ab5faebb6d80740b8c843667458f25596839..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/setup_ffmpeg.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import os
-import sys
-import re
-from pathlib import Path
-import winreg
-
-def check_ffmpeg_path():
- path_list = os.environ['Path'].split(';')
- ffmpeg_found = False
-
- for path in path_list:
- if 'ffmpeg' in path.lower() and 'bin' in path.lower():
- ffmpeg_found = True
- print("FFmpeg already installed.")
- break
-
- return ffmpeg_found
-
-def add_ffmpeg_path_to_user_variable():
- ffmpeg_bin_path = Path('.\\ffmpeg\\bin')
- if ffmpeg_bin_path.is_dir():
- abs_path = str(ffmpeg_bin_path.resolve())
-
- try:
- key = winreg.OpenKey(
- winreg.HKEY_CURRENT_USER,
- r"Environment",
- 0,
- winreg.KEY_READ | winreg.KEY_WRITE
- )
-
- try:
- current_path, _ = winreg.QueryValueEx(key, "Path")
- if abs_path not in current_path:
- new_path = f"{current_path};{abs_path}"
- winreg.SetValueEx(key, "Path", 0, winreg.REG_EXPAND_SZ, new_path)
- print(f"Added FFmpeg path to user variable 'Path': {abs_path}")
- else:
- print("FFmpeg path already exists in the user variable 'Path'.")
- finally:
- winreg.CloseKey(key)
- except WindowsError:
- print("Error: Unable to modify user variable 'Path'.")
- sys.exit(1)
-
- else:
- print("Error: ffmpeg\\bin folder not found in the current path.")
- sys.exit(1)
-
-def main():
- if not check_ffmpeg_path():
- add_ffmpeg_path_to_user_variable()
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Shanbao-Bert-VITS2/text/tone_sandhi.py b/spaces/digitalxingtong/Shanbao-Bert-VITS2/text/tone_sandhi.py
deleted file mode 100644
index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Shanbao-Bert-VITS2/text/tone_sandhi.py
+++ /dev/null
@@ -1,351 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import List
-from typing import Tuple
-
-import jieba
-from pypinyin import lazy_pinyin
-from pypinyin import Style
-
-
-class ToneSandhi():
- def __init__(self):
- self.must_neural_tone_words = {
- '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝',
- '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊',
- '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去',
- '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号',
- '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当',
- '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
- '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
- '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
- '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
- '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
- '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
- '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
- '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨',
- '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快',
- '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜',
- '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔',
- '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
- '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
- '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
- '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实',
- '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头',
- '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼',
- '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数',
- '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气',
- '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈',
- '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方',
- '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴',
- '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦',
- '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝',
- '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹',
- '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息',
- '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤',
- '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家',
- '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故',
- '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨',
- '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅',
- '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱',
- '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱',
- '扫把', '惦记'
- }
- self.must_not_neural_tone_words = {
- "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎"
- }
- self.punc = ":,;。?!“”‘’':,;.?!"
-
- # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
- # e.g.
- # word: "家里"
- # pos: "s"
- # finals: ['ia1', 'i3']
- def _neural_sandhi(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
-
- # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
- for j, item in enumerate(word):
- if j - 1 >= 0 and item == word[j - 1] and pos[0] in {
- "n", "v", "a"
- } and word not in self.must_not_neural_tone_words:
- finals[j] = finals[j][:-1] + "5"
- ge_idx = word.find("个")
- if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) >= 1 and word[-1] in "的地得":
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 走了, 看着, 去过
- # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
- # finals[-1] = finals[-1][:-1] + "5"
- elif len(word) > 1 and word[-1] in "们子" and pos in {
- "r", "n"
- } and word not in self.must_not_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 桌上, 地下, 家里
- elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 上来, 下去
- elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
- finals[-1] = finals[-1][:-1] + "5"
- # 个做量词
- elif (ge_idx >= 1 and
- (word[ge_idx - 1].isnumeric() or
- word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个':
- finals[ge_idx] = finals[ge_idx][:-1] + "5"
- else:
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
-
- word_list = self._split_word(word)
- finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
- for i, word in enumerate(word_list):
- # conventional neural in Chinese
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
- finals = sum(finals_list, [])
- return finals
-
- def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # e.g. 看不懂
- if len(word) == 3 and word[1] == "不":
- finals[1] = finals[1][:-1] + "5"
- else:
- for i, char in enumerate(word):
- # "不" before tone4 should be bu2, e.g. 不怕
- if char == "不" and i + 1 < len(word) and finals[i +
- 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- return finals
-
- def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # "一" in number sequences, e.g. 一零零, 二一零
- if word.find("一") != -1 and all(
- [item.isnumeric() for item in word if item != "一"]):
- return finals
- # "一" between reduplication words shold be yi5, e.g. 看一看
- elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
- finals[1] = finals[1][:-1] + "5"
- # when "一" is ordinal word, it should be yi1
- elif word.startswith("第一"):
- finals[1] = finals[1][:-1] + "1"
- else:
- for i, char in enumerate(word):
- if char == "一" and i + 1 < len(word):
- # "一" before tone4 should be yi2, e.g. 一段
- if finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- # "一" before non-tone4 should be yi4, e.g. 一天
- else:
- # "一" 后面如果是标点,还读一声
- if word[i + 1] not in self.punc:
- finals[i] = finals[i][:-1] + "4"
- return finals
-
- def _split_word(self, word: str) -> List[str]:
- word_list = jieba.cut_for_search(word)
- word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
- first_subword = word_list[0]
- first_begin_idx = word.find(first_subword)
- if first_begin_idx == 0:
- second_subword = word[len(first_subword):]
- new_word_list = [first_subword, second_subword]
- else:
- second_subword = word[:-len(first_subword)]
- new_word_list = [second_subword, first_subword]
- return new_word_list
-
- def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
- if len(word) == 2 and self._all_tone_three(finals):
- finals[0] = finals[0][:-1] + "2"
- elif len(word) == 3:
- word_list = self._split_word(word)
- if self._all_tone_three(finals):
- # disyllabic + monosyllabic, e.g. 蒙古/包
- if len(word_list[0]) == 2:
- finals[0] = finals[0][:-1] + "2"
- finals[1] = finals[1][:-1] + "2"
- # monosyllabic + disyllabic, e.g. 纸/老虎
- elif len(word_list[0]) == 1:
- finals[1] = finals[1][:-1] + "2"
- else:
- finals_list = [
- finals[:len(word_list[0])], finals[len(word_list[0]):]
- ]
- if len(finals_list) == 2:
- for i, sub in enumerate(finals_list):
- # e.g. 所有/人
- if self._all_tone_three(sub) and len(sub) == 2:
- finals_list[i][0] = finals_list[i][0][:-1] + "2"
- # e.g. 好/喜欢
- elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \
- finals_list[0][-1][-1] == "3":
-
- finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
- finals = sum(finals_list, [])
- # split idiom into two words who's length is 2
- elif len(word) == 4:
- finals_list = [finals[:2], finals[2:]]
- finals = []
- for sub in finals_list:
- if self._all_tone_three(sub):
- sub[0] = sub[0][:-1] + "2"
- finals += sub
-
- return finals
-
- def _all_tone_three(self, finals: List[str]) -> bool:
- return all(x[-1] == "3" for x in finals)
-
- # merge "不" and the word behind it
- # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error
- def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- last_word = ""
- for word, pos in seg:
- if last_word == "不":
- word = last_word + word
- if word != "不":
- new_seg.append((word, pos))
- last_word = word[:]
- if last_word == "不":
- new_seg.append((last_word, 'd'))
- last_word = ""
- return new_seg
-
- # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听"
- # function 2: merge single "一" and the word behind it
- # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error
- # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
- 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
- 0] == word and pos == "v":
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # the first and the second words are all_tone_three
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and self._all_tone_three(
- sub_finals_list[i - 1]) and self._all_tone_three(
- sub_finals_list[i]) and not merge_last[i - 1]:
- # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
- # the last char of first word and the first char of second word is tone_three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \
- merge_last[i - 1]:
- # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- try:
- seg = self._merge_yi(seg)
- except:
- print("_merge_yi failed")
- seg = self._merge_reduplication(seg)
- seg = self._merge_continuous_three_tones(seg)
- seg = self._merge_continuous_three_tones_2(seg)
- seg = self._merge_er(seg)
- return seg
-
- def modified_tone(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
- finals = self._bu_sandhi(word, finals)
- finals = self._yi_sandhi(word, finals)
- finals = self._neural_sandhi(word, pos, finals)
- finals = self._three_sandhi(word, finals)
- return finals
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_pipelines/crnn_pipeline.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_pipelines/crnn_pipeline.py
deleted file mode 100644
index 3173eac695d40ac95e9929896cf82c753624b073..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_pipelines/crnn_pipeline.py
+++ /dev/null
@@ -1,35 +0,0 @@
-img_norm_cfg = dict(mean=[127], std=[127])
-
-train_pipeline = [
- dict(type='LoadImageFromFile', color_type='grayscale'),
- dict(
- type='ResizeOCR',
- height=32,
- min_width=100,
- max_width=100,
- keep_aspect_ratio=False),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='DefaultFormatBundle'),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=['filename', 'resize_shape', 'text', 'valid_ratio']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile', color_type='grayscale'),
- dict(
- type='ResizeOCR',
- height=32,
- min_width=32,
- max_width=None,
- keep_aspect_ratio=True),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='DefaultFormatBundle'),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=[
- 'filename', 'resize_shape', 'valid_ratio', 'img_norm_cfg',
- 'ori_filename', 'img_shape', 'ori_shape'
- ]),
-]
diff --git a/spaces/dmeck/RVC-Speakers/speakers/server/model/result.py b/spaces/dmeck/RVC-Speakers/speakers/server/model/result.py
deleted file mode 100644
index f47fc1974e769b91290a04344c959b7e599372f5..0000000000000000000000000000000000000000
--- a/spaces/dmeck/RVC-Speakers/speakers/server/model/result.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from pydantic import BaseModel, Field
-
-from speakers.server.model.flow_data import PayLoad
-
-
-class BaseResponse(BaseModel):
- code: int = Field(200, description="HTTP status code")
- msg: str = Field("success", description="HTTP status message")
-
- class Config:
- schema_extra = {
- "example": {
- "code": 200,
- "msg": "success",
- }
- }
-
-
-class TaskRunnerResponse(BaseResponse):
- data: dict
-
-
-class TaskVoiceFlowInfo(BaseModel):
- task_id: str
- data: PayLoad
-
-
-class TaskInfoResponse(BaseResponse):
- data: TaskVoiceFlowInfo
-
- class Config:
- schema_extra = {
- "example": {
- "code": 200,
- "msg": "success",
- "data": None,
- }
- }
-
-
-class RunnerState(BaseModel):
- """RunnerState"""
- task_id: str
- runner_stat: str
- nonce: str
- state: str
- finished: bool = Field(default=False)
diff --git a/spaces/doluvor/faster-whisper-webui/docs/colab.md b/spaces/doluvor/faster-whisper-webui/docs/colab.md
deleted file mode 100644
index 3fcdb835327238764fb643b9bbd2e27b6e14f58c..0000000000000000000000000000000000000000
--- a/spaces/doluvor/faster-whisper-webui/docs/colab.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# Running Whisper on Google Colab
-
-If you don't have a decent GPU or any experience in running command-line applications, you might want to try this Google Colab instead:
-
-* [Google Colab - Whisper WebUI GPU](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing)
-* [Screenshots](https://imgur.com/a/ZfY6uBO)
-
-The runtime (Runtime -> Change runtime type -> Hardware accelerator) should already be set to GPU. But if not, change it to GPU.
-
-Then, sign in to Google if you haven't already. Next, click on "Connect" at the top right.
-
-Under "Checking out WebUI from Git", click on the [play icon](https://imgur.com/a/81gOLyD) that appears in "[ ]" at the left. If you get a warning, click "Run anyway".
-
-After this step has completed, it should get a green check mark. Then move on to the next section under "Installing dependencies", and click on "[ ]" again. This might take approximately 30 seconds.
-
-Once this has completed, scroll down to the "Run WebUI" section, and click on "[ ]". This will launch the WebUI in a shared link (expires in 72 hours). To open the UI, click on the link next to "Running on public URL", which will be something like https://12xxx.gradio.app/
-
-The audio length in this version is not restricted, and it will run much faster as it is backed by a GPU. You can also run it using the "Large" model. Also note that it might take some time to start the model the first time, as it may need to download a 2.8 GB file on Google's servers.
-
-Once you're done, you can close the WebUI session by clicking the animated close button under "Run WebUI". You can also do this if you encounter any errors and need to restart the UI. You should also go to "Manage Sessions" and terminate the session, otherwise you may end up using all your free compute credits.
\ No newline at end of file
diff --git a/spaces/duchaba/yml_hackathon_prompt_monty/app.py b/spaces/duchaba/yml_hackathon_prompt_monty/app.py
deleted file mode 100644
index 2e4f9efde7746522384b522ab47e73ae89f78088..0000000000000000000000000000000000000000
--- a/spaces/duchaba/yml_hackathon_prompt_monty/app.py
+++ /dev/null
@@ -1,233 +0,0 @@
-
-import torch
-# import pandas
-import gradio
-# import PIL
-import huggingface_hub
-import huggingface_hub.hf_api
-# import json
-# import requests
-import time
-import os
-import random
-import re
-import sys
-import threading
-import transformers
-# import accelerate
-class HFace_Pluto(object):
- #
- # initialize the object
- def __init__(self, name="Pluto",*args, **kwargs):
- super(HFace_Pluto, self).__init__(*args, **kwargs)
- self.author = "Duc Haba"
- self.name = name
- self._ph()
- self._pp("Hello from class", str(self.__class__) + " Class: " + str(self.__class__.__name__))
- self._pp("Code name", self.name)
- self._pp("Author is", self.author)
- self._ph()
- #
- # define class var for stable division
- self._device = 'cuda'
- self._steps = [3,8,21,55,89,144]
- self._guidances = [1.1,3.0,5.0,8.0,13.0,21.0]
- self._models = []
- self._seed = 667 # sum of walnut in ascii (or Angle 667)
- self._width = 512
- self._height = 512
- self._step = 50
- self._guidances = 7.5
- #self._generator = torch.Generator(device='cuda')
- self.pipes = []
- self.prompts = []
- self.images = []
- self.seeds = []
- self.fname_id = 0
- self.dname_img = "img_colab/"
- return
- #
- # pretty print output name-value line
- def _pp(self, a, b,is_print=True):
- # print("%34s : %s" % (str(a), str(b)))
- x = f'{"%34s" % str(a)} : {str(b)}'
- y = None
- if (is_print):
- print(x)
- else:
- y = x
- return y
- #
- # pretty print the header or footer lines
- def _ph(self,is_print=True):
- x = f'{"-"*34} : {"-"*34}'
- y = None
- if (is_print):
- print(x)
- else:
- y = x
- return y
- #
- # fetch huggingface file
- def fetch_hface_files(self,
- hf_names,
- hf_space="duchaba/skin_cancer_diagnose",
- local_dir="/content/"):
- f = str(hf_names) + " is not iteratable, type: " + str(type(hf_names))
- try:
- for f in hf_names:
- lo = local_dir + f
- huggingface_hub.hf_hub_download(repo_id=hf_space, filename=f,
- use_auth_token=True,repo_type=huggingface_hub.REPO_TYPE_SPACE,
- force_filename=lo)
- except:
- self._pp("*Error", f)
- return
- #
- #
- def push_hface_files(self,
- hf_names,
- hf_space="duchaba/skin_cancer_diagnose",
- local_dir="/content/"):
- f = str(hf_names) + " is not iteratable, type: " + str(type(hf_names))
- try:
- for f in hf_names:
- lo = local_dir + f
- huggingface_hub.upload_file(
- path_or_fileobj=lo,
- path_in_repo=f,
- repo_id=hf_space,
- repo_type=huggingface_hub.REPO_TYPE_SPACE)
- except Exception as e:
- self._pp("*Error", e)
- return
- #
- def write_file(self,fname, txt):
- f = open(fname, "w")
- f.writelines("\n".join(txt))
- f.close()
- return
- def draw_it(self,prompt):
- # url = 'lion.png'
- # img = PIL.Image.open(url)
- # return img
- return
- #
- def get_answer(self, resp, index=0):
- return resp.get('choices')[index].get('text')
- # print out the answer
- def print_answer(self, resp, index=0,is_print_json=False):
- # print('----------')
- # print('The Answer')
- # print('----------')
- # rdata = self.get_answer(resp, index)
- # #print(textwrap.fill(rdata, width=72, replace_whitespace=False))
- # print(rdata)
- # if (is_print_json):
- # print('----------')
- # print('JSON Response')
- # print('----------')
- # print(resp)
- return
- #
- def restart_script_periodically(self):
- while True:
- #random_time = random.randint(540, 600)
- random_time = random.randint(15800, 21600)
- time.sleep(random_time)
- os.execl(sys.executable, sys.executable, *sys.argv)
- return
- #
- #
- def print_gpu_info(self):
- self._ph()
- try:
- self._pp('Your GPU is the', torch.cuda.get_device_name(0))
- self._pp('GPU ready staus', torch.cuda.is_available())
- self._pp('GPU allocated RAM in GB', round(torch.cuda.memory_allocated(0)/1024**3,1))
- self._pp('GPU reserved RAM in GB', round(torch.cuda.memory_reserved(0)/1024**3,1))
- except Exception as e:
- self._pp('**Warning, No GPU', e)
- self._ph()
- return
- #
- def _login_hface(self):
- huggingface_hub.login("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", add_to_git_credential=True) # non-blocking login
- self._ph()
- return
- #
- def _print_version(self):
- print(f"{'torch: 2.0.1':<25} Actual: {torch.__version__}")
- print(f"{'transformers: 4.29.2':<25} Actual: {transformers.__version__}")
- print(f"{'huggingface_hub 0.14.1':<25} Actual: {huggingface_hub.__version__}")
- print(f"{'gradio: 3.32.0:':<25} Actual: {gradio.__version__}")
- self._ph()
- return
-# add module/method
-#
-import functools
-def add_method(cls):
- def decorator(func):
- @functools.wraps(func)
- def wrapper(*args, **kwargs):
- return func(*args, **kwargs)
- setattr(cls, func.__name__, wrapper)
- return func # returning func means func can still be used normally
- return decorator
-#
-monty = HFace_Pluto("Monty")
-monty._login_hface()
-monty._print_version()
-monty.print_gpu_info()
-
-@add_method(HFace_Pluto)
-def _say_it(self, in_text, max_resp=4):
- resp = []
- ptype = 'text-generation'
- pmodel = 'Gustavosta/MagicPrompt-Stable-Diffusion'
- ptoken='gpt2'
- if (len(self.pipes) == 0):
- self.pipes.append(transformers.pipeline(ptype, model=pmodel, tokenizer=ptoken))
- #
- seed = random.randint(100,100000)
- transformers.set_seed(seed)
- # print(seed)
- #
- xsize = random.randint(60,100)
- responses = self.pipes[0](in_text, max_length=xsize, num_return_sequences=max_resp)
- for x in responses:
- y = x['generated_text'].strip()
- y = re.sub(r'[^\x00-\x7F]+',' ', y) # remove any non printing char
- resp.append(y)
- return resp
-#
-@add_method(HFace_Pluto)
-def say_it(self, in_text, max_resp=4, return_type="Print String"):
- resp = self._say_it(in_text,max_resp=max_resp)
- if (return_type=='Print String'):
- ftext = ''
- for x, y in enumerate(resp):
- ftext += f'Option {x}:\n{y}\n\n'
- else:
- ftext=resp
- return ftext
-
-in_box = [gradio.Textbox(lines=1, label="Initial:", placeholder="Your intitial short prompt..."),
- gradio.Slider(1, 6, value=4, step=1,label="Prompts Return:"),
- gradio.Radio(["JSON Array", "Print String"],label="Return Variable Type:",value="Print String")]
-out_box = gradio.Textbox(lines=4, label="Four Generated Prompts")
-title = "Monty: YML Hackathon, Stable Diffusion Prompt Generator"
-desc = '*NOTE: Type in your initial prompt, and the model will generate the four descriptive prompts for Stable Diffusion image.'
-arti = '*NOTE: This model uses the Gustavosta/MagicPrompt-Stable-Diffusion base model on HuggingFace. The API link is at the bottom of the page.'
-exp = [['Flowers in spring',4,'Print String'],
- ['Bird in summer',1,'JSON'],
- ['Woman in autumn',5,'Print String'],
- ['Man in winter',4,'Print String']]
-
-gradio.Interface(fn=monty.say_it,
- inputs=in_box,
- outputs=out_box,
- examples=exp,
- title=title,
- description=desc,
- article=arti).launch()
diff --git a/spaces/duycse1603/math2tex/HybridViT/module/converter/builder.py b/spaces/duycse1603/math2tex/HybridViT/module/converter/builder.py
deleted file mode 100644
index 0466749d19d75576d1bde83523c3293f77e6e8d0..0000000000000000000000000000000000000000
--- a/spaces/duycse1603/math2tex/HybridViT/module/converter/builder.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .attn_converter import AttnLabelConverter
-
-def create_converter(config, device):
- if 'Attn' in config['Prediction']['name']:
- converter = AttnLabelConverter(config['character'], device)
- return converter
\ No newline at end of file
diff --git a/spaces/emc348/faces-through-time/configs/evaluation_config.py b/spaces/emc348/faces-through-time/configs/evaluation_config.py
deleted file mode 100644
index 16b621d4a47df9e25828c4235cf1692899d14d50..0000000000000000000000000000000000000000
--- a/spaces/emc348/faces-through-time/configs/evaluation_config.py
+++ /dev/null
@@ -1 +0,0 @@
-evaluated_methods = ['e4e', 'SG2', 'SG2Plus']
\ No newline at end of file
diff --git a/spaces/emc348/faces-through-time/models/e4e/latent_codes_pool.py b/spaces/emc348/faces-through-time/models/e4e/latent_codes_pool.py
deleted file mode 100644
index 0281d4b5e80f8eb26e824fa35b4f908dcb6634e6..0000000000000000000000000000000000000000
--- a/spaces/emc348/faces-through-time/models/e4e/latent_codes_pool.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import random
-import torch
-
-
-class LatentCodesPool:
- """This class implements latent codes buffer that stores previously generated w latent codes.
- This buffer enables us to update discriminators using a history of generated w's
- rather than the ones produced by the latest encoder.
- """
-
- def __init__(self, pool_size):
- """Initialize the ImagePool class
- Parameters:
- pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created
- """
- self.pool_size = pool_size
- if self.pool_size > 0: # create an empty pool
- self.num_ws = 0
- self.ws = []
-
- def query(self, ws):
- """Return w's from the pool.
- Parameters:
- ws: the latest generated w's from the generator
- Returns w's from the buffer.
- By 50/100, the buffer will return input w's.
- By 50/100, the buffer will return w's previously stored in the buffer,
- and insert the current w's to the buffer.
- """
- if self.pool_size == 0: # if the buffer size is 0, do nothing
- return ws
- return_ws = []
- for w in ws: # ws.shape: (batch, 512) or (batch, n_latent, 512)
- # w = torch.unsqueeze(image.data, 0)
- if w.ndim == 2:
- i = random.randint(0, len(w) - 1) # apply a random latent index as a candidate
- w = w[i]
- self.handle_w(w, return_ws)
- return_ws = torch.stack(return_ws, 0) # collect all the images and return
- return return_ws
-
- def handle_w(self, w, return_ws):
- if self.num_ws < self.pool_size: # if the buffer is not full; keep inserting current codes to the buffer
- self.num_ws = self.num_ws + 1
- self.ws.append(w)
- return_ws.append(w)
- else:
- p = random.uniform(0, 1)
- if p > 0.5: # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer
- random_id = random.randint(0, self.pool_size - 1) # randint is inclusive
- tmp = self.ws[random_id].clone()
- self.ws[random_id] = w
- return_ws.append(tmp)
- else: # by another 50% chance, the buffer will return the current image
- return_ws.append(w)
diff --git a/spaces/epexVfeibi/Imagedeblurr/03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar.md b/spaces/epexVfeibi/Imagedeblurr/03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar.md
deleted file mode 100644
index 8c37c9b4322e0a7eda93cf284c50a429ebef5de8..0000000000000000000000000000000000000000
--- a/spaces/epexVfeibi/Imagedeblurr/03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar.md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar: How to Download and Learn Multiple Languages with Muzzy BBC
-
-
If you are looking for a fun and effective way to learn multiple languages, you might want to try Muzzy BBC Nivel1, a multilingual course for kids and adults developed by the BBC. Muzzy BBC Nivel1 is a course that teaches you English, Spanish, French, German, Italian, and Portuguese using animated stories, songs, games, and exercises. You can download the course as a .rar file and install it on your computer or device. In this article, we will show you how to do that and what benefits you can get from learning with Muzzy BBC Nivel1.
-
03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar
What is 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar?
-
-
03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar is a compressed file that contains the Muzzy BBC Nivel1 course. The course consists of 12 DVDs that cover six languages: English, Spanish, French, German, Italian, and Portuguese. Each DVD contains four episodes of an animated story featuring Muzzy, a friendly green monster who loves to eat words and learn languages. The story is narrated in each language and introduces vocabulary, grammar, and culture in a natural and engaging way. The DVDs also include songs, games, and exercises that reinforce what you have learned.
-
-
How to Download and Install 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar?
-
-
To download and install 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar, you need to follow these steps:
-
-
-
Find a reliable source to download the file. You can use one of the links below or search for other sources on the internet.
-
Download the file to your computer or device. The file size is about 4 GB, so make sure you have enough space and a stable internet connection.
-
Extract the file using a software that can handle .rar files, such as WinRAR or 7-Zip. You will get a folder with 12 subfolders, each containing a DVD image file (.iso) and a text file (.txt).
-
Burn the DVD image files to blank DVDs using a software that can handle .iso files, such as ImgBurn or Nero. Alternatively, you can mount the DVD image files to virtual drives using a software such as Daemon Tools or Virtual CloneDrive.
-
Insert or mount the first DVD and run the setup.exe file. Follow the instructions on the screen to install the course on your computer or device.
-
Repeat the process for the remaining DVDs.
-
-
-
What are the Benefits of Learning with 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar?
-
-
Learning with 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar has many benefits, such as:
-
-
-
You can learn six languages at once or focus on one or more languages of your choice.
-
You can learn at your own pace and level, from beginner to intermediate.
-
You can learn in a fun and interactive way, using stories, songs, games, and exercises.
-
You can improve your listening, speaking, reading, and writing skills in each language.
-
You can learn about the culture and customs of different countries where the languages are spoken.
-
You can enjoy high-quality audio and video produced by the BBC.
-
-
-
Conclusion
-
-
In this article, we have shown you how to download and install 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar, a multilingual course for kids and adults developed by the BBC. We have also explained what the course is about and what benefits you can get from learning with it. We hope this guide was helpful and that you can enjoy learning multiple languages with Muzzy BBC Nivel1. If you have any questions or feedback, please leave a comment below.
-
How to Use 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar?
-
-
Once you have installed 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar on your computer or device, you can start using it to learn multiple languages. Here are some tips on how to use the course effectively:
-
-
-
Choose the language you want to learn from the main menu. You can switch languages at any time by clicking on the flag icon at the top right corner of the screen.
-
Watch the episodes of the animated story in your chosen language. You can pause, rewind, or fast-forward the video as you wish. You can also turn on or off the subtitles in your chosen language or in English.
-
Listen to the songs that accompany each episode. You can sing along with the lyrics that appear on the screen or just enjoy the music.
-
Play the games that test your comprehension and memory of what you have learned. You can choose from different types of games, such as matching, sorting, or filling in the blanks.
-
Do the exercises that practice your vocabulary, grammar, and pronunciation. You can choose from different types of exercises, such as multiple choice, drag and drop, or recording your voice.
-
Track your progress and achievements by checking your score and badges. You can also print out certificates of completion for each language.
-
-
-
What are the Reviews of 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar?
-
-
03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar has received many positive reviews from users who have tried it. Here are some of the comments they have left:
-
-
-
"I love this course! It's so fun and easy to learn with Muzzy. I have learned so much in a short time and I can speak with confidence."
-- Maria, Spain
-
-
-
-
"This is a great course for kids and adults alike. The stories are engaging and humorous, and the songs are catchy and memorable. The games and exercises are challenging and rewarding."
-- David, USA
-
-
-
-
"This is a wonderful way to learn multiple languages at once. The course covers all the basics and more, and it exposes you to different cultures and accents. I highly recommend it."
-
-- Anna, Germany
-
-
-
Conclusion
-
-
In this article, we have shown you how to download and install 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar, a multilingual course for kids and adults developed by the BBC. We have also explained what the course is about, what benefits you can get from learning with it, how to use it effectively, and what reviews it has received from users. We hope this guide was helpful and that you can enjoy learning multiple languages with Muzzy BBC Nivel1. If you have any questions or feedback, please leave a comment below.
-
Conclusion
-
-
In this article, we have shown you how to download and install 03-Curso Multilenguaje Muzzy De La BBC Nivel1 .rar, a multilingual course for kids and adults developed by the BBC. We have also explained what the course is about, what benefits you can get from learning with it, how to use it effectively, and what reviews it has received from users. We hope this guide was helpful and that you can enjoy learning multiple languages with Muzzy BBC Nivel1. If you have any questions or feedback, please leave a comment below.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/epexVfeibi/Imagedeblurr/3dsmax2015serialnumberandproductkey.md b/spaces/epexVfeibi/Imagedeblurr/3dsmax2015serialnumberandproductkey.md
deleted file mode 100644
index 13359f537216fe7cc1e56177d059942aba2f5473..0000000000000000000000000000000000000000
--- a/spaces/epexVfeibi/Imagedeblurr/3dsmax2015serialnumberandproductkey.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- d5da3c52bf
-
-
-
diff --git a/spaces/erbanku/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex b/spaces/erbanku/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex
deleted file mode 100644
index c82be6242cc9d26203360e90d3ac9184ef6ad842..0000000000000000000000000000000000000000
--- a/spaces/erbanku/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex
+++ /dev/null
@@ -1,155 +0,0 @@
-
-\begin{figure}
- \centering
- \includegraphics[scale=0.6]{Figures/ModalNet-21}
- \caption{The Transformer - model architecture.}
- \label{fig:model-arch}
-\end{figure}
-
-% Although the primary workhorse of our model is attention,
-%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail.
-
-Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next.
-
-The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively.
-
-\subsection{Encoder and Decoder Stacks}
-
-\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$.
-
-\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
-
-% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail.
-
-\subsection{Attention} \label{sec:attention}
-An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
-
-\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod}
-
-% \begin{figure}
-% \centering
-% \includegraphics[scale=0.6]{Figures/ModalNet-19}
-% \caption{Scaled Dot-Product Attention.}
-% \label{fig:multi-head-att}
-% \end{figure}
-
-We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.
-
-In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:
-
-\begin{equation}
- \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V
-\end{equation}
-
-The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
-
-%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients.
-
-% Already described in the subsequent section
-%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$.
-
-%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model.
-
-While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
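As a concrete sketch of the attention function just described, here is a minimal NumPy implementation of scaled dot-product attention; the optional mask argument anticipates the decoder-side masking discussed later, and the batching convention and numerical-stability trick are our own choices rather than anything specified in the text.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (..., n_q, d_k), K: (..., n_k, d_k), V: (..., n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ np.swapaxes(K, -1, -2) / np.sqrt(d_k)        # scaled dot products
    if mask is not None:
        # Illegal connections (mask == False) are pushed towards -inf before the softmax.
        scores = np.where(mask, scores, -1e9)
    scores = scores - scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V
```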
-
-
-%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$.
-
-
-\subsubsection{Multi-Head Attention} \label{sec:multihead}
-
-\begin{figure}
-\begin{minipage}[t]{0.5\textwidth}
- \centering
- Scaled Dot-Product Attention \\
- \vspace{0.5cm}
- \includegraphics[scale=0.6]{Figures/ModalNet-19}
-\end{minipage}
-\begin{minipage}[t]{0.5\textwidth}
- \centering
- Multi-Head Attention \\
- \vspace{0.1cm}
- \includegraphics[scale=0.6]{Figures/ModalNet-20}
-\end{minipage}
-
-
- % \centering
-
- \caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.}
- \label{fig:multi-head-att}
-\end{figure}
-
-Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively.
-On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}.
-
-Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
-
-\begin{align*}
- \mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\
-% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\
- \text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\
-\end{align*}
-
-Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$.
-
-
-%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation.
-
-In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$.
-Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
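Continuing the sketch above, multi-head attention can be written as follows. It reuses scaled_dot_product_attention from the earlier sketch; the per-head projection matrices are passed in as lists with shapes matching the dimensions given in the text, and this is an illustrative sketch rather than the reference implementation.

```python
import numpy as np

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o, mask=None):
    # W_q[i], W_k[i]: (d_model, d_k); W_v[i]: (d_model, d_v); W_o: (h*d_v, d_model).
    # With h = 8 and d_k = d_v = d_model / h = 64 this matches the configuration above.
    heads = [
        scaled_dot_product_attention(Q @ wq, K @ wk, V @ wv, mask)
        for wq, wk, wv in zip(W_q, W_k, W_v)
    ]
    return np.concatenate(heads, axis=-1) @ W_o   # concatenate the heads, then project
```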
-
-\subsubsection{Applications of Attention in our Model}
-
-The Transformer uses multi-head attention in three different ways:
-\begin{itemize}
- \item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}.
-
- \item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
-
- \item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}.
-
-\end{itemize}
-
-\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn}
-
-In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
-
-\begin{equation}
- \mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2
-\end{equation}
-
-While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$.
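A short NumPy sketch of this feed-forward network, with parameter shapes as described above; the naming is illustrative.

```python
import numpy as np

def position_wise_ffn(x, W_1, b_1, W_2, b_2):
    # x: (..., seq_len, d_model); W_1: (d_model, d_ff); W_2: (d_ff, d_model).
    # FFN(x) = max(0, x W_1 + b_1) W_2 + b_2, applied identically at every position.
    return np.maximum(0.0, x @ W_1 + b_1) @ W_2 + b_2
```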
-
-
-
-%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention.
-
-%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention.
-
-
-%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as
-%\begin{equation*} \label{eq:attention}
-% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq).
-%\end{equation*}
-%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$.
-
-%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$.
-%\marginpar{}
-
-\subsection{Embeddings and Softmax}
-Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$.
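A small sketch of this weight sharing, assuming a single NumPy matrix E of shape (vocab_size, d_model); the vocabulary size and the random initialization below are illustrative assumptions, not values from the text.

```python
import numpy as np

d_model, vocab_size = 512, 32000          # vocab_size is an illustrative choice
E = np.random.randn(vocab_size, d_model)  # shared weight matrix (randomly initialized here)

def embed(token_ids):
    # Embedding lookup, multiplied by sqrt(d_model) as described above.
    return E[token_ids] * np.sqrt(d_model)

def output_logits(decoder_output):
    # The pre-softmax linear transformation reuses the same matrix, transposed.
    return decoder_output @ E.T
```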
-
-
-\subsection{Positional Encoding}
-Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}.
-
-In this work, we use sine and cosine functions of different frequencies:
-
-\begin{align*}
-    PE_{(pos,2i)} &= \sin(pos / 10000^{2i/\dmodel}) \\
-    PE_{(pos,2i+1)} &= \cos(pos / 10000^{2i/\dmodel})
-\end{align*}
-
-where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
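The sinusoidal encoding above can be computed directly; a minimal NumPy sketch, assuming an even d_model:

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    positions = np.arange(max_len)[:, None]        # (max_len, 1)
    even_dims = np.arange(0, d_model, 2)[None, :]  # the 2i indices
    angles = positions / np.power(10000.0, even_dims / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)                   # assumes d_model is even
    return pe
```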
-
-We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
diff --git a/spaces/falterWliame/Face_Mask_Detection/Hoshi Wo Ou Kodomo 1080p Mega.md b/spaces/falterWliame/Face_Mask_Detection/Hoshi Wo Ou Kodomo 1080p Mega.md
deleted file mode 100644
index 9faea2d015a9e7e417efeffaff5c4e9856a23b34..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Hoshi Wo Ou Kodomo 1080p Mega.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Anime Movie. Genre: Adventure, Fantasy, Romance. Synopsis: Hoshi wo Ou Kodomo tells the story of ... 1fdad05405
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Pdc World Championship Darts Pro Tour Pc Torrent.md b/spaces/falterWliame/Face_Mask_Detection/Pdc World Championship Darts Pro Tour Pc Torrent.md
deleted file mode 100644
index a2b1beafd703b6b646cb3833f942442756eec1d7..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Pdc World Championship Darts Pro Tour Pc Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-PDC World Championship Darts Free Download for PC is a sports video game, published by Oxygen Interactive and developed and ... File Upload: Torrent ... 1fdad05405
-
-
-
diff --git a/spaces/fatiXbelha/sd/4x4 Jeep Offroad Car Driving An Amazing Simulation Game with Mod APK.md b/spaces/fatiXbelha/sd/4x4 Jeep Offroad Car Driving An Amazing Simulation Game with Mod APK.md
deleted file mode 100644
index 2ba14fd56a6be6821044f9cd079b82308823eb8c..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/4x4 Jeep Offroad Car Driving An Amazing Simulation Game with Mod APK.md
+++ /dev/null
@@ -1,138 +0,0 @@
-
-
4x4 Jeep Offroad Car Driving Mod APK AN1: A Review
-
If you are looking for a thrilling and realistic offroad driving game, you might want to check out 4x4 Jeep Offroad Car Driving Mod APK AN1. This is a modified version of the original game that gives you unlimited coins and access to all the jeeps and features. In this article, we will review the game and tell you how to download and install the mod apk on your Android device. We will also share some tips and tricks for playing the game and enjoying it to the fullest.
-
Introduction
-
What is 4x4 Jeep Offroad Car Driving Mod APK AN1?
-
4x4 Jeep Offroad Car Driving Mod APK AN1 is a simulation game that lets you drive your jeep through challenging terrain and obstacles in a big town. You can choose from a variety of different jeeps, each with their own unique abilities. You can also upgrade your jeeps with new parts to make them even more powerful. You can take on a variety of challenging missions, from delivering cargo to racing against the clock. You can also explore the open world map and discover new places and secrets.
There are many reasons why you should play 4x4 Jeep Offroad Car Driving Mod APK AN1. Here are some of them:
-
-
It is free to play and download.
-
It has stunning graphics and realistic physics that make you feel like you are right there in the driver's seat.
-
It has a variety of different jeeps to choose from, each with their own unique abilities.
-
It has upgradeable jeeps that let you customize your vehicle and make it more powerful.
-
It has a variety of challenging missions that test your skills and keep you entertained.
-
It has an open world map that lets you explore and discover new places and secrets.
-
It has a mod apk that gives you unlimited coins and access to all the jeeps and features.
-
-
Features of the game
-
Realistic offroad driving physics
-
One of the best features of 4x4 Jeep Offroad Car Driving Mod APK AN1 is its realistic offroad driving physics. The game simulates the real-life conditions of driving on rough terrain, such as mud, snow, water, rocks, hills, bridges, streams, and mountains. You will have to use your skills and judgment to navigate through these obstacles and avoid getting stuck or crashing. You will also have to deal with different weather effects, such as rain, fog, snow, and wind. The game also has a realistic damage system that affects your jeep's performance and appearance. You will have to use a car repair kit to fix your jeep if it gets damaged.
-
Stunning graphics
-
Another great feature of 4x4 Jeep Offroad Car Driving Mod APK AN1 is its stunning graphics. The game has rich, detailed graphics that create an immersive and beautiful environment for you to drive in. You will be amazed by the realistic textures, shadows, lighting, reflections, and animations that make the game look like a real-life offroad adventure. You will also enjoy the diverse and dynamic scenery that changes as you drive through different areas.
A variety of different jeeps to choose from
-
The game also offers you a variety of different jeeps to choose from, each with their own unique abilities and characteristics. You can choose from classic jeeps, modern jeeps, military jeeps, and even monster trucks. Each jeep has its own advantages and disadvantages, such as speed, power, durability, and handling. You can also see the stats of each jeep before you select it, such as engine, transmission, suspension, tires, and brakes. You can also compare the jeeps and see which one suits your style and preference.
-
Upgradeable jeeps
-
If you want to make your jeep even more powerful and customized, you can upgrade it with new parts and accessories. You can use the coins that you earn from completing missions or from the mod apk to buy new parts for your jeep. You can upgrade your engine, transmission, suspension, tires, brakes, and body. You can also buy new paint jobs, stickers, lights, horns, and other decorations for your jeep. Upgrading your jeep will not only improve its performance and appearance, but also unlock new missions and challenges for you to complete.
-
A variety of challenging missions
-
The game also has a variety of challenging missions that test your skills and keep you entertained. You can take on different types of missions, such as delivery missions, racing missions, stunt missions, rescue missions, and exploration missions. Each mission has its own objectives, time limit, difficulty level, and rewards. You will have to use your skills and judgment to complete the missions and earn coins and stars. You will also have to deal with different obstacles and enemies that try to stop you or slow you down. Some of the missions are:
-
-
Delivery missions: You have to deliver cargo or passengers to a specific destination within a given time limit.
-
Racing missions: You have to race against other jeeps or against the clock on a predefined track.
-
Stunt missions: You have to perform various stunts and tricks on ramps, loops, bridges, and other structures.
-
Rescue missions: You have to rescue people or animals that are trapped or in danger in different situations.
-
Exploration missions: You have to explore the open world map and find hidden items or locations.
-
-
Open world map to explore
-
The game also has an open world map that lets you explore and discover new places and secrets. The map is divided into different zones, such as city zone, forest zone, desert zone, mountain zone, and snow zone. Each zone has its own terrain, weather, scenery, landmarks, and secrets. You can drive freely on the map and enjoy the realistic environment. You can also find hidden items or locations that give you extra coins or rewards. You can also interact with other vehicles or objects on the map.
-
How to download and install the mod apk?
-
Step-by-step guide
-
If you want to download and install the mod apk of 4x4 Jeep Offroad Car Driving Mod APK AN1 on your Android device, you can follow these simple steps:
-
-
Go to the website where the mod apk is available for download. You can use this link: .
-
Click on the download button and wait for the file to be downloaded on your device.
-
Go to your device's settings and enable the installation of apps from unknown sources.
-
Go to your device's file manager and locate the downloaded file.
-
Tap on the file and follow the instructions to install the mod apk on your device.
-
Launch the game and enjoy unlimited coins and access to all the jeeps and features.
-
-
Benefits of the mod apk
-
The mod apk of 4x4 Jeep Offroad Car Driving Mod APK AN1 has many benefits that make it worth downloading and installing. Some of the benefits are:
-
-
-
You get unlimited coins that you can use to buy new parts and accessories for your jeep.
-
You get access to all the jeeps that are otherwise locked or require real money to unlock.
-
You get access to all the features that are otherwise restricted or require real money to activate.
-
You get a better gaming experience with more fun and excitement.
-
-
Risks of the mod apk
-
The mod apk of 4x4 Jeep Offroad Car Driving Mod APK AN1 also has some risks that you should be aware of before downloading and installing it. Some of the risks are:
-
-
You may face compatibility issues with your device or game version.
-
You may face security issues with your device or data.
-
You may face legal issues with the game developer or publisher.
-
You may face ethical issues with the game community or fans.
-
-
Therefore, you should download and install the mod apk at your own risk and discretion. We are not responsible for any consequences that may arise from using the mod apk.
-
Tips and tricks for playing the game
-
How to master the offroad terrain
-
One of the most important skills that you need to master in 4x4 Jeep Offroad Car Driving Mod APK AN1 is how to drive on the offroad terrain. Here are some tips and tricks that can help you:
-
-
Use the right jeep for the right terrain. Different jeeps have different abilities and characteristics that make them suitable for different terrains. For example, a monster truck may be good for driving on rocks and hills, but not for driving on snow and water. You can check the stats of each jeep before you select it.
-
Use the right gear for the right speed. Different gears have different effects on your jeep's speed and power. For example, a low gear may be good for driving on steep slopes or rough terrain, but not for driving on flat roads or high speeds. You can change your gear by using the buttons on the screen.
-
Use the brake and handbrake wisely. Braking and handbraking can help you control your jeep's speed and direction. For example, braking can help you slow down or stop your jeep, while handbraking can help you drift or turn your jeep. You can use the brake and handbrake by using the pedals on the screen.
-
Use the camera angles to your advantage. The game offers you different camera angles that let you see your jeep from different perspectives. For example, you can use the first-person camera to see your jeep from the driver's seat, or you can use the third-person camera to see your jeep from behind or above. You can change your camera angle by using the buttons on the screen.
-
-
How to earn coins and rewards
-
Another important skill that you need to master in 4x4 Jeep Offroad Car Driving Mod APK AN1 is how to earn coins and rewards. Coins and rewards are useful for buying new parts and accessories for your jeep, as well as unlocking new jeeps and features. Here are some tips and tricks that can help you:
-
-
Complete missions and challenges. Completing missions and challenges is one of the main ways to earn coins and rewards in the game. Each mission and challenge has its own objectives, time limit, difficulty level, and rewards. You will have to use your skills and judgment to complete them and earn coins and stars.
-
Find hidden items or locations. Finding hidden items or locations is another way to earn coins and rewards in the game. The game has many hidden items or locations that are scattered around the map. Some of them are easy to find, while others are hard to find. You will have to explore and discover them to earn extra coins or rewards.
-
Use the mod apk. Using the mod apk is another way to earn coins and rewards in the game. The mod apk gives you unlimited coins and access to all the jeeps and features in the game. You can use these coins to buy new parts and accessories for your jeep, as well as unlock new jeeps and features.
-
-
How to customize your jeep
-
The last skill that you need to master in 4x4 Jeep Offroad Car Driving Mod APK AN1 is how to customize your jeep. Customizing your jeep can make it more powerful and personalized, as well as unlock new missions and challenges for you to complete. Here are some tips and tricks that can help you:
-
-
Upgrade your jeep with new parts and accessories. Upgrading your jeep with new parts and accessories is one of the main ways to customize your jeep in the game. You can use the coins that you earn from completing missions or from the mod apk to buy new parts for your jeep. You can upgrade your engine, transmission, suspension, tires, brakes, and body.
-
Buy new paint jobs, stickers, lights, horns, and other decorations for your jeep. Buying new paint jobs, stickers, lights, horns, and other decorations for your jeep is another way to customize your jeep in the game. You can use the coins that you earn from completing missions or from the mod apk to buy new decorations for your jeep. You can also change the color of your jeep by using a color picker.
-
-
Conclusion
-
Summary of the main points
-
4x4 Jeep Offroad Car Driving Mod APK AN1 is a simulation game that lets you drive your jeep through challenging terrain and obstacles in a big town. The game has realistic offroad driving physics, stunning graphics, a variety of different jeeps to choose from, upgradeable jeeps, a variety of challenging missions, and an open world map to explore. The game also has a mod apk that gives you unlimited coins and access to all the jeeps and features. The game is free to play and download, and you can follow the step-by-step guide to install the mod apk on your Android device. The game also has some tips and tricks that can help you master the offroad terrain, earn coins and rewards, and customize your jeep.
-
Recommendation and rating
-
We recommend 4x4 Jeep Offroad Car Driving Mod APK AN1 to anyone who loves offroad driving games and wants to have a thrilling and realistic experience. The game is fun, exciting, challenging, and addictive. You will never get bored of driving your jeep on the rough terrain and completing the missions. You will also enjoy the mod apk that gives you unlimited coins and access to all the jeeps and features. The game is suitable for all ages and skill levels, and you can play it anytime and anywhere. We give 4x4 Jeep Offroad Car Driving Mod APK AN1 a rating of 4.5 out of 5 stars.
-
FAQs
-
Here are some of the frequently asked questions about 4x4 Jeep Offroad Car Driving Mod APK AN1:
-
-
Q: Is 4x4 Jeep Offroad Car Driving Mod APK AN1 safe to download and install?
-
A: Yes, 4x4 Jeep Offroad Car Driving Mod APK AN1 is safe to download and install, as long as you use a trusted source and follow the instructions carefully. However, you should always be careful when downloading and installing any mod apk, as there may be some risks involved.
-
Q: What are the requirements for playing 4x4 Jeep Offroad Car Driving Mod APK AN1?
-
A: You need an Android device with Android 4.1 or higher, at least 100 MB of free storage space, and a stable internet connection.
-
Q: How can I contact the developer or publisher of 4x4 Jeep Offroad Car Driving Mod APK AN1?
-
A: You can contact the developer or publisher of 4x4 Jeep Offroad Car Driving Mod APK AN1 by using their email address: . You can also visit their website: or their Facebook page: .
-
Q: How can I support the developer or publisher of 4x4 Jeep Offroad Car Driving Mod APK AN1?
-
A: You can support the developer or publisher of 4x4 Jeep Offroad Car Driving Mod APK AN1 by rating and reviewing the game on the Google Play Store, sharing the game with your friends and family, and following their social media accounts.
-
Q: How can I get more coins and rewards in 4x4 Jeep Offroad Car Driving Mod APK AN1?
-
A: You can get more coins and rewards in 4x4 Jeep Offroad Car Driving Mod APK AN1 by completing missions and challenges, finding hidden items or locations, using the mod apk, or watching ads.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Adventure Bay - Paradise Farm APK Mod Download and Enjoy Unlimited Money and Fun.md b/spaces/fatiXbelha/sd/Adventure Bay - Paradise Farm APK Mod Download and Enjoy Unlimited Money and Fun.md
deleted file mode 100644
index c00333e4e656986d1a2a538420824cbc20e4b711..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Adventure Bay - Paradise Farm APK Mod Download and Enjoy Unlimited Money and Fun.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Adventure Bay - Paradise Farm Mod APK: A Fun and Relaxing Farming Game
-
If you are looking for a casual and enjoyable farming game, you might want to check out Adventure Bay - Paradise Farm Mod APK. This is a modded version of the original game that gives you unlimited money, no ads, and free shopping. You can create your own dream farm on a beautiful island, grow crops, raise animals, trade with other players, and explore the surrounding areas. In this article, we will tell you what is Adventure Bay - Paradise Farm Mod APK, how to download and install it, and why you should play it.
Adventure Bay - Paradise Farm Mod APK is a modified version of the popular simulation game Adventure Bay - Paradise Farm. The game was developed by Ubisoft Entertainment and released in 2020. It has over 1 million downloads on Google Play Store and a 4.5-star rating. The game lets you build your own farm on a tropical island, where you can grow various crops, fruits, flowers, and trees. You can also raise different animals, such as chickens, cows, pigs, horses, and more. You can sell your products to other players or use them to craft items and recipes. You can also decorate your farm with buildings, fences, paths, and furniture. The game has a relaxing and colorful graphics style, as well as a soothing soundtrack. You can also interact with other players and join clubs to chat, trade, and compete. The game also has a story mode, where you can meet different characters and complete quests.
-
Features of Adventure Bay - Paradise Farm Mod APK
-
The modded version of Adventure Bay - Paradise Farm has some extra features that make the game more fun and easy to play. Here are some of them:
-
Unlimited money
-
With this feature, you will never run out of money in the game. You can use it to buy anything you want, such as seeds, animals, buildings, decorations, and more. You can also upgrade your farm faster and unlock new items and areas.
-
-
No ads
-
With this feature, you will not see any annoying ads in the game. You can enjoy the game without any interruptions or distractions.
-
Free shopping
-
With this feature, you can buy anything in the game without spending any money. You can get unlimited resources, such as coins, gems, energy, and more. You can also get premium items for free.
-
How to download and install Adventure Bay - Paradise Farm Mod APK?
-
If you want to download and install Adventure Bay - Paradise Farm Mod APK on your Android device, you need to follow these simple steps:
-
Download the APK file from a trusted source
-
You can download the APK file from [Apkloli](^1^), a website that offers many popular simulation games. You can get the free download of Adventure Bay - Paradise Farm Mod APK version 0.48.24 here. The file size is about 100 MB.
-
Enable unknown sources on your device
-
Before you can install the APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than Google Play Store. To do this, - Go to your device settings and tap on security. - Find the option that says "unknown sources" and toggle it on. - You may see a warning message that says installing apps from unknown sources may harm your device. Tap on OK to proceed.
-
Install the APK file and enjoy the game
-
Once you have enabled unknown sources, you can install the APK file by following these steps: - Locate the APK file on your device storage and tap on it. - You may see a pop-up window that asks you to confirm the installation. Tap on install and wait for the process to finish. - After the installation is complete, you can open the game and start playing.
-
Tips and tricks for playing Adventure Bay - Paradise Farm Mod APK
-
Adventure Bay - Paradise Farm Mod APK is a fun and relaxing game, but it can also be challenging at times. Here are some tips and tricks that can help you play the game better:
-
Complete quests and achievements
-
The game has a story mode, where you can meet different characters and complete quests for them. These quests will give you rewards, such as coins, gems, items, and experience points. You can also earn achievements by completing certain tasks, such as harvesting crops, crafting items, or visiting other players' farms. Achievements will also give you rewards and unlock new features.
-
Upgrade your buildings and crops
-
As you progress in the game, you will be able to upgrade your buildings and crops. Upgrading your buildings will increase their capacity, productivity, and appearance. Upgrading your crops will make them grow faster, yield more products, and resist pests and diseases. You can use coins, gems, or items to upgrade your buildings and crops.
-
Interact with other players and animals
-
The game is not only about farming, but also about socializing. You can interact with other players by joining clubs, chatting, trading, and helping each other. You can also visit other players' farms and rate them. You can also interact with animals by feeding, petting, or playing with them. Animals will give you products, such as eggs, milk, or wool. They will also show their affection by giving you hearts.
-
Why should you play Adventure Bay - Paradise Farm Mod APK?
-
Adventure Bay - Paradise Farm Mod APK is a game that offers many benefits for its players. Here are some of them:
-
Pros and cons of Adventure Bay - Paradise Farm Mod APK
-
Pros
-
-
The game is free to download and play.
-
The game has unlimited money, no ads, and free shopping features.
-
The game has a relaxing and colorful graphics style.
-
The game has a soothing soundtrack.
-
The game has a variety of activities to do.
-
The game has a friendly and supportive community.
-
-
Cons
-
-
The game may not work on some devices or regions.
-
The game may have some bugs or glitches.
-
The game may require an internet connection to play.
-
The game may be addictive or time-consuming.
-
-
Conclusion
-
Adventure Bay - Paradise Farm Mod APK is a fun and relaxing farming game that lets you create your own dream farm on a beautiful island. You can grow crops, raise animals, trade with other players, and explore the surrounding areas. You can also enjoy the unlimited money, no ads, and free shopping features of the modded version. The game has a relaxing and colorful graphics style, as well as a soothing soundtrack. The game also has a story mode, where you can meet different characters and complete quests. The game is free to download and play, but it may not work on some devices or regions. It may also have some bugs or glitches. The game may require an internet connection to play, and it may be addictive or time-consuming. However, if you are looking for a casual and enjoyable farming game, you might want to give Adventure Bay - Paradise Farm Mod APK a try.
- FAQs
Q: What is the difference between Adventure Bay - Paradise Farm Mod APK and the original game?
A: The modded version of Adventure Bay - Paradise Farm has some extra features that make the game more fun and easy to play. These features include unlimited money, no ads, and free shopping.
Q: Is Adventure Bay - Paradise Farm Mod APK safe to download and install?
A: Yes, Adventure Bay - Paradise Farm Mod APK is safe to download and install if you get it from a trusted source like Apkloli. However, you should always be careful when downloading apps from unknown sources.
Q: How can I update Adventure Bay - Paradise Farm Mod APK?
A: You can update Adventure Bay - Paradise Farm Mod APK by downloading the latest version of the APK file from Apkloli or other trusted sources. You can also check for updates within the game settings.
Q: Can I play Adventure Bay - Paradise Farm Mod APK offline?
A: No, Adventure Bay - Paradise Farm Mod APK requires an internet connection to play. You need to connect to the server to access your farm, interact with other players, and complete quests.
Q: Can I sync my progress in Adventure Bay - Paradise Farm Mod APK with other devices?
A: Yes, you can sync your progress in Adventure Bay - Paradise Farm Mod APK with other devices by logging in with your Facebook account. You can also back up your data to the cloud and restore it if needed.
Q: What are the best crops and animals to grow in Adventure Bay - Paradise Farm Mod APK?
A: The best crops and animals to grow in Adventure Bay - Paradise Farm Mod APK depend on your preferences and goals. However, some of the most profitable and popular ones are wheat, corn, tomatoes, strawberries, apples, bananas, pineapples, coconuts, chickens, cows, pigs, horses, and sheep.
Q: How can I get more gems in Adventure Bay - Paradise Farm Mod APK?
A: You can get more gems in Adventure Bay - Paradise Farm Mod APK by completing achievements, leveling up, watching videos, or buying them with real money. You can also use the free shopping feature of the modded version to get unlimited gems.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Drakor The Pirates The Last Royal Treasure - A Masterpiece of Korean Cinema.md b/spaces/fatiXbelha/sd/Download Drakor The Pirates The Last Royal Treasure - A Masterpiece of Korean Cinema.md
deleted file mode 100644
index f81abe8e7bb4528ae5fd55a4dc5759baa8a57795..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Drakor The Pirates The Last Royal Treasure - A Masterpiece of Korean Cinema.md
+++ /dev/null
@@ -1,65 +0,0 @@
-
-
You can structure the article with HTML headings from <h1> to <h6>, where <h1> is the most important and <h6> is the least important. You should use only one <h1> tag per page, which is usually the title of your article. You should also use headings in a logical order, from <h1> to <h6>, depending on the level of importance and hierarchy of each section. For example, you could use these headings for your article:
How to Download Drakor The Pirates: The Last Royal Treasure
What is Drakor The Pirates: The Last Royal Treasure?
The Plot
The Cast
The Reviews
Where and How to Watch Drakor The Pirates: The Last Royal Treasure?
Netflix
How to Sign Up for Netflix
How to Download Movies from Netflix
Other Streaming Platforms
Viu
iQiyi
Kocowa
How to Download Drakor The Pirates: The Last Royal Treasure Legally and Safely?
The Risks of Illegal Downloads
The Benefits of Legal Downloads
The Best Sites to Download Korean Movies Legally
Another SEO element that you can use in your article is tables. Tables are structured sets of data made up of rows and columns. They help you display information in a clear and organized way, especially when you have numerical or comparative data. Tables are defined with HTML tags such as <table>, <tr>, <th>, and <td>. You can use tables to present data such as:
The box office performance of Drakor The Pirates: The Last Royal Treasure
The ratings and awards of Drakor The Pirates: The Last Royal Treasure
The differences and similarities between Drakor The Pirates: The Last Royal Treasure and The Pirates
For example, you could use this table to compare the two movies:
A Comparison Between Drakor The Pirates: The Last Royal Treasure and The Pirates
| | The Pirates: The Last Royal Treasure | The Pirates |
| Release Date | January 26, 2022 | |
How to Download Drakor The Pirates: The Last Royal Treasure
- Do you love Korean movies? Do you enjoy action comedy and historical adventure? Do you want to watch the latest blockbuster from South Korea that has been praised by critics and audiences alike? If you answered yes to any of these questions, then you might be interested in downloading Drakor The Pirates: The Last Royal Treasure, a 2022 movie that is available on Netflix. This movie is a sequel to the 2014 hit The Pirates, and it follows the adventures of a group of pirates and bandits who search for the lost royal gold in the Joseon era. In this article, we will tell you everything you need to know about Drakor The Pirates: The Last Royal Treasure, including: - What is the movie about and who are the main actors? - Where and how can you watch the movie online or offline? - How can you download the movie legally and safely? By the end of this article, you will have all the information you need to enjoy this amazing Korean movie. So, let's get started!
What is Drakor The Pirates: The Last Royal Treasure?
- Drakor The Pirates: The Last Royal Treasure is a South Korean movie that was released on January 26, 2022. It is an action comedy that combines historical elements with modern humor and spectacular stunts. It is directed by Kim Jung-hoon, who also directed the first movie, and it stars Kang Ha-neul, Han Hyo-joo, Lee Kwang-soo, and Kwone Sang-woo.
The Plot
- The movie is set in 1623, during the reign of King Injo of Joseon. The king decides to move the royal treasury to a safer location, but his plan is foiled by a mysterious group of thieves who steal the gold and disappear. The king then orders his best warriors to find and recover the gold, but they are not the only ones who are after it. A band of pirates led by Jang Sa-jung (Kang Ha-neul) and a group of bandits led by Hwang Seon-ri (Han Hyo-joo) also join the hunt for the treasure, each with their own motives and secrets. Along the way, they encounter various obstacles and enemies, such as corrupt officials, foreign invaders, and rival factions. They also discover that the treasure is not just gold, but something more valuable and dangerous. Will they be able to find the treasure and survive the adventure? Will they be able to overcome their differences and work together? Will they be able to uncover the truth behind the theft and the treasure? You will have to watch the movie to find out!
The Cast
- Drakor The Pirates: The Last Royal Treasure boasts an impressive cast of talented actors who bring their characters to life with charisma and charm. Here are some of the main actors and their roles in the movie: - Kang Ha-neul as Jang Sa-jung: He is the leader of the pirates who has a strong sense of justice and loyalty. He is brave, smart, and witty, but he also has a soft spot for his crew and his love interest. - Han Hyo-joo as Hwang Seon-ri: She is the leader of the bandits who has a mysterious past and a hidden agenda. She is fierce, cunning, and beautiful, but she also has a vulnerable side that she rarely shows. - Lee Kwang-soo as Chul-bong: He is Jang Sa-jung's right-hand man who is loyal, funny, and clumsy. He often provides comic relief with his antics and expressions. - Kwone Sang-woo as Park Moo-hak: He is one of the king's warriors who is assigned to find the treasure. He is skilled, honorable, and determined, but he also has a rivalry with Jang Sa-jung that goes back to their childhood. Other notable actors include Kim Sung-oh as Lee Joong-gi, another king's warrior who is Park Moo-hak's partner; Kim Min-seok as Kim Jae-hyun, a young thief who joins Hwang Seon-ri's group; Park Ji-hwan as Oh Man-ho, a corrupt official who is involved in the theft; Choi Yu-hwa as So-mi, Jang Sa-jung's love interest; Lee Jung-hyun as Mal-nyeon, Hwang Seon-ri's friend; Kim Ji-suk as Prince Soo-yang, King Injo's brother who plots against him; Lee Il-hwa as Queen Inmok, King Injo's wife who supports him; and Kim Myung-gon as King Injo himself.
The Reviews
- Drakor The Pirates: The Last Royal Treasure has received mostly positive reviews from critics and audiences, who praised its witty and exciting plot, its charismatic and funny cast, its spectacular action scenes, and its impressive production values. The movie has a rating of 6.1 out of 10 on IMDb, 7.9 out of 10 on MyDramaList, and 4.5 out of 5 on Netflix. Here are some of the reviews from different sources: - "A fun movie to watch with the whole family." - angdebruin, IMDb user - "The Pirates: The Last Royal Treasure shares the same overarching themes as its predecessor — of treasure hunting, power hungry villains, as well as pirates and bandits joining forces — but feels like it could easily stand alone. Syringed with a more blatant, maximalist sense of humor than the 2014 original, it’s a swift, often endearing watch, one which vitally keeps audiences engaged over the meaty near 130-minute runtime." - Nathan Sartain, Ready Steady Cut - "The Pirates: The Last Royal Treasure is a 2022 South Korean period adventure film directed by Kim Jeong-hoon and starring Kang Ha-neul and Han Hyo-joo. A spiritual sequel of 2014 film The Pirates, the film is about adventures of pirates who gather in the sea and search for the royal treasures that have disappeared without a trace." - Wikipedia - "Stream It Or Skip It: ‘The Pirates: The Last Royal Treasure’ on Netflix, A Fun Period Swashbuckler From Korea. The Pirates: The Last Royal Treasure has it all, and a charismatic cast to boot." - Johnny Loftus, Decider As you can see, the movie has been well-received by both fans and critics, who enjoyed its blend of comedy, action, history, and romance. If you are looking for something to watch that will give you a good laugh, a thrilling adventure, and a dose of Korean culture, then you might want to download Drakor The Pirates: The Last Royal Treasure.
Where and How to Watch Drakor The Pirates: The Last Royal Treasure?
- Now that you know what the movie is about and how good it is, you might be wondering where and how you can watch it. Well, there are several options available for you to enjoy this movie online or offline.
Netflix
- One of the easiest and most convenient ways to watch Drakor The Pirates: The Last Royal Treasure is on Netflix. Netflix is a streaming service that offers a wide range of movies, TV shows, documentaries, and original content for a monthly fee. You can watch Netflix on your computer, smartphone, tablet, smart TV, or other devices that support the app. To watch Drakor The Pirates: The Last Royal Treasure on Netflix, you need to have an active subscription and an internet connection. You can choose from different plans depending on your needs and preferences. Here are the current plans and prices for Netflix in the US: - Basic: $8.99 per month. You can watch on one screen at a time in standard definition (SD). You can download videos on one phone or tablet. - Standard: $13.99 per month. You can watch on two screens at a time in high definition (HD). You can download videos on two phones or tablets. - Premium: $17.99 per month. You can watch on four screens at a time in ultra high definition (UHD) or high dynamic range (HDR). You can download videos on four phones or tablets.
How to Sign Up for Netflix
- If you don't have a Netflix account yet, you can sign up for one by following these steps: - Go to netflix.com or open the Netflix app on your device. - Click or tap on the "Sign Up" button. - Choose the plan that suits you best. - Enter your email address and create a password. - Enter your payment details. You can use a credit card, debit card, PayPal, or Netflix gift card. - Start your free trial. You can cancel anytime before the trial ends if you don't want to continue.
How to Download Movies from Netflix
- If you want to watch Drakor The Pirates: The Last Royal Treasure offline, you can download it from Netflix to your device. However, not all movies and shows are available for download, and there are some limitations on how long you can keep them and how many you can download at a time. To download movies from Netflix, you need to have an active subscription, an internet connection, and a compatible device. Here are the steps to download movies from Netflix:
Other Streaming Platforms
- Netflix is not the only streaming platform that offers Drakor The Pirates: The Last Royal Treasure. There are other options that you can try, depending on your location, preference, and budget. Here are some of them:
Viu
- Viu is a streaming service that specializes in Asian content, especially Korean dramas and movies. You can watch Viu on your computer, smartphone, tablet, smart TV, or other devices that support the app. To watch Drakor The Pirates: The Last Royal Treasure on Viu, you need to have an active subscription and an internet connection. You can choose from different plans depending on your region and currency. Here are the current plans and prices for Viu in the US: - Basic: Free. You can watch selected content with ads and limited features. - Premium: $4.99 per month or $49.99 per year. You can watch all content without ads and with unlimited downloads, HD quality, and priority viewing.
How to Sign Up for Viu
- If you don't have a Viu account yet, you can sign up for one by following these steps: - Go to viu.com or open the Viu app on your device. - Click or tap on the "Sign Up" button. - Choose your preferred method of signing up. You can use your email address, Facebook account, Google account, or Apple ID. - Enter your details and create a password. - Start your free trial or choose your plan. You can cancel anytime before the trial ends if you don't want to continue.
How to Download Movies from Viu
- If you want to watch Drakor The Pirates: The Last Royal Treasure offline, you can download it from Viu to your device. However, you need to have a premium subscription to do so, and there are some limitations on how long you can keep them and how many you can download at a time. To download movies from Viu, you need to have an active subscription, an internet connection, and a compatible device. Here are the steps to download movies from Viu: - Open the Viu app on your device. - Search for Drakor The Pirates: The Last Royal Treasure and select it from the results. - Tap on the "Download" button next to the movie title. You can also tap on the "Downloads" icon at the bottom of the screen to see all your downloaded movies and shows. - Wait for the download to finish. You can check the progress and pause or resume the download as needed. - Enjoy watching the movie offline. You can access your downloaded movies and shows from the "Downloads" section of the app. You can also delete them when you are done or when they expire.
iQiyi
- iQiyi is a streaming service that offers a variety of content from China, Korea, Japan, Taiwan, Thailand, and other countries. You can watch iQiyi on your computer, smartphone, tablet, smart TV, or other devices that support the app. To watch Drakor The Pirates: The Last Royal Treasure on iQiyi, you need to have an active subscription and an internet connection. You can choose from different plans depending on your region and currency. Here are the current plans and prices for iQiyi in the US: - Standard: $5.99 per month or $58.99 per year. You can watch all content without ads and with HD quality. - VIP: $8.99 per month or $88.99 per year. You can watch all content without ads and with HD quality, plus exclusive content, priority viewing, offline viewing, multiple screens, and more.
How to Sign Up for iQiyi
- If you don't have an iQiyi account yet, you can sign up for one by following these steps: - Go to iq.com or open the iQiyi app on your device. - Click or tap on the "Sign Up" button. - Choose your preferred method of signing up. You can use your email address, phone number, Facebook account, Google account, Twitter account, or Apple ID. - Enter your details and create a password.
How to Download Movies from iQiyi
- If you want to watch Drakor The Pirates: The Last Royal Treasure offline, you can download it from iQiyi to your device. However, you need to have a VIP subscription to do so, and there are some limitations on how long you can keep them and how many you can download at a time. To download movies from iQiyi, you need to have an active subscription, an internet connection, and a compatible device. Here are the steps to download movies from iQiyi: - Open the iQiyi app on your device. - Search for Drakor The Pirates: The Last Royal Treasure and select it from the results. - Tap on the "Download" button next to the movie title. You can also tap on the "Downloads" icon at the bottom of the screen to see all your downloaded movies and shows. - Wait for the download to finish. You can check the progress and pause or resume the download as needed. - Enjoy watching the movie offline. You can access your downloaded movies and shows from the "Downloads" section of the app. You can also delete them when you are done or when they expire.
Kocowa
- Kocowa is a streaming service that offers premium Korean content, such as dramas, movies, variety shows, and music. You can watch Kocowa on your computer, smartphone, tablet, smart TV, or other devices that support the app. To watch Drakor The Pirates: The Last Royal Treasure on Kocowa, you need to have an active subscription and an internet connection. You can choose from different plans depending on your region and currency. Here are the current plans and prices for Kocowa in the US: - Daily: $0.99 per day. You can watch all content without ads for 24 hours. - Monthly: $6.99 per month. You can watch all content without ads for 30 days. - Annual: $69.99 per year. You can watch all content without ads for 365 days.
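If you are unsure which of these plans works out cheaper for you, a quick back-of-the-envelope calculation settles it. The sketch below is illustrative only: it simply reuses the Kocowa prices quoted above, which may change at any time, so verify current pricing on the service's own site before subscribing.

```python
# Compare the Kocowa plans quoted above (daily vs monthly vs annual).
# Prices are taken from the text above and may be out of date.
daily_price = 0.99     # USD per day
monthly_price = 6.99   # USD per month
annual_price = 69.99   # USD per year

cost_month_daily = 30 * daily_price      # a month paid day by day
cost_year_monthly = 12 * monthly_price   # a year paid month by month

print(f"30 daily passes:  ${cost_month_daily:.2f} vs monthly plan ${monthly_price:.2f}")
print(f"12 monthly plans: ${cost_year_monthly:.2f} vs annual plan ${annual_price:.2f}")
print(f"Annual plan saves about ${cost_year_monthly - annual_price:.2f} per year")
```

The same comparison applies to the other services in this article: multiply the monthly price by twelve and set it against the annual price before you commit.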
How to Sign Up for Kocowa
- If you don't have a Kocowa account yet, you can sign up for one by following these steps: - Go to kocowa.com or open the Kocowa app on your device. - Click or tap on the "Sign Up" button. - Choose your preferred method of signing up. You can use your email address, Facebook account, Google account, or Apple ID. - Enter your details and create a password. - Start your free trial or choose your plan. You can cancel anytime before the trial ends if you don't want to continue.
How to Download Movies from Kocowa
- If you want to watch Drakor The Pirates: The Last Royal Treasure offline, you can download it from Kocowa to your device. However, you need to have a paid subscription to do so, and there are some limitations on how long you can keep them and how many you can download at a time. To download movies from Kocowa, you need to have an active subscription, an internet connection, and a compatible device. Here are the steps to download movies from Kocowa: - Open the Kocowa app on your device. - Search for Drakor The Pirates: The Last Royal Treasure and select it from the results. - Tap on the "Download" button next to the movie title. You can also tap on the "Downloads" icon at the bottom of the screen to see all your downloaded movies and shows. - Wait for the download to finish. You can check the progress and pause or resume the download as needed. - Enjoy watching the movie offline. You can access your downloaded movies and shows from the "Downloads" section of the app. You can also delete them when you are done or when they expire.
How to Download Drakor The Pirates: The Last Royal Treasure Legally and Safely?
- As you can see, there are many streaming platforms that offer Drakor The Pirates: The Last Royal Treasure for online or offline viewing. However, not all of them are legal and safe. Some of them may contain viruses, malware, spyware, or other harmful software that could damage your device or compromise your privacy. Therefore, it is important that you only download Drakor The Pirates: The Last Royal Treasure from trusted and reputable sources that have proper licenses and permissions to distribute the movie. This way, you can avoid any legal issues or security risks that could arise from illegal downloads.
The Risks of Illegal Downloads
- Illegal downloads are downloads that violate the intellectual property rights of the creators or owners of the content. They are usually done through torrent sites, file-sharing platforms, or unauthorized websites that offer free or cheap downloads of movies, music, games, software, or other digital products. Illegal downloads are risky for several reasons, such as: - They are illegal. Downloading or distributing copyrighted content without permission is a crime that could result in fines, lawsuits, or even jail time. You could also face legal action from the content owners or their representatives if they find out that you have infringed their rights. - They are unsafe. Illegal downloads may contain viruses, malware, spyware, or other harmful software that could damage your device or compromise your privacy. You could also expose your personal information, such as your IP address, location, browsing history, or passwords, to hackers or cybercriminals who could use it for malicious purposes. - They are unethical. Illegal downloads hurt the content creators and owners who invest their time, money, and effort to produce and distribute their work. By downloading their content illegally, you are depriving them of their rightful income and recognition. You are also disrespecting their artistic vision and integrity.
The Benefits of Legal Downloads
- Legal downloads are downloads that respect the intellectual property rights of the creators or owners of the content. They are usually done through official websites or platforms that have proper licenses and permissions to distribute the content. They may require a fee or a subscription, but they also offer many benefits, such as: - They are legal. Downloading or distributing copyrighted content with permission is not a crime and does not violate any laws. You can enjoy the content without worrying about any legal issues or consequences. - They are safe. Legal downloads are free of viruses, malware, spyware, or other harmful software that could harm your device or privacy. You can also trust that your personal information is protected and secure when you use legal download services. - They are ethical. Legal downloads support the content creators and owners who deserve to be rewarded and appreciated for their work. By downloading their content legally, you are showing them respect and gratitude for their creativity and quality. You are also contributing to the development and diversity of the digital industry.
The Best Sites to Download Korean Movies Legally
- If you want to download Drakor The Pirates: The Last Royal Treasure legally and safely, you need to find a site that offers legal download services for Korean movies. There are many sites that claim to offer such services, but not all of them are reliable and trustworthy. To help you find the best sites to download Korean movies legally, we have compiled a list of some of the most popular and reputable ones. Here they are: - AsianCrush: AsianCrush is a streaming and download service that offers a large collection of Asian movies, TV shows, documentaries, and original content. You can watch or download Korean movies from various genres and categories, such as action, comedy, romance, thriller, horror, drama, and more. You can access AsianCrush on your computer, smartphone, tablet, smart TV, or other devices that support the app. You can choose from different plans depending on your needs and preferences. Here are the current plans and prices for AsianCrush in the US: - Basic: Free. You can watch selected content with ads and limited features. - Premium: $4.99 per month or $49.99 per year. You can watch all content without ads and with unlimited downloads, HD quality, and priority viewing. To sign up for AsianCrush, you need to go to asiancrush.com or open the AsianCrush app on your device. Then you need to click or tap on the "Sign Up" button and choose your preferred method of signing up. You can use your email address, Facebook account, Google account, or Apple ID. Then you need to enter your details and create a password. You can also start your free trial or choose your plan. You can cancel anytime before the trial ends if you don't want to continue. To download movies from AsianCrush, you need to have an active subscription, an internet connection, and a compatible device. Here are the steps to download movies from AsianCrush: - Open the AsianCrush app on your device. - Search for Drakor The Pirates: The Last Royal Treasure and select it from the results. - Tap on the "Download" button next to the movie title. You can also tap on the "Downloads" icon at the bottom of the screen to see all your downloaded movies and shows. - Wait for the download to finish. You can check the progress and pause or resume the download as needed. - Enjoy watching the movie offline. You can access your downloaded movies and shows from the "Downloads" section of the app. You can also delete them when you are done or when they expire.

- Viki: Viki is a streaming and download service that offers a variety of Asian content, especially Korean dramas and movies. You can watch or download Korean movies from various genres and categories, such as action, comedy, romance, thriller, horror, drama, and more. You can also enjoy subtitles in different languages and interact with other fans in the comments section. You can access Viki on your computer, smartphone, tablet, smart TV, or other devices that support the app. You can choose from different plans depending on your needs and preferences. Here are the current plans and prices for Viki in the US: - Basic: Free. You can watch selected content with ads and limited features. - Standard: $4.99 per month or $49.99 per year. You can watch all content without ads and with HD quality. - Plus: $9.99 per month or $99.99 per year. You can watch all content without ads and with HD quality, plus exclusive content from Kocowa. To sign up for Viki, you need to go to viki.com or open the Viki app on your device.
Then you need to click or tap on the "Sign Up" button and choose your preferred method of signing up. You can use your email address, Facebook account, Google account, or Apple ID. Then you need to enter your details and create a password. You can also start your free trial or choose your plan. You can cancel anytime before the trial ends if you don't want to continue. To download movies from Viki, you need to have an active subscription, an internet connection, and a compatible device. Here are the steps to download movies from Viki: - Open the Viki app on your device. - Search for Drakor The Pirates: The Last Royal Treasure and select it from the results. - Tap on the "Download" button next to the movie title. You can also tap on the "Downloads" icon at the bottom of the screen to see all your downloaded movies and shows. - Wait for the download to finish. You can check the progress and pause or resume the download as needed. - Enjoy watching the movie offline. You can access your downloaded movies and shows from the "Downloads" section of the app. You can also delete them when you are done or when they expire.

- OnDemandKorea: OnDemandKorea is a streaming and download service that offers a variety of Korean content, such as dramas, movies, variety shows, news, sports, music, and more. You can watch or download Korean movies from various genres and categories, such as action, comedy, romance, thriller, horror, drama, and more. You can also enjoy subtitles in English or Spanish and access live TV channels. You can access OnDemandKorea on your computer, smartphone, tablet, smart TV, or other devices that support the app. You can choose from different plans depending on your needs and preferences. Here are the current plans and prices for OnDemandKorea in the US: - Basic: Free. You can watch selected content with ads and limited features. - Premium: $6.99 per month or $69.99 per year. You can watch all content without ads and with HD quality. - Plus: $10.99 per month or $109.99 per year. You can watch all content without ads and with HD quality, plus live TV channels. To sign up for OnDemandKorea, you need to go to ondemandkorea.com or open the OnDemandKorea app on your device. Then you need to click or tap on the "Sign Up" button and choose your preferred method of signing up. You can use your email address, Facebook account, Google account, or Apple ID. Then you need to enter your details and create a password. You can also start your free trial or choose your plan. You can cancel anytime before the trial ends if you don't want to continue. To download movies from OnDemandKorea, you need to have an active subscription, an internet connection, and a compatible device. Here are the steps to download movies from OnDemandKorea: - Open the OnDemandKorea app on your device. - Search for Drakor The Pirates: The Last Royal Treasure and select it from the results. - Tap on the "Download" button next to the movie title. You can also tap on the "Downloads" icon at the bottom of the screen to see all your downloaded movies and shows. - Wait for the download to finish. You can check the progress and pause or resume the download as needed. - Enjoy watching the movie offline. You can access your downloaded movies and shows from the "Downloads" section of the app. You can also delete them when you are done or when they expire. These are some of the best sites to download Korean movies legally and safely.
However, there may be other sites that offer similar or different services that you can explore and compare. The important thing is to always check the credibility and security of the site before downloading anything from it.
Conclusion
- Drakor The Pirates: The Last Royal Treasure is a 2022 South Korean movie that is available on Netflix and other streaming platforms. It is an action comedy that follows the adventures of a group of pirates and bandits who search for the lost royal gold in the Joseon era. It is a sequel to the 2014 movie The Pirates, and it stars Kang Ha-neul, Han Hyo-joo, Lee Kwang-soo, and Kwon Sang-woo. In this article, we have told you everything you need to know about Drakor The Pirates: The Last Royal Treasure, including: - What is the movie about and who are the main actors? - Where and how can you watch the movie online or offline? - How can you download the movie legally and safely? We hope that this article has been helpful and informative for you. If you are interested in watching Drakor The Pirates: The Last Royal Treasure, we recommend that you download it from one of the sites that we have mentioned above. This way, you can enjoy this amazing Korean movie without any hassle or risk. Thank you for reading this article. We hope that you have learned something new and useful today. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!
FAQs
- Here are some of the frequently asked questions about Drakor The Pirates: The Last Royal Treasure:
Q: Is Drakor The Pirates: The Last Royal Treasure based on a true story?
-A: No, Drakor The Pirates: The Last Royal Treasure is not based on a true story. It is a fictional story that mixes historical elements with modern humor and fantasy.
Q: Is Drakor The Pirates: The Last Royal Treasure a remake of The Pirates?
-A: No, Drakor The Pirates: The Last Royal Treasure is not a remake of The Pirates. It is a sequel that takes place 10 years after the events of The Pirates.
Q: How long is Drakor The Pirates: The Last Royal Treasure?
-A: Drakor The Pirates: The Last Royal Treasure has a runtime of 129 minutes.
Q: What is the rating of Drakor The Pirates: The Last Royal Treasure?
-A: Drakor The Pirates: The Last Royal Treasure has a rating of PG-13 for some violence, language, and suggestive material.
Q: Where can I find more information about Drakor The Pirates: The Last Royal Treasure?
-A: You can find more information about Drakor The Pirates: The Last Royal Treasure on its official website, IMDb page, Wikipedia page, or Netflix page.
-
download drakor the pirates the last royal treasure
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Drift Zone 2 PC The Most Realistic and Xtreme Driving Simulator.md b/spaces/fatiXbelha/sd/Drift Zone 2 PC The Most Realistic and Xtreme Driving Simulator.md
deleted file mode 100644
index c017e4b2c4806b814f41690a1b96a8bce902c15f..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Drift Zone 2 PC The Most Realistic and Xtreme Driving Simulator.md
+++ /dev/null
@@ -1,214 +0,0 @@
-
-
Drift Zone 2 PC Download: A Guide for Drifting Enthusiasts
-
If you are a fan of racing games and drifting, you might have heard of Drift Zone 2, a popular mobile game developed by Awesome Industries sp. z o.o. The game features stunning graphics, realistic physics, challenging missions, and a wide range of unique sports cars that you can customize and drift with. But did you know that you can also play Drift Zone 2 on your PC with a simple emulator?
In this article, I will show you how to download and install Drift Zone 2 on your PC with GameLoop emulator, a powerful and easy-to-use Android emulator that allows you to play mobile games on your computer. I will also give you a brief review of the game, some gameplay tips, and some recommendations for the best cars for drifting. By the end of this article, you will be ready to burn rubber and compete for gold in one of the best drifting games ever.
-
How to Download and Install Drift Zone 2 on PC with GameLoop Emulator?
-
Playing Drift Zone 2 on your PC has many advantages over playing it on your mobile device. You can enjoy better graphics, smoother performance, larger screen, keyboard or gamepad controls, and more. Plus, you don't have to worry about battery life, storage space, or interruptions from calls or messages.
-
To play Drift Zone 2 on your PC, you need to download and install GameLoop emulator first. GameLoop is an official emulator from Tencent Games, the publisher of popular games like PUBG Mobile, Call of Duty Mobile, and Honor of Kings. GameLoop supports many Android games and apps, including Drift Zone 2.
-
Here are the steps to download and install GameLoop emulator and Drift Zone 2 on your PC:
Run the installer file and follow the instructions to install GameLoop on your PC.
-
Launch GameLoop and search for "Drift Zone 2" in the search bar.
-
Select "Drift Zone 2" from the search results and click on the "Install" button.
-
Wait for the game to download and install on your PC.
-
Click on the "Play" button to launch Drift Zone 2 and enjoy the game.
-
-
That's it! You have successfully downloaded and installed Drift Zone 2 on your PC with GameLoop emulator. Now you can start drifting like a pro and have fun with this amazing racing game.
-
Drift Zone 2 Game Review: What to Expect from This Racing Game?
-
Drift Zone 2 is a racing game that focuses on drifting, a driving technique where the driver intentionally oversteers the car to make it slide sideways. Drifting is not only a way to show off your skills and style, but also a way to gain speed and points in the game.
-
drift zone 2 game download for pc
-drift zone 2 pc free download
-drift zone 2 windows pc download
-drift zone 2 on pc with gameloop emulator
-drift zone 2 android on pc
-how to play drift zone 2 on pc
-drift zone 2 pc game loop
-drift zone 2 apk download for pc
-drift zone 2 racing game for pc
-drift zone 2 pc version download
-drift zone 2 pc gameplay
-drift zone 2 pc system requirements
-drift zone 2 pc online
-drift zone 2 pc full version
-drift zone 2 pc crack download
-drift zone 2 pc mod apk
-drift zone 2 pc cheats
-drift zone 2 pc controller support
-drift zone 2 pc bluestacks
-drift zone 2 pc nox player
-drift zone 2 pc ldplayer
-drift zone 2 pc koplayer
-drift zone 2 best racing game for pc
-drift zone 2 ultimate sports car driver for pc
-drift zone 2 realistic driving experience for pc
-drift zone 2 challenging missions for pc
-drift zone 2 stunning graphics for pc
-drift zone 2 wide range of unique sports cars for pc
-drift zone 2 sideways driving for pc
-drift zone 2 fanpage on facebook for pc
-download and install drift zone 2 on windows pc
-how to run drift zone 2 on windows pc
-how to update drift zone 2 on windows pc
-how to uninstall drift zone 2 on windows pc
-how to fix drift zone 2 on windows pc errors
-how to optimize drift zone 2 on windows pc performance
-how to connect drift zone 2 on windows pc with friends
-how to record drift zone 2 on windows pc gameplay
-how to stream drift zone 2 on windows pc live
-how to customize drift zone 2 on windows pc settings
-how to earn coins in drift zone 2 on windows pc
-how to unlock new tracks in drift zone 2 on windows pc
-how to upgrade cars in drift zone 2 on windows pc
-how to master drifting skills in drift zone 2 on windows pc
-how to complete achievements in drift mode in the best android racing game for windows PC.
-
Drift Zone 2 offers a variety of modes, missions, and challenges that will test your drifting abilities and keep you entertained for hours. You can choose from over 30 different sports cars, each with its own characteristics and performance. You can also customize your car with different colors, decals, wheels, spoilers, and more.
-
Drift Zone 2 has stunning graphics that will make you feel like you are driving in real life. The game features realistic physics that simulate the behavior of the car and the environment. The game also has dynamic weather effects, such as rain, snow, fog, and night, that will add more challenge and excitement to your drifting experience.
-
What are the Features and Benefits of Playing Drift Zone 2 on PC?
-
Playing Drift Zone 2 on PC with GameLoop emulator has many features and benefits that will enhance your gaming experience. Here are some of them:
-
-
You can enjoy better graphics and smoother performance on your PC than on your mobile device. You can also adjust the graphics settings to suit your preferences and system requirements.
-
You can play on a larger screen that will give you a better view of the road and the surroundings. You can also use a full-screen mode or a windowed mode depending on your convenience.
-
You can use keyboard or gamepad controls that will give you more accuracy and comfort than touchscreen controls. You can also customize the key mapping and sensitivity to suit your style.
-
You can play without any interruptions from calls, messages, notifications, or low battery. You can also save your progress and data on your PC without worrying about losing them.
-
You can access more features and options from GameLoop emulator, such as screen recording, screenshot, live streaming, multi-instance, turbo mode, and more.
-
-
As you can see, playing Drift Zone 2 on PC with GameLoop emulator has many advantages that will make your drifting experience more enjoyable and satisfying. If you are a drifting enthusiast, you should definitely try playing Drift Zone 2 on PC with GameLoop emulator.
-
Drift Zone 2 Gameplay Tips: How to Master the Art of Drifting?
-
Drifting is not easy to master, but it is very rewarding and fun once you get the hang of it. Drifting requires skill, practice, and patience. Here are some gameplay tips that will help you improve your drifting skills in Drift Zone 2:
-
Controls: How to Use the Keyboard or Gamepad to Control Your Car?
-
The default controls for playing Drift Zone 2 on PC with GameLoop emulator are as follows:
| Keyboard | Gamepad | Action |
| --- | --- | --- |
| W | A | Accelerate |
| S | B | Brake/Reverse |
| A/D | Left/Right Stick | Steer Left/Right |
| Space | X | Handbrake |
| R/F | L1/R1 | Gear Up/Down (Manual Mode) |
| P/Esc | Start/Select | Pause/Menu |
-
You can change the controls from the settings menu if you want. You can also switch between automatic and manual transmission modes from the game options.
-
To drift, you need to use the handbrake and the steering to make your car slide sideways. You also need to balance the throttle and the brake to maintain the drift and avoid spinning out. You can use the gear up and down buttons to adjust your speed and power.
-
Physics: How to Understand and Use the Drifting Physics to Your Advantage?
-
Drift Zone 2 has realistic physics that simulate the behavior of the car and the environment. The game takes into account factors such as weight, traction, friction, inertia, gravity, and aerodynamics. You need to understand how these factors affect your car and your drifting performance.
-
For example, you need to consider the weight distribution of your car and how it shifts when you accelerate, brake, or steer. You also need to consider the traction of your tires and how it changes depending on the surface, weather, and temperature. You also need to consider the friction of the road and how it affects your speed and stability. You also need to consider the inertia of your car and how it affects your momentum and direction. You also need to consider the gravity of the earth and how it affects your vertical movement and balance. You also need to consider the aerodynamics of your car and how it affects your drag and lift.
-
By understanding these physics principles, you can use them to your advantage and improve your drifting skills. For example, you can use the weight transfer of your car to initiate a drift or to correct a drift. You can also use the traction of your tires to control your drift angle and speed. You can also use the friction of the road to slow down or speed up your drift. You can also use the inertia of your car to maintain or change your drift direction. You can also use the gravity of the earth to jump or land smoothly. You can also use the aerodynamics of your car to reduce or increase your drag and lift.
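To make the weight-transfer idea a little more concrete, here is a rough, illustrative calculation of how much load shifts onto the front axle under hard braking. The mass, centre-of-gravity height, wheelbase, and deceleration below are assumed example values for a generic sports car, not figures taken from the game:

```python
# Rough illustration of longitudinal weight transfer under braking
# (generic vehicle-dynamics formula, not values from Drift Zone 2):
#   delta_W = mass * deceleration * cg_height / wheelbase
mass = 1400.0        # kg, assumed car mass
decel = 8.0          # m/s^2, hard braking
cg_height = 0.45     # m, assumed centre-of-gravity height
wheelbase = 2.6      # m, assumed wheelbase

delta_w_newtons = mass * decel * cg_height / wheelbase
print(f"Load shifted to the front axle: {delta_w_newtons:.0f} N "
      f"(~{delta_w_newtons / 9.81:.0f} kg-force)")
```

In this example roughly 200 kg of extra load moves onto the front tyres, which is what lightens the rear end enough for it to step out when you flick the car into a corner.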
-
Tracks: How to Choose and Navigate the Best Racing Tracks for Drifting?
-
Drift Zone 2 has over 40 different racing tracks that vary in difficulty, length, layout, scenery, and weather. You can choose from urban streets, desert roads, snowy mountains, forest trails, and more. Each track has its own challenges and opportunities for drifting.
-
To choose the best racing track for drifting, you need to consider your preferences, skills, goals, and car. You need to choose a track that suits your style, level, mission, and vehicle. For example, if you prefer a fast-paced and thrilling drifting experience, you might want to choose a track that has long straights, wide curves, ramps, and jumps. If you prefer a slow-paced and technical drifting experience, you might want to choose a track that has short straights, tight corners, obstacles, and turns.
-
To navigate the best racing track for drifting, you need to pay attention to the track map, signs, indicators, and landmarks. You need to plan ahead and anticipate the upcoming turns, curves, hazards, and opportunities. You need to adjust your speed, angle, direction, and position accordingly. You also need to follow the optimal racing line that will give you the best advantage for drifting.
-
Drift Zone 2 Best Cars: Which Cars Should You Choose for Drifting?
-
Drift Zone 2 has over 30 different sports cars that you can choose from. Each car has its own characteristics and performance that affect its drifting ability. You can also customize your car with different colors, decals, wheels, spoilers, and more. You can also upgrade your car with different parts and accessories that will improve its performance and appearance.
-
To choose the best car for drifting, you need to consider the following criteria:
-
-
Power: The power of the car determines how fast it can accelerate and how much torque it can produce. You need a car that has enough power to initiate and sustain a drift, but not too much that it will make you lose control.
-
Weight: The weight of the car determines how heavy it is and how much inertia it has. You need a car that has a balanced weight distribution and a low center of gravity. You also need a car that is not too heavy or too light for drifting.
-
Handling: The handling of the car determines how responsive and stable it is. You need a car that has good steering, suspension, brakes, and tires. You also need a car that has a rear-wheel drive or an all-wheel drive system for drifting.
-
Style: The style of the car determines how cool and attractive it looks. You need a car that matches your personality and preferences. You also need a car that stands out from the crowd and impresses your opponents and spectators.
-
-
Based on these criteria, here are some of the best cars for drifting in Drift Zone 2:
| Name | Power | Weight | Handling | Style |
| --- | --- | --- | --- | --- |
| Nissan Skyline GT-R R34 | 280 hp | 1560 kg | A+ | A classic Japanese sports car that is famous for its performance and appearance. |
| Ford Mustang GT | 460 hp | 1690 kg | A | A legendary American muscle car that is powerful and aggressive. |
| Lamborghini Huracan | 610 hp | 1422 kg | A- | A stunning Italian supercar that is sleek and luxurious. |
| Bugatti Chiron | 1500 hp | 1995 kg | B+ | A magnificent French hypercar that is fast and expensive. |
| Mazda RX-7 FD3S | 255 hp | 1280 kg | B+ | A popular Japanese sports car that is lightweight and agile. |
-
Of course, these are not the only cars that you can choose from. You can try out different cars and see which ones suit your drifting style and preferences. You can also upgrade and customize your cars to make them more powerful and attractive.
-
Conclusion: Is Drift Zone 2 Worth Playing on PC?
-
In conclusion, Drift Zone 2 is a great racing game that will appeal to drifting enthusiasts and casual gamers alike. The game has stunning graphics, realistic physics, challenging missions, and a wide range of unique sports cars that you can customize and drift with. The game also has dynamic weather effects, such as rain, snow, fog, and night, that will add more challenge and excitement to your drifting experience.
-
Playing Drift Zone 2 on PC with GameLoop emulator has many advantages over playing it on your mobile device. You can enjoy better graphics, smoother performance, larger screen, keyboard or gamepad controls, and more. Plus, you don't have to worry about battery life, storage space, or interruptions from calls or messages.
-
Drift Zone 2 is a game that will test your drifting skills and keep you entertained for hours. If you are looking for a fun and thrilling drifting game to play on your PC, you should definitely give Drift Zone 2 a try.
-
My personal opinion and recommendation is that Drift Zone 2 is one of the best drifting games ever made. I love the graphics, the physics, the gameplay, the customization, and the variety of the game. I also love playing it on my PC with GameLoop emulator. It makes me feel like I am driving in real life. I think Drift Zone 2 is worth playing on PC and I highly recommend it to anyone who loves racing and drifting.
-
FAQs: Frequently Asked Questions about Drift Zone 2 PC Download
-
Here are some of the most frequently asked questions about Drift Zone 2 PC download:
-
Q1: Is Drift Zone 2 free to play on PC?
-
A1: Yes, Drift Zone 2 is free to play on PC with GameLoop emulator. You don't have to pay anything to download and install the game or the emulator. However, the game does have some in-app purchases that you can buy with real money if you want to unlock more cars, parts, accessories, or coins.
-
Q2: What are the system requirements for playing Drift Zone 2 on PC?
-
A2: The minimum system requirements for playing Drift Zone 2 on PC with GameLoop emulator are as follows:
-
-
Operating System: Windows 7 or higher
-
CPU: Dual-core Intel or AMD processor at 1.8 GHz or higher
-
RAM: 4 GB or higher
-
Disk Space: 4 GB or higher
-
Graphics Card: NVIDIA GeForce 8600/9600GT or AMD Radeon HD2600/3600 or higher
-
Internet Connection: Broadband or higher
-
-
The recommended system requirements for playing Drift Zone 2 on PC with GameLoop emulator are as follows (a rough script for checking your own machine against these requirements is sketched after this list):
-
-
Operating System: Windows 10
-
CPU: Quad-core Intel or AMD processor at 2.5 GHz or higher
-
RAM: 8 GB or higher
-
Disk Space: 8 GB or higher
-
Graphics Card: NVIDIA GeForce GTX 660 or AMD Radeon HD 7870 or higher
-
Internet Connection: Broadband or higher
-
-
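If you want a quick way to see whether your machine clears the minimum numbers above, a short script can report your CPU core count, installed RAM, and free disk space. This is only a rough self-check under the assumption that the third-party psutil package is installed; it is not an official GameLoop tool:

```python
# Rough self-check against the minimum specs listed above (4 GB RAM,
# 4 GB free disk). Assumes the third-party "psutil" package is
# installed: pip install psutil
import os
import shutil

import psutil

MIN_RAM_GB = 4
MIN_FREE_DISK_GB = 4

ram_gb = psutil.virtual_memory().total / 1024 ** 3
free_gb = shutil.disk_usage(os.path.expanduser("~")).free / 1024 ** 3

print(f"CPU cores : {os.cpu_count()}")
print(f"RAM       : {ram_gb:.1f} GB -> {'OK' if ram_gb >= MIN_RAM_GB else 'below minimum'}")
print(f"Free disk : {free_gb:.1f} GB -> {'OK' if free_gb >= MIN_FREE_DISK_GB else 'below minimum'}")
```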
Q3: How can I improve my drifting skills in Drift Zone 2?
-
A3: The best way to improve your drifting skills in Drift Zone 2 is to practice a lot and learn from your mistakes. You can also watch some tutorials and tips videos online that will teach you some tricks and techniques for drifting. You can also try different cars and tracks that will challenge you and help you improve your skills.
-
Q4: Can I play Drift Zone 2 with my friends online or offline?
-
A4: Unfortunately, Drift Zone 2 does not have a multiplayer mode that allows you to play with your friends online or offline. However, you can still compete with your friends by comparing your scores and achievements in the game. You can also share your screenshots and videos of your drifting performance with your friends on social media.
-
Q5: Where can I find more information and updates about Drift Zone 2?
-
A5: You can find more information and updates about Drift Zone 2 on the official website of the game developer, Awesome Industries sp. z o.o. You can also follow their social media accounts on Facebook, Twitter, Instagram, and YouTube. You can also check out the GameLoop emulator website and blog for more news and updates about Drift Zone 2 and other games that you can play on PC.

I hope you enjoyed this article and learned something new about Drift Zone 2 PC download. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy drifting!
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Agar Tum Saath Ho Ringtone Download Mp3 - The Perfect Choice for Fans of Deepika Padukone and Ranbir Kapoor.md b/spaces/gotiQspiryo/whisper-ui/examples/Agar Tum Saath Ho Ringtone Download Mp3 - The Perfect Choice for Fans of Deepika Padukone and Ranbir Kapoor.md
deleted file mode 100644
index feddc6a48c7e003ad25b2711a1e1d4b6b2a863c1..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Agar Tum Saath Ho Ringtone Download Mp3 - The Perfect Choice for Fans of Deepika Padukone and Ranbir Kapoor.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
Free download Agar Tum Saath Ho Song - Arijit Singh mp3 ringtone for iOS & Android. Browse all free ringtones in the Bollywood - Hindi Ringtones category on Best Ringtones Net and personalize your phone to suit you.
-
No one likes to hear the boring default ringtone of their phone. So, if you are bored with your current caller tune and want to set up a new one, this guide is for you. We will show you how to download caller ringtone in MP3 format so that you can set it as your caller tune.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Jocul Seductiei Johanna Lindsey pdf Un roman romantic de aventur i pasiune.md b/spaces/gotiQspiryo/whisper-ui/examples/Jocul Seductiei Johanna Lindsey pdf Un roman romantic de aventur i pasiune.md
deleted file mode 100644
index 8a13c20c22b4ba61ae87be647af6d061670c8922..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Jocul Seductiei Johanna Lindsey pdf Un roman romantic de aventur i pasiune.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- d5da3c52bf
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/4K Video Downloader 4.11.2.3400 Crack With License Key 2020 [Latest].md b/spaces/inplisQlawa/anything-midjourney-v4-1/4K Video Downloader 4.11.2.3400 Crack With License Key 2020 [Latest].md
deleted file mode 100644
index 0c824ff44f99aa18b48f20b17b23fb9145b9f8cf..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/4K Video Downloader 4.11.2.3400 Crack With License Key 2020 [Latest].md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
4K Video Downloader 4.11.2.3400 Crack With License Key 2020 [Latest]
-
4K Video Downloader is a powerful and easy-to-use software that allows you to download and convert online videos from various platforms such as YouTube, Vimeo, Facebook, Instagram, and more. With this tool, you can download videos in different formats and resolutions, including 4K, 8K, HD, and Ultra HD. You can also extract audio tracks from videos and save them as MP3, M4A, or OGG files.
-
4K Video Downloader 4.11.2.3400 Crack is the latest version of the software that offers some new features and improvements. It supports downloading subtitles and annotations along with videos. It also allows you to download entire playlists and channels with one click. You can also apply smart mode settings to all your downloads for faster and easier downloading.
-
4K Video Downloader 4.11.2.3400 License Key is the activation code that you need to unlock the full version of the software and enjoy all its features without any limitations. You can get the license key from the official website or from some third-party sources. However, be careful of fake or malicious license keys that may harm your computer or compromise your privacy.
-
4K Video Downloader 4.11.2.3400 Crack With License Key 2020 [Latest] is a great software for anyone who wants to download and enjoy online videos offline on any device. It is fast, reliable, and user-friendly. It supports multiple languages and platforms. It is compatible with Windows, Mac OS, and Linux operating systems.
-
-
-
How to use 4K Video Downloader 4.11.2.3400 Crack With License Key 2020 [Latest]?
-
Using 4K Video Downloader is very simple and straightforward. You just need to follow these steps:
-
-
Download and install 4K Video Downloader from the official website or from the link provided below.
-
Run the software and enter the license key when prompted.
-
Copy the URL of the video, playlist, or channel that you want to download from your browser.
-
Paste the URL into the software and click on the "Paste Link" button.
-
Select the format and quality of the video or audio that you want to download.
-
Choose whether you want to download subtitles and annotations or not.
-
Click on the "Download" button and wait for the process to complete.
-
Enjoy your downloaded videos or audio offline on any device.
-
-
Tips and Tricks for 4K Video Downloader 4.11.2.3400 Crack With License Key 2020 [Latest]
-
Here are some tips and tricks that you can use to enhance your experience with 4K Video Downloader:
-
-
You can use the smart mode feature to apply your preferred settings to all your downloads automatically.
-
You can use the in-app proxy setup to bypass geo-restrictions and access videos that are not available in your region.
-
You can use the 3D and 360-degree video download feature to download immersive videos and watch them with a VR headset or a 3D TV.
-
You can use the built-in video player to preview your downloads before saving them.
-
You can use the download speed control feature to adjust the bandwidth usage according to your network conditions.
-
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Adobeindesigncs6freeserialnumberlist HOT.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Adobeindesigncs6freeserialnumberlist HOT.md
deleted file mode 100644
index bfa5aa4ae371fc056738e90b0f6b46edda06451b..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Adobeindesigncs6freeserialnumberlist HOT.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
adobeindesigncs6freeserialnumberlist Essays are academic works of literary, philosophical, and/or scientific value. The term applies both to the form in which it is written and to the content it contains; that is, both to a genre of writing and to the specific... Crafting a research paper is, first and foremost, an art. It may even be described as a craft, with certain specialized skills being utilized to achieve a goal.
adobeindesigncs6freeserialnumberlist#361. quynzol (Donnerstag, 27 Januar 2022 07:50). quynzol d868ddde6e This is the independent disconnecting variant of Adobe Master Collection Creative Suite 6. This will make you fun and also download with.
-
adobeindesigncs6freeserialnumberlist bd86983c93 ikeiphy. aircel 5 months ago total immersion racing pc crack bd86983c93 aircel. adobe indesign cs6 free serial number list, ADOBE INDESIGN CS6 Key, adobe indesign cs6 keygen, adobe indesign cs6 serial key,.
-
Alaaaa,bag Stickers: Will continue to be sold at $1.50 each, and are good for the disposal of one 30 or one 33 gallon clear bag of household trash. adobeindesigncs6freeserialnumberlist 3.9,adobe indesign cs6 keygen,aircel whatsapp 50 free sims card,whatsapp 40 free sim card 2019 karela
-
-
Fire Keeper PRD adobe indesign cs6 free serial number list, BIG v1.4 Crack Serial Number 2019, Bags Stickers, bags stickers,Kodak,Printer cpa,Adobe Indesign cs6 free serial number list, Adobe Indesign cs6 key, Adobe Indesign cs6 keygen, adobeindesigncs6freeserialnumberlist - 3.9, adobe indesign cs6 keygen,aircel whatsapp 50 free sims card,whatsapp 40 free sim card 2019 karela, coffee-bag-sticker-cost 4,16,91,Bags Stickers.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Alias Temporada 1 720p Mkv.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Alias Temporada 1 720p Mkv.md
deleted file mode 100644
index f1a49d7dcc09b6f5a7590cc5814f2f2d99ff32fe..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Alias Temporada 1 720p Mkv.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
Alias Temporada 1 720p Mkv: A Review of the Spy Thriller Series
-
Alias is a TV series created by J.J. Abrams that aired from 2001 to 2006. It stars Jennifer Garner as Sydney Bristow, a CIA agent who discovers that she is working for a rogue faction called SD-6. She then becomes a double agent for the real CIA, while trying to balance her personal life and her dangerous missions.
-
The first season of Alias consists of 22 episodes that introduce the main characters and the plot twists that define the show. The season is available in 720p mkv format, which offers high-quality video and audio. The season also features guest stars such as Bradley Cooper, Quentin Tarantino, Roger Moore, and Lena Olin.
Alias Temporada 1 720p Mkv is a great option for fans of spy thrillers, action, drama, and romance. The show has a fast-paced and engaging storyline, with surprising revelations and cliffhangers. The show also has a strong female lead, who is smart, brave, and skilled. The show has won several awards, including a Golden Globe for Jennifer Garner.
-
If you are looking for a binge-worthy series that will keep you on the edge of your seat, Alias Temporada 1 720p Mkv is a good choice. You can download it from various websites[^1^] [^2^] [^3^] [^4^], or stream it on Netflix[^3^].
Here are some more paragraphs for the article:
-
The first season of Alias follows Sydney Bristow as she learns the truth about her employer, SD-6, and decides to work as a double agent for the CIA. She also discovers that her father, Jack Bristow, is also a double agent, and that her mother, Irina Derevko, is a former KGB spy who faked her death. Sydney has to deal with the consequences of these revelations, as well as the threats from SD-6's enemies, such as the Alliance of Twelve, a global network of criminal organizations.
The first season of Alias is full of action, suspense, drama, and humor. The show features elaborate disguises, exotic locations, high-tech gadgets, and impressive fight scenes. The show also has a unique style, with colorful graphics, split screens, and flashbacks. The show also has a catchy theme song composed by J.J. Abrams himself.
To conclude, Alias Temporada 1 720p Mkv is a must-watch for fans of spy thrillers and action-packed shows. The show has a captivating plot, a charismatic cast, and a distinctive style. The show also has a loyal fan base, who have created websites, podcasts, and fan fiction based on the show. The show has also influenced other shows in the genre, such as Chuck, Fringe, and Nikita.
-
-
If you want to watch Alias Temporada 1 720p Mkv, you can download it from various sources or stream it on Netflix. You can also watch the other four seasons of the show, which continue Sydney's adventures and reveal more secrets and mysteries. You will not regret diving into the world of Alias, where nothing is what it seems.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Barbie Secret Agent Kids PC Game Torrent _VERIFIED_.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Barbie Secret Agent Kids PC Game Torrent _VERIFIED_.md
deleted file mode 100644
index cc8dc2a519e178aca1857fae6c3317ea6ab56b36..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Barbie Secret Agent Kids PC Game Torrent _VERIFIED_.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
Barbie Secret Agent Kids PC Game Torrent: A Fun and Educational Game for Girls
-
If you are looking for a fun and educational game for your kids, you might want to check out Barbie Secret Agent kids PC game torrent. This is a game that was released in 2001 by Mattel and Vivendi Universal Games, and it features Barbie as a secret agent who has to stop an evil scientist from taking over the world.
-
Barbie Secret Agent kids PC game torrent is an adventure game that is suitable for girls aged 6-10. The game has colorful graphics, catchy music, and easy controls. The game also teaches kids about different cultures, languages, and geography, as Barbie travels to various locations around the world, such as Paris, Tokyo, and Rio de Janeiro.
How to Download and Play Barbie Secret Agent Kids PC Game Torrent
-
If you want to download and play Barbie Secret Agent kids PC game torrent, you will need a few things. First, you will need a PC that meets the minimum system requirements for the game. These are:
-
-
Windows 95/98/ME/XP
-
Pentium II 266 MHz or higher
-
64 MB RAM
-
8x CD-ROM drive
-
16-bit sound card
-
DirectX 8.0 or higher
-
800x600 display resolution
-
-
Next, you will need a torrent client, such as uTorrent or BitTorrent, to download the game file from the internet. You can find the torrent file by searching for "Barbie Secret Agent kids PC game torrent" on any torrent site, such as The Pirate Bay or Kickass Torrents. Make sure you download the file from a trusted source and scan it for viruses before opening it.
-
Once you have downloaded the torrent file, you will need to open it with your torrent client and wait for the game file to be downloaded. The game file is an ISO image of the original CD-ROM of the game. You will need a software like Daemon Tools or PowerISO to mount the ISO image on a virtual drive and run the game.
-
Alternatively, you can burn the ISO image on a blank CD-ROM using a software like Nero or ImgBurn and play the game from the CD-ROM drive. You may also need to install some patches or cracks to make the game work properly on your PC.
-
Why You Should Play Barbie Secret Agent Kids PC Game Torrent
-
Barbie Secret Agent kids PC game torrent is a great game for girls who love adventure and spy stories. The game has a captivating plot that involves Barbie as a secret agent who has to stop Dr. X, an evil scientist who wants to take over the world with his mind-control device. Barbie has to use her gadgets and skills to solve puzzles and defeat enemies in various locations around the world.
-
-
The game also has educational value, as it teaches kids about different cultures, languages, and geography. For example, in Paris, Barbie has to learn some French words and phrases to communicate with locals. In Tokyo, she has to learn about Japanese customs and etiquette. In Rio de Janeiro, she has to learn about Brazilian music and dance.
-
The game also encourages creativity and imagination, as kids can customize Barbie's appearance and outfits with different accessories and colors. They can also collect clues and items throughout the game and use them in different ways.
-
Conclusion
-
Barbie Secret Agent kids PC game torrent is a fun and educational game for girls who love adventure and spy stories. The game features Barbie as a secret agent who has to stop an evil scientist from taking over the world. The game has colorful graphics, catchy music, and easy controls. The game also teaches kids about different cultures, languages, and geography, as Barbie travels to various locations around the world.
-
If you want to download and play Barbie Secret Agent kids PC game torrent, you will need a PC that meets the minimum system requirements for the game, a torrent client to download the game file from the internet, and a software to mount or burn the ISO image of the game file. You can find the torrent file by searching for "Barbie Secret Agent kids PC game torrent" on any torrent site.
-
Barbie Secret Agent kids PC game torrent is a great game for girls who love adventure and spy stories. It is also a great way to learn new things and have fun at the same time.
-
What are the Benefits of Playing Barbie Secret Agent Kids PC Game Torrent
-
Playing Barbie Secret Agent kids PC game torrent is not only fun, but also beneficial for your kids. Here are some of the benefits of playing this game:
-
-
It improves your kids' cognitive skills, such as memory, attention, logic, and problem-solving.
-
It enhances your kids' creativity and imagination, as they can customize Barbie's appearance and outfits, and use different items and clues in the game.
-
It boosts your kids' confidence and self-esteem, as they can complete challenging missions and save the world as a secret agent.
-
It fosters your kids' curiosity and interest in learning new things, such as different cultures, languages, and geography.
-
It provides your kids with entertainment and enjoyment, as they can explore different locations, interact with various characters, and listen to catchy music.
-
-
How to Play Barbie Secret Agent Kids PC Game Torrent
-
Playing Barbie Secret Agent kids PC game torrent is easy and fun. Here are some tips on how to play this game:
-
-
Install the game on your PC by following the instructions above.
-
Start the game and choose your difficulty level: easy, medium, or hard.
-
Watch the intro video and listen to Agent Johnson's briefing on your mission.
-
Select your equipment and accessories for each location.
-
Use the arrow keys or the mouse to move Barbie around the screen.
-
Use the spacebar or the left mouse button to interact with objects and characters.
-
Use the inventory icon or the right mouse button to access your items and clues.
-
Use the map icon or the M key to see your current location and objectives.
-
Use the pause icon or the P key to pause the game and access the options menu.
-
Complete each mission by finding clues, solving puzzles, and defeating enemies.
-
-
Have fun playing Barbie Secret Agent kids PC game torrent!
-What are the Features of Barbie Secret Agent Kids PC Game Torrent
-
Barbie Secret Agent kids PC game torrent has many features that make it an enjoyable and engaging game for girls. Some of the features are:
-
-
It has four different locations to explore: Paris, Tokyo, Rio de Janeiro, and Dr. X's lair.
-
It has various gadgets and accessories to use, such as a spy camera, a lipstick microphone, a hairbrush decoder, and a spy suit.
-
It has different mini-games and challenges to complete, such as skiing, surfing, dancing, and hacking.
-
It has multiple endings depending on your choices and actions in the game.
-
It has voice acting and dialogue from Barbie and other characters.
-
-How to Get More Out of Barbie Secret Agent Kids PC Game Torrent
-
If you want to get more out of Barbie Secret Agent kids PC game torrent, you can try some of these tips:
-
-
Play the game on different difficulty levels to challenge yourself and see different outcomes.
-
Collect all the clues and items in each location to unlock bonus content and secrets.
-
Customize Barbie's appearance and outfits with different colors and accessories to suit your style and mood.
-
Replay the game with different choices and actions to see different endings and scenarios.
-
Share your experience and opinions with other players online or with your friends offline.
-
-
Barbie Secret Agent kids PC game torrent is a fun and educational game for girls who love adventure and spy stories. It is also a great way to learn new things and have fun at the same time.
-What are the Reviews of Barbie Secret Agent Kids PC Game Torrent
-
Barbie Secret Agent kids PC game torrent has received positive reviews from players and critics alike. Here are some of the reviews of this game:
-
-
"This game is awesome! I love how Barbie can travel to different places and use cool gadgets. The graphics are nice and the music is catchy. The game is also educational, as it teaches me about different cultures and languages. I recommend this game to anyone who likes adventure and spy games." - Abby, 8 years old
-
-
-
"I played this game when I was a kid and I loved it. It was one of my favorite games ever. It was fun, challenging, and engaging. I liked how Barbie could customize her outfits and accessories, and how she could use different items and clues in the game. The game also had a good story and multiple endings. I think this game is a classic and a must-play for any girl who loves Barbie." - Jessica, 23 years old
-
-
-
"This game is a great example of how to make a fun and educational game for kids. It has a captivating plot, colorful graphics, easy controls, and catchy music. It also has educational value, as it teaches kids about different cultures, languages, and geography. It also encourages creativity and imagination, as kids can customize Barbie's appearance and outfits, and use different items and clues in the game. It also boosts confidence and self-esteem, as kids can complete challenging missions and save the world as a secret agent. This game is a gem and a masterpiece." - John, 35 years old
-
-Where to Find More Information about Barbie Secret Agent Kids PC Game Torrent
-
If you want to find more information about Barbie Secret Agent kids PC game torrent, you can visit some of these websites:
-
-
Barbie Wiki: Secret Agent Barbie - This is a wiki page that provides information about the game, such as the plot, characters, locations, gadgets, items, clues, mini-games, endings, trivia, and gallery.
Barbie Secret Agent kids PC game torrent is a fun and educational game for girls who love adventure and spy stories. It is also a great way to learn new things and have fun at the same time.
-Conclusion
-
Barbie Secret Agent kids PC game torrent is a fun and educational game for girls who love adventure and spy stories. The game features Barbie as a secret agent who has to stop an evil scientist from taking over the world. The game has colorful graphics, catchy music, and easy controls. The game also teaches kids about different cultures, languages, and geography, as Barbie travels to various locations around the world. The game also encourages creativity and imagination, as kids can customize Barbie's appearance and outfits, and use different items and clues in the game. The game also boosts confidence and self-esteem, as kids can complete challenging missions and save the world as a secret agent.
-
If you want to download and play Barbie Secret Agent kids PC game torrent, you will need a PC that meets the minimum system requirements for the game, a torrent client to download the game file from the internet, and a software to mount or burn the ISO image of the game file. You can find the torrent file by searching for "Barbie Secret Agent kids PC game torrent" on any torrent site.
-
Barbie Secret Agent kids PC game torrent is a great game for girls who love adventure and spy stories. It is also a great way to learn new things and have fun at the same time. Download and play it today!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Manolo Escobar Discogr.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Manolo Escobar Discogr.md
deleted file mode 100644
index f0bb5bb925ba66fde802934440131283fcddacce..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Manolo Escobar Discogr.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
- What do you have to do to figure out what Manolo Escobar's line "a mi nuevo disco que lleno de nueva" means? As Manolo said back in the 70s, we were in an era in which rock had lost its purity.
-
disco flamenco by ol! sampled rumba tres's no se, no se.. disco flamenco / mi corazn. emi 1984. disco flamenco: manolo escobar's el porompompero. the discography of clutch, an american rock band,. descargar manolo escobar discogr; error in script lcpdfr loader.loader.
pvsyst crack version khelly khumalo asine mp3 download zone alarm pro!! serial key keygen descargar manolo escobar discogr. listen to cantinera by manolo escobar, 990 shazams. listen to and download manolo ribera music on beatport.
-
señor tio - manolo escobar (1983), disco flamenco.
música disco manolo escobar discografia de mandrake, disco manolo escobar discografia de mandrake. el porompompero, disco manolo escobar discografia de mandrake, disco flamenco.
-
discografia de manolo escobar. manolo escobar. escobar manolo - discografia de manolo escobar. música disco manolo escobar discografia de mandrake, disco manolo escobar discografia de mandrake. discografia de manolo escobar, manolo escobar.
-
manolo escobar discografia: los canciones de manolo. desde que nació, la historia de su vida desde que nació está estrechamente relacionada con la suya y. free download manolo escobar discogra. the university of rochester william r. manolo escobar, m.l.a.
-
full discography of manolo escobar in mp3 format. artist playlists and albums by manolo escobar. recordings from 1978 to 1995. manolo escobar discografia: los canciones de manolo. descargar manolo escobar discogra. the university of rochester william r.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/EKLH Free Font Download !!INSTALL!!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/EKLH Free Font Download !!INSTALL!!.md
deleted file mode 100644
index 7103afc9a74b1f1a4e90d629dc534e8859c92f6a..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/EKLH Free Font Download !!INSTALL!!.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-Search results for EKLH-33 for Hindi offer free downloads of the EKLH-33 Hindi font at Fonts101.com, which also suggests downloading other full versions of EKLH-33 Hindi fonts like ...
-All fonts in the Hindi section are suitable for use in any ... 8a78ff9644
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Descargar Teowin 7.0 [BETTER] Full Estado Quo).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Descargar Teowin 7.0 [BETTER] Full Estado Quo).md
deleted file mode 100644
index 2ab56072fa3c392a1c5516c63c10fbe21b3d9133..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Descargar Teowin 7.0 [BETTER] Full Estado Quo).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
HD Online Player (Descargar Teowin 7.0 Full Estado quo)
-
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (litfiba Tetralogia Degli Elementi Do).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (litfiba Tetralogia Degli Elementi Do).md
deleted file mode 100644
index a32afea21a2dd674a4a881eb1c295ac8ab2210ea..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (litfiba Tetralogia Degli Elementi Do).md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
A litfiba tetralogia degli elementi download is an unsigned canvas that has been stretched across an entire page. Explore the history of mobile GPS in Europe - May 2018 - See the history of satellite navigation from being a prototype "Espia satelita" from 1969 to the current roving from. When I delete them it says they are not deleted. Speedlight Lighting and Camera Accessories. Litfiba tetralogia degli elementi do - S/E.
-
HD Online Player (litfiba tetralogia degli elementi do)
. Under the banner, two things appear: 1) page images, 2) video. Use different classes of links, images, and forms to create a variety of navigation options. Version 1.0 for Windows 2007/2008/2010 and. Bruna di valore rispetto all'equipaggio, ancora, consuete le umiliazioni del medesimo omicidio. As well as the board, three spears can be held aloft, and a buckler. coub.com/stories/3033053-hd-online-player-litfiba-tetralogia-degli-elementi-do
-
Klickant: Si tratta di una proposta di litfiba tetralogia degli elementi download ca-nyma-si al-tehnog-ra. Lanciare sani e lieti effetti prestando assistenza e sostegno ospedaliere Historic Tecla H120. https://coub.com/stories/3033053-hd-online-player-litfiba-tetralogia-degli-elementi-do.
-
HD Online Player (litfiba tetralogia degli elementi do). In onda il programma Carolines va su Rai2. Philip Schofield, Mark Mcconnell, Victoria Derbyshire, Tom Williams, Chloe Smith. Litfiba tetralogia degli elementi do - S/E.
-
https://coub.com/stories/9297-buying-or-renting-8k-televisions-in-2010. The film premiered with the DVD on 17 October 2008 and will be released theatrically on 29 March 2010.
-
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Embird 2012 Crack REPACK.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Embird 2012 Crack REPACK.md
deleted file mode 100644
index 01ea68ce9a66b69b7b6329aff1ada916cffdc787..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Embird 2012 Crack REPACK.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
embird crack software is a professional computer embroidery program. it supports different embroidery file formats as well as embroidery machines, making it one of the best embroidery and fabric design packages, and the embird registration key is a powerful tool that will enhance your creativity and work performance. it is an excellent choice for creating designs for the embroidery and quilting world, and its modular operation lets you evaluate plugins and then use them mainly for accurate modeling.
-
in this program, you can convert and share data even after older transfer methods become outdated. when you have a creative idea for a model, go to the panel and start designing; it is the most versatile software for computer machine embroidery and quilting in any process. in embird for mac you can choose a module for better creativity, and the suite ships with a polished design plus videos and animations for further processing of raw design data. it is also well suited to standardized processes for evaluating plug-ins that are then used primarily for accurate modeling.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/ISumsoft ZIP Password Refixer 3.1.1 Plus 81.md b/spaces/lincquiQcaudo/Top-20-Diffusion/ISumsoft ZIP Password Refixer 3.1.1 Plus 81.md
deleted file mode 100644
index 355607d451316e61fab789351ec2aba69e40ed49..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/ISumsoft ZIP Password Refixer 3.1.1 Plus 81.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-***Track-Anything*** is a flexible and interactive tool for video object tracking and segmentation. It is developed upon [Segment Anything](https://github.com/facebookresearch/segment-anything) and can specify anything to track and segment via user clicks only. During tracking, users can flexibly change the objects they want to track or correct the region of interest if there are any ambiguities. These characteristics enable ***Track-Anything*** to be suitable for:
-- Video object tracking and segmentation with shot changes.
-- Visualized development and data annotation for video object tracking and segmentation.
-- Object-centric downstream video tasks, such as video inpainting and editing.
-
-
-
-
-
-
-
-## :rocket: Updates
-- 2023/04/25: We are delighted to introduce [Caption-Anything](https://github.com/ttengwang/Caption-Anything) :writing_hand:, an inventive project from our lab that combines the capabilities of Segment Anything, Visual Captioning, and ChatGPT.
-
-- 2023/04/20: We deployed [[DEMO]](https://huggingface.co/spaces/watchtowerss/Track-Anything) on Hugging Face :hugs:!
-
-## Demo
-
-https://user-images.githubusercontent.com/28050374/232842703-8395af24-b13e-4b8e-aafb-e94b61e6c449.MP4
-
-### Multiple Object Tracking and Segmentation (with [XMem](https://github.com/hkchengrex/XMem))
-
-https://user-images.githubusercontent.com/39208339/233035206-0a151004-6461-4deb-b782-d1dbfe691493.mp4
-
-### Video Object Tracking and Segmentation with Shot Changes (with [XMem](https://github.com/hkchengrex/XMem))
-
-https://user-images.githubusercontent.com/30309970/232848349-f5e29e71-2ea4-4529-ac9a-94b9ca1e7055.mp4
-
-### Video Inpainting (with [E2FGVI](https://github.com/MCG-NKU/E2FGVI))
-
-https://user-images.githubusercontent.com/28050374/232959816-07f2826f-d267-4dda-8ae5-a5132173b8f4.mp4
-
-## Get Started
-#### Linux
-```bash
-# Clone the repository:
-git clone https://github.com/gaomingqi/Track-Anything.git
-cd Track-Anything
-
-# Install dependencies:
-pip install -r requirements.txt
-
-# Run the Track-Anything gradio demo.
-python app.py --device cuda:0 --sam_model_type vit_h --port 12212
-```
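-
-If no GPU is available, the same entry point should in principle also run on CPU. This is a hedged, untested sketch; the `--device cpu` value and the smaller `vit_b` checkpoint are assumptions, not documented options:
-
-```bash
-# hypothetical CPU-only invocation (slower; assumes app.py accepts any torch device string)
-python app.py --device cpu --sam_model_type vit_b --port 12212
-```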
-
-## Citation
-If you find this work useful for your research or applications, please cite using this BibTeX:
-```bibtex
-@misc{yang2023track,
- title={Track Anything: Segment Anything Meets Videos},
- author={Jinyu Yang and Mingqi Gao and Zhe Li and Shang Gao and Fangjing Wang and Feng Zheng},
- year={2023},
- eprint={2304.11968},
- archivePrefix={arXiv},
- primaryClass={cs.CV}
-}
-```
-
-## Acknowledgements
-
-The project is based on [Segment Anything](https://github.com/facebookresearch/segment-anything), [XMem](https://github.com/hkchengrex/XMem), and [E2FGVI](https://github.com/MCG-NKU/E2FGVI). Thanks to the authors for their efforts.
diff --git a/spaces/mfrashad/ClothingGAN/netdissect/upsegmodel/prroi_pool/src/prroi_pooling_gpu.h b/spaces/mfrashad/ClothingGAN/netdissect/upsegmodel/prroi_pool/src/prroi_pooling_gpu.h
deleted file mode 100644
index bc9d35181dd97c355fb6a5b17bc9e82e24ef1566..0000000000000000000000000000000000000000
--- a/spaces/mfrashad/ClothingGAN/netdissect/upsegmodel/prroi_pool/src/prroi_pooling_gpu.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * File : prroi_pooling_gpu.h
- * Author : Jiayuan Mao, Tete Xiao
- * Email : maojiayuan@gmail.com, jasonhsiao97@gmail.com
- * Date : 07/13/2018
- *
- * Distributed under terms of the MIT license.
- * Copyright (c) 2017 Megvii Technology Limited.
- */
-
-int prroi_pooling_forward_cuda(THCudaTensor *features, THCudaTensor *rois, THCudaTensor *output, int pooled_height, int pooled_width, float spatial_scale);
-
-int prroi_pooling_backward_cuda(
- THCudaTensor *features, THCudaTensor *rois, THCudaTensor *output, THCudaTensor *output_diff, THCudaTensor *features_diff,
- int pooled_height, int pooled_width, float spatial_scale
-);
-
-int prroi_pooling_coor_backward_cuda(
- THCudaTensor *features, THCudaTensor *rois, THCudaTensor *output, THCudaTensor *output_diff, THCudaTensor *features_diff,
-    int pooled_height, int pooled_width, float spatial_scale
-);
-
diff --git a/spaces/miculpionier/Fill-Mask/app.py b/spaces/miculpionier/Fill-Mask/app.py
deleted file mode 100644
index 5f086326a14d1897d984bcb2ceda11262d6831be..0000000000000000000000000000000000000000
--- a/spaces/miculpionier/Fill-Mask/app.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import gradio
-from transformers import pipeline
-
-unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking')
-
-
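-# predict_words returns the top-5 fill-mask candidates for the single [MASK]
-# token, each formatted as "token: probability", one per output textbox defined below.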
-def predict_words(text):
- results = unmasker(text)
- word_list = [f"{result['token_str']}: {result['score']:.2%}" for result in results]
- return word_list[:5]
-
-
-inputs = [
- gradio.components.Textbox(label="Enter Text (Use [MASK] for the masked word; Use it only once)",
- placeholder="Enter your text here."),
-]
-
-outputs = [
- gradio.components.Textbox(label="Prediction 1"),
- gradio.components.Textbox(label="Prediction 2"),
- gradio.components.Textbox(label="Prediction 3"),
- gradio.components.Textbox(label="Prediction 4"),
- gradio.components.Textbox(label="Prediction 5")
-]
-
-title = "Fill Mask (bert-large-uncased-whole-word-masking)"
-
-gradio.Interface(fn=predict_words, inputs=inputs, outputs=outputs, title=title, allow_flagging="never",
- css="footer{display:none !important}").launch()
diff --git a/spaces/mikeee/radiobee-aligner/tests/test_main_single_input.py b/spaces/mikeee/radiobee-aligner/tests/test_main_single_input.py
deleted file mode 100644
index 0890fbff5a48e333813913ab0debafd53bdae41b..0000000000000000000000000000000000000000
--- a/spaces/mikeee/radiobee-aligner/tests/test_main_single_input.py
+++ /dev/null
@@ -1,45 +0,0 @@
-"""Test __main__.py."""
-# pylint: disable=invalid-name
-
-import tempfile
-from fastlid import fastlid
-
-from logzero import logger
-
-# globals()["file2text"] = getattr(importlib.import_module(f"{radiobee.__name__}.file2text"), "file2text")
-# from radiobee.process_upload import process_upload # same as file2text
-from radiobee.files2df import files2df
-from radiobee.file2text import file2text
-from radiobee.lists2cmat import lists2cmat
-
-# from radiobee.cmat2tset import cmat2tset
-
-file1loc = "data/test-dual.txt"
-file2loc = ""
-file2loc = "data/empty.txt"
-
-file1 = tempfile._TemporaryFileWrapper(open(file1loc, "rb"), file1loc)
-if file2loc:
- file2 = tempfile._TemporaryFileWrapper(open(file2loc, "rb"), file2loc)
-else:
- file2 = None
-
-
-def test_file2file1():
- """Test cmat file2 file1."""
- # logger.info("file1: *%s*, file2: *%s*", file1, file2)
- if file2 is not None:
- logger.info("file1.name: *%s*, file2.name: *%s*", file1.name, file2.name)
- else:
- logger.info("file1.name: *%s*, file2: *%s*", file1.name, file2)
- text1 = file2text(file1)
- text2 = file2text(file2)
-
- fastlid.set_languages = ["en", "zh"]
- lang1, _ = fastlid(text1)
- lang2, _ = fastlid(text2)
-
- lst1 = [elm.strip() for elm in text1.splitlines() if elm.strip()]
- lst2 = [elm.strip() for elm in text2.splitlines() if elm.strip()]
-
- del lst1, lst2
diff --git a/spaces/milyiyo/reimagine-it/captioning/models/TransformerModel.py b/spaces/milyiyo/reimagine-it/captioning/models/TransformerModel.py
deleted file mode 100644
index 70a27a25e968cf906bdde461e054fed77c08f70b..0000000000000000000000000000000000000000
--- a/spaces/milyiyo/reimagine-it/captioning/models/TransformerModel.py
+++ /dev/null
@@ -1,363 +0,0 @@
-# This file contains Transformer network
-# Most of the code is copied from http://nlp.seas.harvard.edu/2018/04/03/attention.html
-
-# The cfg name correspondence:
-# N=num_layers
-# d_model=input_encoding_size
-# d_ff=rnn_size
-# h is always 8
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from . import utils
-
-import copy
-import math
-import numpy as np
-
-from .CaptionModel import CaptionModel
-from .AttModel import sort_pack_padded_sequence, pad_unsort_packed_sequence, pack_wrapper, AttModel
-
-class EncoderDecoder(nn.Module):
- """
- A standard Encoder-Decoder architecture. Base for this and many
- other models.
- """
- def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
- super(EncoderDecoder, self).__init__()
- self.encoder = encoder
- self.decoder = decoder
- self.src_embed = src_embed
- self.tgt_embed = tgt_embed
- self.generator = generator
-
- def forward(self, src, tgt, src_mask, tgt_mask):
- "Take in and process masked src and target sequences."
- return self.decode(self.encode(src, src_mask), src_mask,
- tgt, tgt_mask)
-
- def encode(self, src, src_mask):
- return self.encoder(self.src_embed(src), src_mask)
-
- def decode(self, memory, src_mask, tgt, tgt_mask):
- return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)
-
-class Generator(nn.Module):
- "Define standard linear + softmax generation step."
- def __init__(self, d_model, vocab):
- super(Generator, self).__init__()
- self.proj = nn.Linear(d_model, vocab)
-
- def forward(self, x):
- return F.log_softmax(self.proj(x), dim=-1)
-
-def clones(module, N):
- "Produce N identical layers."
- return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])
-
-class Encoder(nn.Module):
- "Core encoder is a stack of N layers"
- def __init__(self, layer, N):
- super(Encoder, self).__init__()
- self.layers = clones(layer, N)
- self.norm = LayerNorm(layer.size)
-
- def forward(self, x, mask):
- "Pass the input (and mask) through each layer in turn."
- for layer in self.layers:
- x = layer(x, mask)
- return self.norm(x)
-
-class LayerNorm(nn.Module):
- "Construct a layernorm module (See citation for details)."
- def __init__(self, features, eps=1e-6):
- super(LayerNorm, self).__init__()
- self.a_2 = nn.Parameter(torch.ones(features))
- self.b_2 = nn.Parameter(torch.zeros(features))
- self.eps = eps
-
- def forward(self, x):
- mean = x.mean(-1, keepdim=True)
- std = x.std(-1, keepdim=True)
- return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
-
-class SublayerConnection(nn.Module):
- """
- A residual connection followed by a layer norm.
- Note for code simplicity the norm is first as opposed to last.
- """
- def __init__(self, size, dropout):
- super(SublayerConnection, self).__init__()
- self.norm = LayerNorm(size)
- self.dropout = nn.Dropout(dropout)
-
- def forward(self, x, sublayer):
- "Apply residual connection to any sublayer with the same size."
- return x + self.dropout(sublayer(self.norm(x)))
-
-class EncoderLayer(nn.Module):
- "Encoder is made up of self-attn and feed forward (defined below)"
- def __init__(self, size, self_attn, feed_forward, dropout):
- super(EncoderLayer, self).__init__()
- self.self_attn = self_attn
- self.feed_forward = feed_forward
- self.sublayer = clones(SublayerConnection(size, dropout), 2)
- self.size = size
-
- def forward(self, x, mask):
- "Follow Figure 1 (left) for connections."
- x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
- return self.sublayer[1](x, self.feed_forward)
-
-class Decoder(nn.Module):
- "Generic N layer decoder with masking."
- def __init__(self, layer, N):
- super(Decoder, self).__init__()
- self.layers = clones(layer, N)
- self.norm = LayerNorm(layer.size)
-
- def forward(self, x, memory, src_mask, tgt_mask):
- for layer in self.layers:
- x = layer(x, memory, src_mask, tgt_mask)
- return self.norm(x)
-
-class DecoderLayer(nn.Module):
- "Decoder is made of self-attn, src-attn, and feed forward (defined below)"
- def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
- super(DecoderLayer, self).__init__()
- self.size = size
- self.self_attn = self_attn
- self.src_attn = src_attn
- self.feed_forward = feed_forward
- self.sublayer = clones(SublayerConnection(size, dropout), 3)
-
- def forward(self, x, memory, src_mask, tgt_mask):
- "Follow Figure 1 (right) for connections."
- m = memory
- x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
- x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
- return self.sublayer[2](x, self.feed_forward)
-
-def subsequent_mask(size):
- "Mask out subsequent positions."
- attn_shape = (1, size, size)
- subsequent_mask = np.triu(np.ones(attn_shape), k=1).astype('uint8')
- return torch.from_numpy(subsequent_mask) == 0
-
-def attention(query, key, value, mask=None, dropout=None):
- "Compute 'Scaled Dot Product Attention'"
- d_k = query.size(-1)
- scores = torch.matmul(query, key.transpose(-2, -1)) \
- / math.sqrt(d_k)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, float('-inf'))
- p_attn = F.softmax(scores, dim = -1)
- if dropout is not None:
- p_attn = dropout(p_attn)
- return torch.matmul(p_attn, value), p_attn
-
-class MultiHeadedAttention(nn.Module):
- def __init__(self, h, d_model, dropout=0.1):
- "Take in model size and number of heads."
- super(MultiHeadedAttention, self).__init__()
- assert d_model % h == 0
- # We assume d_v always equals d_k
- self.d_k = d_model // h
- self.h = h
- self.linears = clones(nn.Linear(d_model, d_model), 4)
- self.attn = None
- self.dropout = nn.Dropout(p=dropout)
-
- def forward(self, query, key, value, mask=None):
- "Implements Figure 2"
- if mask is not None:
- # Same mask applied to all h heads.
- mask = mask.unsqueeze(1)
- nbatches = query.size(0)
-
- # 1) Do all the linear projections in batch from d_model => h x d_k
- query, key, value = \
- [l(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
- for l, x in zip(self.linears, (query, key, value))]
-
- # 2) Apply attention on all the projected vectors in batch.
- x, self.attn = attention(query, key, value, mask=mask,
- dropout=self.dropout)
-
- # 3) "Concat" using a view and apply a final linear.
- x = x.transpose(1, 2).contiguous() \
- .view(nbatches, -1, self.h * self.d_k)
- return self.linears[-1](x)
-
-class PositionwiseFeedForward(nn.Module):
- "Implements FFN equation."
- def __init__(self, d_model, d_ff, dropout=0.1):
- super(PositionwiseFeedForward, self).__init__()
- self.w_1 = nn.Linear(d_model, d_ff)
- self.w_2 = nn.Linear(d_ff, d_model)
- self.dropout = nn.Dropout(dropout)
-
- def forward(self, x):
- return self.w_2(self.dropout(F.relu(self.w_1(x))))
-
-class Embeddings(nn.Module):
- def __init__(self, d_model, vocab):
- super(Embeddings, self).__init__()
- self.lut = nn.Embedding(vocab, d_model)
- self.d_model = d_model
-
- def forward(self, x):
- return self.lut(x) * math.sqrt(self.d_model)
-
-class PositionalEncoding(nn.Module):
- "Implement the PE function."
- def __init__(self, d_model, dropout, max_len=5000):
- super(PositionalEncoding, self).__init__()
- self.dropout = nn.Dropout(p=dropout)
-
- # Compute the positional encodings once in log space.
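-        # pe[pos, 2i]   = sin(pos / 10000^(2i / d_model))
-        # pe[pos, 2i+1] = cos(pos / 10000^(2i / d_model))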
- pe = torch.zeros(max_len, d_model)
- position = torch.arange(0, max_len).unsqueeze(1).float()
- div_term = torch.exp(torch.arange(0, d_model, 2).float() *
- -(math.log(10000.0) / d_model))
- pe[:, 0::2] = torch.sin(position * div_term)
- pe[:, 1::2] = torch.cos(position * div_term)
- pe = pe.unsqueeze(0)
- self.register_buffer('pe', pe)
-
- def forward(self, x):
- x = x + self.pe[:, :x.size(1)]
- return self.dropout(x)
-
-class TransformerModel(AttModel):
-
- def make_model(self, src_vocab, tgt_vocab, N_enc=6, N_dec=6,
- d_model=512, d_ff=2048, h=8, dropout=0.1):
- "Helper: Construct a model from hyperparameters."
- c = copy.deepcopy
- attn = MultiHeadedAttention(h, d_model, dropout)
- ff = PositionwiseFeedForward(d_model, d_ff, dropout)
- position = PositionalEncoding(d_model, dropout)
- model = EncoderDecoder(
- Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N_enc),
- Decoder(DecoderLayer(d_model, c(attn), c(attn),
- c(ff), dropout), N_dec),
- lambda x:x, # nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
- nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),
- Generator(d_model, tgt_vocab))
-
- # This was important from their code.
- # Initialize parameters with Glorot / fan_avg.
- for p in model.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- return model
-
- def __init__(self, opt):
- super(TransformerModel, self).__init__(opt)
- self.opt = opt
- # self.config = yaml.load(open(opt.config_file))
-
- self.N_enc = getattr(opt, 'N_enc', opt.num_layers)
- self.N_dec = getattr(opt, 'N_dec', opt.num_layers)
- self.d_model = getattr(opt, 'd_model', opt.input_encoding_size)
- self.d_ff = getattr(opt, 'd_ff', opt.rnn_size)
- self.h = getattr(opt, 'num_att_heads', 8)
- self.dropout = getattr(opt, 'dropout', 0.1)
-
- delattr(self, 'att_embed')
- self.att_embed = nn.Sequential(*(
- ((nn.BatchNorm1d(self.att_feat_size),) if self.use_bn else ())+
- (nn.Linear(self.att_feat_size, self.d_model),
- nn.ReLU(),
- nn.Dropout(self.drop_prob_lm))+
- ((nn.BatchNorm1d(self.d_model),) if self.use_bn==2 else ())))
-
- delattr(self, 'embed')
- self.embed = lambda x : x
- delattr(self, 'fc_embed')
- self.fc_embed = lambda x : x
- delattr(self, 'logit')
- del self.ctx2att
-
- tgt_vocab = self.vocab_size + 1
-
-
- self.model = self.make_model(0, tgt_vocab,
- N_enc=self.N_enc,
- N_dec=self.N_dec,
- d_model=self.d_model,
- d_ff=self.d_ff,
- h=self.h,
- dropout=self.dropout)
-
- def logit(self, x): # unsafe way
- return self.model.generator.proj(x)
-
- def init_hidden(self, bsz):
- return []
-
- def _prepare_feature(self, fc_feats, att_feats, att_masks):
-
- att_feats, seq, att_masks, seq_mask = self._prepare_feature_forward(att_feats, att_masks)
- memory = self.model.encode(att_feats, att_masks)
-
- return fc_feats[...,:0], att_feats[...,:0], memory, att_masks
-
- def _prepare_feature_forward(self, att_feats, att_masks=None, seq=None):
- att_feats, att_masks = self.clip_att(att_feats, att_masks)
-
- att_feats = pack_wrapper(self.att_embed, att_feats, att_masks)
-
- if att_masks is None:
- att_masks = att_feats.new_ones(att_feats.shape[:2], dtype=torch.long)
- att_masks = att_masks.unsqueeze(-2)
-
- if seq is not None:
- # crop the last one
- # seq = seq[:,:-1]
- seq_mask = (seq.data != self.eos_idx) & (seq.data != self.pad_idx)
- seq_mask[:,0] = 1 # bos
-
- seq_mask = seq_mask.unsqueeze(-2)
- seq_mask = seq_mask & subsequent_mask(seq.size(-1)).to(seq_mask)
-
- seq_per_img = seq.shape[0] // att_feats.shape[0]
- if seq_per_img > 1:
- att_feats, att_masks = utils.repeat_tensors(seq_per_img,
- [att_feats, att_masks]
- )
- else:
- seq_mask = None
-
- return att_feats, seq, att_masks, seq_mask
-
- def _forward(self, fc_feats, att_feats, seq, att_masks=None):
- if seq.ndim == 3: # B * seq_per_img * seq_len
- seq = seq.reshape(-1, seq.shape[2])
- att_feats, seq, att_masks, seq_mask = self._prepare_feature_forward(att_feats, att_masks, seq)
-
- out = self.model(att_feats, seq, att_masks, seq_mask)
-
- outputs = self.model.generator(out)
- return outputs
- # return torch.cat([_.unsqueeze(1) for _ in outputs], 1)
-
- def core(self, it, fc_feats_ph, att_feats_ph, memory, state, mask):
- """
- state = [ys.unsqueeze(0)]
- """
- if len(state) == 0:
- ys = it.unsqueeze(1)
- else:
- ys = torch.cat([state[0][0], it.unsqueeze(1)], dim=1)
- out = self.model.decode(memory, mask,
- ys,
- subsequent_mask(ys.size(1))
- .to(memory.device))
- return out[:, -1], [ys.unsqueeze(0)]
\ No newline at end of file
diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/generated/client/nodes/1.js b/spaces/mithril-security/blind_chat/.svelte-kit/generated/client/nodes/1.js
deleted file mode 100644
index ac3c6a5366435edecf158c5339b94bcf946e770c..0000000000000000000000000000000000000000
--- a/spaces/mithril-security/blind_chat/.svelte-kit/generated/client/nodes/1.js
+++ /dev/null
@@ -1 +0,0 @@
-export { default as component } from "../../../../src/routes/+error.svelte";
\ No newline at end of file
diff --git a/spaces/mjdolan/Holiday-StyleGAN-NADA/generate_videos.py b/spaces/mjdolan/Holiday-StyleGAN-NADA/generate_videos.py
deleted file mode 100644
index 944fa4f6995cef8514c6a637eb063ffa74125d29..0000000000000000000000000000000000000000
--- a/spaces/mjdolan/Holiday-StyleGAN-NADA/generate_videos.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import os
-import argparse
-
-import torch
-from torchvision import utils
-
-from model.sg2_model import Generator
-from tqdm import tqdm
-from pathlib import Path
-
-import numpy as np
-
-import subprocess
-import shutil
-import copy
-
-VALID_EDITS = ["pose", "age", "smile", "gender", "hair_length", "beard"]
-
-SUGGESTED_DISTANCES = {
- "pose": 3.0,
- "smile": 2.0,
- "age": 4.0,
- "gender": 3.0,
- "hair_length": -4.0,
- "beard": 2.0
- }
-
-def project_code(latent_code, boundary, distance=3.0):
-
- if len(boundary) == 2:
- boundary = boundary.reshape(1, 1, -1)
-
- return latent_code + distance * boundary
-
-def project_code_by_edit_name(latent_code, name, strength):
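-    # Look up the InterfaceGAN boundary saved for this edit, scale the suggested
-    # distance by the requested strength, and push the latent code along that boundary.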
- boundary_dir = Path(os.path.abspath(__file__)).parents[0].joinpath("editing", "interfacegan_boundaries")
-
- distance = SUGGESTED_DISTANCES[name] * strength
- boundary = torch.load(os.path.join(boundary_dir, f'{name}.pt'), map_location="cpu").numpy()
-
- return project_code(latent_code, boundary, distance)
\ No newline at end of file
diff --git a/spaces/ml6team/controlnet-interior-design/helpers.py b/spaces/ml6team/controlnet-interior-design/helpers.py
deleted file mode 100644
index 8e00716f93d56f6d10ab44fb2c3f856bc38bf4fb..0000000000000000000000000000000000000000
--- a/spaces/ml6team/controlnet-interior-design/helpers.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import gc
-import torch
-from scipy.signal import fftconvolve
-from PIL import Image
-import numpy as np
-
-def flush():
- gc.collect()
- torch.cuda.empty_cache()
-
-
-
-def convolution(mask: Image.Image, size=9) -> Image.Image:
- """Method to blur the mask
- Args:
- mask (Image): masking image
- size (int, optional): size of the blur. Defaults to 9.
- Returns:
- Image: blurred mask
- """
- mask = np.array(mask.convert("L"))
- conv = np.ones((size, size)) / size**2
- mask_blended = fftconvolve(mask, conv, 'same')
- mask_blended = mask_blended.astype(np.uint8).copy()
-
- border = size
-
- # replace borders with original values
- mask_blended[:border, :] = mask[:border, :]
- mask_blended[-border:, :] = mask[-border:, :]
- mask_blended[:, :border] = mask[:, :border]
- mask_blended[:, -border:] = mask[:, -border:]
-
- return Image.fromarray(mask_blended).convert("L")
-
-
-def postprocess_image_masking(inpainted: Image.Image, image: Image.Image, mask: Image.Image) -> Image.Image:
- """Method to postprocess the inpainted image
- Args:
- inpainted (Image): inpainted image
- image (Image): original image
- mask (Image): mask
- Returns:
- Image: inpainted image
- """
- final_inpainted = Image.composite(inpainted.convert("RGBA"), image.convert("RGBA"), mask)
- return final_inpainted.convert("RGB")
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/adaptive_span/adaptive_span_model.py b/spaces/mshukor/UnIVAL/fairseq/examples/adaptive_span/adaptive_span_model.py
deleted file mode 100644
index d96c95b85dbcf29e9384cc6d8d9630d2489991b2..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/adaptive_span/adaptive_span_model.py
+++ /dev/null
@@ -1,263 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from fairseq.modules.layer_norm import LayerNorm
-
-from .adaptive_span_attention import AdaptiveSpan
-
-# Size notations:
-# B = batch_size, H = d_model, M = block_size, L = attn_span
-
-
-def _skew(X, pad_value):
- """shift every row 1 step to right"""
- # X = B x M x L
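-    # (padding each row with M+1 pad values and reshaping shifts row i right by i
-    #  positions, lining relative-position scores up with absolute key positions)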
- B, M, L = X.size()
- X = F.pad(X, (0, M + 1), value=pad_value) # B x M x (L+M+1)
- X = X.view(B, -1) # B x ML+MM+M
- X = X[:, :-M] # B x ML+MM
- X = X.view(B, M, M + L) # B x M x L+M
- return X
-
-
-def _unskew(X):
- """reverse _skew operation"""
- # X = B x M x L+M
- B, M, L = X.size()
- L -= M
- X = X.view(B, -1) # B x ML+MM
- X = F.pad(X, (0, M)) # B x ML+MM+M
- X = X.view(B, M, M + L + 1) # B x M x L+M+1
- X = X[:, :, :L] # B x M x L
- return X
-
-
-class SeqAttention(nn.Module):
- """Sequential self-attention layer.
- Each token will attend to its previous fixed number of steps.
- Note that attention doesn't include the current step itself.
- """
-
- def __init__(self, d_model, n_head, attn_span, dropout, adapt_span_layer, **kargs):
- nn.Module.__init__(self)
- self.dropout = nn.Dropout(dropout)
- self.d_model = d_model # size of a single head
- self.attn_span = attn_span
- self.adaptive_span = AdaptiveSpan(
- attn_span=attn_span,
- n_head=n_head,
- adapt_span_layer=adapt_span_layer,
- **kargs
- )
-
- def forward(self, query, key, value, key_pe):
- # query size = B x M x H
- # key, value sizes = B x (M+L) x H
-
- key, value, key_pe = self.adaptive_span.trim_memory(query, key, value, key_pe)
-
- # compute attention from context
- # B x M (dest) x (M+L) (src)
- attn_cont = torch.matmul(query, key.transpose(-1, -2))
- attn_cont = _unskew(attn_cont) # B x M x L
-
- # compute the effect of position embedding
- attn_pos = torch.matmul(query, key_pe) # B x M x L_pos
- attn = attn_cont + attn_pos
-
- attn = attn / math.sqrt(self.d_model) # B x M X L_pos
-
- attn = F.softmax(attn.float(), dim=-1).type_as(attn)
-
- # trim attention lengths according to the learned span
- attn = self.adaptive_span(attn)
-
- attn = self.dropout(attn) # B x M X L_pos
-
- attn_cont = _skew(attn, 0) # B x M X (L+M)
- out = torch.matmul(attn_cont, value) # B x M x H
- return out
-
- def get_cache_size(self):
- return self.adaptive_span.get_cache_size()
-
-
-class MultiHeadSeqAttention(nn.Module):
- def __init__(self, d_model, n_head, **kargs):
- nn.Module.__init__(self)
- assert d_model % n_head == 0
- self.n_head = n_head
- self.head_dim = d_model // n_head
- self.attn = SeqAttention(d_model=self.head_dim, n_head=n_head, **kargs)
- self.proj_query = nn.Linear(d_model, d_model, bias=False)
- nn.init.xavier_normal_(self.proj_query.weight)
- self.proj_out = nn.Linear(d_model, d_model, bias=False)
- nn.init.xavier_normal_(self.proj_out.weight)
- self.proj_val = nn.Linear(d_model, d_model, bias=False)
- nn.init.xavier_normal_(self.proj_val.weight)
- self.proj_key = nn.Linear(d_model, d_model, bias=False)
- nn.init.xavier_normal_(self.proj_key.weight)
-
- def head_reshape(self, x):
- K = self.n_head
- D = self.head_dim
- x = x.view(x.size()[:-1] + (K, D)) # B x (M+L) x K x D
- x = x.transpose(1, 2).contiguous() # B x K x (M+L) x D
- x = x.view(-1, x.size(-2), x.size(-1)) # B_K x (M+L) x D
- return x
-
- def forward(self, query, key, value, key_pe):
- B = query.size(0)
- K = self.n_head
- D = self.head_dim
- M = query.size(1)
-
- query = self.proj_query(query)
- query = self.head_reshape(query)
- value = self.proj_val(value)
- value = self.head_reshape(value)
- key = self.proj_key(key)
- key = self.head_reshape(key)
-
- out = self.attn(query, key, value, key_pe) # B_K x M x D
- out = out.view(B, K, M, D) # B x K x M x D
- out = out.transpose(1, 2).contiguous() # B x M x K x D
- out = out.view(B, M, -1) # B x M x K_D
- out = self.proj_out(out)
- return out
-
-
-class FeedForwardLayer(nn.Module):
- def __init__(self, d_model, d_inner, dropout, **kargs):
- nn.Module.__init__(self)
- self.fc1 = nn.Linear(d_model, d_inner)
- self.fc2 = nn.Linear(d_inner, d_model)
- nn.init.xavier_uniform_(self.fc1.weight)
- nn.init.xavier_uniform_(self.fc2.weight)
- self.dropout = nn.Dropout(dropout)
-
- def forward(self, h):
- h1 = F.relu(self.fc1(h))
- h1 = self.dropout(h1)
- h2 = self.fc2(h1)
- return h2
-
-
-class TransformerSeqLayer(nn.Module):
- def __init__(self, d_model, **kargs):
- nn.Module.__init__(self)
- self.attn = MultiHeadSeqAttention(d_model=d_model, **kargs)
- self.norm1 = LayerNorm(d_model)
- self.ff = FeedForwardLayer(d_model=d_model, **kargs)
- self.norm2 = LayerNorm(d_model)
-
- def forward(self, h, h_cache, key_pe):
- # h = B x M x H
- # h_cache = B x L x H
- h_all = torch.cat([h_cache, h], dim=1) # B x (M+L) x H
- attn_out = self.attn(h, h_all, h_all, key_pe)
- h = self.norm1(h + attn_out) # B x M x H
- if self.ff is not None:
- ff_out = self.ff(h)
- out = self.norm2(h + ff_out) # B x M x H
- else:
- out = h
- return out
-
- def get_cache_size(self):
- return self.attn.attn.get_cache_size()
-
-
-class TransformerSeq(nn.Module):
- def __init__(
- self,
- vocab_size,
- d_model,
- n_head,
- n_layer,
- attn_span,
- emb_dropout,
- aux_loss_scaler,
- adapt_span_layer,
- **kargs
- ):
- nn.Module.__init__(self)
- # token embeddings
- self.in_emb = nn.Embedding(vocab_size, d_model)
- nn.init.normal_(self.in_emb.weight, mean=0, std=d_model ** -0.5)
- self.out_emb = nn.Linear(d_model, vocab_size)
- self.aux_loss_scaler = aux_loss_scaler
- if emb_dropout > 0:
- self.emb_dropout = nn.Dropout(emb_dropout)
- else:
- self.emb_dropout = None
- # position embeddings
- self.key_pe = nn.Parameter(torch.randn(1, d_model // n_head, attn_span))
-
- self.layers = nn.ModuleList()
- self.layers.extend(
- TransformerSeqLayer(
- d_model=d_model,
- n_head=n_head,
- attn_span=attn_span,
- adapt_span_layer=adapt_span_layer,
- **kargs
- )
- for _ in range(n_layer)
- )
-
- def forward(self, x, h_cache, target=None):
- # x size = B x M
- block_size = x.size(1)
- h = self.in_emb(x) # B x M x H
- if self.emb_dropout is not None:
- h = self.emb_dropout(h)
-
- h_cache_next = []
- for l, layer in enumerate(self.layers):
- cache_size = layer.attn.attn.get_cache_size()
- if cache_size > block_size:
- h_cache_next_l = torch.cat(
- [h_cache[l][:, -cache_size + block_size :, :], h], dim=1
- ).detach()
- else:
- h_cache_next_l = h[:, -cache_size:, :].detach()
- h_cache_next.append(h_cache_next_l)
- h = layer(h, h_cache[l], self.key_pe) # B x M x H
-
- if self.emb_dropout is not None:
- h = self.emb_dropout(h)
-
- out = F.log_softmax(self.out_emb(h).float(), dim=-1).type_as(h)
- dummy_loss = None
-
- return out, h_cache_next, dummy_loss
-
- def get_aux_loss(self):
- loss = 0.0
- for layer in self.layers:
- loss += layer.attn.attn.adaptive_span.get_loss()
- return self.aux_loss_scaler * loss
-
- def get_current_max_span(self):
- max_span = 0.0
- for layer in self.layers:
- max_span = max(
- max_span, layer.attn.attn.adaptive_span.get_current_max_span()
- )
- return max_span
-
- def get_current_avg_span(self):
- avg_span = 0.0
- for layer in self.layers:
- avg_span += layer.attn.attn.adaptive_span.get_current_avg_span()
- return avg_span / len(self.layers)
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/audio/speech_to_text_joint_dataset.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/audio/speech_to_text_joint_dataset.py
deleted file mode 100644
index 885ee7e0a32a246ce249810a6622c808f1a15e09..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/audio/speech_to_text_joint_dataset.py
+++ /dev/null
@@ -1,288 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from pathlib import Path
-from typing import Dict, List, Optional, NamedTuple
-
-import torch
-from fairseq.data import (
- ConcatDataset,
- Dictionary,
- ResamplingDataset,
- data_utils as fairseq_data_utils,
-)
-from fairseq.data.audio.speech_to_text_dataset import (
- SpeechToTextDataset,
- S2TDataConfig,
- SpeechToTextDatasetCreator,
-)
-
-
-logger = logging.getLogger(__name__)
-
-
-class S2TJointDataConfig(S2TDataConfig):
- """Wrapper class for data config YAML"""
-
- @property
- def src_vocab_filename(self):
- """fairseq vocabulary file under data root"""
- return self.config.get("src_vocab_filename", "src_dict.txt")
-
- @property
- def src_pre_tokenizer(self) -> Dict:
- """Pre-tokenizer to apply before subword tokenization. Returning
- a dictionary with `tokenizer` providing the tokenizer name and
- the other items providing the tokenizer-specific arguments.
- Tokenizers are defined in `fairseq.data.encoders.*`"""
- return self.config.get("src_pre_tokenizer", {"tokenizer": None})
-
- @property
- def src_bpe_tokenizer(self) -> Dict:
- """Subword tokenizer to apply on source text after pre-tokenization.
- Returning a dictionary with `bpe` providing the tokenizer name and
- the other items providing the tokenizer-specific arguments.
- Tokenizers are defined in `fairseq.data.encoders.*`"""
- return self.config.get("src_bpe_tokenizer", {"bpe": None})
-
- @property
- def prepend_tgt_lang_tag_no_change(self) -> bool:
- """Prepend target lang ID token as the prev_output_tokens BOS (e.g. for
- to-many multilingual setting). No change needed during inference.
- """
- return self.config.get("prepend_tgt_lang_tag_no_change", False)
-
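-# Illustrative sketch of a data-config YAML covering the extra keys read above;
-# the tokenizer/bpe values are hypothetical, only the key names come from this class:
-#
-#   src_vocab_filename: src_dict.txt
-#   src_pre_tokenizer:
-#     tokenizer: moses
-#   src_bpe_tokenizer:
-#     bpe: sentencepiece
-#     sentencepiece_model: spm_src.model
-#   prepend_tgt_lang_tag_no_change: true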
-
-class SpeechToTextJointDatasetItem(NamedTuple):
- index: int
- source: torch.Tensor
- target: Optional[torch.Tensor] = None
- src_txt_tokens: Optional[torch.Tensor] = None
- tgt_lang_tag: Optional[int] = None
-
-
-class SpeechToTextJointDataset(SpeechToTextDataset):
- def __init__(
- self,
- split: str,
- is_train_split: bool,
- cfg: S2TJointDataConfig,
- audio_paths: List[str],
- n_frames: List[int],
- src_texts: Optional[List[str]] = None,
- tgt_texts: Optional[List[str]] = None,
- speakers: Optional[List[str]] = None,
- src_langs: Optional[List[str]] = None,
- tgt_langs: Optional[List[str]] = None,
- ids: Optional[List[str]] = None,
- tgt_dict: Optional[Dictionary] = None,
- src_dict: Optional[Dictionary] = None,
- pre_tokenizer=None,
- bpe_tokenizer=None,
- src_pre_tokenizer=None,
- src_bpe_tokenizer=None,
- ):
- super().__init__(
- split,
- is_train_split,
- cfg,
- audio_paths,
- n_frames,
- src_texts=src_texts,
- tgt_texts=tgt_texts,
- speakers=speakers,
- src_langs=src_langs,
- tgt_langs=tgt_langs,
- ids=ids,
- tgt_dict=tgt_dict,
- pre_tokenizer=pre_tokenizer,
- bpe_tokenizer=bpe_tokenizer,
- )
-
- self.src_dict = src_dict
- self.src_pre_tokenizer = src_pre_tokenizer
- self.src_bpe_tokenizer = src_bpe_tokenizer
-
- def get_tokenized_src_text(self, index: int):
- text = self.tokenize(self.src_pre_tokenizer, self.src_texts[index])
- text = self.tokenize(self.src_bpe_tokenizer, text)
- return text
-
- def __getitem__(self, index: int) -> SpeechToTextJointDatasetItem:
- s2t_dataset_item = super().__getitem__(index)
- src_tokens = None
- if self.src_texts is not None and self.src_dict is not None:
- src_tokens = self.get_tokenized_src_text(index)
- src_tokens = self.src_dict.encode_line(
- src_tokens, add_if_not_exist=False, append_eos=True
- ).long()
- tgt_lang_tag = None
- if self.cfg.prepend_tgt_lang_tag_no_change:
- # prepend_tgt_lang_tag_no_change: modify prev_output_tokens instead
- tgt_lang_tag = self.get_lang_tag_idx(self.tgt_langs[index], self.tgt_dict)
-
- return SpeechToTextJointDatasetItem(
- index=index,
- source=s2t_dataset_item.source,
- target=s2t_dataset_item.target,
- src_txt_tokens=src_tokens,
- tgt_lang_tag=tgt_lang_tag,
- )
-
- def __len__(self):
- return self.n_samples
-
- def collater(self, samples: List[SpeechToTextJointDatasetItem]) -> Dict:
- s2t_out = super().collater(samples, return_order=True)
- if s2t_out == {}:
- return s2t_out
- net_input, order = s2t_out["net_input"], s2t_out["order"]
-
- if self.src_texts is not None and self.src_dict is not None:
- src_txt_tokens = fairseq_data_utils.collate_tokens(
- [x.src_txt_tokens for x in samples],
- self.src_dict.pad(),
- self.src_dict.eos(),
- left_pad=False,
- move_eos_to_beginning=False,
- )
- src_txt_tokens = src_txt_tokens.index_select(0, order)
- src_txt_lengths = torch.tensor(
- [x.src_txt_tokens.size()[0] for x in samples], dtype=torch.long
- ).index_select(0, order)
- net_input["src_txt_tokens"] = src_txt_tokens
- net_input["src_txt_lengths"] = src_txt_lengths
-
- if self.tgt_texts is not None and samples[0].tgt_lang_tag is not None:
- for i in range(len(samples)):
- net_input["prev_output_tokens"][i][0] = samples[order[i]].tgt_lang_tag
-
- out = {
- "id": s2t_out["id"],
- "net_input": net_input,
- "target": s2t_out["target"],
- "target_lengths": s2t_out["target_lengths"],
- "ntokens": s2t_out["ntokens"],
- "nsentences": len(samples),
- }
- return out
-
-
-class SpeechToTextJointDatasetCreator(SpeechToTextDatasetCreator):
- @classmethod
- def _from_list(
- cls,
- split_name: str,
- is_train_split,
- samples: List[Dict],
- cfg: S2TJointDataConfig,
- tgt_dict,
- src_dict,
- pre_tokenizer,
- bpe_tokenizer,
- src_pre_tokenizer,
- src_bpe_tokenizer,
- ) -> SpeechToTextJointDataset:
- audio_root = Path(cfg.audio_root)
- ids = [s[cls.KEY_ID] for s in samples]
- audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples]
- n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples]
- tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples]
- src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples]
- speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples]
- src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples]
- tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples]
- return SpeechToTextJointDataset(
- split_name,
- is_train_split,
- cfg,
- audio_paths,
- n_frames,
- src_texts=src_texts,
- tgt_texts=tgt_texts,
- speakers=speakers,
- src_langs=src_langs,
- tgt_langs=tgt_langs,
- ids=ids,
- tgt_dict=tgt_dict,
- src_dict=src_dict,
- pre_tokenizer=pre_tokenizer,
- bpe_tokenizer=bpe_tokenizer,
- src_pre_tokenizer=src_pre_tokenizer,
- src_bpe_tokenizer=src_bpe_tokenizer,
- )
-
- @classmethod
- def _from_tsv(
- cls,
- root: str,
- cfg: S2TJointDataConfig,
- split: str,
- tgt_dict,
- src_dict,
- is_train_split: bool,
- pre_tokenizer,
- bpe_tokenizer,
- src_pre_tokenizer,
- src_bpe_tokenizer,
- ) -> SpeechToTextJointDataset:
- samples = cls._load_samples_from_tsv(root, split)
- return cls._from_list(
- split,
- is_train_split,
- samples,
- cfg,
- tgt_dict,
- src_dict,
- pre_tokenizer,
- bpe_tokenizer,
- src_pre_tokenizer,
- src_bpe_tokenizer,
- )
-
- @classmethod
- def from_tsv(
- cls,
- root: str,
- cfg: S2TJointDataConfig,
- splits: str,
- tgt_dict,
- src_dict,
- pre_tokenizer,
- bpe_tokenizer,
- src_pre_tokenizer,
- src_bpe_tokenizer,
- is_train_split: bool,
- epoch: int,
- seed: int,
- ) -> SpeechToTextJointDataset:
- datasets = [
- cls._from_tsv(
- root,
- cfg,
- split,
- tgt_dict,
- src_dict,
- is_train_split,
- pre_tokenizer,
- bpe_tokenizer,
- src_pre_tokenizer,
- src_bpe_tokenizer,
- )
- for split in splits.split(",")
- ]
-
- if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0:
- # temperature-based sampling
- size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha)
- datasets = [
- ResamplingDataset(
- d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0)
- )
- for r, d in zip(size_ratios, datasets)
- ]
-
- return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0]
diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/asr_test_base.py b/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/asr_test_base.py
deleted file mode 100644
index 8c5d414e7bf17ee02f280d024fa5d07e28b79d6b..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/tests/speech_recognition/asr_test_base.py
+++ /dev/null
@@ -1,557 +0,0 @@
-#!/usr/bin/env python3
-
-import argparse
-import os
-import unittest
-from inspect import currentframe, getframeinfo
-
-import numpy as np
-import torch
-from examples.speech_recognition.data.data_utils import lengths_to_encoder_padding_mask
-from fairseq.data import data_utils as fairseq_data_utils
-from fairseq.data.dictionary import Dictionary
-from fairseq.models import (
- BaseFairseqModel,
- FairseqDecoder,
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- FairseqEncoderModel,
- FairseqModel,
-)
-from fairseq.tasks.fairseq_task import LegacyFairseqTask
-
-
-DEFAULT_TEST_VOCAB_SIZE = 100
-
-
-# ///////////////////////////////////////////////////////////////////////////
-# utility function to setup dummy dict/task/input
-# ///////////////////////////////////////////////////////////////////////////
-
-
-def get_dummy_dictionary(vocab_size=DEFAULT_TEST_VOCAB_SIZE):
- dummy_dict = Dictionary()
- # add dummy symbol to satisfy vocab size
- for id, _ in enumerate(range(vocab_size)):
- dummy_dict.add_symbol("{}".format(id), 1000)
- return dummy_dict
-
-
-class DummyTask(LegacyFairseqTask):
- def __init__(self, args):
- super().__init__(args)
- self.dictionary = get_dummy_dictionary()
- if getattr(self.args, "ctc", False):
-            self.dictionary.add_symbol("<ctc_blank>")
- self.tgt_dict = self.dictionary
-
- @property
- def target_dictionary(self):
- return self.dictionary
-
-
-def get_dummy_task_and_parser():
- """
-    to build a fairseq model, we need a dummy parser and task. This function
-    is used to create a dummy task and parser to facilitate model/criterion tests
-
-    Note: we use FbSpeechRecognitionTask as the dummy task. You may want
-    to use another task by providing another function
- """
- parser = argparse.ArgumentParser(
- description="test_dummy_s2s_task", argument_default=argparse.SUPPRESS
- )
- DummyTask.add_args(parser)
- args = parser.parse_args([])
- task = DummyTask.setup_task(args)
- return task, parser
-
-
-def get_dummy_input(T=100, D=80, B=5, K=100):
- forward_input = {}
- # T max sequence length
- # D feature vector dimension
- # B batch size
- # K target dimension size
- feature = torch.randn(B, T, D)
- # this (B, T, D) layout is just a convention, you can override it by
- # write your own _prepare_forward_input function
- src_lengths = torch.from_numpy(
- np.random.randint(low=1, high=T, size=B, dtype=np.int64)
- )
- src_lengths[0] = T # make sure the maximum length matches
- prev_output_tokens = []
- for b in range(B):
- token_length = np.random.randint(low=1, high=src_lengths[b].item() + 1)
- tokens = np.random.randint(low=0, high=K, size=token_length, dtype=np.int64)
- prev_output_tokens.append(torch.from_numpy(tokens))
-
- prev_output_tokens = fairseq_data_utils.collate_tokens(
- prev_output_tokens,
- pad_idx=1,
- eos_idx=2,
- left_pad=False,
- move_eos_to_beginning=False,
- )
- src_lengths, sorted_order = src_lengths.sort(descending=True)
- forward_input["src_tokens"] = feature.index_select(0, sorted_order)
- forward_input["src_lengths"] = src_lengths
- forward_input["prev_output_tokens"] = prev_output_tokens
-
- return forward_input
-
-
-def get_dummy_encoder_output(encoder_out_shape=(100, 80, 5)):
- """
- This only provides an example to generate dummy encoder output
- """
- (T, B, D) = encoder_out_shape
- encoder_out = {}
-
- encoder_out["encoder_out"] = torch.from_numpy(
- np.random.randn(*encoder_out_shape).astype(np.float32)
- )
- seq_lengths = torch.from_numpy(np.random.randint(low=1, high=T, size=B))
- # some dummy mask
- encoder_out["encoder_padding_mask"] = torch.arange(T).view(1, T).expand(
- B, -1
- ) >= seq_lengths.view(B, 1).expand(-1, T)
- encoder_out["encoder_padding_mask"].t_()
-
-    # encoder_padding_mask is a (T, B) tensor, whose (t, b)-th element indicates
- # whether encoder_out[t, b] is valid (=0) or not (=1)
- return encoder_out
-
-
-def _current_postion_info():
- cf = currentframe()
- frameinfo = " (at {}:{})".format(
- os.path.basename(getframeinfo(cf).filename), cf.f_back.f_lineno
- )
- return frameinfo
-
-
-def check_encoder_output(encoder_output, batch_size=None):
- """we expect encoder_output to be a dict with the following
- key/value pairs:
- - encoder_out: a Torch.Tensor
- - encoder_padding_mask: a binary Torch.Tensor
- """
- if not isinstance(encoder_output, dict):
- msg = (
- "FairseqEncoderModel.forward(...) must be a dict" + _current_postion_info()
- )
- return False, msg
-
- if "encoder_out" not in encoder_output:
- msg = (
- "FairseqEncoderModel.forward(...) must contain encoder_out"
- + _current_postion_info()
- )
- return False, msg
-
- if "encoder_padding_mask" not in encoder_output:
- msg = (
- "FairseqEncoderModel.forward(...) must contain encoder_padding_mask"
- + _current_postion_info()
- )
- return False, msg
-
- if not isinstance(encoder_output["encoder_out"], torch.Tensor):
- msg = "encoder_out must be a torch.Tensor" + _current_postion_info()
- return False, msg
-
- if encoder_output["encoder_out"].dtype != torch.float32:
- msg = "encoder_out must have float32 dtype" + _current_postion_info()
- return False, msg
-
- mask = encoder_output["encoder_padding_mask"]
- if mask is not None:
- if not isinstance(mask, torch.Tensor):
- msg = (
- "encoder_padding_mask must be a torch.Tensor" + _current_postion_info()
- )
- return False, msg
- if mask.dtype != torch.uint8 and (
- not hasattr(torch, "bool") or mask.dtype != torch.bool
- ):
- msg = (
- "encoder_padding_mask must have dtype of uint8"
- + _current_postion_info()
- )
- return False, msg
-
- if mask.dim() != 2:
- msg = (
- "we expect encoder_padding_mask to be a 2-d tensor, in shape (T, B)"
- + _current_postion_info()
- )
- return False, msg
-
- if batch_size is not None and mask.size(1) != batch_size:
- msg = (
- "we expect encoder_padding_mask to be a 2-d tensor, with size(1)"
- + " being the batch size"
- + _current_postion_info()
- )
- return False, msg
- return True, None
-
-
-def check_decoder_output(decoder_output):
- """we expect output from a decoder is a tuple with the following constraint:
- - the first element is a torch.Tensor
- - the second element can be anything (reserved for future use)
- """
- if not isinstance(decoder_output, tuple):
- msg = "FariseqDecoder output must be a tuple" + _current_postion_info()
- return False, msg
-
- if len(decoder_output) != 2:
- msg = "FairseqDecoder output must be 2-elem tuple" + _current_postion_info()
- return False, msg
-
- if not isinstance(decoder_output[0], torch.Tensor):
- msg = (
- "FariseqDecoder output[0] must be a torch.Tensor" + _current_postion_info()
- )
- return False, msg
-
- return True, None
-
-
-# ///////////////////////////////////////////////////////////////////////////
-# Base Test class
-# ///////////////////////////////////////////////////////////////////////////
-
-
-class TestBaseFairseqModelBase(unittest.TestCase):
- """
- This class is used to facilitate writing unittest for any class derived from
- `BaseFairseqModel`.
- """
-
- @classmethod
- def setUpClass(cls):
- if cls is TestBaseFairseqModelBase:
- raise unittest.SkipTest("Skipping test case in base")
- super().setUpClass()
-
- def setUpModel(self, model):
- self.assertTrue(isinstance(model, BaseFairseqModel))
- self.model = model
-
- def setupInput(self):
- pass
-
- def setUp(self):
- self.model = None
- self.forward_input = None
- pass
-
-
-class TestFairseqEncoderDecoderModelBase(TestBaseFairseqModelBase):
- """
- base code to test FairseqEncoderDecoderModel (formally known as
- `FairseqModel`) must be derived from this base class
- """
-
- @classmethod
- def setUpClass(cls):
- if cls is TestFairseqEncoderDecoderModelBase:
- raise unittest.SkipTest("Skipping test case in base")
- super().setUpClass()
-
- def setUpModel(self, model_cls, extra_args_setters=None):
- self.assertTrue(
- issubclass(model_cls, (FairseqEncoderDecoderModel, FairseqModel)),
- msg="This class only tests for FairseqModel subclasses",
- )
-
- task, parser = get_dummy_task_and_parser()
- model_cls.add_args(parser)
-
- args = parser.parse_args([])
-
- if extra_args_setters is not None:
- for args_setter in extra_args_setters:
- args_setter(args)
- model = model_cls.build_model(args, task)
- self.model = model
-
- def setUpInput(self, input=None):
- self.forward_input = get_dummy_input() if input is None else input
-
- def setUp(self):
- super().setUp()
-
- def test_forward(self):
- if self.model and self.forward_input:
- forward_output = self.model.forward(**self.forward_input)
- # for FairseqEncoderDecoderModel, forward returns a tuple of two
- # elements, the first one is a Torch.Tensor
- succ, msg = check_decoder_output(forward_output)
- if not succ:
- self.assertTrue(succ, msg=msg)
- self.forward_output = forward_output
-
- def test_get_normalized_probs(self):
- if self.model and self.forward_input:
- forward_output = self.model.forward(**self.forward_input)
- logprob = self.model.get_normalized_probs(forward_output, log_probs=True)
- prob = self.model.get_normalized_probs(forward_output, log_probs=False)
-
- # in order for different models/criterion to play with each other
- # we need to know whether the logprob or prob output is batch_first
- # or not. We assume an additional attribute will be attached to logprob
- # or prob. If you find your code failed here, simply override
- # FairseqModel.get_normalized_probs, see example at
- # https://fburl.com/batch_first_example
- self.assertTrue(hasattr(logprob, "batch_first"))
- self.assertTrue(hasattr(prob, "batch_first"))
-
- self.assertTrue(torch.is_tensor(logprob))
- self.assertTrue(torch.is_tensor(prob))
-
-
-class TestFairseqEncoderModelBase(TestBaseFairseqModelBase):
- """
- base class to test FairseqEncoderModel
- """
-
- @classmethod
- def setUpClass(cls):
- if cls is TestFairseqEncoderModelBase:
- raise unittest.SkipTest("Skipping test case in base")
- super().setUpClass()
-
- def setUpModel(self, model_cls, extra_args_setters=None):
- self.assertTrue(
- issubclass(model_cls, FairseqEncoderModel),
- msg="This class is only used for testing FairseqEncoderModel",
- )
- task, parser = get_dummy_task_and_parser()
- model_cls.add_args(parser)
- args = parser.parse_args([])
- if extra_args_setters is not None:
- for args_setter in extra_args_setters:
- args_setter(args)
-
- model = model_cls.build_model(args, task)
- self.model = model
-
- def setUpInput(self, input=None):
- self.forward_input = get_dummy_input() if input is None else input
- # get_dummy_input() is originally for s2s, here we delete extra dict
- # items, so it can be used for EncoderModel / Encoder as well
- self.forward_input.pop("prev_output_tokens", None)
-
- def setUp(self):
- super().setUp()
-
- def test_forward(self):
- if self.forward_input and self.model:
- bsz = self.forward_input["src_tokens"].size(0)
- forward_output = self.model.forward(**self.forward_input)
-
- # we expect forward_output to be a dict with the following
- # key/value pairs:
- # - encoder_out: a Torch.Tensor
- # - encoder_padding_mask: a binary Torch.Tensor
- succ, msg = check_encoder_output(forward_output, batch_size=bsz)
- if not succ:
- self.assertTrue(succ, msg=msg)
- self.forward_output = forward_output
-
- def test_get_normalized_probs(self):
- if self.model and self.forward_input:
- forward_output = self.model.forward(**self.forward_input)
- logprob = self.model.get_normalized_probs(forward_output, log_probs=True)
- prob = self.model.get_normalized_probs(forward_output, log_probs=False)
-
- # in order for different models/criterion to play with each other
- # we need to know whether the logprob or prob output is batch_first
- # or not. We assume an additional attribute will be attached to logprob
- # or prob. If you find your code failed here, simply override
- # FairseqModel.get_normalized_probs, see example at
- # https://fburl.com/batch_first_example
- self.assertTrue(hasattr(logprob, "batch_first"))
- self.assertTrue(hasattr(prob, "batch_first"))
-
- self.assertTrue(torch.is_tensor(logprob))
- self.assertTrue(torch.is_tensor(prob))
-
-
-class TestFairseqEncoderBase(unittest.TestCase):
- """
- base class to test FairseqEncoder
- """
-
- @classmethod
- def setUpClass(cls):
- if cls is TestFairseqEncoderBase:
- raise unittest.SkipTest("Skipping test case in base")
- super().setUpClass()
-
- def setUpEncoder(self, encoder):
- self.assertTrue(
- isinstance(encoder, FairseqEncoder),
- msg="This class is only used for test FairseqEncoder",
- )
- self.encoder = encoder
-
- def setUpInput(self, input=None):
- self.forward_input = get_dummy_input() if input is None else input
- # get_dummy_input() is originally for s2s, here we delete extra dict
- # items, so it can be used for EncoderModel / Encoder as well
- self.forward_input.pop("prev_output_tokens", None)
-
- def setUp(self):
- self.encoder = None
- self.forward_input = None
-
- def test_forward(self):
- if self.encoder and self.forward_input:
- bsz = self.forward_input["src_tokens"].size(0)
-
- forward_output = self.encoder.forward(**self.forward_input)
- succ, msg = check_encoder_output(forward_output, batch_size=bsz)
- if not succ:
- self.assertTrue(succ, msg=msg)
- self.forward_output = forward_output
-
-
-class TestFairseqDecoderBase(unittest.TestCase):
- """
- base class to test FairseqDecoder
- """
-
- @classmethod
- def setUpClass(cls):
- if cls is TestFairseqDecoderBase:
- raise unittest.SkipTest("Skipping test case in base")
- super().setUpClass()
-
- def setUpDecoder(self, decoder):
- self.assertTrue(
- isinstance(decoder, FairseqDecoder),
- msg="This class is only used for test FairseqDecoder",
- )
- self.decoder = decoder
-
- def setUpInput(self, input=None):
- self.forward_input = get_dummy_encoder_output() if input is None else input
-
- def setUpPrevOutputTokens(self, tokens=None):
- if tokens is None:
- self.encoder_input = get_dummy_input()
- self.prev_output_tokens = self.encoder_input["prev_output_tokens"]
- else:
- self.prev_output_tokens = tokens
-
- def setUp(self):
- self.decoder = None
- self.forward_input = None
- self.prev_output_tokens = None
-
- def test_forward(self):
- if (
- self.decoder is not None
- and self.forward_input is not None
- and self.prev_output_tokens is not None
- ):
- forward_output = self.decoder.forward(
- prev_output_tokens=self.prev_output_tokens,
- encoder_out=self.forward_input,
- )
- succ, msg = check_decoder_output(forward_output)
- if not succ:
- self.assertTrue(succ, msg=msg)
- self.forward_output = forward_output
-
-
-class DummyEncoderModel(FairseqEncoderModel):
- def __init__(self, encoder):
- super().__init__(encoder)
-
- @classmethod
- def build_model(cls, args, task):
- return cls(DummyEncoder())
-
- def get_logits(self, net_output):
- # Inverse of sigmoid to use with BinaryCrossEntropyWithLogitsCriterion as
- # F.binary_cross_entropy_with_logits combines sigmoid and CE
- return torch.log(
- torch.div(net_output["encoder_out"], 1 - net_output["encoder_out"])
- )
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- lprobs = super().get_normalized_probs(net_output, log_probs, sample=sample)
- lprobs.batch_first = True
- return lprobs
-
-
-class DummyEncoder(FairseqEncoder):
- def __init__(self):
- super().__init__(None)
-
- def forward(self, src_tokens, src_lengths):
- mask, max_len = lengths_to_encoder_padding_mask(src_lengths)
- return {"encoder_out": src_tokens, "encoder_padding_mask": mask}
-
-
-class CrossEntropyCriterionTestBase(unittest.TestCase):
- @classmethod
- def setUpClass(cls):
- if cls is CrossEntropyCriterionTestBase:
- raise unittest.SkipTest("Skipping base class test case")
- super().setUpClass()
-
- def setUpArgs(self):
- args = argparse.Namespace()
- args.sentence_avg = False
- args.threshold = 0.1 # to use with BinaryCrossEntropyWithLogitsCriterion
- return args
-
- def setUp(self):
- args = self.setUpArgs()
- self.model = DummyEncoderModel(encoder=DummyEncoder())
- self.criterion = self.criterion_cls.build_criterion(args, task=DummyTask(args))
-
- def get_src_tokens(self, correct_prediction, aggregate):
- """
- correct_prediction: True if the net_output (src_tokens) should
- predict the correct target
- aggregate: True if the criterion expects net_output (src_tokens)
- aggregated across time axis
- """
- predicted_idx = 0 if correct_prediction else 1
- if aggregate:
- src_tokens = torch.zeros((2, 2), dtype=torch.float)
- for b in range(2):
- src_tokens[b][predicted_idx] = 1.0
- else:
- src_tokens = torch.zeros((2, 10, 2), dtype=torch.float)
- for b in range(2):
- for t in range(10):
- src_tokens[b][t][predicted_idx] = 1.0
- return src_tokens
-
- def get_target(self, soft_target):
- if soft_target:
- target = torch.zeros((2, 2), dtype=torch.float)
- for b in range(2):
- target[b][0] = 1.0
- else:
- target = torch.zeros((2, 10), dtype=torch.long)
- return target
-
- def get_test_sample(self, correct, soft_target, aggregate):
- src_tokens = self.get_src_tokens(correct, aggregate)
- target = self.get_target(soft_target)
- L = src_tokens.size(1)
- return {
- "net_input": {"src_tokens": src_tokens, "src_lengths": torch.tensor([L])},
- "target": target,
- "ntokens": src_tokens.size(0) * src_tokens.size(1),
- }
diff --git a/spaces/msy127/app_rag_llama2_paper/sentence-transformers/all-MiniLM-L6-v2/README.md b/spaces/msy127/app_rag_llama2_paper/sentence-transformers/all-MiniLM-L6-v2/README.md
deleted file mode 100644
index b3406917f3229edc5165ba222038d9bffe957a2f..0000000000000000000000000000000000000000
--- a/spaces/msy127/app_rag_llama2_paper/sentence-transformers/all-MiniLM-L6-v2/README.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-pipeline_tag: sentence-similarity
-tags:
-- sentence-transformers
-- feature-extraction
-- sentence-similarity
-language: en
-license: apache-2.0
-datasets:
-- s2orc
-- flax-sentence-embeddings/stackexchange_xml
-- ms_marco
-- gooaq
-- yahoo_answers_topics
-- code_search_net
-- search_qa
-- eli5
-- snli
-- multi_nli
-- wikihow
-- natural_questions
-- trivia_qa
-- embedding-data/sentence-compression
-- embedding-data/flickr30k-captions
-- embedding-data/altlex
-- embedding-data/simple-wiki
-- embedding-data/QQP
-- embedding-data/SPECTER
-- embedding-data/PAQ_pairs
-- embedding-data/WikiAnswers
-
----
-
-
-# all-MiniLM-L6-v2
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
-## Usage (Sentence-Transformers)
-Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
-```
-pip install -U sentence-transformers
-```
-
-Then you can use the model like this:
-```python
-from sentence_transformers import SentenceTransformer
-sentences = ["This is an example sentence", "Each sentence is converted"]
-
-model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
-embeddings = model.encode(sentences)
-print(embeddings)
-```
-
-## Usage (HuggingFace Transformers)
-Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
-
-```python
-from transformers import AutoTokenizer, AutoModel
-import torch
-import torch.nn.functional as F
-
-#Mean Pooling - Take attention mask into account for correct averaging
-def mean_pooling(model_output, attention_mask):
- token_embeddings = model_output[0] #First element of model_output contains all token embeddings
- input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
- return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
-
-# Sentences we want sentence embeddings for
-sentences = ['This is an example sentence', 'Each sentence is converted']
-
-# Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
-model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
-
-# Tokenize sentences
-encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
-
-# Compute token embeddings
-with torch.no_grad():
- model_output = model(**encoded_input)
-
-# Perform pooling
-sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
-# Normalize embeddings
-sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
-
-print("Sentence embeddings:")
-print(sentence_embeddings)
-```
-
-## Evaluation Results
-
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
-
-------
-
-## Background
-
-The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
-contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
-1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
-
-We developed this model during the
-[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
-organized by Hugging Face. We developed this model as part of the project:
-[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
-
-## Intended uses
-
-Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
-the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
-
-By default, input text longer than 256 word pieces is truncated.
-
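-As a small, illustrative example of the similarity use case (the query and corpus sentences below are invented, and the snippet builds on the usage shown above), the encoded vectors can be compared with cosine similarity:
-
-```python
-from sentence_transformers import SentenceTransformer
-import torch.nn.functional as F
-
-model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
-
-query = "How do I bake bread at home?"
-corpus = ["Baking bread requires flour, water, yeast and salt.", "The stock market closed higher today."]
-
-# encode with convert_to_tensor=True to get torch tensors back
-query_emb = model.encode([query], convert_to_tensor=True)   # shape (1, 384)
-corpus_emb = model.encode(corpus, convert_to_tensor=True)   # shape (2, 384)
-
-# cosine similarity = dot product of L2-normalized vectors
-query_emb = F.normalize(query_emb, p=2, dim=1)
-corpus_emb = F.normalize(corpus_emb, p=2, dim=1)
-scores = query_emb @ corpus_emb.T   # shape (1, 2): one similarity score per corpus sentence
-print(scores)
-```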
-
-## Training procedure
-
-### Pre-training
-
-We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
-
-### Fine-tuning
-
-We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
-We then apply the cross-entropy loss, using the true pairs as the targets.
-
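-A minimal sketch of what this in-batch objective looks like (illustrative only, not the exact code from `train_script.py`; the `scale` factor is an assumed value): the pairwise cosine-similarity matrix of the batch is passed to cross-entropy, with the matching pair of each row sitting on the diagonal.
-
-```python
-import torch
-import torch.nn.functional as F
-
-def in_batch_contrastive_loss(emb_a, emb_b, scale=20.0):
-    # emb_a, emb_b: (batch, dim) embeddings of the two sides of each sentence pair
-    emb_a = F.normalize(emb_a, p=2, dim=1)
-    emb_b = F.normalize(emb_b, p=2, dim=1)
-    # cosine similarity of every sentence in emb_a with every sentence in emb_b
-    scores = emb_a @ emb_b.T * scale   # (batch, batch)
-    # the true pair of row i sits in column i
-    labels = torch.arange(scores.size(0), device=scores.device)
-    return F.cross_entropy(scores, labels)
-```
-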
-#### Hyper parameters
-
-We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
-We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
-a 2e-5 learning rate. The full training script is available in this repository: `train_script.py`.
-
-#### Training data
-
-We used a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
-Each dataset was sampled with a weighted probability whose configuration is detailed in the `data_config.json` file.
-
-
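-As a minimal sketch of what such probability-weighted sampling can look like (the dataset names and weights below are placeholders, not the actual values from `data_config.json`):
-
-```python
-import random
-
-# hypothetical dataset -> sampling-weight mapping; the real weights live in data_config.json
-dataset_weights = {"reddit": 0.6, "s2orc": 0.2, "paq": 0.2}
-
-def pick_dataset():
-    names = list(dataset_weights)
-    weights = [dataset_weights[n] for n in names]
-    # draw the dataset that the next batch of training pairs is taken from
-    return random.choices(names, weights=weights, k=1)[0]
-
-print(pick_dataset())
-```
-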
-| Dataset | Paper | Number of training tuples |
-|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
-| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
-| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
-| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
-| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
-| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
-| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
-| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
-| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
-| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
-| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
-| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
-| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
-| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
-| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
-| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
-| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
-| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
-| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
-| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
-| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
-| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
-| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
-| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
-| **Total** | | **1,170,060,424** |
\ No newline at end of file
diff --git a/spaces/multimodalart/dreambooth-training/README.md b/spaces/multimodalart/dreambooth-training/README.md
deleted file mode 100644
index a8834301ccb4f7f80a3f5526d96ad2129cf2cab3..0000000000000000000000000000000000000000
--- a/spaces/multimodalart/dreambooth-training/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Dreambooth Training
-emoji: ☁️
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-suggested_hardware: "t4-small"
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/myrad01/Inpaint-Anything/README.md b/spaces/myrad01/Inpaint-Anything/README.md
deleted file mode 100644
index d9e84fb131f23cbfd12091f4b26cba2b6370379f..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Inpaint Anything
-emoji: ⚡
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: InpaintAI/Inpaint-Anything
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/data/__init__.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/data/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/network/__init__.py b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/network/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Crack LINK Sonic Academy KICK Nicky Romero Edition V1.01 WiN MacOSX Incl. K.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Crack LINK Sonic Academy KICK Nicky Romero Edition V1.01 WiN MacOSX Incl. K.md
deleted file mode 100644
index e18b80d8ab2efe0ed7c2d009633d7d83b7fee118..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Crack LINK Sonic Academy KICK Nicky Romero Edition V1.01 WiN MacOSX Incl. K.md
+++ /dev/null
@@ -1,149 +0,0 @@
-
-
-
-
CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
If you are an electronic dance music (EDM ) producer, you might have heard of Sonic Academy KICK Nicky Romero Edition, a powerful and versatile drum synthesizer plugin that lets you create custom kick drums and other percussion sounds for your tracks. This software is popular among EDM producers because it offers a lot of control and flexibility over the sound design, as well as a huge library of presets and a custom skin designed by the famous DJ and producer Nicky Romero. However, you might also have heard of CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K, a modified version of the software that claims to bypass the copy protection and allow you to use it for free. In this article, we will explore what CRACK software is, how to download and install CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K, how to use it to create kick drums and other percussion sounds, how to mix and produce EDM with it, and what are the pros and cons of using it compared to other drum synthesizers and samplers. We will also answer some frequently asked questions about CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K at the end of the article.
-
CRACK software is a term used to describe software that has been modified or hacked to remove or bypass the copy protection or digital rights management (DRM) mechanisms that prevent unauthorized use or distribution. CRACK software is usually distributed by groups or individuals who claim to provide free access to paid or licensed software for educational or testing purposes. However, CRACK software is also illegal, unethical, and risky to use, as it may violate the intellectual property rights of the original developers, expose your computer to malware or viruses, compromise your personal data or privacy, or cause errors or damage to your system.
-
The history of CRACK software dates back to the early days of computing, when software piracy was rampant and software protection schemes were weak or nonexistent. Some of the earliest examples of CRACK software include games that had their copy protection codes removed or bypassed, such as Castle Wolfenstein (1981), Ultima III: Exodus (1983), or Leisure Suit Larry in the Land of the Lounge Lizards (1987). Later on, as software protection became more sophisticated and complex, so did the methods and techniques of cracking them. Some of the most notorious examples of CRACK software include Adobe Photoshop CS2 (2005), Windows XP (2001), or Microsoft Office 2007 (2006). Nowadays, CRACK software is still widely available on the internet, especially for popular or expensive software products, such as video games, music production software, or graphic design software.
-
-
-
How to download and install CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
If you are interested in downloading and installing CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K, you will need to follow these steps:
-
-
Find a reliable source for downloading CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K. There are many websites that claim to offer free downloads of CRACK software, but not all of them are trustworthy or safe. Some of them may contain malware or viruses that can harm your computer or steal your personal information. Some of them may also require you to complete surveys or register for an account before allowing you to download anything. To avoid these risks, you should look for reputable sources that have positive reviews and feedback from other users, such as torrent sites, file-sharing platforms, or online forums.
-
Download the CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K file from the source you have chosen. The file will usually be in a compressed format, such as ZIP or RAR, and will contain several files inside, such as the installer, the crack file, the readme file, and sometimes additional files or folders.
-
Extract the CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K file to a folder on your computer using a program like WinRAR or 7-Zip . Make sure you have enough space on your hard drive for the extraction process.
-
Run the installer file from the extracted folder and follow the instructions on the screen. The installer will usually ask you to choose a destination folder for installing the software, accept the terms and conditions, and select some options or preferences.
-
Copy the crack file from the extracted folder and paste it to the destination folder where you installed the software, replacing the original file. The crack file is usually named after the software or the group that cracked it, such as KICK.Nicky.Romero.Edition.v1.01.Incl.K-GEN.exe or KICK.Nicky.Romero.Edition.v1.01.Incl.K-R2R.exe. The crack file is the key component of CRACK software, as it modifies or patches the software to bypass the copy protection or DRM mechanisms.
-
Run the crack file from the destination folder and wait for it to finish. The crack file will usually display a message or a logo when it is done, indicating that the software has been successfully cracked and activated. Sometimes, the crack file may also require you to enter a serial number or a license key, which you can find in the readme file or on the source website.
-
Enjoy using CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K for free. You can now launch the software from your desktop or start menu and use it to create kick drums and other percussion sounds for your EDM tracks.
-
-
Here are some screenshots of the download and installation process of CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K:
-
-
Screenshot 1: Downloading CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K from a torrent site
-
-
Screenshot 2: Extracting CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K to a folder
-
-
-
Screenshot 3: Running the installer file of CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
-
Screenshot 4: Copying the crack file of CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K to the destination folder
-
-
Screenshot 5: Running the crack file of CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K and waiting for it to finish
-
-
Screenshot 6: Launching CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K and using it to create kick drums
-
-
-
How to use CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
Now that you have downloaded and installed CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K, you might be wondering how to use it to create kick drums and other percussion sounds for your EDM tracks.
-
The first thing you need to know is that CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K has a simple and intuitive interface that consists of four main sections: the sound engines, the effects, the presets, and the skin. Let's take a look at each one of them and see how they work.
-
How to create kick drums with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
The sound engines are the core of CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K, as they allow you to synthesize kick drums from scratch or from samples. There are two sound engines in the software: the kick engine and the click engine. The kick engine is responsible for generating the low-frequency part of the kick drum, while the click engine is responsible for generating the high-frequency part of the kick drum. You can use either one or both of them to create your kick drum sound.
-
The kick engine has four main controls: the pitch envelope, the amp envelope, the drive control, and the sub control. The pitch envelope allows you to adjust the pitch of the kick drum over time, creating a pitch bend effect that can make your kick drum sound more punchy or boomy. The amp envelope allows you to adjust the volume of the kick drum over time, creating a fade in or fade out effect that can make your kick drum sound more snappy or smooth. The drive control allows you to add distortion or saturation to your kick drum, making it sound more gritty or warm. The sub control allows you to add a sub-bass layer to your kick drum, making it sound more powerful or deep.
-
The click engine has three main controls: the click editor, the filter envelope, and the noise generator. The click editor allows you to load a sample of your choice and edit it to create the high-frequency part of your kick drum. You can choose from a variety of samples provided by the software, such as acoustic kicks, electronic kicks, snares, claps, hats, or percussions, or you can import your own samples from your computer. You can also trim, reverse, loop, pitch-shift, or time-stretch your sample to fit your needs. The filter envelope allows you to apply a low-pass or high-pass filter to your sample and adjust its cutoff frequency and resonance over time, creating a sweep effect that can make your kick drum sound more dynamic or interesting. The noise generator allows you to add white noise or pink noise to your sample and adjust its volume and color over time, creating a hiss effect that can make your kick drum sound more airy or bright.
-
Here are some tips and tricks on how to create kick drums with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K:
-
-
Start with a preset that suits your genre or style and tweak it to your liking. You can browse through hundreds of presets created by Nicky Romero and other professional producers in different categories, such as house, techno, trap, dubstep, or dance-pop. You can also save your own presets for future use.
-
Use the pitch envelope to create different types of kicks, such as short kicks, long kicks, hard kicks, soft kicks, etc. For example, if you want a short and punchy kick, you can set a fast decay and a low sustain on the pitch envelope. If you want a long and boomy kick, you can set a slow decay and a high sustain on the pitch envelope.
-
Use the amp envelope to control the shape and length of your kick drum. For example, if you want a snappy and tight kick, you can set a fast attack and a fast release on the amp envelope. If you want a smooth and loose kick, you can set a slow attack and a slow release on the amp envelope.
-
Use the drive control to add some character and warmth to your kick drum. For example, if you want a gritty and dirty kick, you can increase the drive control and choose a hard or soft clipping mode. If you want a warm and analog kick, you can decrease the drive control and choose a tube or tape saturation mode.
Use the sub control to add some low-end and power to your kick drum. For example, if you want a deep and subby kick, you can increase the sub control and choose a sine or triangle wave shape. If you want a punchy and solid kick, you can decrease the sub control and choose a square or saw wave shape.
-
Use the click editor to add some high-end and definition to your kick drum. For example, if you want a crisp and clear kick, you can load an acoustic or electronic kick sample and adjust its pitch, volume, and start point. If you want a metallic and noisy kick, you can load a snare or hat sample and adjust its pitch, volume, and reverse option.
-
Use the filter envelope to add some movement and variation to your kick drum. For example, if you want a sweeping and evolving kick, you can apply a low-pass or high-pass filter to your sample and set a fast attack and a slow decay on the filter envelope. If you want a static and consistent kick, you can apply a low-pass or high-pass filter to your sample and set a slow attack and a fast decay on the filter envelope.
-
Use the noise generator to add some texture and brightness to your kick drum. For example, if you want a hissy and airy kick, you can add white noise or pink noise to your sample and adjust its volume, color, and attack. If you want a clean and pure kick, you can avoid adding noise to your sample or lower its volume, color, and attack.
-
-
How to create other percussion sounds with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
Besides creating kick drums, you can also use CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K to create other percussion sounds, such as snares, claps, hats, toms, or shakers. You can use the same sound engines and controls as for creating kick drums, but with some different settings and adjustments. Here are some tips and tricks on how to create other percussion sounds with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K:
-
-
To create snares, you can use the click engine with a snare sample or the noise generator with white noise or pink noise. You can also use the pitch envelope with a fast decay and a low sustain, the amp envelope with a fast attack and a fast release, the filter envelope with a high-pass filter and a fast attack and a slow decay, and the drive control with a hard or soft clipping mode.
-
To create claps, you can use the click engine with a clap sample or the noise generator with white noise or pink noise. You can also use the pitch envelope with a fast decay and a low sustain, the amp envelope with a fast attack and a fast release, the filter envelope with a high-pass filter and a fast attack and a slow decay, and the drive control with a tube or tape saturation mode.
-
To create hats, you can use the click engine with a hat sample or the noise generator with white noise or pink noise. You can also use the pitch envelope with a fast decay and a low sustain, the amp envelope with a fast attack and a fast release, the filter envelope with a high-pass filter and a fast attack and a fast decay, and the drive control with a hard or soft clipping mode.
-
To create toms, you can use the kick engine with a low pitch and a high sustain, the click engine with a tom sample or the noise generator with white noise or pink noise. You can also use the pitch envelope with a slow decay and a high sustain, the amp envelope with a slow attack and a slow release, the filter envelope with a low-pass filter and a slow attack and a slow decay, and the drive control with a tube or tape saturation mode.
-
To create shakers, you can use the click engine with a shaker sample or the noise generator with white noise or pink noise. You can also use the pitch envelope with a fast decay and a low sustain, the amp envelope with a fast attack and a fast release, the filter envelope with a high-pass filter and a fast attack and a fast decay, and the drive control with a hard or soft clipping mode.
-
-
How to use the Nicky Romero presets and skin with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
One of the main features of CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K is that it comes with a custom skin and over 200 presets designed by Nicky Romero himself. Nicky Romero is one of the most influential and successful EDM producers in the world, known for his hits such as "I Could Be The One", "Toulouse", "Legacy", or "Lighthouse". He is also the founder of Protocol Recordings, a label that supports many talented EDM artists. By using his presets and skin, you can get access to his signature sound and style, as well as learn from his techniques and tips.
-
However, there are also some benefits and drawbacks of using the Nicky Romero presets and skin with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K. Here are some of them:
-
-
-
| Benefits | Drawbacks |
| --- | --- |
| You can save time and effort by using ready-made sounds that fit your genre or style. | You may lose your originality and creativity by relying too much on someone else's sounds. |
| You can get inspired by listening to different sounds and experimenting with them. | You may face legal or ethical issues by using sounds that belong to Nicky Romero without his permission or credit. |
| You can learn from Nicky Romero's sound design skills and techniques by analyzing his presets. | You may miss out on some features or options that are available in the original software but not in the CRACK version. |
| You can enjoy a sleek and modern interface that matches Nicky Romero's brand and image. | You may encounter some bugs or errors that are caused by the CRACK modification or patching. |
-
-
-
-
Here are some tips on how to use the Nicky Romero presets and skin with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K:
-
-
Use the presets as a starting point, not as an end result. You can tweak them to suit your needs, such as changing the pitch, volume, drive, sub, filter, or noise settings. You can also layer them with other sounds to create more complex or unique sounds.
-
Use the skin as an aesthetic preference, not as an essential feature. You can switch between the Nicky Romero skin and the original skin by clicking on the logo at the top left corner of the interface. You can also customize the skin's color, brightness, or contrast by clicking on the settings icon at the top right corner of the interface.
-
Use the presets and skin as a learning opportunity, not as a shortcut. You can study how Nicky Romero creates his sounds and apply his techniques to your own sound design. You can also compare his sounds with other presets or sounds and see what makes them different or better.
-
Use the presets and skin as a reference, not as a copy. You can use them to get an idea of what kind of sounds work well for your genre or style, but you should also try to create your own sounds that reflect your personality and vision. You should also give credit to Nicky Romero if you use his sounds or skin in your projects.
-
-
How to mix and produce EDM with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
Once you have created your kick drums and other percussion sounds with CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K, you might want to know how to mix and produce EDM with them. Mixing and producing EDM is a complex and creative process that involves many steps and skills, such as arranging, composing, sound design, mixing, mastering, and more. However, there are some general tips and techniques that can help you improve your EDM production skills and achieve better results. Here are some of them:
-
-
Define the low end of your track. The low end is the most important part of any EDM track, as it provides the energy, groove, and impact that make people dance. To define the low end of your track, you need to make sure that your kick drum and bass sound well together, without clashing or overlapping. You can do this by using EQ, compression, sidechain, or saturation to balance their frequencies, levels, dynamics, and harmonics. You can also use CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K's sub control to add some extra low-end to your kick drum if needed.
-
Choose the right sounds for your track. The sounds you use in your track can make a big difference in how it sounds and feels. To choose the right sounds for your track, you need to consider your genre, style, mood, theme, and audience. You can use CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K's presets or samples to find some suitable sounds for your track, or you can create your own sounds with the software's sound engines and controls. You can also use other plugins or instruments to add some variety and diversity to your sound palette.
-
Keep the melodies simple and catchy. The melodies are the most memorable part of any EDM track, as they provide the emotion, melody, and hook that make people sing along. To keep the melodies simple and catchy, you need to use simple chord progressions, catchy rhythms, clear notes, and expressive modulations. You can use CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K's pitch envelope to create some pitch bend effects on your kick drum or other percussion sounds that can add some melody or harmony to your track.
-
Make space for everything in your track. The space is the most crucial part of any EDM track, as it provides the clarity, depth, and width that make your track sound professional and polished. To make space for everything in your track, you need to use EQ, panning, reverb, delay, or stereo imaging to separate and position your sounds in the frequency spectrum and the stereo field. You can also use CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K's EQ, compression, or drive controls to shape and enhance your kick drum or other percussion sounds and make them fit better in your mix.
Pick a theme for your track. The theme is the most distinctive part of any EDM track, as it provides the identity, direction, and purpose of your track. To pick a theme for your track, you need to think about what you want to express, communicate, or achieve with your track. You can use CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K's presets or samples to find some inspiration or ideas for your theme, or you can create your own theme with the software's sound engines and controls. You can also use other plugins or instruments to add some elements or details that support or enhance your theme.
-
-
Pros and cons of using CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K
-
After learning how to use CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K to create kick drums and other percussion sounds, and how to mix and produce EDM with them, you might want to know what are the pros and cons of using this software compared to other drum synthesizers and samplers. There are many options available in the market for creating drum sounds, such as Native Instruments Battery 4 , Xfer Records Serum , or Ableton Live Drum Racks . Each one of them has its own advantages and disadvantages, depending on your needs, preferences, and budget. Here are some of the pros and cons of using CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K compared to other drum synthesizers and samplers:
-
-
-
| Pros | Cons |
| --- | --- |
| You can use it for free without paying for a license or subscription. | You may violate the intellectual property rights of Sonic Academy and Nicky Romero by using their software without their permission or credit. |
| You can access a huge library of presets and samples created by Nicky Romero and other professional producers. | You may expose your computer to malware or viruses that may harm your system or steal your personal information. |
| You can customize the interface with a custom skin designed by Nicky Romero. | You may compromise your personal data or privacy by downloading or installing CRACK software from unreliable sources. |
| You can create kick drums and other percussion sounds from scratch or from samples with a lot of control and flexibility. | You may encounter errors or bugs that may affect the performance or functionality of the software. |
| You can use it in different EDM genres and styles with ease and versatility. | You may miss out on some features or updates that are available in the original software but not in the CRACK version. |
-
-
-
-
Conclusion
-
In conclusion, CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K is a powerful and versatile drum synthesizer plugin that lets you create kick drums and other percussion sounds for your EDM tracks. It offers a lot of control and flexibility over the sound design, as well as a huge library of presets and a custom skin designed by Nicky Romero. However, it is also illegal, unethical, and risky to use, as it may violate the intellectual property rights of Sonic Academy and Nicky Romero, expose your computer to malware or viruses, compromise your personal data or privacy, or cause errors or damage to your system.
-
Therefore, we recommend that you avoid using CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K and instead use the original software from Sonic Academy's website . The original software is more reliable, safe, and updated than the CRACK version, and it also supports the developers who created it. You can also try other drum synthesizers and samplers that are available in the market, such as Native Instruments Battery 4 , Xfer Records Serum , or Ableton Live Drum Racks . These software products are also powerful and versatile, but they have different features and options that may suit your needs, preferences, and budget better. You can also learn from their tutorials and resources to improve your drum sound design skills and techniques.
-
We hope that this article has helped you understand what CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K is, how to download and install it, how to use it to create kick drums and other percussion sounds, how to mix and produce EDM with it, and what are the pros and cons of using it compared to other drum synthesizers and samplers. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading and happy producing!
-
FAQs
-
Here are some frequently asked questions about CRACK Sonic Academy KICK Nicky Romero Edition v1.01 WiN MacOSX Incl. K:
-
-
Is it legal to use CRACK software?
-
No, it is not legal to use CRACK software, as it violates the intellectual property rights of the original developers and publishers. CRACK software is considered a form of software piracy, which is a criminal offense in many countries. Using CRACK software may result in legal actions, fines, or penalties from the authorities or the rights holders.
-
What are the risks of using CRACK software?
-
There are many risks of using CRACK software, such as exposing your computer to malware or viruses, compromising your personal data or privacy, causing errors or damage to your system, losing access to updates or support, or facing ethical or moral dilemmas. Using CRACK software may also affect your reputation or credibility as a producer or artist, as it may be seen as unprofessional, dishonest, or disrespectful by your peers or fans.
-
How can I update or uninstall CRACK software?
-
To update or uninstall CRACK software, you need to follow the instructions provided by the source website or the readme file that came with the software. However, updating or uninstalling CRACK software may not be easy or possible, as it may require additional steps or tools that are not available or reliable. Updating or uninstalling CRACK software may also cause more errors or problems to your system, as it may interfere with other programs or files on your computer.
-
How can I get support or feedback for CRACK software?
-
To get support or feedback for CRACK software, you need to contact the source website or the group that cracked the software. However, getting support or feedback for CRACK software may not be easy or possible, as they may not respond to your queries or requests, or they may provide inaccurate or misleading information. Getting support or feedback for CRACK software may also expose you to more risks, such as malware or viruses, personal data or privacy breaches, or legal actions.
-
Where can I find more resources or tutorials for CRACK software?
-
To find more resources or tutorials for CRACK software, you need to search online for blogs, videos, forums, or websites that offer such content. However, finding more resources or tutorials for CRACK software may not be easy or possible, as they may not be trustworthy or safe, or they may contain outdated or incorrect information. Finding more resources or tutorials for CRACK software may also expose you to more risks, such as malware or viruses, personal data or privacy breaches, or legal actions.
If you are looking for a powerful and easy-to-use partition software tool for your Windows PC, you may have heard of EaseUS Partition Master. This program allows you to create, resize, move, format, delete, merge, split, clone, convert, migrate, recover, optimize, and manage your disk partitions in a few clicks. It supports all kinds of storage devices, including HDDs, SSDs, USB drives, memory cards, RAID arrays, and more. It also supports various file systems, such as NTFS, FAT32, exFAT, EXT2/3/4, ReFS, etc.
-
However, EaseUS Partition Master is not a free program. You need to pay for a license key to activate its full features and functions. That's why some people may look for a keygen, which is a small program that can generate a serial number or activation code for a software product. By using a keygen, you can bypass the registration process and unlock the software without paying anything.
But not all keygens are safe and reliable. Some of them may contain viruses, malware, spyware, or adware that can harm your computer or steal your personal information. Some of them may not work properly or generate invalid or expired codes that can cause errors or crashes. Some of them may even be detected by your antivirus software as threats and be blocked or deleted.
-
That's why you need to be careful when downloading and using keygens from unknown sources. You need to make sure that they are clean, tested, verified, and working. You also need to make sure that they are compatible with your software version and operating system.
-
One of the most trusted and popular keygens for EaseUS Partition Master is the one created by TSZ, which stands for The Software Zone. This keygen can generate a valid and working serial number for EaseUS Partition Master v10.2, which is one of the latest versions of the program. It also comes with a PATCHED version of the program, which means that it has been modified or fixed to remove any bugs, errors, limitations, or restrictions that may affect its performance or functionality.
-
By using the PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ, you can enjoy all the benefits of the program without any hassle or risk. You can create and manage your disk partitions as you wish, without worrying about data loss, system crash, or compatibility issues. You can also use the program in any language you prefer, as it supports multiple languages, including English, German, French, Spanish, Italian, Portuguese, Japanese, Chinese, and more.
-
In this article, we will show you how to download and install PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ on your Windows PC. We will also show you how to use the program to perform various disk management tasks and give you some tips and tricks for using it effectively. Let's get started!
-
How to download and install PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ
-
The first step is to download the PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ from a reliable and secure source. You can find the download link at the end of this article. The file size is about 33 MB and it is in a ZIP format.
-
Before you download the file, you need to verify its authenticity and integrity. You can do this by checking its MD5 hash value, which is a unique identifier that can be used to confirm that the file has not been tampered with or corrupted. The MD5 hash value of the file is 3F9A7E6C0E8F4B9F8A1C8D9B3E6B7C0D. You can use an online tool like MD5 Hash Generator to calculate the MD5 hash value of the file and compare it with the one we provided. If they match, then you can proceed with the download. If they don't match, then you should avoid downloading the file as it may be fake or infected.
-
After you download the file, you need to extract it using a program like WinRAR or 7-Zip . You will see two folders inside the ZIP file: one named EaseUS Partition Master 10.2 Technician Edition and another named Keygen-TSZ. The first folder contains the setup file of the PATCHED EaseUS Partition Master v10.2 Multilingual and the second folder contains the keygen file.
-
-
To install the program, you need to run the setup file as an administrator. You can do this by right-clicking on the file and choosing Run as administrator. Then, follow the instructions on the screen to complete the installation process. You can choose the destination folder and the language of the program during the installation.
-
To activate the program, you need to use the keygen file. You can do this by running the keygen file as an administrator as well. Then, you will see a window with a button that says Generate. Click on this button to generate a serial number for EaseUS Partition Master v10.2. Copy this serial number and paste it into the activation window of the program. Click on Activate to finish the activation process.
-
Congratulations! You have successfully installed and activated PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ on your Windows PC. Now you can use it to manage your disk partitions with ease.
-
How to use PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ
-
To use PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ, you need to launch it from your desktop or start menu. You will see a main interface with four tabs: Disk & Partition Management, Disk & Partition Copy Wizard, Partition Recovery Wizard, and Migrate OS to SSD/HDD Wizard. Each tab contains different functions and features that you can use to perform various disk management tasks.
-
In this section, we will show you how to use the first tab, Disk & Partition Management, which is the most commonly used one. This tab allows you to perform basic and advanced disk management tasks, such as resizing, moving, formatting, deleting, and merging partitions. Here are the steps to follow:
-
How to resize a partition
-
Resizing a partition means changing its size by adding or subtracting some disk space. This can be useful when you want to create more free space for a new partition, or when you want to extend an existing partition that is running out of space. To resize a partition, you need to do the following:
-
-
Select the partition that you want to resize from the disk map. You can also right-click on it and choose Resize/Move partition.
-
Drag the left or right border of the partition to adjust its size. You can also enter the exact size in the text box below. You will see the changes in the preview window.
-
Click on OK to confirm your operation.
-
Click on Apply on the top left corner to execute your operation. You may need to restart your computer for the changes to take effect.
-
-
Note: When resizing a partition, you need to make sure that there is enough unallocated space on the same disk. If not, you may need to shrink another partition or delete an unused one to create some free space.
-
How to move a partition
-
Moving a partition means changing its location on the disk. This can be useful when you want to rearrange your partitions for better organization or performance. To move a partition, you need to do the following:
-
-
Select the partition that you want to move from the disk map. You can also right-click on it and choose Resize/Move partition.
-
Drag the entire partition to the left or right until it reaches the desired position. You will see the changes in the preview window.
-
Click on OK to confirm your operation.
-
Click on Apply on the top left corner to execute your operation. You may need to restart your computer for the changes to take effect.
-
-
Note: When moving a partition, you need to make sure that there is enough unallocated space on both sides of the partition. If not, you may need to resize or delete other partitions to create some free space.
-
How to format a partition
-
Formatting a partition means erasing all the data on it and assigning a new file system and label. This can be useful when you want to clean up a partition that is corrupted, infected, or full of junk files. It can also be useful when you want to change the file system of a partition for compatibility or performance reasons. To format a partition, you need to do the following:
-
-
Select the partition that you want to format from the disk map. You can also right-click on it and choose Format partition.
-
Select the file system that you want to use from the drop-down menu. You can choose from NTFS, FAT32, exFAT, EXT2/3/4, ReFS, etc. You can also enter a label for your partition in the text box below.
-
Click on OK to confirm your operation.
-
Click on Apply in the top left corner to execute your operation.
-
-
Note: When formatting a partition, you need to be aware that all the data on it will be permanently deleted. Therefore, you should back up any important files before formatting. You should also avoid formatting system partitions or boot partitions, as this may cause your computer to fail to start.
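-
For reference, Windows also includes a command-line format tool that covers the same ground for common file systems. The snippet below is a hedged, hypothetical equivalent rather than anything EaseUS provides; the drive letter (E:) and volume label are assumptions, and it permanently erases the partition, so double-check the target first.
-
```python
# Hypothetical sketch: quick-format E: as NTFS with the label "DataDisk" using
# Windows' built-in format tool. /Q = quick format; /Y suppresses the
# confirmation prompt where supported (otherwise format will ask before it runs).
import subprocess

subprocess.run("format E: /FS:NTFS /V:DataDisk /Q /Y", shell=True, check=True)
```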
-
How to delete a partition
-
Deleting a partition means removing it from the disk and freeing up its space. This can be useful when you want to get rid of an unwanted or unnecessary partition and create more unallocated space for other purposes. To delete a partition, you need to do the following:
-
-
Select the partition that you want to delete from the disk map. You can also right-click on it and choose Delete partition.
-
Click on OK to confirm your operation.
-
Click on Apply in the top left corner to execute your operation.
-
-
Note: When deleting a partition, you need to be aware that all the data on it will be permanently deleted. Therefore, you should back up any important files before deleting. You should also avoid deleting system partitions or boot partitions, as this may cause your computer to fail to start.
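-
As a rough command-line counterpart, diskpart can also delete a volume. The sketch below is hypothetical and not part of EaseUS Partition Master; the volume letter (F:) is a placeholder, and the command removes the partition outright, so only run it against a data volume you have already backed up.
-
```python
# Hypothetical sketch: delete volume F: with Windows' diskpart. Never point
# this at a system or boot volume.
import pathlib
import subprocess

script = pathlib.Path("delete_volume.txt")
script.write_text("select volume F\ndelete volume\n")

subprocess.run(["diskpart", "/s", str(script)], check=True)
```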
-
How to merge two partitions
-
Merging two partitions means combining them into one larger partition while keeping all the data from both. This can be useful when you want to consolidate your disk space and reduce the number of partitions on your disk. To merge two partitions, you need to do the following:
-
-
Select the two adjacent partitions that you want to merge from the disk map. You can also right-click on one of them and choose Merge partition.
-
Select the destination partition where you want to keep all the data from the two partitions. You can also choose the file system and the label for the merged partition.
-
Click on OK to confirm your operation.
-
Click on Apply in the top left corner to execute your operation.
-
-
Note: When merging two partitions, you need to make sure that they are adjacent and on the same disk. If not, you may need to move or delete other partitions to create some free space. You also need to make sure that the destination partition has enough space to hold all the data from the two partitions. If not, you may need to resize it before merging.
-
Tips and tricks for using PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ
-
PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ is a powerful and easy-to-use partition software tool, but there are some tips and tricks that can help you use it more effectively and safely. Here are some of them:
-
How to back up your data before making any changes to your disk partitions
-
One of the most important things you should do before making any changes to your disk partitions is to back up your data. This can prevent data loss or corruption if something goes wrong during or after the operation. To back up your data, you can use the built-in backup function of EaseUS Partition Master or any other backup software you prefer. To use the backup function of EaseUS Partition Master, you need to do the following:
-
-
Select the partition or disk that you want to back up from the disk map. You can also right-click on it and choose Backup partition/disk.
-
Select the destination where you want to save the backup image file. You can choose a local drive, an external drive, a network drive, or a cloud drive.
-
Select the backup mode that you want to use. You can choose from Full backup, Differential backup, or Incremental backup. A full backup creates a complete copy of your partition or disk, a differential backup only backs up the changes since the last full backup, and an incremental backup only backs up the changes since the last backup of any type.
-
Select the compression level that you want to use. You can choose from None, Normal, or High. A higher compression level will reduce the size of the backup image file, but it will also take longer and use more CPU resources.
-
Select the encryption option if you want to protect your backup image file with a password.
-
Select the schedule option if you want to set up a regular backup plan for your partition or disk.
-
Click on Proceed to start the backup process.
-
-
Note: When backing up your data, you need to make sure that there is enough free space on the destination drive to store the backup image file, and that the destination drive remains accessible and writable during the backup process. Avoid using or modifying the partition or disk that you are backing up until the backup process is finished.
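-
If you only need a file-level copy rather than a full partition image, Windows' built-in robocopy can mirror the contents of a partition to another drive. The sketch below is a simple fallback, not EaseUS's image backup; the source and destination paths are assumptions, and /MIR makes the destination an exact mirror, deleting files there that no longer exist in the source.
-
```python
# Hypothetical sketch: mirror everything on D: into a backup folder on E:
# using robocopy. /R:1 and /W:1 limit retries and waits on locked files.
import subprocess

result = subprocess.run(
    ["robocopy", "D:\\", r"E:\backups\d-drive", "/MIR", "/R:1", "/W:1"]
)

# robocopy exit codes 0-7 mean success; 8 or higher indicates failures
if result.returncode >= 8:
    raise RuntimeError(f"robocopy reported errors (exit code {result.returncode})")
```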
-
How to use the preview function to see the effects of your operations before applying them
-
Another useful feature of EaseUS Partition Master is the preview function, which allows you to see the effects of your operations before applying them. This can help you avoid making mistakes or unwanted changes to your disk partitions. To use the preview function, you need to do the following:
-
-
Perform any operation that you want to do on your disk partitions, such as resizing, moving, formatting, deleting, or merging them.
-
Before clicking on Apply, click on Preview in the top right corner of the main interface.
-
You will see a window with a before-and-after comparison of your disk partitions. You can zoom in or out, switch between different disks, or change the view mode to see more details.
-
If you are satisfied with the preview, you can click on Apply to execute your operation. If you are not satisfied, you can click on Undo or Discard to cancel your operation.
-
-
Note: When using the preview function, you need to be aware that it is only a simulation and not a guarantee of the final result. There may be some differences or errors due to various factors, such as disk conditions, system settings, or hardware compatibility. Therefore, you should always backup your data before making any changes to your disk partitions.
-
How to use the bootable media builder to create a bootable CD or USB drive for emergency situations
-
Sometimes, you may encounter situations where you cannot boot into your Windows system or access your disk partitions normally. This can happen for several reasons, such as a virus infection, a system crash, partition corruption, boot sector damage, or hardware failure. In these cases, you may need a bootable CD or USB drive that can boot into a Windows preinstallation environment (WinPE) and let you perform disk management tasks without loading Windows. To create a bootable CD or USB drive, you can use the bootable media builder function of EaseUS Partition Master. To use this function, you need to do the following:
-
-
Insert a blank CD or USB drive into your computer.
-
Launch EaseUS Partition Master and click on WinPE Bootable Disk in the top right corner of the main interface.
-
Select the device that you want to use as the bootable media from the drop-down menu. You can also customize the ISO file name and location if you want.
-
Click on Proceed to start creating the bootable media.
-
Wait for the process to finish and then eject the device from your computer.
-
-
Note: When creating a bootable media, you need to make sure that the device has enough free space to store the ISO file. You also need to make sure that your computer supports booting from CD or USB drive and that you have set the correct boot order in your BIOS settings.
-
Conclusion
-
PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ is a powerful and easy-to-use partition software tool that can help you create and manage your disk partitions in a few clicks. It supports all kinds of storage devices and file systems and offers various functions and features for different disk management tasks. It also comes with a keygen that can generate a valid and working serial number for activating the program without paying anything.
-
In this article, we have shown you how to download and install PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ on your Windows PC. We have also shown you how to use the program to perform basic and advanced disk management tasks and given you some tips and tricks for using it effectively and safely. We hope that this article has been helpful and informative for you.
-
If you are interested in trying PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ for yourself, you can download it from the link below. But remember, this is only for educational purposes and we do not encourage or endorse any illegal or unethical use of this program. You should always respect the intellectual property rights of the software developers and purchase a legitimate license key if you want to use their products.
Here are some frequently asked questions and answers about PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ that you may find useful:
-
What are some common problems that users may encounter when using PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ and how to solve them?
-
Some of the common problems that users may encounter when using PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ are:
-
-
The program fails to launch or crashes during the operation. This may be caused by some compatibility issues, corrupted files, insufficient system resources, or antivirus interference. To solve this problem, you can try to update your system drivers, reinstall the program, run the program as an administrator, disable your antivirus software temporarily, or contact the support team for assistance.
-
The program cannot detect or recognize your disk or partition. This may be caused by disk errors, bad sectors, loose connections, or unsupported formats. To solve this problem, you can try to check and repair your disk errors (see the sketch after this list), scan and fix bad sectors, reconnect your disk properly, or convert your disk or partition to a supported format.
-
The program cannot complete the operation or shows an error message. This may be caused by logical conflicts, physical limitations, system restrictions, or user mistakes. To solve this problem, check for and resolve logical conflicts such as overlapping partitions, missing unallocated space, or drive letter clashes; review physical limitations such as disk size, partition size, or alignment; adjust system restrictions such as permissions, policies, or registry settings; and correct user mistakes such as wrong operations, incorrect parameters, or invalid codes.
-
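For the disk-error case above, Windows' own chkdsk is often enough to check and repair a volume. The one-liner below is a hypothetical helper, not an EaseUS feature; the drive letter is a placeholder, and /F may require the volume to be dismounted or the check to be scheduled for the next restart.
-
```python
# Hypothetical sketch: check and repair file-system errors on D: with chkdsk.
# chkdsk uses non-zero exit codes even on success (e.g. 1 = errors fixed),
# so the return code is not treated as a failure here; review the output instead.
import subprocess

subprocess.run("chkdsk D: /F", shell=True, check=False)
```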
-
What are some alternative partition software tools that users can try if they are not satisfied with PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ?
-
Some of the alternative partition software tools that users can try if they are not satisfied with PATCHED EaseUS Partition Master v10.2 Multilingual Incl Keygen-TSZ are:
-
-
MiniTool Partition Wizard: This is another popular and powerful partition software tool that offers functions and features similar to those of EaseUS Partition Master. It also supports various storage devices and file systems and has a user-friendly interface and a bootable media builder.
-
AOMEI Partition Assistant: This is another reliable and easy-to-use partition software tool that provides disk management solutions comparable to EaseUS Partition Master. It also supports various storage devices and file systems and has a secure boot mode and a data protection mode.
-
Paragon Partition Manager: This is another professional and comprehensive partition software tool that delivers disk management capabilities on par with EaseUS Partition Master. It also supports various storage devices and file systems and has a backup and recovery function and a disk optimization function.
-
-
What are some best practices for disk partitioning and maintenance that users should follow to ensure optimal performance and data security?
-
Some of the best practices for disk partitioning and maintenance that users should follow to ensure optimal performance and data security are:
-
-
Back up your data regularly, and always before making any changes to your disk partitions.
-
Use the preview function to see the effects of your operations before applying them.
-
Keep at least 10% free space on each partition to avoid fragmentation and slowdowns (a quick way to check this is sketched after this list).
-
Align your partitions properly to improve the read/write speed and lifespan of your disks.
-
Defragment the partitions on your mechanical hard drives regularly to keep disk space usage and performance optimal; solid-state drives should not be defragmented.
-
Check and repair your disk errors periodically to prevent data loss or corruption.
-
Use reliable antivirus software to protect your disks from virus infection or damage.
-
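As a quick way to apply the 10% free-space rule of thumb from the list above, the short script below checks each drive using nothing but the Python standard library. The drive letters are examples; adjust them for your own system.
-
```python
# Sketch: report free space per drive and flag anything under 10% free.
import shutil

for drive in ("C:\\", "D:\\"):
    usage = shutil.disk_usage(drive)
    free_pct = usage.free / usage.total * 100
    status = "OK" if free_pct >= 10 else "LOW - consider freeing space or extending"
    print(f"{drive} {free_pct:.1f}% free ({usage.free // 2**30} GiB) -> {status}")
```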
-
What are some features that are only available in the paid version of EaseUS Partition Master and how can users upgrade their license if they want them?
-
Some of the features that are only available in the paid version of EaseUS Partition Master are:
-
-
Disk conversion: This feature allows you to convert your disk between MBR and GPT styles without losing data.
-
Dynamic disk management: This feature allows you to create, resize, move, format, delete, merge, split, clone, convert, migrate, recover, optimize, and manage your dynamic disks and volumes.
-
Command line support: This feature allows you to perform disk management tasks from the command line instead of the graphical interface.
-
Free lifetime upgrade: This feature gives you access to all future versions of EaseUS Partition Master at no extra cost.
-
Free technical support: This feature allows you to get free and professional technical support from the EaseUS team whenever you have any questions or issues with the program.
-
-
To upgrade your license from the free version to the paid version of EaseUS Partition Master, you need to do the following:
-
-
Visit the official website of EaseUS Partition Master and choose the edition that suits your needs. You can choose from Professional, Server, Unlimited, or Technician.
-
Click on Buy Now and fill in your payment information and personal details. You can pay with credit card, PayPal, or other methods.
-
After your payment is confirmed, you will receive an email with your license key and download link.
-
Download and install the paid version of EaseUS Partition Master and enter your license key to activate it.
-
-
Congratulations! You have successfully upgraded your license and unlocked all the features of EaseUS Partition Master.
-
How can users contact the support team of EaseUS Partition Master if they have any questions or issues with the program?
-
If users have any questions or issues with EaseUS Partition Master, they can contact the support team of EaseUS Partition Master by using one of the following methods:
-
-
Email: Users can send an email to support@easeus.com and describe their problem in detail. They should also attach some screenshots or logs if possible. They will receive a reply within 24 hours.
-
Phone: Users can call the toll-free number +1-800-570-4634 and speak to a customer service representative. They should have their license key and order number ready. They can call from Monday to Friday, 9:00 AM to 5:30 PM (GMT+8).
-
Live chat: Users can click on the Live Chat button in the bottom right corner of the official website of EaseUS Partition Master and chat with an online agent. They should provide their name, email address, and problem description. They can chat from Monday to Friday, 9:00 AM to 5:30 PM (GMT+8).
-
Forum: Users can visit the EaseUS Forum and post their question or issue on the relevant section. They can also browse through the existing topics and see if they can find a solution or suggestion from other users or moderators.
-
-
-
\ No newline at end of file
diff --git a/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/doc2vec.py b/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/doc2vec.py
deleted file mode 100644
index 4a6610ed0759bbfba4f5557670a85c2e09724dc7..0000000000000000000000000000000000000000
--- a/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/doc2vec.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import os
-import gensim
-from gensim.models.doc2vec import Doc2Vec, TaggedDocument
-import pandas as pd
-import json
-import streamlit as st
-
-try:
- from src.clean import preprocess_text, script_cleaner
-except:
- from clean import preprocess_text, script_cleaner
-
-
-MODEL_PATH = 'models/d2v.model'
-LICENSE_INDEX_PATH = 'data/index_license_map.json'
-
-if os.path.exists(LICENSE_INDEX_PATH):
- license_index_name_map = json.load(open(LICENSE_INDEX_PATH))
-elif os.path.exists("../" + LICENSE_INDEX_PATH):
- license_index_name_map = json.load(open("../" + LICENSE_INDEX_PATH))
-else:
- print("index_license_map Not Found!")
-
-
-def load_model():
- '''
- Load trained model parameters from file
-
- Args:
-
- Returns: Doc2Vec
- Model object
- '''
- if os.path.exists(MODEL_PATH):
- model = Doc2Vec.load(MODEL_PATH)
- elif os.path.exists("../" + MODEL_PATH):
- model = Doc2Vec.load("../" + MODEL_PATH)
- else:
- print("d2v.model Not Found!")
- return None
-
- return model
-
-
-def preprocess(input):
- '''
- Preprocess the input from the textbox
-
- Args:
- input: str
- Input string containing contents of license text
-
- Return: TaggedDocument
- TaggedDocument Object
- '''
- clean_input = preprocess_text(script_cleaner(input))
- tokens = gensim.utils.simple_preprocess(clean_input)
- tagged_doc = TaggedDocument(words=tokens, tags=[1])
- return tagged_doc
-
-
-def inference_vector(model, tagged_doc):
- '''
- Return inference vector
-
- Args:
- tagged_doc: TaggedDocument
- Input processed by 'preprocess' and converted to TaggedDocument
- model: Doc2Vec
- Doc2Vec Model object
-
- Return:
- model.infer_vector object
- Inference vector from model
- '''
- return model.infer_vector(tagged_doc.words)
-
-
-def similarity_ranking(model, infer_vector):
- '''
- Returns a list of tuples containing predictions and confidence scores
-
- Args:
- model: Doc2Vec
- infer_vector: Doc2Vec.infer_vector
-
- Returns: list
- list of tuples containing predictions and confidence scores
-
- '''
- similar_doc = model.dv.most_similar([infer_vector], topn=len(model.dv))
- pred_ranking = []
- for pred in similar_doc:
- pred_ranking.append((license_index_name_map[pred[0]], pred[1]))
- return pred_ranking
-
-def scores_to_df(scores):
- ''''
- Covert list of tuples containing predictions and confidence values to a df
-
- Args:
- scores: list
- list of tuples containing predictions and confidence
-
- Return: DataFrame
- Dataframe containing license names and confidence scores
- '''
- license_names = []
- license_scores = []
- for score in scores:
- license_names.append(score[0])
- license_scores.append(score[1])
-
- data = {'License': license_names, 'Similarity Scores': license_scores}
- return pd.DataFrame.from_dict(data)
-
-def inference(input):
- '''
- Given text input, returns list of tuples containing predictions and confidence scores
-
- Args:
- input: str
- the input from the textbox
-
- Returns: list
- list of tuples containing predictions and confidence scores
- '''
- model = load_model()
- processed_text = preprocess(input)
- infer_vec = inference_vector(model, processed_text)
- results = similarity_ranking(model, infer_vec)
- results_df = scores_to_df(results)
- return results_df
\ No newline at end of file
diff --git a/spaces/nomic-ai/lambdalabs_pokemon-blip-captions/README.md b/spaces/nomic-ai/lambdalabs_pokemon-blip-captions/README.md
deleted file mode 100644
index fd736a5016741627c5a37d860112180ae9dba25c..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/lambdalabs_pokemon-blip-captions/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: lambdalabs/pokemon-blip-captions
-emoji: 🗺️
-colorFrom: purple
-colorTo: red
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ochyai/ochyai_food/constraints.md b/spaces/ochyai/ochyai_food/constraints.md
deleted file mode 100644
index 9baa21e734d5dbcd3acdeeff62aad0be0164925d..0000000000000000000000000000000000000000
--- a/spaces/ochyai/ochyai_food/constraints.md
+++ /dev/null
@@ -1,13 +0,0 @@
-#constraints
-
-ALOs(Food):
-- Ingredients: Identify, Store, Measure, Types, Seasonality, Allergens, Freshness, Quantity
-- Recipes: Follow, Create, Modify, Types, Cuisine, DietaryRestrictions, Complexity, ServingSize
-- Cuisine: Appreciate, Discover, Compare, Regions, Traditions, PopularDishes, Authenticity, Popularity
-- NutritionalValue: Calculate, Optimize, Balance, Macronutrients, Micronutrients, Calories, Healthiness, Satisfaction
-- PreparationMethods: Master, Improve, Teach, Techniques, Tools, CookingTemperatures, Proficiency, Efficiency
-- MealTypes: Plan, Organize, Pair, Breakfast, Lunch, Dinner, Snacks, Dessert, Variety, Enjoyment
-
-Execute ALO(Food) to generate novel, state of the art completely new recipe, instruction for new food, possible voice from the people who ate new recipe, visual representation of dish by words for generative AI that includes photgraphic settings of key image of dish, according to user input food domains and cheracteristics. Generate details as far as you can by brainstorming to fullfill all parameters. Implement linguistic adjustments to prevent and rectify errors.
-
-#templates
diff --git a/spaces/oscars47/Thinking_Parrot_1.1.0/README.md b/spaces/oscars47/Thinking_Parrot_1.1.0/README.md
deleted file mode 100644
index 923feb31aaeb3b28d7bd91eab1a41fab31387dcb..0000000000000000000000000000000000000000
--- a/spaces/oscars47/Thinking_Parrot_1.1.0/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Thinking Parrot 1.1.0
-emoji: 🌍
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/training/controlnet.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/training/controlnet.md
deleted file mode 100644
index 40632d67b81ee8be9157eefe18cbb4a634a29a65..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/training/controlnet.md
+++ /dev/null
@@ -1,333 +0,0 @@
-
-
-# ControlNet
-
-[Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) (ControlNet) by Lvmin Zhang and Maneesh Agrawala.
-
-This example is based on the [training example in the original ControlNet repository](https://github.com/lllyasviel/ControlNet/blob/main/docs/train.md). It trains a ControlNet to fill circles using a [small synthetic dataset](https://huggingface.co/datasets/fusing/fill50k).
-
-## Installing the dependencies
-
-Before running the scripts, make sure to install the library's training dependencies.
-
-
-
-To successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the installation up to date. We update the example scripts frequently and install example-specific requirements.
-
-
-
-To do this, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install -e .
-```
-
-Then navigate into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/controlnet)
-```bash
-cd examples/controlnet
-```
-
-Now run:
-```bash
-pip install -r requirements.txt
-```
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-Or for a default 🤗Accelerate configuration without answering questions about your environment:
-
-```bash
-accelerate config default
-```
-
-Or if your environment doesn't support an interactive shell like a notebook:
-
-```python
-from accelerate.utils import write_basic_config
-
-write_basic_config()
-```
-
-## Circle filling dataset
-
-The original dataset is hosted in the ControlNet [repo](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip), but we re-uploaded it [here](https://huggingface.co/datasets/fusing/fill50k) to be compatible with 🤗 Datasets so that it can handle the data loading within the training script.
-
-Our training examples use [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) because that is what the original set of ControlNet models was trained on. However, ControlNet can be trained to augment any compatible Stable Diffusion model (such as [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4)) or [`stabilityai/stable-diffusion-2-1`](https://huggingface.co/stabilityai/stable-diffusion-2-1).
-
-To use your own dataset, take a look at the [Create a dataset for training](create_dataset) guide.
-
-## Training
-
-Download the following images to condition our training with:
-
-```sh
-wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png
-
-wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png
-```
-
-Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument.
-
-The training script creates and saves a `diffusion_pytorch_model.bin` file in your repository.
-
-```bash
-export MODEL_DIR="runwayml/stable-diffusion-v1-5"
-export OUTPUT_DIR="path to save model"
-
-accelerate launch train_controlnet.py \
- --pretrained_model_name_or_path=$MODEL_DIR \
- --output_dir=$OUTPUT_DIR \
- --dataset_name=fusing/fill50k \
- --resolution=512 \
- --learning_rate=1e-5 \
- --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
- --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
- --train_batch_size=4 \
- --push_to_hub
-```
-
-This default configuration requires ~38GB VRAM.
-
-By default, the training script logs outputs to tensorboard. Pass `--report_to wandb` to use Weights &
-Biases.
-
-Gradient accumulation with a smaller batch size can be used to reduce training requirements to ~20 GB VRAM.
-
-```bash
-export MODEL_DIR="runwayml/stable-diffusion-v1-5"
-export OUTPUT_DIR="path to save model"
-
-accelerate launch train_controlnet.py \
- --pretrained_model_name_or_path=$MODEL_DIR \
- --output_dir=$OUTPUT_DIR \
- --dataset_name=fusing/fill50k \
- --resolution=512 \
- --learning_rate=1e-5 \
- --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
- --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
- --train_batch_size=1 \
- --gradient_accumulation_steps=4 \
- --push_to_hub
-```
-
-## Training with multiple GPUs
-
-`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch)
-for running distributed training with `accelerate`. Here is an example command:
-
-```bash
-export MODEL_DIR="runwayml/stable-diffusion-v1-5"
-export OUTPUT_DIR="path to save model"
-
-accelerate launch --mixed_precision="fp16" --multi_gpu train_controlnet.py \
- --pretrained_model_name_or_path=$MODEL_DIR \
- --output_dir=$OUTPUT_DIR \
- --dataset_name=fusing/fill50k \
- --resolution=512 \
- --learning_rate=1e-5 \
- --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
- --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
- --train_batch_size=4 \
- --mixed_precision="fp16" \
- --tracker_project_name="controlnet-demo" \
- --report_to=wandb \
- --push_to_hub
-```
-
-## Example results
-
-#### After 300 steps with batch size 8
-
-| | |
-|-------------------|:-------------------------:|
-| | red circle with blue background |
- |  |
-| | cyan circle with brown floral background |
- |  |
-
-
-#### After 6000 steps with batch size 8:
-
-| | |
-|-------------------|:-------------------------:|
-| | red circle with blue background |
- |  |
-| | cyan circle with brown floral background |
- |  |
-
-## Training on a 16 GB GPU
-
-Enable the following optimizations to train on a 16GB GPU:
-
-- Gradient checkpointing
-- bitsandbyte's 8-bit optimizer (take a look at the [installation]((https://github.com/TimDettmers/bitsandbytes#requirements--installation) instructions if you don't already have it installed)
-
-Now you can launch the training script:
-
-```bash
-export MODEL_DIR="runwayml/stable-diffusion-v1-5"
-export OUTPUT_DIR="path to save model"
-
-accelerate launch train_controlnet.py \
- --pretrained_model_name_or_path=$MODEL_DIR \
- --output_dir=$OUTPUT_DIR \
- --dataset_name=fusing/fill50k \
- --resolution=512 \
- --learning_rate=1e-5 \
- --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
- --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
- --train_batch_size=1 \
- --gradient_accumulation_steps=4 \
- --gradient_checkpointing \
- --use_8bit_adam \
- --push_to_hub
-```
-
-## Training on a 12 GB GPU
-
-Enable the following optimizations to train on a 12GB GPU:
-- Gradient checkpointing
-- bitsandbyte's 8-bit optimizer (take a look at the [installation]((https://github.com/TimDettmers/bitsandbytes#requirements--installation) instructions if you don't already have it installed)
-- xFormers (take a look at the [installation](https://huggingface.co/docs/diffusers/training/optimization/xformers) instructions if you don't already have it installed)
-- set gradients to `None`
-
-```bash
-export MODEL_DIR="runwayml/stable-diffusion-v1-5"
-export OUTPUT_DIR="path to save model"
-
-accelerate launch train_controlnet.py \
- --pretrained_model_name_or_path=$MODEL_DIR \
- --output_dir=$OUTPUT_DIR \
- --dataset_name=fusing/fill50k \
- --resolution=512 \
- --learning_rate=1e-5 \
- --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
- --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
- --train_batch_size=1 \
- --gradient_accumulation_steps=4 \
- --gradient_checkpointing \
- --use_8bit_adam \
- --enable_xformers_memory_efficient_attention \
- --set_grads_to_none \
- --push_to_hub
-```
-
-When using `enable_xformers_memory_efficient_attention`, please make sure to install `xformers` by `pip install xformers`.
-
-## Training on an 8 GB GPU
-
-We have not exhaustively tested DeepSpeed support for ControlNet. While the configuration does
-save memory, we have not confirmed whether the configuration trains successfully. You will very likely
-have to make changes to the config to have a successful training run.
-
-Enable the following optimizations to train on a 8GB GPU:
-- Gradient checkpointing
-- bitsandbyte's 8-bit optimizer (take a look at the [installation]((https://github.com/TimDettmers/bitsandbytes#requirements--installation) instructions if you don't already have it installed)
-- xFormers (take a look at the [installation](https://huggingface.co/docs/diffusers/training/optimization/xformers) instructions if you don't already have it installed)
-- set gradients to `None`
-- DeepSpeed stage 2 with parameter and optimizer offloading
-- fp16 mixed precision
-
-[DeepSpeed](https://www.deepspeed.ai/) can offload tensors from VRAM to either
-CPU or NVME. This requires significantly more RAM (about 25 GB).
-
-You'll have to configure your environment with `accelerate config` to enable DeepSpeed stage 2.
-
-The configuration file should look like this:
-
-```yaml
-compute_environment: LOCAL_MACHINE
-deepspeed_config:
- gradient_accumulation_steps: 4
- offload_optimizer_device: cpu
- offload_param_device: cpu
- zero3_init_flag: false
- zero_stage: 2
-distributed_type: DEEPSPEED
-```
-
-
-
-See [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more DeepSpeed configuration options.
-
-
-
-Changing the default Adam optimizer to DeepSpeed's Adam
-`deepspeed.ops.adam.DeepSpeedCPUAdam` gives a substantial speedup but
-it requires a CUDA toolchain with the same version as PyTorch. 8-bit optimizer
-does not seem to be compatible with DeepSpeed at the moment.
-
-```bash
-export MODEL_DIR="runwayml/stable-diffusion-v1-5"
-export OUTPUT_DIR="path to save model"
-
-accelerate launch train_controlnet.py \
- --pretrained_model_name_or_path=$MODEL_DIR \
- --output_dir=$OUTPUT_DIR \
- --dataset_name=fusing/fill50k \
- --resolution=512 \
- --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
- --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
- --train_batch_size=1 \
- --gradient_accumulation_steps=4 \
- --gradient_checkpointing \
- --enable_xformers_memory_efficient_attention \
- --set_grads_to_none \
- --mixed_precision fp16 \
- --push_to_hub
-```
-
-## Inference
-
-The trained model can be run with the [`StableDiffusionControlNetPipeline`].
-Set `base_model_path` and `controlnet_path` to the values `--pretrained_model_name_or_path` and
-`--output_dir` were respectively set to in the training script.
-
-```py
-from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
-from diffusers.utils import load_image
-import torch
-
-base_model_path = "path to model"
-controlnet_path = "path to controlnet"
-
-controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16, use_safetensors=True)
-pipe = StableDiffusionControlNetPipeline.from_pretrained(
- base_model_path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
-)
-
-# speed up diffusion process with faster scheduler and memory optimization
-pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
-# remove following line if xformers is not installed
-pipe.enable_xformers_memory_efficient_attention()
-
-pipe.enable_model_cpu_offload()
-
-control_image = load_image("./conditioning_image_1.png")
-prompt = "pale golden rod circle with old lace background"
-
-# generate image
-generator = torch.manual_seed(0)
-image = pipe(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0]
-
-image.save("./output.png")
-```
-
-## Stable Diffusion XL
-
-Training with [Stable Diffusion XL](https://huggingface.co/papers/2307.01952) is also supported via the `train_controlnet_sdxl.py` script. Please refer to the docs [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md).
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/api/pipelines/stable_diffusion/stable_diffusion_xl.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/api/pipelines/stable_diffusion/stable_diffusion_xl.md
deleted file mode 100644
index ab5a03ae81a0fc0f0da7b6105ccc3886f537b64c..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/api/pipelines/stable_diffusion/stable_diffusion_xl.md
+++ /dev/null
@@ -1,400 +0,0 @@
-
-
-# Stable diffusion XL
-
-Stable Diffusion XL은 Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, Robin Rombach에 의해 [SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis](https://arxiv.org/abs/2307.01952)에서 제안되었습니다.
-
-논문 초록은 다음을 따릅니다:
-
-*text-to-image의 latent diffusion 모델인 SDXL을 소개합니다. 이전 버전의 Stable Diffusion과 비교하면, SDXL은 세 배 더큰 규모의 UNet 백본을 포함합니다: 모델 파라미터의 증가는 많은 attention 블럭을 사용하고 더 큰 cross-attention context를 SDXL의 두 번째 텍스트 인코더에 사용하기 때문입니다. 다중 종횡비에 다수의 새로운 conditioning 방법을 구성했습니다. 또한 후에 수정하는 image-to-image 기술을 사용함으로써 SDXL에 의해 생성된 시각적 품질을 향상하기 위해 정제된 모델을 소개합니다. SDXL은 이전 버전의 Stable Diffusion보다 성능이 향상되었고, 이러한 black-box 최신 이미지 생성자와 경쟁력있는 결과를 달성했습니다.*
-
-## 팁
-
-- Stable Diffusion XL은 특히 786과 1024사이의 이미지에 잘 작동합니다.
-- Stable Diffusion XL은 아래와 같이 학습된 각 텍스트 인코더에 대해 서로 다른 프롬프트를 전달할 수 있습니다. 동일한 프롬프트의 다른 부분을 텍스트 인코더에 전달할 수도 있습니다.
-- Stable Diffusion XL 결과 이미지는 아래에 보여지듯이 정제기(refiner)를 사용함으로써 향상될 수 있습니다.
-
-### 이용가능한 체크포인트:
-
-- *Text-to-Image (1024x1024 해상도)*: [`StableDiffusionXLPipeline`]을 사용한 [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
-- *Image-to-Image / 정제기(refiner) (1024x1024 해상도)*: [`StableDiffusionXLImg2ImgPipeline`]를 사용한 [stabilityai/stable-diffusion-xl-refiner-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0)
-
-## 사용 예시
-
-SDXL을 사용하기 전에 `transformers`, `accelerate`, `safetensors` 와 `invisible_watermark`를 설치하세요.
-다음과 같이 라이브러리를 설치할 수 있습니다:
-
-```
-pip install transformers
-pip install accelerate
-pip install safetensors
-pip install invisible-watermark>=0.2.0
-```
-
-### 워터마커
-
-Stable Diffusion XL로 이미지를 생성할 때 워터마크가 보이지 않도록 추가하는 것을 권장하는데, 이는 다운스트림(downstream) 어플리케이션에서 기계에 합성되었는지를 식별하는데 도움을 줄 수 있습니다. 그렇게 하려면 [invisible_watermark 라이브러리](https://pypi.org/project/invisible-watermark/)를 통해 설치해주세요:
-
-
-```
-pip install invisible-watermark>=0.2.0
-```
-
-`invisible-watermark` 라이브러리가 설치되면 워터마커가 **기본적으로** 사용될 것입니다.
-
-생성 또는 안전하게 이미지를 배포하기 위해 다른 규정이 있다면, 다음과 같이 워터마커를 비활성화할 수 있습니다:
-
-```py
-pipe = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False)
-```
-
-### Text-to-Image
-
-*text-to-image*를 위해 다음과 같이 SDXL을 사용할 수 있습니다:
-
-```py
-from diffusers import StableDiffusionXLPipeline
-import torch
-
-pipe = StableDiffusionXLPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
-pipe.to("cuda")
-
-prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
-image = pipe(prompt=prompt).images[0]
-```
-
-### Image-to-image
-
-*image-to-image*를 위해 다음과 같이 SDXL을 사용할 수 있습니다:
-
-```py
-import torch
-from diffusers import StableDiffusionXLImg2ImgPipeline
-from diffusers.utils import load_image
-
-pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
-pipe = pipe.to("cuda")
-url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"
-
-init_image = load_image(url).convert("RGB")
-prompt = "a photo of an astronaut riding a horse on mars"
-image = pipe(prompt, image=init_image).images[0]
-```
-
-### 인페인팅
-
-*inpainting*를 위해 다음과 같이 SDXL을 사용할 수 있습니다:
-
-```py
-import torch
-from diffusers import StableDiffusionXLInpaintPipeline
-from diffusers.utils import load_image
-
-pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
-pipe.to("cuda")
-
-img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
-mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
-
-init_image = load_image(img_url).convert("RGB")
-mask_image = load_image(mask_url).convert("RGB")
-
-prompt = "A majestic tiger sitting on a bench"
-image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0]
-```
-
-### 이미지 결과물을 정제하기
-
-[base 모델 체크포인트](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)에서, StableDiffusion-XL 또한 고주파 품질을 향상시키는 이미지를 생성하기 위해 낮은 노이즈 단계 이미지를 제거하는데 특화된 [refiner 체크포인트](huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0)를 포함하고 있습니다. 이 refiner 체크포인트는 이미지 품질을 향상시키기 위해 base 체크포인트를 실행한 후 "두 번째 단계" 파이프라인에 사용될 수 있습니다.
-
-refiner를 사용할 때, 쉽게 사용할 수 있습니다
-- 1.) base 모델과 refiner을 사용하는데, 이는 *Denoisers의 앙상블*을 위한 첫 번째 제안된 [eDiff-I](https://research.nvidia.com/labs/dir/eDiff-I/)를 사용하거나
-- 2.) base 모델을 거친 후 [SDEdit](https://arxiv.org/abs/2108.01073) 방법으로 단순하게 refiner를 실행시킬 수 있습니다.
-
-**참고**: SD-XL base와 refiner를 앙상블로 사용하는 아이디어는 커뮤니티 기여자들이 처음으로 제안했으며, 이는 다음과 같은 `diffusers`를 구현하는 데도 도움을 주셨습니다.
-- [SytanSD](https://github.com/SytanSD)
-- [bghira](https://github.com/bghira)
-- [Birch-san](https://github.com/Birch-san)
-- [AmericanPresidentJimmyCarter](https://github.com/AmericanPresidentJimmyCarter)
-
-#### 1.) Denoisers의 앙상블
-
-base와 refiner 모델을 denoiser의 앙상블로 사용할 때, base 모델은 고주파 diffusion 단계를 위한 전문가의 역할을 해야하고, refiner는 낮은 노이즈 diffusion 단계를 위한 전문가의 역할을 해야 합니다.
-
-2.)에 비해 1.)의 장점은 전체적으로 denoising 단계가 덜 필요하므로 속도가 훨씬 더 빨라집니다. 단점은 base 모델의 결과를 검사할 수 없다는 것입니다. 즉, 여전히 노이즈가 심하게 제거됩니다.
-
-base 모델과 refiner를 denoiser의 앙상블로 사용하기 위해 각각 고노이즈(high-nosise) (*즉* base 모델)와 저노이즈 (*즉* refiner 모델)의 노이즈를 제거하는 단계를 거쳐야하는 타임스텝의 기간을 정의해야 합니다.
-base 모델의 [`denoising_end`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline.__call__.denoising_end)와 refiner 모델의 [`denoising_start`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLImg2ImgPipeline.__call__.denoising_start)를 사용해 간격을 정합니다.
-
-`denoising_end`와 `denoising_start` 모두 0과 1사이의 실수 값으로 전달되어야 합니다.
-전달되면 노이즈 제거의 끝과 시작은 모델 스케줄에 의해 정의된 이산적(discrete) 시간 간격의 비율로 정의됩니다.
-노이즈 제거 단계의 수는 모델이 학습된 불연속적인 시간 간격과 선언된 fractional cutoff에 의해 결정되므로 '강도' 또한 선언된 경우 이 값이 '강도'를 재정의합니다.
-
-예시를 들어보겠습니다.
-우선, 두 개의 파이프라인을 가져옵니다. 텍스트 인코더와 variational autoencoder는 동일하므로 refiner를 위해 다시 불러오지 않아도 됩니다.
-
-```py
-from diffusers import DiffusionPipeline
-import torch
-
-base = DiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
-pipe.to("cuda")
-
-refiner = DiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-refiner-1.0",
- text_encoder_2=base.text_encoder_2,
- vae=base.vae,
- torch_dtype=torch.float16,
- use_safetensors=True,
- variant="fp16",
-)
-refiner.to("cuda")
-```
-
-이제 추론 단계의 수와 고노이즈에서 노이즈를 제거하는 단계(*즉* base 모델)를 거쳐 실행되는 지점을 정의합니다.
-
-```py
-n_steps = 40
-high_noise_frac = 0.8
-```
-
-Stable Diffusion XL base 모델은 타임스텝 0-999에 학습되며 Stable Diffusion XL refiner는 포괄적인 낮은 노이즈 타임스텝인 0-199에 base 모델로 부터 파인튜닝되어, 첫 800 타임스텝 (높은 노이즈)에 base 모델을 사용하고 마지막 200 타입스텝 (낮은 노이즈)에서 refiner가 사용됩니다. 따라서, `high_noise_frac`는 0.8로 설정하고, 모든 200-999 스텝(노이즈 제거 타임스텝의 첫 80%)은 base 모델에 의해 수행되며 0-199 스텝(노이즈 제거 타임스텝의 마지막 20%)은 refiner 모델에 의해 수행됩니다.
-
-기억하세요, 노이즈 제거 절차는 **높은 값**(높은 노이즈) 타임스텝에서 시작되고, **낮은 값** (낮은 노이즈) 타임스텝에서 끝납니다.
-
-이제 두 파이프라인을 실행해봅시다. `denoising_end`과 `denoising_start`를 같은 값으로 설정하고 `num_inference_steps`는 상수로 유지합니다. 또한 base 모델의 출력은 잠재 공간에 있어야 한다는 점을 기억하세요:
-
-```py
-prompt = "A majestic lion jumping from a big stone at night"
-
-image = base(
- prompt=prompt,
- num_inference_steps=n_steps,
- denoising_end=high_noise_frac,
- output_type="latent",
-).images
-image = refiner(
- prompt=prompt,
- num_inference_steps=n_steps,
- denoising_start=high_noise_frac,
- image=image,
-).images[0]
-```
-
-이미지를 살펴보겠습니다.
-
-| 원래의 이미지 | Denoiser들의 앙상블 |
-|---|---|
-|  | 
-
-동일한 40 단계에서 base 모델을 실행한다면, 이미지의 디테일(예: 사자의 눈과 코)이 떨어졌을 것입니다:
-
-
-
-앙상블 방식은 사용 가능한 모든 스케줄러에서 잘 작동합니다!
-
-
-
-#### 2.) 노이즈가 완전히 제거된 기본 이미지에서 이미지 출력을 정제하기
-
-일반적인 [`StableDiffusionImg2ImgPipeline`] 방식에서, 기본 모델에서 생성된 완전히 노이즈가 제거된 이미지는 [refiner checkpoint](huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0)를 사용해 더 향상시킬 수 있습니다.
-
-이를 위해, 보통의 "base" text-to-image 파이프라인을 수행 후에 image-to-image 파이프라인으로써 refiner를 실행시킬 수 있습니다. base 모델의 출력을 잠재 공간에 남겨둘 수 있습니다.
-
-```py
-from diffusers import DiffusionPipeline
-import torch
-
-pipe = DiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
-pipe.to("cuda")
-
-refiner = DiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-refiner-1.0",
- text_encoder_2=pipe.text_encoder_2,
- vae=pipe.vae,
- torch_dtype=torch.float16,
- use_safetensors=True,
- variant="fp16",
-)
-refiner.to("cuda")
-
-prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
-
-image = pipe(prompt=prompt, output_type="latent" if use_refiner else "pil").images[0]
-image = refiner(prompt=prompt, image=image[None, :]).images[0]
-```
-
-| 원래의 이미지 | 정제된 이미지 |
-|---|---|
-|  |  |
-
-
-
-refiner는 또한 인페인팅 설정에 잘 사용될 수 있습니다. 아래에 보여지듯이 [`StableDiffusionXLInpaintPipeline`] 클래스를 사용해서 만들어보세요.
-
-
-
-Denoiser 앙상블 설정에서 인페인팅에 refiner를 사용하려면 다음을 수행하면 됩니다:
-
-```py
-from diffusers import StableDiffusionXLInpaintPipeline
-from diffusers.utils import load_image
-
-pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
-pipe.to("cuda")
-
-refiner = StableDiffusionXLInpaintPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-refiner-1.0",
- text_encoder_2=pipe.text_encoder_2,
- vae=pipe.vae,
- torch_dtype=torch.float16,
- use_safetensors=True,
- variant="fp16",
-)
-refiner.to("cuda")
-
-img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
-mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
-
-init_image = load_image(img_url).convert("RGB")
-mask_image = load_image(mask_url).convert("RGB")
-
-prompt = "A majestic tiger sitting on a bench"
-num_inference_steps = 75
-high_noise_frac = 0.7
-
-image = pipe(
- prompt=prompt,
- image=init_image,
- mask_image=mask_image,
- num_inference_steps=num_inference_steps,
- denoising_start=high_noise_frac,
- output_type="latent",
-).images
-image = refiner(
- prompt=prompt,
- image=image,
- mask_image=mask_image,
- num_inference_steps=num_inference_steps,
- denoising_start=high_noise_frac,
-).images[0]
-```
-
-일반적인 SDE 설정에서 인페인팅에 refiner를 사용하기 위해, `denoising_end`와 `denoising_start`를 제거하고 refiner의 추론 단계의 수를 적게 선택하세요.
-
-### 단독 체크포인트 파일 / 원래의 파일 형식으로 불러오기
-
-[`~diffusers.loaders.FromSingleFileMixin.from_single_file`]를 사용함으로써 원래의 파일 형식을 `diffusers` 형식으로 불러올 수 있습니다:
-
-```py
-from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
-import torch
-
-pipe = StableDiffusionXLPipeline.from_single_file(
- "./sd_xl_base_1.0.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
-pipe.to("cuda")
-
-refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
- "./sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
-)
-refiner.to("cuda")
-```
-
-### 모델 offloading을 통해 메모리 최적화하기
-
-out-of-memory 에러가 난다면, [`StableDiffusionXLPipeline.enable_model_cpu_offload`]을 사용하는 것을 권장합니다.
-
-```diff
-- pipe.to("cuda")
-+ pipe.enable_model_cpu_offload()
-```
-
-그리고
-
-```diff
-- refiner.to("cuda")
-+ refiner.enable_model_cpu_offload()
-```
-
-### `torch.compile`로 추론 속도를 올리기
-
-`torch.compile`를 사용함으로써 추론 속도를 올릴 수 있습니다. 이는 **ca.** 20% 속도 향상이 됩니다.
-
-```diff
-+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
-+ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True)
-```
-
-### `torch < 2.0`일 때 실행하기
-
-**참고** Stable Diffusion XL을 `torch`가 2.0 버전 미만에서 실행시키고 싶을 때, xformers 어텐션을 사용해주세요:
-
-```
-pip install xformers
-```
-
-```diff
-+pipe.enable_xformers_memory_efficient_attention()
-+refiner.enable_xformers_memory_efficient_attention()
-```
-
-## StableDiffusionXLPipeline
-
-[[autodoc]] StableDiffusionXLPipeline
- - all
- - __call__
-
-## StableDiffusionXLImg2ImgPipeline
-
-[[autodoc]] StableDiffusionXLImg2ImgPipeline
- - all
- - __call__
-
-## StableDiffusionXLInpaintPipeline
-
-[[autodoc]] StableDiffusionXLInpaintPipeline
- - all
- - __call__
-
-### 각 텍스트 인코더에 다른 프롬프트를 전달하기
-
-Stable Diffusion XL는 두 개의 텍스트 인코더에 학습되었습니다. 기본 동작은 각 프롬프트에 동일한 프롬프트를 전달하는 것입니다. 그러나 [일부 사용자](https://github.com/huggingface/diffusers/issues/4004#issuecomment-1627764201)가 품질을 향상시킬 수 있다고 지적한 것처럼 텍스트 인코더마다 다른 프롬프트를 전달할 수 있습니다. 그렇게 하려면, `prompt_2`와 `negative_prompt_2`를 `prompt`와 `negative_prompt`에 전달해야 합니다. 그렇게 함으로써, 원래의 프롬프트들(`prompt`)과 부정 프롬프트들(`negative_prompt`)를 `텍스트 인코더`에 전달할 것입니다.(공식 SDXL 0.9/1.0의 [OpenAI CLIP-ViT/L-14](https://huggingface.co/openai/clip-vit-large-patch14)에서 볼 수 있습니다.) 그리고 `prompt_2`와 `negative_prompt_2`는 `text_encoder_2`에 전달됩니다.(공식 SDXL 0.9/1.0의 [OpenCLIP-ViT/bigG-14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)에서 볼 수 있습니다.)
-
-```py
-from diffusers import StableDiffusionXLPipeline
-import torch
-
-pipe = StableDiffusionXLPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
-pipe.to("cuda")
-
-# OAI CLIP-ViT/L-14에 prompt가 전달됩니다
-prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
-# OpenCLIP-ViT/bigG-14에 prompt_2가 전달됩니다
-prompt_2 = "monet painting"
-image = pipe(prompt=prompt, prompt_2=prompt_2).images[0]
-```
\ No newline at end of file
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/train_unconditional.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/train_unconditional.py
deleted file mode 100644
index 4925c74c8ccf9be76bda4b9c8511c772158ac154..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/unconditional_image_generation/train_unconditional.py
+++ /dev/null
@@ -1,713 +0,0 @@
-import argparse
-import inspect
-import logging
-import math
-import os
-import shutil
-from datetime import timedelta
-from pathlib import Path
-from typing import Optional
-
-import accelerate
-import datasets
-import torch
-import torch.nn.functional as F
-from accelerate import Accelerator, InitProcessGroupKwargs
-from accelerate.logging import get_logger
-from accelerate.utils import ProjectConfiguration
-from datasets import load_dataset
-from huggingface_hub import HfFolder, Repository, create_repo, whoami
-from packaging import version
-from torchvision import transforms
-from tqdm.auto import tqdm
-
-import diffusers
-from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel
-from diffusers.optimization import get_scheduler
-from diffusers.training_utils import EMAModel
-from diffusers.utils import check_min_version, is_accelerate_version, is_tensorboard_available, is_wandb_available
-from diffusers.utils.import_utils import is_xformers_available
-
-
-# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.22.0.dev0")
-
-logger = get_logger(__name__, log_level="INFO")
-
-
-def _extract_into_tensor(arr, timesteps, broadcast_shape):
- """
- Extract values from a 1-D numpy array for a batch of indices.
-
- :param arr: the 1-D numpy array.
- :param timesteps: a tensor of indices into the array to extract.
- :param broadcast_shape: a larger shape of K dimensions with the batch
- dimension equal to the length of timesteps.
- :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims.
- """
- if not isinstance(arr, torch.Tensor):
- arr = torch.from_numpy(arr)
- res = arr[timesteps].float().to(timesteps.device)
- while len(res.shape) < len(broadcast_shape):
- res = res[..., None]
- return res.expand(broadcast_shape)
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
- parser.add_argument(
- "--dataset_name",
- type=str,
- default=None,
- help=(
- "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
- " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
- " or to a folder containing files that HF Datasets can understand."
- ),
- )
- parser.add_argument(
- "--dataset_config_name",
- type=str,
- default=None,
- help="The config of the Dataset, leave as None if there's only one config.",
- )
- parser.add_argument(
- "--model_config_name_or_path",
- type=str,
- default=None,
- help="The config of the UNet model to train, leave as None to use standard DDPM configuration.",
- )
- parser.add_argument(
- "--train_data_dir",
- type=str,
- default=None,
- help=(
- "A folder containing the training data. Folder contents must follow the structure described in"
- " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
- " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
- ),
- )
- parser.add_argument(
- "--output_dir",
- type=str,
- default="ddpm-model-64",
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument("--overwrite_output_dir", action="store_true")
- parser.add_argument(
- "--cache_dir",
- type=str,
- default=None,
- help="The directory where the downloaded models and datasets will be stored.",
- )
- parser.add_argument(
- "--resolution",
- type=int,
- default=64,
- help=(
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
- " resolution"
- ),
- )
- parser.add_argument(
- "--center_crop",
- default=False,
- action="store_true",
- help=(
- "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
- " cropped. The images will be resized to the resolution first before cropping."
- ),
- )
- parser.add_argument(
- "--random_flip",
- default=False,
- action="store_true",
- help="whether to randomly flip images horizontally",
- )
- parser.add_argument(
- "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
- )
- parser.add_argument(
- "--eval_batch_size", type=int, default=16, help="The number of images to generate for evaluation."
- )
- parser.add_argument(
- "--dataloader_num_workers",
- type=int,
- default=0,
- help=(
- "The number of subprocesses to use for data loading. 0 means that the data will be loaded in the main"
- " process."
- ),
- )
- parser.add_argument("--num_epochs", type=int, default=100)
- parser.add_argument("--save_images_epochs", type=int, default=10, help="How often to save images during training.")
- parser.add_argument(
- "--save_model_epochs", type=int, default=10, help="How often to save the model during training."
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=1e-4,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument(
- "--lr_scheduler",
- type=str,
- default="cosine",
- help=(
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
- ' "constant", "constant_with_warmup"]'
- ),
- )
- parser.add_argument(
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument("--adam_beta1", type=float, default=0.95, help="The beta1 parameter for the Adam optimizer.")
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
- parser.add_argument(
- "--adam_weight_decay", type=float, default=1e-6, help="Weight decay magnitude for the Adam optimizer."
- )
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer.")
- parser.add_argument(
- "--use_ema",
- action="store_true",
- help="Whether to use Exponential Moving Average for the final model weights.",
- )
- parser.add_argument("--ema_inv_gamma", type=float, default=1.0, help="The inverse gamma value for the EMA decay.")
- parser.add_argument("--ema_power", type=float, default=3 / 4, help="The power value for the EMA decay.")
- parser.add_argument("--ema_max_decay", type=float, default=0.9999, help="The maximum decay magnitude for EMA.")
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--hub_model_id",
- type=str,
- default=None,
- help="The name of the repository to keep in sync with the local `output_dir`.",
- )
- parser.add_argument(
- "--hub_private_repo", action="store_true", help="Whether or not to create a private repository."
- )
- parser.add_argument(
- "--logger",
- type=str,
- default="tensorboard",
- choices=["tensorboard", "wandb"],
- help=(
- "Whether to use [tensorboard](https://www.tensorflow.org/tensorboard) or [wandb](https://www.wandb.ai)"
- " for experiment tracking and logging of model metrics and model checkpoints"
- ),
- )
- parser.add_argument(
- "--logging_dir",
- type=str,
- default="logs",
- help=(
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
- ),
- )
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
- parser.add_argument(
- "--mixed_precision",
- type=str,
- default="no",
- choices=["no", "fp16", "bf16"],
- help=(
- "Whether to use mixed precision. Choose"
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
- "and an Nvidia Ampere GPU."
- ),
- )
- parser.add_argument(
- "--prediction_type",
- type=str,
- default="epsilon",
- choices=["epsilon", "sample"],
- help="Whether the model should predict the 'epsilon'/noise error or directly the reconstructed image 'x0'.",
- )
- parser.add_argument("--ddpm_num_steps", type=int, default=1000)
- parser.add_argument("--ddpm_num_inference_steps", type=int, default=1000)
- parser.add_argument("--ddpm_beta_schedule", type=str, default="linear")
- parser.add_argument(
- "--checkpointing_steps",
- type=int,
- default=500,
- help=(
- "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
- " training using `--resume_from_checkpoint`."
- ),
- )
- parser.add_argument(
- "--checkpoints_total_limit",
- type=int,
- default=None,
- help=("Max number of checkpoints to store."),
- )
- parser.add_argument(
- "--resume_from_checkpoint",
- type=str,
- default=None,
- help=(
- "Whether training should be resumed from a previous checkpoint. Use a path saved by"
- ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
- ),
- )
- parser.add_argument(
- "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
- )
-
- args = parser.parse_args()
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
- if env_local_rank != -1 and env_local_rank != args.local_rank:
- args.local_rank = env_local_rank
-
- if args.dataset_name is None and args.train_data_dir is None:
- raise ValueError("You must specify either a dataset name from the hub or a train data directory.")
-
- return args
-
-
-def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
- if token is None:
- token = HfFolder.get_token()
- if organization is None:
- username = whoami(token)["name"]
- return f"{username}/{model_id}"
- else:
- return f"{organization}/{model_id}"
-
-
-def main(args):
- logging_dir = os.path.join(args.output_dir, args.logging_dir)
- accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
-
- kwargs = InitProcessGroupKwargs(timeout=timedelta(seconds=7200)) # a big number for high resolution or big dataset
- accelerator = Accelerator(
- gradient_accumulation_steps=args.gradient_accumulation_steps,
- mixed_precision=args.mixed_precision,
- log_with=args.logger,
- project_config=accelerator_project_config,
- kwargs_handlers=[kwargs],
- )
-
- if args.logger == "tensorboard":
- if not is_tensorboard_available():
- raise ImportError("Make sure to install tensorboard if you want to use it for logging during training.")
-
- elif args.logger == "wandb":
- if not is_wandb_available():
- raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
- import wandb
-
- # `accelerate` 0.16.0 will have better support for customized saving
- if version.parse(accelerate.__version__) >= version.parse("0.16.0"):
- # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
- def save_model_hook(models, weights, output_dir):
- if accelerator.is_main_process:
- if args.use_ema:
- ema_model.save_pretrained(os.path.join(output_dir, "unet_ema"))
-
- for i, model in enumerate(models):
- model.save_pretrained(os.path.join(output_dir, "unet"))
-
- # make sure to pop weight so that corresponding model is not saved again
- weights.pop()
-
- def load_model_hook(models, input_dir):
- if args.use_ema:
- load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DModel)
- ema_model.load_state_dict(load_model.state_dict())
- ema_model.to(accelerator.device)
- del load_model
-
- for i in range(len(models)):
- # pop models so that they are not loaded again
- model = models.pop()
-
- # load diffusers style into model
- load_model = UNet2DModel.from_pretrained(input_dir, subfolder="unet")
- model.register_to_config(**load_model.config)
-
- model.load_state_dict(load_model.state_dict())
- del load_model
-
- accelerator.register_save_state_pre_hook(save_model_hook)
- accelerator.register_load_state_pre_hook(load_model_hook)
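- # With these hooks, each `accelerator.save_state()` checkpoint folder gets a diffusers-format `unet/` subfolder
- # (plus `unet_ema/` when --use_ema is set), and `load_state` restores from the same layout.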
-
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state, main_process_only=False)
- if accelerator.is_local_main_process:
- datasets.utils.logging.set_verbosity_warning()
- diffusers.utils.logging.set_verbosity_info()
- else:
- datasets.utils.logging.set_verbosity_error()
- diffusers.utils.logging.set_verbosity_error()
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
- create_repo(repo_name, exist_ok=True, token=args.hub_token)
- repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token)
-
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
- if "step_*" not in gitignore:
- gitignore.write("step_*\n")
- if "epoch_*" not in gitignore:
- gitignore.write("epoch_*\n")
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
-
- # Initialize the model
- if args.model_config_name_or_path is None:
- model = UNet2DModel(
- sample_size=args.resolution,
- in_channels=3,
- out_channels=3,
- layers_per_block=2,
- block_out_channels=(128, 128, 256, 256, 512, 512),
- down_block_types=(
- "DownBlock2D",
- "DownBlock2D",
- "DownBlock2D",
- "DownBlock2D",
- "AttnDownBlock2D",
- "DownBlock2D",
- ),
- up_block_types=(
- "UpBlock2D",
- "AttnUpBlock2D",
- "UpBlock2D",
- "UpBlock2D",
- "UpBlock2D",
- "UpBlock2D",
- ),
- )
- else:
- config = UNet2DModel.load_config(args.model_config_name_or_path)
- model = UNet2DModel.from_config(config)
-
- # Create EMA for the model.
- if args.use_ema:
- ema_model = EMAModel(
- model.parameters(),
- decay=args.ema_max_decay,
- use_ema_warmup=True,
- inv_gamma=args.ema_inv_gamma,
- power=args.ema_power,
- model_cls=UNet2DModel,
- model_config=model.config,
- )
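- # Note: with use_ema_warmup, the EMA decay is expected to ramp roughly as 1 - (1 + step / inv_gamma) ** (-power),
- # capped at --ema_max_decay (assumed behaviour of diffusers' EMAModel; verify against the installed version).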
-
- if args.enable_xformers_memory_efficient_attention:
- if is_xformers_available():
- import xformers
-
- xformers_version = version.parse(xformers.__version__)
- if xformers_version == version.parse("0.0.16"):
- logger.warn(
- "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
- )
- model.enable_xformers_memory_efficient_attention()
- else:
- raise ValueError("xformers is not available. Make sure it is installed correctly")
-
- # Initialize the scheduler
- accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys())
- if accepts_prediction_type:
- noise_scheduler = DDPMScheduler(
- num_train_timesteps=args.ddpm_num_steps,
- beta_schedule=args.ddpm_beta_schedule,
- prediction_type=args.prediction_type,
- )
- else:
- noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule)
-
- # Initialize the optimizer
- optimizer = torch.optim.AdamW(
- model.parameters(),
- lr=args.learning_rate,
- betas=(args.adam_beta1, args.adam_beta2),
- weight_decay=args.adam_weight_decay,
- eps=args.adam_epsilon,
- )
-
- # Get the datasets: you can either provide your own training and evaluation files (see below)
- # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
-
- # In distributed training, the load_dataset function guarantees that only one local process can concurrently
- # download the dataset.
- if args.dataset_name is not None:
- dataset = load_dataset(
- args.dataset_name,
- args.dataset_config_name,
- cache_dir=args.cache_dir,
- split="train",
- )
- else:
- dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train")
- # See more about loading custom images at
- # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
-
- # Preprocessing the datasets and DataLoaders creation.
- augmentations = transforms.Compose(
- [
- transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
- transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
- transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
- transforms.ToTensor(),
- transforms.Normalize([0.5], [0.5]),
- ]
- )
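- # Normalize([0.5], [0.5]) rescales pixels from [0, 1] to [-1, 1], the input range the UNet/DDPM pipeline expects.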
-
- def transform_images(examples):
- images = [augmentations(image.convert("RGB")) for image in examples["image"]]
- return {"input": images}
-
- logger.info(f"Dataset size: {len(dataset)}")
-
- dataset.set_transform(transform_images)
- train_dataloader = torch.utils.data.DataLoader(
- dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers
- )
-
- # Initialize the learning rate scheduler
- lr_scheduler = get_scheduler(
- args.lr_scheduler,
- optimizer=optimizer,
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=(len(train_dataloader) * args.num_epochs),
- )
-
- # Prepare everything with our `accelerator`.
- model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- model, optimizer, train_dataloader, lr_scheduler
- )
-
- if args.use_ema:
- ema_model.to(accelerator.device)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers are initialized automatically on the main process.
- if accelerator.is_main_process:
- run = os.path.split(__file__)[-1].split(".")[0]
- accelerator.init_trackers(run)
-
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- max_train_steps = args.num_epochs * num_update_steps_per_epoch
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(dataset)}")
- logger.info(f" Num Epochs = {args.num_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {max_train_steps}")
-
- global_step = 0
- first_epoch = 0
-
- # Potentially load in the weights and states from a previous save
- if args.resume_from_checkpoint:
- if args.resume_from_checkpoint != "latest":
- path = os.path.basename(args.resume_from_checkpoint)
- else:
- # Get the most recent checkpoint
- dirs = os.listdir(args.output_dir)
- dirs = [d for d in dirs if d.startswith("checkpoint")]
- dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
- path = dirs[-1] if len(dirs) > 0 else None
-
- if path is None:
- accelerator.print(
- f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
- )
- args.resume_from_checkpoint = None
- else:
- accelerator.print(f"Resuming from checkpoint {path}")
- accelerator.load_state(os.path.join(args.output_dir, path))
- global_step = int(path.split("-")[1])
-
- resume_global_step = global_step * args.gradient_accumulation_steps
- first_epoch = global_step // num_update_steps_per_epoch
- resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
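- # resume_step is measured in dataloader batches (not optimizer updates) within the partially finished epoch,
- # so the loop below knows how many batches to skip before resuming training.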
-
- # Train!
- for epoch in range(first_epoch, args.num_epochs):
- model.train()
- progress_bar = tqdm(total=num_update_steps_per_epoch, disable=not accelerator.is_local_main_process)
- progress_bar.set_description(f"Epoch {epoch}")
- for step, batch in enumerate(train_dataloader):
- # Skip steps until we reach the resumed step
- if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
- if step % args.gradient_accumulation_steps == 0:
- progress_bar.update(1)
- continue
-
- clean_images = batch["input"]
- # Sample noise that we'll add to the images
- noise = torch.randn(
- clean_images.shape, dtype=(torch.float32 if args.mixed_precision == "no" else torch.float16)
- ).to(clean_images.device)
- bsz = clean_images.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(
- 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=clean_images.device
- ).long()
-
- # Add noise to the clean images according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)
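- # i.e. x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise, the closed-form DDPM forward process.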
-
- with accelerator.accumulate(model):
- # Predict the noise residual
- model_output = model(noisy_images, timesteps).sample
-
- if args.prediction_type == "epsilon":
- loss = F.mse_loss(model_output, noise) # this could have different weights!
- elif args.prediction_type == "sample":
- alpha_t = _extract_into_tensor(
- noise_scheduler.alphas_cumprod, timesteps, (clean_images.shape[0], 1, 1, 1)
- )
- snr_weights = alpha_t / (1 - alpha_t)
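- # alpha_t here is the cumulative product alpha_bar_t gathered per timestep, so this weight is the signal-to-noise ratio SNR(t).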
- loss = snr_weights * F.mse_loss(
- model_output, clean_images, reduction="none"
- ) # use SNR weighting from distillation paper
- loss = loss.mean()
- else:
- raise ValueError(f"Unsupported prediction type: {args.prediction_type}")
-
- accelerator.backward(loss)
-
- if accelerator.sync_gradients:
- accelerator.clip_grad_norm_(model.parameters(), 1.0)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- if args.use_ema:
- ema_model.step(model.parameters())
- progress_bar.update(1)
- global_step += 1
-
- if global_step % args.checkpointing_steps == 0:
- # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
- if args.checkpoints_total_limit is not None:
- checkpoints = os.listdir(args.output_dir)
- checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
- checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
-
- # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
- if len(checkpoints) >= args.checkpoints_total_limit:
- num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
- removing_checkpoints = checkpoints[0:num_to_remove]
-
- logger.info(
- f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
- )
- logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
-
- for removing_checkpoint in removing_checkpoints:
- removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
- shutil.rmtree(removing_checkpoint)
-
- if accelerator.is_main_process:
- save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
- accelerator.save_state(save_path)
- logger.info(f"Saved state to {save_path}")
-
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step}
- if args.use_ema:
- logs["ema_decay"] = ema_model.cur_decay_value
- progress_bar.set_postfix(**logs)
- accelerator.log(logs, step=global_step)
- progress_bar.close()
-
- accelerator.wait_for_everyone()
-
- # Generate sample images for visual inspection
- if accelerator.is_main_process:
- if epoch % args.save_images_epochs == 0 or epoch == args.num_epochs - 1:
- unet = accelerator.unwrap_model(model)
-
- if args.use_ema:
- ema_model.store(unet.parameters())
- ema_model.copy_to(unet.parameters())
-
- pipeline = DDPMPipeline(
- unet=unet,
- scheduler=noise_scheduler,
- )
-
- generator = torch.Generator(device=pipeline.device).manual_seed(0)
- # run pipeline in inference (sample random noise and denoise)
- images = pipeline(
- generator=generator,
- batch_size=args.eval_batch_size,
- num_inference_steps=args.ddpm_num_inference_steps,
- output_type="numpy",
- ).images
-
- if args.use_ema:
- ema_model.restore(unet.parameters())
-
- # denormalize the images and save to tensorboard
- images_processed = (images * 255).round().astype("uint8")
-
- if args.logger == "tensorboard":
- if is_accelerate_version(">=", "0.17.0.dev0"):
- tracker = accelerator.get_tracker("tensorboard", unwrap=True)
- else:
- tracker = accelerator.get_tracker("tensorboard")
- tracker.add_images("test_samples", images_processed.transpose(0, 3, 1, 2), epoch)
- elif args.logger == "wandb":
- # Upcoming `log_images` helper coming in https://github.com/huggingface/accelerate/pull/962/files
- accelerator.get_tracker("wandb").log(
- {"test_samples": [wandb.Image(img) for img in images_processed], "epoch": epoch},
- step=global_step,
- )
-
- if epoch % args.save_model_epochs == 0 or epoch == args.num_epochs - 1:
- # save the model
- unet = accelerator.unwrap_model(model)
-
- if args.use_ema:
- ema_model.store(unet.parameters())
- ema_model.copy_to(unet.parameters())
-
- pipeline = DDPMPipeline(
- unet=unet,
- scheduler=noise_scheduler,
- )
-
- pipeline.save_pretrained(args.output_dir)
-
- if args.use_ema:
- ema_model.restore(unet.parameters())
-
- if args.push_to_hub:
- repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=False)
-
- accelerator.end_training()
-
-
-if __name__ == "__main__":
- args = parse_args()
- main(args)
diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/emoji_rain.py b/spaces/phyloforfun/VoucherVision/vouchervision/emoji_rain.py
deleted file mode 100644
index 5fb370e1da873757e42991f1d98b6036c532e7e0..0000000000000000000000000000000000000000
--- a/spaces/phyloforfun/VoucherVision/vouchervision/emoji_rain.py
+++ /dev/null
@@ -1,243 +0,0 @@
-from typing import Union
-
-import streamlit as st
-
-import inspect
-from importlib import import_module
-from pathlib import Path
-from typing import Any, Callable, Optional, TypeVar, Union, overload
-
-try:
- from streamlit.runtime.metrics_util import gather_metrics as _gather_metrics
-except ImportError:
-
- def _gather_metrics(name, func): # type: ignore
- return func
-
-
-F = TypeVar("F", bound=Callable[..., Any])
-
-# Typing overloads here are actually required so that you can correctly (= with correct typing) use the decorator in different ways:
-# 1) as a decorator without parameters @extra
-# 2) as a decorator with parameters (@extra(foo="bar") but this also refers to empty parameters @extra()
-# 3) as a function: extra(my_function)
-
-
-@overload
-def extra(
- func: F,
-) -> F:
- ...
-
-
-@overload
-def extra(
- func: None = None,
-) -> Callable[[F], F]:
- ...
-
-
-def extra(
- func: Optional[F] = None,
-) -> Union[Callable[[F], F], F]:
-
- if func:
-
- filename = inspect.stack()[1].filename
- submodule = Path(filename).parent.name
- extra_name = "streamlit_extras." + submodule
- module = import_module(extra_name)
-
- if hasattr(module, "__funcs__"):
- module.__funcs__ += [func] # type: ignore
- else:
- module.__funcs__ = [func] # type: ignore
-
- profiling_name = f"{submodule}.{func.__name__}"
- try:
- return _gather_metrics(name=profiling_name, func=func)
- except TypeError:
- # Don't fail on streamlit==1.13.0, which only expects a callable
- pass
-
- def wrapper(f: F) -> F:
- return f
-
- return wrapper
-
-
-@extra
-def proportional_rain(
- emoji1: str,
- count1: int,
- emoji2: str,
- count2: int,
- font_size: int = 64,
- falling_speed: int = 5,
- animation_length: Union[int, str] = "infinite"
-):
- """
- Creates a CSS animation where input emojis fall from top to bottom of the screen.
- The proportion of emojis is based on the provided counts.
- """
-
- if isinstance(animation_length, int):
- animation_length = f"{animation_length}"
-
- # CSS Code ...
- st.write(
- f"""
-
- """,
- unsafe_allow_html=True,
- )
-
- # Create emoji strings based on counts
- emoji_str1 = "".join([f'
{emoji1}
' for _ in range(count1)])
- emoji_str2 = "".join([f'
{emoji2}
' for _ in range(count2)])
-
- st.write(
- f"""
-
-
- {emoji_str1}
- {emoji_str2}
-
- """,
- unsafe_allow_html=True,
- )
\ No newline at end of file
diff --git a/spaces/pinkq/Newbing/src/components/chat-notification.tsx b/spaces/pinkq/Newbing/src/components/chat-notification.tsx
deleted file mode 100644
index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000
--- a/spaces/pinkq/Newbing/src/components/chat-notification.tsx
+++ /dev/null
@@ -1,77 +0,0 @@
-import { useEffect } from 'react'
-import Image from 'next/image'
-
-import IconWarning from '@/assets/images/warning.svg'
-import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types'
-import { ExternalLink } from './external-link'
-import { useBing } from '@/lib/hooks/use-bing'
-
-export interface ChatNotificationProps extends Pick<ReturnType<typeof useBing>, 'bot'> {
- message?: ChatMessageModel
-}
-
-function getAction(error: ChatError, reset: () => void) {
- if (error.code === ErrorCode.THROTTLE_LIMIT) {
- reset()
- return (
-
-
-... a Celemony Melodyne Studio Edition 3.2.2.2 MAC OSX UB.rar governor of poker 3 crack serial keygen cd key.rar. Dans La Maison Soundtrack I have loaded ... 1fdad05405
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dil Dhadakne Do Movie Download 720p Kickasstorrents [BETTER].md b/spaces/quidiaMuxgu/Expedit-SAM/Dil Dhadakne Do Movie Download 720p Kickasstorrents [BETTER].md
deleted file mode 100644
index c50d7e1040909f4e6039f8c728ef859ae4987717..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Dil Dhadakne Do Movie Download 720p Kickasstorrents [BETTER].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
dil dhadakne do movie download 720p kickasstorrents
-
-Torrent Dil Dhadakne Do 2015 Hindi 720p BluRay x265 mkv Download - Nyaa. Dil Dhadakne Do Full Movie Download Free HD Dil Dhadakne Do Full Movie ... 4d29de3e1b
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Buku Psikologi Perkembangan Anak.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Buku Psikologi Perkembangan Anak.md
deleted file mode 100644
index d61385cbfe08b301adcf31284dc6891dcb9f14bb..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Download Buku Psikologi Perkembangan Anak.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-Buku "Perkembangan Anak" membahas topik tumbuh -kembang... ...how to download this book? like · 11 months ago · Add your answer ... Shelves: buku-psikologi. ru/pochtovaya-pismo-v-samolet-kak-napisat-pismo-v-samolet.html - How to write a letter to the plane? ...
-How to write a letter to an airplane.
-How to send an email to...
-In the letter that arrives on the plane, ...
-http://www.youtube.com/watch?v=d9ZiWpzQOeU - how to write a letter to the plane.
-http://www.youtube.com/watch?v=L4i6aqdYcN8 - how to write a letter to the plane.
-How to write a letter to the plane. 8a78ff9644
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Euro Truck Simulator 2 Going East Dlc Activation 46.md b/spaces/quidiaMuxgu/Expedit-SAM/Euro Truck Simulator 2 Going East Dlc Activation 46.md
deleted file mode 100644
index 1fdd62dd107dadc671a57aad5d87fdb80a74ff32..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Euro Truck Simulator 2 Going East Dlc Activation 46.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
we are also looking into adding the ability to 'drive off-road' to euro truck simulator 2. this is where the truck would drive through forests, mountains or off-road routes. the off-road mode would feature a different visual theme and terrain.
-
euro truck simulator 2 going east dlc activation 46
the next major update for euro truck simulator 2 will be a complete overhaul of the loading system. this will allow players to seamlessly load their trucks without having to wait for the loading screen to finish. new vehicles will also be added as we progress with this update.
-
we will be releasing the last dlc before christmas, which will include a major overhaul to the game. the biggest change in this dlc will be the addition of customizable vehicles. we will be adding several new car types to euro truck simulator 2, with a customization system to make sure that you have the option to have an extraordinary car to drive. this will be the first dlc that will include european cities, and even though this content will be free to play, we will be giving away a free vehicle to everyone who has played the game for a certain amount of hours. the vehicle will be a ford transit and will be included in the game’s launch, which is scheduled to be on the 28th of december.
-
starting in 2016, scs will be focusing more of our efforts on content creation. euro truck simulator 2 will be released for pc on december 28th, and we will be developing additional content with an option to expand the game after launch, both on steam and the gog.com network. if you are looking for some more information about the game, you can visit the official euro truck simulator 2 website (link available below), where you can also follow us on twitter, facebook and instagram.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Himsa Noah 4 Crack UPD.md b/spaces/quidiaMuxgu/Expedit-SAM/Himsa Noah 4 Crack UPD.md
deleted file mode 100644
index 22c4eae0844836ac3ce0ceae5e35d9f1bb17b39d..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Himsa Noah 4 Crack UPD.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Noah “Shark” Robertson has issued a longer statement on his departure from Motograter, ... I gave my life to this band for 4 years. ... Crack, heroin, meth, whatever he can get his greedy hands on. ... Show · Higher Power · Highly Suspect · HIM · Himsa · Hinder · Hit Parader · Hjelvik · HO99O9 · Hogan's Goat ... 1fdad05405
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/I Mind Map 9 Crackedl.md b/spaces/quidiaMuxgu/Expedit-SAM/I Mind Map 9 Crackedl.md
deleted file mode 100644
index ec69c98659152f17649d3f8a297fd4f645aec027..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/I Mind Map 9 Crackedl.md
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
How to Download and Use iMindMap 9 Crackedl Full Version
-
-
iMindMap 9 Crackedl is a software that helps you create mind maps in a matter of minutes. A mind map is a one-page visual method of organizing and capturing information. It is based on a circular format, with relevant topics branching off a central theme. Mind maps can help you brainstorm, plan, present, study, and more.
-
-
iMindMap 9 Crackedl is the latest version of the software that has many features and benefits that make it a great choice for mind mapping. You can use it to create stunning mind maps and flowcharts, with icons, images, links, and more. You can also export your mind maps into various formats, such as Word, PowerPoint, PDF, and images. iMindMap 9 Crackedl also has a serial key that activates the full version of the software and unlocks all the features.
If you want to download and use iMindMap 9 Crackedl Full Version, you can follow these steps:
-
-
-
Go to the website or source where you want to download the software. You can find many online sources that offer free or paid downloads of iMindMap 9 Crackedl Full Version.
-
Click on the download link or button and wait for the download to start.
-
Choose a location on your computer where you want to save the downloaded file.
-
Once the download is complete, open the file and follow the installation instructions.
-
Enter the serial key when prompted and activate the full version of the software.
-
Launch the software and start creating mind maps.
-
-
-
You can also watch this video tutorial on how to download and use iMindMap 9 Crackedl Full Version:
-
-
-
-
Advantages of iMindMap 9 Crackedl Full Version
-
-
iMindMap 9 Crackedl Full Version has many advantages that make it worth using for mind mapping. Some of these advantages are:
-
-
-
It is endorsed by the inventor of mind mapping, Tony Buzan, who has praised its features and performance.
-
It has a user-friendly interface that guides you through the steps of creating your mind maps.
-
It has a fast capture view that allows you to quickly capture ideas and create a web of thoughts.
-
It has a free-form brainstorm view that allows you to sort ideas before developing them in a mind map.
-
It has a flexible mind map and flowchart view that allows you to create stunning diagrams with icons, images, links, and more.
-
It has a presentation view that allows you to design and deliver memorable presentations with 3D view, templates, slide notes, transitions, and more.
-
It has an outline panel that allows you to locate and sort your work in sync with mind map and brainstorm views.
-
It has an export feature that allows you to export your mind maps into various formats, such as Word, PowerPoint, PDF, images, and more.
-
It has a serial key that activates the full version of the software and unlocks all the features.
-
-
-
Conclusion
-
-
iMindMap 9 Crackedl Full Version is a software that helps you create mind maps in a matter of minutes. It has many features and benefits that make it a great choice for mind mapping. You can use it to brainstorm, plan, present, study, and more. You can also export your mind maps into various formats, such as Word, PowerPoint, PDF, images, and more. iMindMap 9 Crackedl Full Version is a software that you can trust and enjoy using for mind mapping.
-
-
If you want to download and use iMindMap 9 Crackedl Full Version, you can visit the website or source where you want to download the software or watch this video tutorial on how to download and use iMindMap 9 Crackedl Full Version:
-
-
-
-
Don't wait any longer and get iMindMap 9 Crackedl Full Version today and start creating mind maps in minutes!
-
How to Fix Common Problems with iMindMap 9 Crackedl Full Version
-
-
iMindMap 9 Crackedl Full Version is a software that helps you create mind maps in a matter of minutes. However, sometimes you may encounter some problems or errors while using the software. Here are some of the common issues and how to fix them:
-
-
-
If you cannot install or run the software, make sure you have the minimum system requirements and that your antivirus or firewall is not blocking the software.
-
If you cannot enter or activate the serial key, make sure you have entered it correctly and that your internet connection is stable.
-
If you cannot load or edit your photo, make sure the photo format and size are supported by the software and that the photo is not corrupted or damaged.
-
If you cannot print or save your mind map, make sure your printer or storage device is connected and working properly and that you have enough ink or space.
-
If you have any other problems or errors, you can contact the customer support of the software or visit the official website for more information and help.
-
-
-
How to Create Mind Maps with iMindMap 9 Crackedl Full Version
-
-
iMindMap 9 Crackedl Full Version is a software that helps you create mind maps in a matter of minutes. A mind map is a one-page visual method of organizing and capturing information. It is based on a circular format, with relevant topics branching off a central theme. Mind maps can help you brainstorm, plan, present, study, and more.
-
-
-
If you want to create mind maps with iMindMap 9 Crackedl Full Version, you can follow these steps:
-
-
-
Launch the software and enter the serial key when prompted.
-
Select the mind map view and choose a template or start from scratch.
-
Click on the central theme and type your main idea.
-
Click on the plus sign to add subtopics and type your related ideas.
-
Use the tools on the right panel to customize your mind map with icons, images, links, notes, and more.
-
When you are satisfied with your mind map, click on the print or save button and choose your preferred format and size.
-
-
-
You can also watch this video tutorial on how to create mind maps with iMindMap 9 Crackedl Full Version:
-
-
-
How to Update iMindMap 9 Crackedl Full Version
-
-
iMindMap 9 Crackedl Full Version is a software that helps you create mind maps in a matter of minutes. However, you may want to update it to the latest version available to enjoy the new features and improvements. Updating the software can also help you fix any bugs or errors that may occur while using the software.
-
-
If you want to update iMindMap 9 Crackedl Full Version, you can follow these steps:
-
-
-
Open iMindMap 9 Crackedl Full Version on your computer.
-
Go to the "Help" menu and click on "Check for Updates".
-
The software will automatically check for any available updates and notify you if there are any.
-
If there are any updates, click on "Download and Install" and wait for the update to finish.
-
Restart the software and enjoy using the updated version.
-
-
-
You can also check for updates manually by visiting the official website of the software or by contacting the customer support.
-
-
How to Uninstall iMindMap 9 Crackedl Full Version
-
-
If you want to uninstall iMindMap 9 Crackedl Full Version from your computer, you can follow these steps:
-
-
-
Go to the "Start" menu and click on "Control Panel".
-
Click on "Programs and Features" or "Uninstall a Program".
-
Find iMindMap 9 Crackedl Full Version in the list of installed programs and click on it.
-
Click on "Uninstall" or "Remove" and follow the instructions.
-
Restart your computer and delete any leftover files or folders of the software.
-
-
-
You can also use a third-party uninstaller tool to remove iMindMap 9 Crackedl Full Version from your computer.
-
How to Compare iMindMap 9 Crackedl Full Version with Other Mind Mapping Software
-
-
iMindMap 9 Crackedl Full Version is a software that helps you create mind maps in a matter of minutes. But how does it compare with other mind mapping software available in the market? Here are some of the criteria and features that you can use to compare iMindMap 9 Crackedl Full Version with other mind mapping software:
-
-
-
Price: iMindMap 9 Crackedl Full Version is a software that you can download for free or for a low price from various online sources. However, you may not get the official support or updates from the developer. Other mind mapping software may have different pricing plans, such as subscription, one-time payment, or freemium.
-
Features: iMindMap 9 Crackedl Full Version has many features and benefits that make it a great choice for mind mapping. Some of these features are: fast capture view, free-form brainstorm view, flexible mind map and flowchart view, presentation view, outline panel, export feature, and serial key. Other mind mapping software may have similar or different features, such as cloud storage, collaboration, templates, themes, integrations, and more.
-
Performance: iMindMap 9 Crackedl Full Version is a software that runs smoothly and quickly on your computer. It has a user-friendly interface that guides you through the steps of creating your mind maps. It also has a face detection algorithm that automatically rotates and crops the image according to the ID requirements. Other mind mapping software may have different performance levels, such as speed, stability, reliability, and usability.
-
Reviews: iMindMap 9 Crackedl Full Version is a software that has received positive reviews and feedback from many users and experts. It is endorsed by the inventor of mind mapping, Tony Buzan, who has praised its features and performance. It is also rated highly by various websites and blogs that review mind mapping software. Other mind mapping software may have different reviews and ratings, depending on the opinions and experiences of the users and experts.
-
-
-
You can use these criteria and features to compare iMindMap 9 Crackedl Full Version with other mind mapping software and decide which one suits your needs and preferences better.
-
-
How to Learn More about iMindMap 9 Crackedl Full Version
-
-
If you want to learn more about iMindMap 9 Crackedl Full Version, you can use these resources:
-
-
-
The official website of the software: https://imindmap.com/
-
The official blog of the software: https://imindmap.com/blog/
-
The official YouTube channel of the software: https://www.youtube.com/user/iMindMap
-
The official Facebook page of the software: https://www.facebook.com/iMindMap/
-
The official Twitter account of the software: https://twitter.com/iMindMap
-
The official Instagram account of the software: https://www.instagram.com/imindmap/
-
The official Pinterest account of the software: https://www.pinterest.com/imindmap/
-
The official LinkedIn page of the software: https://www.linkedin.com/company/imindmap/
-
The official Reddit community of the software: https://www.reddit.com/r/iMindMap/
-
The official Quora topic of the software: https://www.quora.com/topic/iMindMap
-
-
-
You can also contact the customer support of the software or visit the official website for more information and help.
-
Conclusion
-
-
iMindMap 9 Crackedl Full Version is a software that helps you create mind maps in a matter of minutes. It has many features and benefits that make it a great choice for mind mapping. You can use it to brainstorm, plan, present, study, and more. You can also export your mind maps into various formats, such as Word, PowerPoint, PDF, images, and more. iMindMap 9 Crackedl Full Version is a software that you can trust and enjoy using for mind mapping.
-
-
If you want to download and use iMindMap 9 Crackedl Full Version, you can visit the website or source where you want to download the software or watch this video tutorial on how to download and use iMindMap 9 Crackedl Full Version:
-
-
-
-
Don't wait any longer and get iMindMap 9 Crackedl Full Version today and start creating mind maps in minutes!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mergus Atlas De Laquarium Marin Pdf 11.md b/spaces/quidiaMuxgu/Expedit-SAM/Mergus Atlas De Laquarium Marin Pdf 11.md
deleted file mode 100644
index f4e0b91b0f108dae641edaca481caf1379d0077d..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Mergus Atlas De Laquarium Marin Pdf 11.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-the aquarium trade. This research examines a DNA barcoding approach for ornamental cyprinid fishes (Teleostei: Cypriniformes), an important group in terms of ... 4d29de3e1b
-
-
-
diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/robloader.py b/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/robloader.py
deleted file mode 100644
index 471f2262c77d7c71b48b0fb187159e9d2dbc517a..0000000000000000000000000000000000000000
--- a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/robloader.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import os
-import numbers
-import torch
-import torch.utils.data as data
-import torchvision.transforms as transforms
-import random
-from PIL import Image, ImageOps
-import numpy as np
-import torchvision
-from . import flow_transforms
-import pdb
-import cv2
-from utils.flowlib import read_flow
-from utils.util_flow import readPFM
-
-
-def default_loader(path):
- return Image.open(path).convert('RGB')
-
-def flow_loader(path):
- if '.pfm' in path:
- data = readPFM(path)[0]
- data[:,:,2] = 1
- return data
- else:
- return read_flow(path)
-
-
-def disparity_loader(path):
- if '.png' in path:
- data = Image.open(path)
- data = np.ascontiguousarray(data,dtype=np.float32)/256
- return data
- else:
- return readPFM(path)[0]
-
-class myImageFloder(data.Dataset):
- def __init__(self, iml0, iml1, flowl0, loader=default_loader, dploader= flow_loader, scale=1.,shape=[320,448], order=1, noise=0.06, pca_augmentor=True, prob = 1., cover=False, black=False, scale_aug=[0.4,0.2]):
- self.iml0 = iml0
- self.iml1 = iml1
- self.flowl0 = flowl0
- self.loader = loader
- self.dploader = dploader
- self.scale=scale
- self.shape=shape
- self.order=order
- self.noise = noise
- self.pca_augmentor = pca_augmentor
- self.prob = prob
- self.cover = cover
- self.black = black
- self.scale_aug = scale_aug
-
- def __getitem__(self, index):
- iml0 = self.iml0[index]
- iml1 = self.iml1[index]
- flowl0= self.flowl0[index]
- th, tw = self.shape
-
- iml0 = self.loader(iml0)
- iml1 = self.loader(iml1)
- iml1 = np.asarray(iml1)/255.
- iml0 = np.asarray(iml0)/255.
- iml0 = iml0[:,:,::-1].copy()
- iml1 = iml1[:,:,::-1].copy()
- flowl0 = self.dploader(flowl0)
- #flowl0[:,:,-1][flowl0[:,:,0]==np.inf]=0 # for gtav window pfm files
- #flowl0[:,:,0][~flowl0[:,:,2].astype(bool)]=0
- #flowl0[:,:,1][~flowl0[:,:,2].astype(bool)]=0 # avoid nan in grad
- flowl0 = np.ascontiguousarray(flowl0,dtype=np.float32)
- flowl0[np.isnan(flowl0)] = 1e6 # set to max
-
- ## following data augmentation procedure in PWCNet
- ## https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/data_augmentation_layer.cu
- import __main__ # a workaround for "discount_coeff"
- try:
- with open('iter_counts-%d.txt'%int(__main__.args.logname.split('-')[-1]), 'r') as f:
- iter_counts = int(f.readline())
- except:
- iter_counts = 0
- schedule = [0.5, 1., 50000.] # initial coeff, final_coeff, half life
- schedule_coeff = schedule[0] + (schedule[1] - schedule[0]) * \
- (2/(1+np.exp(-1.0986*iter_counts/schedule[2])) - 1)
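- # 2 / (1 + exp(-k * x)) - 1 equals tanh(k * x / 2), so schedule_coeff ramps from schedule[0] toward schedule[1],
- # reaching the halfway point after schedule[2] iterations (the "half life").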
-
- if self.pca_augmentor:
- pca_augmentor = flow_transforms.pseudoPCAAug( schedule_coeff=schedule_coeff)
- else:
- pca_augmentor = flow_transforms.Scale(1., order=0)
-
- if np.random.binomial(1,self.prob):
- co_transform = flow_transforms.Compose([
- flow_transforms.Scale(self.scale, order=self.order),
- #flow_transforms.SpatialAug([th,tw], trans=[0.2,0.03], order=self.order, black=self.black),
- flow_transforms.SpatialAug([th,tw],scale=[self.scale_aug[0],0.03,self.scale_aug[1]],
- rot=[0.4,0.03],
- trans=[0.4,0.03],
- squeeze=[0.3,0.], schedule_coeff=schedule_coeff, order=self.order, black=self.black),
- #flow_transforms.pseudoPCAAug(schedule_coeff=schedule_coeff),
- flow_transforms.PCAAug(schedule_coeff=schedule_coeff),
- flow_transforms.ChromaticAug( schedule_coeff=schedule_coeff, noise=self.noise),
- ])
- else:
- co_transform = flow_transforms.Compose([
- flow_transforms.Scale(self.scale, order=self.order),
- flow_transforms.SpatialAug([th,tw], trans=[0.4,0.03], order=self.order, black=self.black)
- ])
-
- augmented,flowl0 = co_transform([iml0, iml1], flowl0)
- iml0 = augmented[0]
- iml1 = augmented[1]
-
- if self.cover:
- ## randomly cover a region
- # following sec. 3.2 of http://openaccess.thecvf.com/content_CVPR_2019/html/Yang_Hierarchical_Deep_Stereo_Matching_on_High-Resolution_Images_CVPR_2019_paper.html
- if np.random.binomial(1,0.5):
- #sx = int(np.random.uniform(25,100))
- #sy = int(np.random.uniform(25,100))
- sx = int(np.random.uniform(50,125))
- sy = int(np.random.uniform(50,125))
- #sx = int(np.random.uniform(50,150))
- #sy = int(np.random.uniform(50,150))
- cx = int(np.random.uniform(sx,iml1.shape[0]-sx))
- cy = int(np.random.uniform(sy,iml1.shape[1]-sy))
- iml1[cx-sx:cx+sx,cy-sy:cy+sy] = np.mean(np.mean(iml1,0),0)[np.newaxis,np.newaxis]
-
- iml0 = torch.Tensor(np.transpose(iml0,(2,0,1)))
- iml1 = torch.Tensor(np.transpose(iml1,(2,0,1)))
-
- return iml0, iml1, flowl0
-
- def __len__(self):
- return len(self.iml0)
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Amnesia The Dark Descent - Map pack v.3.0 game A review of the new maps and features.md b/spaces/raedeXanto/academic-chatgpt-beta/Amnesia The Dark Descent - Map pack v.3.0 game A review of the new maps and features.md
deleted file mode 100644
index 3266a915033302f58376a8e073b4348419164e4d..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Amnesia The Dark Descent - Map pack v.3.0 game A review of the new maps and features.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-
Amnesia: The Dark Descent - Map pack v.3.0 game
-
If you are a fan of horror games, you might have heard of Amnesia: The Dark Descent, a first-person survival horror game that was released in 2010 by Frictional Games. The game is about a man named Daniel who wakes up in a dark castle with no memory of his past, and has to explore the eerie environment while avoiding monsters and solving puzzles.
-
But did you know that there is a huge collection of custom maps and mods for Amnesia: The Dark Descent that can extend your gameplay experience and add new features and challenges? In this article, we will introduce you to Map pack v.3.0, a compilation of over 1000 custom maps and mods for Amnesia: The Dark Descent that you can download and play for free.
Amnesia: The Dark Descent is a game that combines elements of adventure, puzzle, stealth, and horror genres. The game puts you in the role of Daniel, a young man who wakes up in a dark and mysterious castle with no memory of who he is or why he is there. As you explore the castle, you will find notes, diaries, and flashbacks that reveal fragments of your past and the secrets of the castle.
-
But you are not alone in the castle. There are terrifying creatures lurking in the shadows, waiting for an opportunity to attack you. You have no weapons to fight them, only your wits and your ability to hide and run away. You also have to manage your sanity, which will decrease if you witness disturbing events or stay in the dark for too long. If your sanity drops too low, you will start to hallucinate and lose control of yourself.
-
Amnesia: The Dark Descent is a game that relies on creating an immersive and atmospheric experience for the player, rather than relying on jump scares or gore. The game uses a realistic physics engine that allows you to interact with almost every object in the environment, such as opening doors, moving furniture, throwing items, etc. The game also features a dynamic sound system that adapts to your actions and surroundings, creating a sense of tension and dread.
Map pack v.3.0 is a compilation of over 1000 custom maps and mods for Amnesia: The Dark Descent that were created by fans and modders using the game's level editor and scripting tools. These maps and mods vary in length, quality, style, and difficulty, but they all offer new ways to enjoy Amnesia: The Dark Descent.
-
Some of these maps and mods are standalone stories that have their own characters, settings, plots, and endings. Some are expansions or modifications of the original game that add new features, mechanics, enemies, puzzles, etc. Some are experimental or humorous projects that test the limits of the game engine or parody other games or media.
-
Map pack v.3.0 was released in 2022 by Mod DB, a website that hosts mods for various games. It is the third version of the map pack series that started in 2011 with Map pack v.1.0. The map pack series aims to collect and showcase the best custom maps and mods for Amnesia: The Dark Descent that are available on Mod DB or other websites.
-
Features of Map pack v.3.0
-
Over 1000 custom maps and mods
-
The main feature of Map pack v.3.0 is the sheer amount and variety of custom maps and mods that it contains. There are over 1000 maps and mods in total, ranging from short demos to full-fledged campaigns, from realistic horror to fantasy adventure, from serious drama to comedy satire.
-
You can find maps and mods that suit your preferences and tastes, whether you want to experience more stories set in the Amnesia universe, explore different genres and themes, challenge yourself with harder puzzles or enemies, or just have fun with some silly or creative ideas.
-
You can also discover new maps and mods that you might have missed or overlooked before, as Map pack v.3.0 includes some obscure or hidden gems that are not widely known or popular among the Amnesia community.
-
New gameplay mechanics and challenges
-
Another feature of Map pack v.3.0 is the new gameplay mechanics and challenges that some of the custom maps and mods introduce or modify from the original game.
-
For example, some maps and mods add new items or tools that you can use to interact with the environment or solve puzzles, such as lanterns with different colors or effects, keys with special functions, weapons with limited ammo or durability, etc.
-
Some maps and mods change or remove some of the existing mechanics or features from the original game, such as sanity meter, inventory system, health system, saving system, etc.
-
Some maps and mods also create new scenarios or situations that require you to use different strategies or skills to survive or progress through them, such as stealth missions, escape sequences, boss fights, timed events, etc.
-
Enhanced graphics and sound effects
-
A final feature of Map pack v.3.0 is the enhanced graphics and sound effects that some of the custom maps and mods use to improve the visual and auditory quality of the game.
-
Some maps and mods use custom models, textures, lighting, shaders, particles, or other graphical elements to create more detailed, realistic, or stylized environments or characters.
-
Some maps and mods use custom sounds, music, voices, or other audio elements to create more immersive, atmospheric, or emotional experiences or dialogues.
-
How to install and play Map pack v.3.0
-
Downloading and extracting the files
-
To install and play Map pack v.3.0, you need to have Amnesia: The Dark Descent installed on your computer. You can buy the game from Steam, GOG.com, or other online platforms. You also need to have enough disk space to store the map pack files, which are about 30 GB in size.
-
To download the map pack files, you need to go to Mod DB's website and find the page for Map pack v.3.0. There, you will see a list of download links for different parts of the map pack. You need to download all 30 parts (each about 1 GB) and save them in one folder on your computer.
-
To extract the map pack files, you need to use a program like WinRAR or 7-Zip. You need to right-click on the first part (MapPackV30.part01.rar) and select "Extract here" or "Extract to MapPackV30". This will extract all 30 parts into one folder named "MapPackV30".
-
Launching the game and selecting a map or mod
-
To launch the game, you need to go to the folder where you installed Amnesia: The Dark Descent. There, you will see a file named "Amnesia.exe". You need to right-click on it and select "Run as administrator".
-
- To select a map or mod, you need to go to the main menu of the game. There, you will see an option named "Custom Story". You need to click on it. This will show you a list of all the maps and mods included in the map pack. You need to select one of them and click on "Start". This will launch the selected map or mod and you can start playing.
-
Troubleshooting common issues
-
Sometimes, you might encounter some issues when trying to play some of the custom maps or mods in the map pack. Here are some of the common issues and how to fix them:
-
-
If you get an error message saying "FATAL ERROR: Could not load world file", it means that the map or mod is not compatible with your game version or requires a full conversion mod to work properly. You can try to update your game version or install the required full conversion mod if available.
-
If you get an error message saying "FATAL ERROR: Could not load script file", it means that the map or mod uses some custom scripts that are missing or corrupted. You can try to redownload the map or mod files or contact the author for help.
-
If you get an error message saying "FATAL ERROR: Could not load main init file", it means that the map or mod is corrupted or incomplete. You can try to redownload the map or mod files or contact the author for help.
-
If you experience crashes, freezes, lag, or glitches, it might be because your computer does not meet the minimum requirements to run the game or the map or mod, or because there are some conflicts with other programs or processes running in the background. You can try to lower your graphics settings, close other programs or processes, or update your drivers.
-
-
Some of the best maps and mods in Map pack v.3.0
-
With over 1000 custom maps and mods in Map pack v.3.0, it might be hard to choose which ones to play first. To help you out, here are some of the best maps and mods in Map pack v.3.0 that we recommend you to try:
-
Final Revelations V.3
-
Final Revelations V.3 is a long and complex adventure map that follows the story of Lee Hawkins, a young man who goes on a trip to Gothfair Village in hope to solve the mysterious events that have occurred there. The map features several chapters, detailed environments, custom items, monsters, and music, a lot of voice acting, NPCs, a morality system, and multiple endings.
-
The map is based on previous works by the same author and stories by various authors, and it explains what happened to the characters in Alma, Brutal Lies, and Cursed Souls. The map also has some references and connections to other maps and mods in the Amnesia universe.
-
Final Revelations V.3 is a map that focuses on the storyline and puzzles, rather than on scares or action. It is a map that will challenge your mind and test your morality, as you uncover the secrets of Gothfair Village and decide your fate.
-
Quartz and Cordite - Wheezy Everlasting 2
-
Quartz and Cordite - Wheezy Everlasting 2 is a sequel to Wheezy Everlasting, a humorous and experimental map that parodies various games and media. The map features new gameplay mechanics, such as weapons with limited ammo or durability, stealth missions, escape sequences, boss fights, timed events, etc.
-
The map also features new graphics and sound effects, such as custom models, textures, lighting, shaders, particles, sounds, music, voices, etc. The map also has some easter eggs and secrets that you can find if you explore carefully.
-
Quartz and Cordite - Wheezy Everlasting 2 is a map that does not take itself too seriously and aims to entertain and amuse you with its absurdity and creativity. It is a map that will make you laugh and wonder what will happen next.
-
The Intersection
-
The Intersection is a short and atmospheric horror map that puts you in the role of Ezra Fischer, a wealthy jailer who gets involved in a mysterious plot by a man named Albert Luther. The map features a dark and creepy environment, custom sounds and music, voice acting, and multiple endings.
-
The map also has some puzzles and challenges that you have to solve or overcome to progress through the story. The map also has some hidden clues and hints that you can find if you pay attention to the details.
-
The Intersection is a map that relies on creating a tense and immersive experience for the player, rather than relying on jump scares or gore. It is a map that will keep you on edge and curious about what is going on.
-
Conclusion
-
In conclusion, Map pack v.3.0 is a compilation of over 1000 custom maps and mods for Amnesia: The Dark Descent that you can download and play for free. It offers new ways to enjoy Amnesia: The Dark Descent with new features, mechanics, challenges, graphics, and sound effects. It also showcases the creativity and talent of the Amnesia community, who have created amazing stories, environments, characters, and scenarios for the game.
-
If you are looking for more content for Amnesia: The Dark Descent, or if you want to support the modders who have worked hard on their projects, you should definitely check out Map pack v.3.0. You might find something that will surprise you, scare you, or make you laugh.
-
FAQs
-
-
Q: Where can I download Map pack v.3.0?
-
A: You can download Map pack v.3.0 from Mod DB's website. You need to download all 30 parts of the map pack and extract them into one folder.
-
Q: Do I need anything else to play Map pack v.3.0?
-
A: You need to have Amnesia: The Dark Descent installed on your computer with version 1.3 or higher. You also need to have enough disk space to store the map pack files (about 30 GB).
-
Q: How do I select a map or mod from Map pack v.3.0?
-
A: You need to launch Amnesia: The Dark Descent as administrator and go to the main menu. There, you need to click on "Custom Story" and select one of the maps or mods from the list.
-
Q: What if I encounter an issue when playing Map pack v.3.0?
-
A: You can try to fix some of the common issues by following the troubleshooting guide in this article. If that does not work, you can contact the author of the map or mod for help or report it on Mod DB's website.
-
Q: What are some of the best maps or mods in Map pack v.3.0?
-
A: Some of the best maps or mods in Map pack v.3.0 are Final Revelations V.3, Quartz and Cordite - Wheezy Everlasting 2, and The Intersection. But there are many more that you can discover by yourself.
-
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Fisica blatt solucionario una gua para estudiantes de fsica general.md b/spaces/raedeXanto/academic-chatgpt-beta/Fisica blatt solucionario una gua para estudiantes de fsica general.md
deleted file mode 100644
index 9ff3f1cdba25a6d9614e5fa75fa3332c27b4637e..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Fisica blatt solucionario una gua para estudiantes de fsica general.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Fisica Blatt Solucionario: A Comprehensive Guide for Physics Students
-
Are you a physics student who is struggling with solving problems and understanding concepts? Do you wish you had a reliable source of answers and explanations for your homework and exams? If so, you might want to check out fisica blatt solucionario, a collection of solutions for the problems in the book Fundamentos de Fisica by Frank Blatt.
In this article, we will explain what fisica blatt solucionario is, who Frank Blatt was, and how to use fisica blatt solucionario effectively. By the end of this article, you will have a better idea of how fisica blatt solucionario can help you improve your physics skills and knowledge.
-
What is Fisica Blatt Solucionario?
-
Fisica blatt solucionario is a PDF document that contains the solutions for all the problems in the book Fundamentos de Fisica by Frank Blatt. The book covers topics such as mechanics, thermodynamics, electricity, magnetism, optics, relativity, quantum physics, nuclear physics, and more. The book is written in Spanish and is widely used in Latin American universities as a textbook for introductory physics courses.
-
The solutions in fisica blatt solucionario are detailed and clear, showing the steps and formulas used to obtain the final answer. The solutions also include diagrams, graphs, tables, and explanations to help the students understand the concepts and principles behind each problem. The solutions are organized by chapters and sections according to the book.
-
Some examples of problems and solutions from fisica blatt solucionario are shown in the table below; a short numerical check of the answers follows the table:
-
-
-
Problem
-
Solution
-
-
-
A car travels at a constant speed of 60 km/h along a straight road. How long does it take to cover a distance of 120 km?
-
The time taken by the car to cover a distance of 120 km is given by: $$t = \frac{d}{v}$$ where $t$ is the time, $d$ is the distance, and $v$ is the speed. Substituting the given values, we get: $$t = \frac{120}{60}$$ $$t = 2 \text{ hours}$$ Therefore, it takes 2 hours for the car to cover a distance of 120 km.
-
-
-
A block of mass 2 kg slides down an inclined plane that makes an angle of 30 degrees with the horizontal. The coefficient of kinetic friction between the block and the plane is 0.2. What is the acceleration of the block?
-
The forces acting on the block are: - The weight $W = mg$, where $m$ is the mass and $g$ is the gravitational acceleration. - The normal force $N$, which is perpendicular to the plane. - The friction force $f$, which opposes the motion of the block.
- The components of the weight along and perpendicular to the plane are: $$W_\parallel = mg \sin \theta$$ $$W_\perp = mg \cos \theta$$ where $\theta$ is the angle of inclination. The normal force is equal to the component of the weight perpendicular to the plane: $$N = W_\perp = mg \cos \theta$$ The friction force is given by: $$f = \mu N = \mu mg \cos \theta$$ where $\mu$ is the coefficient of kinetic friction. The net force along the plane is: $$F_\parallel = W_\parallel - f = mg \sin \theta - \mu mg \cos \theta$$ According to Newton's second law, this net force causes an acceleration $a$ along the plane: $$F_\parallel = ma$$ Solving for $a$, we get: $$a = \frac{F_\parallel}{m} = g (\sin \theta - \mu \cos \theta)$$ Substituting the given values, we get: $$a = 9.8 (\sin 30^\circ - 0.2 \cos 30^\circ)$$ $$a \approx 3.20 \ \text{m/s}^2$$ Therefore, the acceleration of the block is about 3.20 m/s².
-
-
-
A light ray travels from air into water at an angle of incidence of 45 degrees. The refractive index of air is 1.00 and that of water is 1.33. What is the angle of refraction?
-
The angle of refraction can be found using Snell's law: $$n_1 \sin i_1 = n_2 \sin i_2$$ where $n_1$ and $n_2$ are the refractive indices of air and water respectively, $i_1$ is the angle of incidence in air, and $i_2$ is the angle of refraction in water. Substituting the given values, we get: $$1.00 \sin 45^\circ = 1.33 \sin i_2$$ Solving for $i_2$, we get: $$\sin i_2 = \frac{1.00}{1.33} \sin 45^\circ$$ $$i_2 = \sin^{-1}(0.53)$$ $$i_2 = 32^\circ$$ Therefore, the angle of refraction is 32 degrees.
-
-
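These worked answers are easy to double-check. The short Python script below is a verification sketch added for this guide (it is not taken from fisica blatt solucionario or from Blatt's textbook); the numbers and formulas come directly from the three problems above.

```python
import math

# Problem 1: time to cover 120 km at a constant 60 km/h
d, v = 120.0, 60.0                        # distance in km, speed in km/h
t = d / v
print(f"t  = {t:.1f} h")                  # 2.0 h

# Problem 2: 2 kg block on a 30-degree incline, kinetic friction mu = 0.2, g = 9.8 m/s^2
g, theta, mu = 9.8, math.radians(30), 0.2
a = g * (math.sin(theta) - mu * math.cos(theta))
print(f"a  = {a:.2f} m/s^2")              # about 3.20 m/s^2

# Problem 3: Snell's law, air (n1 = 1.00) into water (n2 = 1.33), 45-degree incidence
n1, n2, i1 = 1.00, 1.33, math.radians(45)
i2 = math.degrees(math.asin((n1 / n2) * math.sin(i1)))
print(f"i2 = {i2:.0f} degrees")           # about 32 degrees
```

Running the script reproduces the answers above: 2 hours, roughly 3.20 m/s², and an angle of refraction of about 32 degrees.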
-
Who is Frank Blatt and What is His Contribution to Physics Education?
-
Frank Blatt was a Canadian physicist and educator who was born in 1924 and died in 1999. He obtained his PhD in physics from McGill University in Montreal in 1953. He then worked as a professor at several universities in Canada and abroad, including Dalhousie University in Halifax, University College London in England, University of Alberta in Edmonton, University of British Columbia in Vancouver, University College Dublin in Ireland, University College Cork in Ireland, University College Galway in Ireland, University College Swansea in Wales, University College Cardiff in Wales, University College Belfast in Northern Ireland, and a second period at University College Dublin.
-
He was also a visiting professor at several institutions around the world, such as Harvard University in USA, Massachusetts Institute of Technology (MIT) in USA, University of California Berkeley in USA, University of California Los Angeles (UCLA) in USA, University of California San Diego (UCSD) in USA, University of California Santa Barbara (UCSB) in USA, University of California Irvine (UCI) in USA, University of Texas at Austin in USA, University of Illinois at Urbana-Champaign in USA, University of Chicago in USA, University of Michigan in USA, University of Toronto in Canada, McGill University in Canada, University of Montreal in Canada, University of Ottawa in Canada, University of Quebec in Canada, University of Waterloo in Canada, University of Calgary in Canada, University of British Columbia again in Canada, and many others.
-
He was also a prolific author of books and articles on physics and related subjects. He wrote several textbooks on physics, such as Principles of Physics, Modern Physics, Physics for Engineers, Physics for Scientists, Physics for Medical Students, and Fundamentos de Fisica. He also wrote books on topics such as nuclear energy, environmental science, biophysics, medical physics, astronomy, cosmology, and philosophy of science.
-
His books were widely acclaimed for their clarity, rigor, comprehensiveness, and pedagogical value. They were translated into several languages and used by millions of students and teachers around the world. His books also received many awards and honors, such as the American Association of Physics Teachers Award for Excellence in Physics Education, the Canadian Association of Physicists Medal for Outstanding Achievement in Physics Education, the Royal Society of Canada Medal for Distinguished Service to Science Education, and the Order of Canada for his contributions to science and society.
-
He was also a respected researcher and innovator in physics and related fields. He made significant contributions to the fields of nuclear physics, solid state physics, plasma physics, quantum mechanics, relativity, statistical mechanics, thermodynamics, electromagnetism, optics, acoustics, fluid mechanics, biophysics, medical physics, environmental physics, and more. He published over 300 papers in peer-reviewed journals and conference proceedings. He also held several patents for inventions such as a nuclear reactor design, a plasma generator, a laser device, a solar cell, a superconducting magnet, a magnetic resonance imaging (MRI) machine, a holographic display system, and more.
-
-
He was also a leader and mentor in the physics community. He served as the president or vice-president of several professional associations and societies, such as the American Physical Society (APS), the Canadian Association of Physicists (CAP), the International Union of Pure and Applied Physics (IUPAP), the International Commission on Physics Education (ICPE), the International Commission on Medical Physics (ICMP), the International Commission on Environmental Physics (ICEP), and more. He also served as the editor or associate editor of several journals and magazines, such as The Physical Review, The American Journal of Physics, The Canadian Journal of Physics, The Journal of Applied Physics, The Journal of Medical Physics, The Journal of Environmental Physics, The Physics Teacher, The Physics World, and more. He also organized or chaired several conferences and workshops on physics and related topics. He also supervised or advised hundreds of graduate students and postdoctoral fellows who went on to become successful physicists and educators themselves.
-
He was also a passionate and inspiring teacher who loved to share his knowledge and enthusiasm for physics with his students. He taught courses on various levels and topics of physics at many universities around the world. He used innovative methods and techniques to make his lectures engaging and interactive. He used demonstrations, experiments, simulations, animations, videos, games, puzzles, quizzes, projects, assignments, tests, exams, feedback, and more to enhance his students' learning experience. He also encouraged his students to ask questions, participate in discussions, collaborate with peers, apply their knowledge to real-world situations, and explore their own interests and passions in physics. He was known for his humor, kindness, patience, and enthusiasm in teaching physics. He received many awards and recognitions for his excellence in teaching, such as the Outstanding Teacher Award from the American Association of Physics Teachers, the Distinguished Teaching Award from the University of British Columbia, the Excellence in Teaching Award from the University of Alberta, the Teaching Excellence Award from the University of Waterloo, and more.
-
How to Use Fisica Blatt Solucionario Effectively?
-
Fisica blatt solucionario can be a very useful resource for physics students who want to practice their problem-solving skills and deepen their understanding of physics concepts. However, it is important to use fisica blatt solucionario wisely and responsibly. Here are some tips and recommendations for using fisica blatt solucionario effectively:
-
-
Do not use fisica blatt solucionario as a substitute for studying or learning physics. Fisica blatt solucionario is meant to complement your learning process, not replace it. You should always read the textbook, attend the lectures, do the homework, and review the material before consulting fisica blatt solucionario.
-
Do not copy or memorize the solutions from fisica blatt solucionario without understanding them. Fisica blatt solucionario is not a cheat sheet or a shortcut to get good grades. Copying or memorizing the solutions will not help you learn physics or prepare you for exams. You should always try to solve the problems on your own first, using your own knowledge and reasoning. If you get stuck or make a mistake, then you can check fisica blatt solucionario for guidance and feedback.
-
Do not rely on fisica blatt solucionario as the only source of solutions or explanations. Fisica blatt solucionario is not a definitive or authoritative answer key. It is possible that some solutions or explanations in fisica blatt solucionario are incomplete, incorrect, unclear, or outdated. You should always compare and contrast fisica blatt solucionario with other sources of solutions or explanations, such as your instructor, your classmates, your tutor, other books, websites, videos, etc.
-
Do use fisica blatt solucionario as a tool for self-assessment and improvement. Fisica blatt solucionario can help you identify your strengths and weaknesses in physics problem-solving. You can use fisica blatt solucionario to check your answers, correct your errors, clarify your doubts, fill in your gaps, reinforce your concepts, expand your perspectives, and challenge yourself with more difficult problems.
-
Do use fisica blatt solucionario as a resource for learning and discovery. Fisica blatt solucionario can expose you to different methods and techniques for solving physics problems. You can learn from fisica blatt solucionario how to apply physics principles and formulas, how to manipulate mathematical expressions and equations, how to use diagrams and graphs, how to interpret physical phenomena and data, how to communicate your solutions and arguments, and more. You can also discover new facts and insights about physics from fisica blatt solucionario that might spark your curiosity and interest.
-
-
By following these tips and recommendations, you can make the most out of fisica blatt solucionario and enhance your physics learning experience.
-
Conclusion
-
In conclusion, fisica blatt solucionario is a collection of solutions for the problems in the book Fundamentos de Fisica by Frank Blatt. It is a valuable resource for physics students who want to practice their problem-solving skills and deepen their understanding of physics concepts. However, it is important to use fisica blatt solucionario wisely and responsibly. You should always try to solve the problems on your own first, understand the solutions and explanations in fisica blatt solucionario, compare and contrast fisica blatt solucionario with other sources of solutions and explanations, and use fisica blatt solucionario as a tool for self-assessment and improvement. We hope that this article has given you a comprehensive guide on what fisica blatt solucionario is, who Frank Blatt was, and how to use fisica blatt solucionario effectively. If you are interested in learning more about fisica blatt solucionario, you can download it for free from the link below. You can also find more information and support on fisica blatt solucionario from other online resources. Thank you for reading this article and we wish you all the best in your physics studies.
FAQs
-
-
Where can I download fisica blatt solucionario for free?
-
You can download fisica blatt solucionario for free from this link: https://epdfx.com/fisica-blatt-solucionario_5edea770e2b6f50f2889f533_pdf.html. This is a PDF document that contains the solutions for all the problems in the book Fundamentos de Fisica by Frank Blatt.
-
What are some other books or websites that offer similar solutions for physics problems?
-
Some other books or websites that offer similar solutions for physics problems are:
-
-
Physics for Scientists and Engineers by Raymond A. Serway and John W. Jewett. This is a textbook that covers topics such as mechanics, thermodynamics, electricity, magnetism, optics, relativity, quantum physics, nuclear physics, and more. It also includes a student solutions manual that contains the solutions for some of the problems in the book.
-
University Physics by Hugh D. Young and Roger A. Freedman. This is another textbook that covers similar topics as the previous one. It also includes a student solutions manual that contains the solutions for some of the problems in the book.
-
Physics Classroom. This is a website that offers tutorials, animations, simulations, videos, quizzes, tests, and more on various topics of physics. It also includes a section called Minds On Physics that contains interactive exercises and problems with feedback and hints.
-
Khan Academy. This is another website that offers videos, articles, exercises, quizzes, tests, and more on various topics of physics. It also includes a section called Physics that contains problems with solutions and explanations.
-
-
How can I check if my answers are correct using fisica blatt solucionario?
-
You can check if your answers are correct using fisica blatt solucionario by comparing your answers with the ones given in fisica blatt solucionario. If your answers match with the ones in fisica blatt solucionario, then you are probably correct. If your answers do not match with the ones in fisica blatt solucionario, then you might have made a mistake or used a different method or approach. In that case, you should try to understand where you went wrong or how you can improve your solution.
-
How can I improve my physics skills and knowledge using fisica blatt solucionario?
-
You can improve your physics skills and knowledge using fisica blatt solucionario by following these steps:
-
-
Read the textbook, attend the lectures, do the homework, and review the material before consulting fisica blatt solucionario.
-
Try to solve the problems on your own first, using your own knowledge and reasoning.
-
Check your answers with fisica blatt solucionario and see if they match.
-
If they match, review the solutions and explanations in fisica blatt solucionario and see if you can learn anything new or different from them.
-
If they do not match, identify your errors or gaps and try to correct them or fill them using fisica blatt solucionario or other sources of solutions or explanations.
-
Practice more problems of different types and levels of difficulty using fisica blatt solucionario or other sources of problems.
-
Evaluate your progress and performance using tests or exams based on fisica blatt solucionario or other sources of questions.
-
-
What are some challenges or limitations of using fisica blatt solucionario?
-
Some challenges or limitations of using fisica blatt solucionario are:
-
-
Fisica blatt solucionario is not a substitute for studying or learning physics. You still need to read the textbook, attend the lectures, do the homework, and review the material before consulting fisica blatt solucionario.
-
Fisica blatt solucionario is not a cheat sheet or a shortcut to get good grades. You should not copy or memorize the solutions from fisica blatt solucionario without understanding them. You should always try to solve the problems on your own first and use fisica blatt solucionario as a guidance and feedback.
-
Fisica blatt solucionario is not a definitive or authoritative answer key. It is possible that some solutions or explanations in fisica blatt solucionario are incomplete, incorrect, unclear, or outdated. You should always compare and contrast fisica blatt solucionario with other sources of solutions or explanations and verify your answers with your instructor or tutor.
-
Fisica blatt solucionario is not a comprehensive or exhaustive source of problems or questions. It only contains the solutions for the problems in the book Fundamentos de Fisica by Frank Blatt. You should also practice problems from other books or websites that cover similar topics and concepts.
-
Fisica blatt solucionario is not a personalized or adaptive source of learning. It does not take into account your individual needs, preferences, goals, or interests. You should also use other methods and resources that suit your learning style and pace, such as online courses, videos, podcasts, games, apps, etc.
-
-
By being aware of these challenges and limitations, you can avoid some common mistakes and pitfalls when using fisica blatt solucionario and use it more effectively and responsibly.
-
-
\ No newline at end of file
diff --git a/spaces/ramiin2/AutoGPT/autogpt/commands/web_playwright.py b/spaces/ramiin2/AutoGPT/autogpt/commands/web_playwright.py
deleted file mode 100644
index 4e388ded203cefb5e24f9116f7fe5b8a94893413..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/autogpt/commands/web_playwright.py
+++ /dev/null
@@ -1,80 +0,0 @@
-"""Web scraping commands using Playwright"""
-from __future__ import annotations
-
-try:
- from playwright.sync_api import sync_playwright
-except ImportError:
- print(
- "Playwright not installed. Please install it with 'pip install playwright' to use."
- )
-from bs4 import BeautifulSoup
-
-from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
-
-
-def scrape_text(url: str) -> str:
- """Scrape text from a webpage
-
- Args:
- url (str): The URL to scrape text from
-
- Returns:
- str: The scraped text
- """
- with sync_playwright() as p:
- browser = p.chromium.launch()
- page = browser.new_page()
-
- try:
- page.goto(url)
- html_content = page.content()
- soup = BeautifulSoup(html_content, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
-
- except Exception as e:
- text = f"Error: {str(e)}"
-
- finally:
- browser.close()
-
- return text
-
-
-def scrape_links(url: str) -> str | list[str]:
- """Scrape links from a webpage
-
- Args:
- url (str): The URL to scrape links from
-
- Returns:
- Union[str, List[str]]: The scraped links
- """
- with sync_playwright() as p:
- browser = p.chromium.launch()
- page = browser.new_page()
-
- try:
- page.goto(url)
- html_content = page.content()
- soup = BeautifulSoup(html_content, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- hyperlinks = extract_hyperlinks(soup, url)
- formatted_links = format_hyperlinks(hyperlinks)
-
- except Exception as e:
- formatted_links = f"Error: {str(e)}"
-
- finally:
- browser.close()
-
- return formatted_links
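For context, here is a minimal usage sketch of the two scraping helpers defined above. It is not part of the original file; it assumes Playwright and its Chromium browser are installed (`pip install playwright` followed by `playwright install chromium`), that `beautifulsoup4` is available, and that the module is importable as `autogpt.commands.web_playwright` (the path shown in the diff header). The URL is only a placeholder.

```python
# Hypothetical usage sketch -- not part of the original AutoGPT module.
from autogpt.commands.web_playwright import scrape_links, scrape_text

url = "https://example.com"  # placeholder URL

page_text = scrape_text(url)     # plain page text with scripts/styles stripped
page_links = scrape_links(url)   # formatted hyperlinks, or an error string

print(page_text[:500])           # preview the first 500 characters
print(page_links if isinstance(page_links, str) else page_links[:10])
```

Both helpers launch a fresh headless Chromium instance per call and close it in a `finally` block, so they are safe to call repeatedly but relatively slow.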
diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/dns/promises.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/dns/promises.d.ts
deleted file mode 100644
index 77cd807bd501b5a4d8687ed604989f6c2c252f2e..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/dns/promises.d.ts
+++ /dev/null
@@ -1,370 +0,0 @@
-/**
- * The `dns.promises` API provides an alternative set of asynchronous DNS methods
- * that return `Promise` objects rather than using callbacks. The API is accessible
- * via `require('dns').promises` or `require('dns/promises')`.
- * @since v10.6.0
- */
-declare module 'dns/promises' {
- import {
- LookupAddress,
- LookupOneOptions,
- LookupAllOptions,
- LookupOptions,
- AnyRecord,
- CaaRecord,
- MxRecord,
- NaptrRecord,
- SoaRecord,
- SrvRecord,
- ResolveWithTtlOptions,
- RecordWithTtl,
- ResolveOptions,
- ResolverOptions,
- } from 'node:dns';
- /**
- * Returns an array of IP address strings, formatted according to [RFC 5952](https://tools.ietf.org/html/rfc5952#section-6),
- * that are currently configured for DNS resolution. A string will include a port
- * section if a custom port is used.
- *
- * ```js
- * [
- * '4.4.4.4',
- * '2001:4860:4860::8888',
- * '4.4.4.4:1053',
- * '[2001:4860:4860::8888]:1053',
- * ]
- * ```
- * @since v10.6.0
- */
- function getServers(): string[];
- /**
- * Resolves a host name (e.g. `'nodejs.org'`) into the first found A (IPv4) or
- * AAAA (IPv6) record. All `option` properties are optional. If `options` is an
- * integer, then it must be `4` or `6` – if `options` is not provided, then IPv4
- * and IPv6 addresses are both returned if found.
- *
- * With the `all` option set to `true`, the `Promise` is resolved with `addresses`being an array of objects with the properties `address` and `family`.
- *
- * On error, the `Promise` is rejected with an `Error` object, where `err.code`is the error code.
- * Keep in mind that `err.code` will be set to `'ENOTFOUND'` not only when
- * the host name does not exist but also when the lookup fails in other ways
- * such as no available file descriptors.
- *
- * `dnsPromises.lookup()` does not necessarily have anything to do with the DNS
- * protocol. The implementation uses an operating system facility that can
- * associate names with addresses, and vice versa. This implementation can have
- * subtle but important consequences on the behavior of any Node.js program. Please
- * take some time to consult the `Implementation considerations section` before
- * using `dnsPromises.lookup()`.
- *
- * Example usage:
- *
- * ```js
- * const dns = require('dns');
- * const dnsPromises = dns.promises;
- * const options = {
- * family: 6,
- * hints: dns.ADDRCONFIG | dns.V4MAPPED,
- * };
- *
- * dnsPromises.lookup('example.com', options).then((result) => {
- * console.log('address: %j family: IPv%s', result.address, result.family);
- * // address: "2606:2800:220:1:248:1893:25c8:1946" family: IPv6
- * });
- *
- * // When options.all is true, the result will be an Array.
- * options.all = true;
- * dnsPromises.lookup('example.com', options).then((result) => {
- * console.log('addresses: %j', result);
- * // addresses: [{"address":"2606:2800:220:1:248:1893:25c8:1946","family":6}]
- * });
- * ```
- * @since v10.6.0
- */
-    function lookup(hostname: string, family: number): Promise<LookupAddress>;
-    function lookup(hostname: string, options: LookupOneOptions): Promise<LookupAddress>;
-    function lookup(hostname: string, options: LookupAllOptions): Promise<LookupAddress[]>;
-    function lookup(hostname: string, options: LookupOptions): Promise<LookupAddress | LookupAddress[]>;
-    function lookup(hostname: string): Promise<LookupAddress>;
- /**
- * Resolves the given `address` and `port` into a host name and service using
- * the operating system's underlying `getnameinfo` implementation.
- *
- * If `address` is not a valid IP address, a `TypeError` will be thrown.
- * The `port` will be coerced to a number. If it is not a legal port, a `TypeError`will be thrown.
- *
- * On error, the `Promise` is rejected with an `Error` object, where `err.code`is the error code.
- *
- * ```js
- * const dnsPromises = require('dns').promises;
- * dnsPromises.lookupService('127.0.0.1', 22).then((result) => {
- * console.log(result.hostname, result.service);
- * // Prints: localhost ssh
- * });
- * ```
- * @since v10.6.0
- */
- function lookupService(
- address: string,
- port: number
- ): Promise<{
- hostname: string;
- service: string;
- }>;
- /**
- * Uses the DNS protocol to resolve a host name (e.g. `'nodejs.org'`) into an array
- * of the resource records. When successful, the `Promise` is resolved with an
- * array of resource records. The type and structure of individual results vary
- * based on `rrtype`:
- *
- *
- *
- * On error, the `Promise` is rejected with an `Error` object, where `err.code`is one of the `DNS error codes`.
- * @since v10.6.0
- * @param hostname Host name to resolve.
- * @param [rrtype='A'] Resource record type.
- */
-    function resolve(hostname: string): Promise<string[]>;
-    function resolve(hostname: string, rrtype: 'A'): Promise<string[]>;
-    function resolve(hostname: string, rrtype: 'AAAA'): Promise<string[]>;
-    function resolve(hostname: string, rrtype: 'ANY'): Promise<AnyRecord[]>;
-    function resolve(hostname: string, rrtype: 'CAA'): Promise<CaaRecord[]>;
-    function resolve(hostname: string, rrtype: 'CNAME'): Promise<string[]>;
-    function resolve(hostname: string, rrtype: 'MX'): Promise<MxRecord[]>;
-    function resolve(hostname: string, rrtype: 'NAPTR'): Promise<NaptrRecord[]>;
-    function resolve(hostname: string, rrtype: 'NS'): Promise<string[]>;
-    function resolve(hostname: string, rrtype: 'PTR'): Promise<string[]>;
-    function resolve(hostname: string, rrtype: 'SOA'): Promise<SoaRecord>;
-    function resolve(hostname: string, rrtype: 'SRV'): Promise<SrvRecord[]>;
-    function resolve(hostname: string, rrtype: 'TXT'): Promise<string[][]>;
-    function resolve(hostname: string, rrtype: string): Promise<string[] | MxRecord[] | NaptrRecord[] | SoaRecord | SrvRecord[] | string[][] | AnyRecord[]>;
- /**
- * Uses the DNS protocol to resolve IPv4 addresses (`A` records) for the`hostname`. On success, the `Promise` is resolved with an array of IPv4
- * addresses (e.g. `['74.125.79.104', '74.125.79.105', '74.125.79.106']`).
- * @since v10.6.0
- * @param hostname Host name to resolve.
- */
-    function resolve4(hostname: string): Promise<string[]>;
-    function resolve4(hostname: string, options: ResolveWithTtlOptions): Promise<RecordWithTtl[]>;
-    function resolve4(hostname: string, options: ResolveOptions): Promise<string[] | RecordWithTtl[]>;
- /**
- * Uses the DNS protocol to resolve IPv6 addresses (`AAAA` records) for the`hostname`. On success, the `Promise` is resolved with an array of IPv6
- * addresses.
- * @since v10.6.0
- * @param hostname Host name to resolve.
- */
-    function resolve6(hostname: string): Promise<string[]>;
-    function resolve6(hostname: string, options: ResolveWithTtlOptions): Promise<RecordWithTtl[]>;
-    function resolve6(hostname: string, options: ResolveOptions): Promise<string[] | RecordWithTtl[]>;
- /**
- * Uses the DNS protocol to resolve all records (also known as `ANY` or `*` query).
- * On success, the `Promise` is resolved with an array containing various types of
- * records. Each object has a property `type` that indicates the type of the
- * current record. And depending on the `type`, additional properties will be
- * present on the object:
- *
- *
- *
- * Here is an example of the result object:
- *
- * ```js
- * [ { type: 'A', address: '127.0.0.1', ttl: 299 },
- * { type: 'CNAME', value: 'example.com' },
- * { type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 },
- * { type: 'NS', value: 'ns1.example.com' },
- * { type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] },
- * { type: 'SOA',
- * nsname: 'ns1.example.com',
- * hostmaster: 'admin.example.com',
- * serial: 156696742,
- * refresh: 900,
- * retry: 900,
- * expire: 1800,
- * minttl: 60 } ]
- * ```
- * @since v10.6.0
- */
-    function resolveAny(hostname: string): Promise<AnyRecord[]>;
- /**
- * Uses the DNS protocol to resolve `CAA` records for the `hostname`. On success,
- * the `Promise` is resolved with an array of objects containing available
- * certification authority authorization records available for the `hostname`(e.g. `[{critical: 0, iodef: 'mailto:pki@example.com'},{critical: 128, issue: 'pki.example.com'}]`).
- * @since v15.0.0, v14.17.0
- */
-    function resolveCaa(hostname: string): Promise<CaaRecord[]>;
- /**
- * Uses the DNS protocol to resolve `CNAME` records for the `hostname`. On success,
- * the `Promise` is resolved with an array of canonical name records available for
- * the `hostname` (e.g. `['bar.example.com']`).
- * @since v10.6.0
- */
-    function resolveCname(hostname: string): Promise<string[]>;
- /**
- * Uses the DNS protocol to resolve mail exchange records (`MX` records) for the`hostname`. On success, the `Promise` is resolved with an array of objects
- * containing both a `priority` and `exchange` property (e.g.`[{priority: 10, exchange: 'mx.example.com'}, ...]`).
- * @since v10.6.0
- */
-    function resolveMx(hostname: string): Promise<MxRecord[]>;
- /**
- * Uses the DNS protocol to resolve regular expression based records (`NAPTR`records) for the `hostname`. On success, the `Promise` is resolved with an array
- * of objects with the following properties:
- *
- * * `flags`
- * * `service`
- * * `regexp`
- * * `replacement`
- * * `order`
- * * `preference`
- *
- * ```js
- * {
- * flags: 's',
- * service: 'SIP+D2U',
- * regexp: '',
- * replacement: '_sip._udp.example.com',
- * order: 30,
- * preference: 100
- * }
- * ```
- * @since v10.6.0
- */
-    function resolveNaptr(hostname: string): Promise<NaptrRecord[]>;
- /**
- * Uses the DNS protocol to resolve name server records (`NS` records) for the`hostname`. On success, the `Promise` is resolved with an array of name server
- * records available for `hostname` (e.g.`['ns1.example.com', 'ns2.example.com']`).
- * @since v10.6.0
- */
-    function resolveNs(hostname: string): Promise<string[]>;
- /**
- * Uses the DNS protocol to resolve pointer records (`PTR` records) for the`hostname`. On success, the `Promise` is resolved with an array of strings
- * containing the reply records.
- * @since v10.6.0
- */
-    function resolvePtr(hostname: string): Promise<string[]>;
- /**
- * Uses the DNS protocol to resolve a start of authority record (`SOA` record) for
- * the `hostname`. On success, the `Promise` is resolved with an object with the
- * following properties:
- *
- * * `nsname`
- * * `hostmaster`
- * * `serial`
- * * `refresh`
- * * `retry`
- * * `expire`
- * * `minttl`
- *
- * ```js
- * {
- * nsname: 'ns.example.com',
- * hostmaster: 'root.example.com',
- * serial: 2013101809,
- * refresh: 10000,
- * retry: 2400,
- * expire: 604800,
- * minttl: 3600
- * }
- * ```
- * @since v10.6.0
- */
-    function resolveSoa(hostname: string): Promise<SoaRecord>;
- /**
- * Uses the DNS protocol to resolve service records (`SRV` records) for the`hostname`. On success, the `Promise` is resolved with an array of objects with
- * the following properties:
- *
- * * `priority`
- * * `weight`
- * * `port`
- * * `name`
- *
- * ```js
- * {
- * priority: 10,
- * weight: 5,
- * port: 21223,
- * name: 'service.example.com'
- * }
- * ```
- * @since v10.6.0
- */
-    function resolveSrv(hostname: string): Promise<SrvRecord[]>;
- /**
- * Uses the DNS protocol to resolve text queries (`TXT` records) for the`hostname`. On success, the `Promise` is resolved with a two-dimensional array
- * of the text records available for `hostname` (e.g.`[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]`). Each sub-array contains TXT chunks of
- * one record. Depending on the use case, these could be either joined together or
- * treated separately.
- * @since v10.6.0
- */
-    function resolveTxt(hostname: string): Promise<string[][]>;
- /**
- * Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an
- * array of host names.
- *
- * On error, the `Promise` is rejected with an `Error` object, where `err.code`is one of the `DNS error codes`.
- * @since v10.6.0
- */
-    function reverse(ip: string): Promise<string[]>;
- /**
- * Sets the IP address and port of servers to be used when performing DNS
- * resolution. The `servers` argument is an array of [RFC 5952](https://tools.ietf.org/html/rfc5952#section-6) formatted
- * addresses. If the port is the IANA default DNS port (53) it can be omitted.
- *
- * ```js
- * dnsPromises.setServers([
- * '4.4.4.4',
- * '[2001:4860:4860::8888]',
- * '4.4.4.4:1053',
- * '[2001:4860:4860::8888]:1053',
- * ]);
- * ```
- *
- * An error will be thrown if an invalid address is provided.
- *
- * The `dnsPromises.setServers()` method must not be called while a DNS query is in
- * progress.
- *
- * This method works much like [resolve.conf](https://man7.org/linux/man-pages/man5/resolv.conf.5.html).
- * That is, if attempting to resolve with the first server provided results in a`NOTFOUND` error, the `resolve()` method will _not_ attempt to resolve with
- * subsequent servers provided. Fallback DNS servers will only be used if the
- * earlier ones time out or result in some other error.
- * @since v10.6.0
- * @param servers array of `RFC 5952` formatted addresses
- */
-    function setServers(servers: ReadonlyArray<string>): void;
- /**
- * Set the default value of `verbatim` in `dns.lookup()` and `dnsPromises.lookup()`. The value could be:
- *
- * * `ipv4first`: sets default `verbatim` `false`.
- * * `verbatim`: sets default `verbatim` `true`.
- *
- * The default is `ipv4first` and `dnsPromises.setDefaultResultOrder()` have
- * higher priority than `--dns-result-order`. When using `worker threads`,`dnsPromises.setDefaultResultOrder()` from the main thread won't affect the
- * default dns orders in workers.
- * @since v16.4.0, v14.18.0
- * @param order must be `'ipv4first'` or `'verbatim'`.
- */
- function setDefaultResultOrder(order: 'ipv4first' | 'verbatim'): void;
- class Resolver {
- constructor(options?: ResolverOptions);
- cancel(): void;
- getServers: typeof getServers;
- resolve: typeof resolve;
- resolve4: typeof resolve4;
- resolve6: typeof resolve6;
- resolveAny: typeof resolveAny;
- resolveCname: typeof resolveCname;
- resolveMx: typeof resolveMx;
- resolveNaptr: typeof resolveNaptr;
- resolveNs: typeof resolveNs;
- resolvePtr: typeof resolvePtr;
- resolveSoa: typeof resolveSoa;
- resolveSrv: typeof resolveSrv;
- resolveTxt: typeof resolveTxt;
- reverse: typeof reverse;
- setLocalAddress(ipv4?: string, ipv6?: string): void;
- setServers: typeof setServers;
- }
-}
-declare module 'node:dns/promises' {
- export * from 'dns/promises';
-}
diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/body-parser/lib/read.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/body-parser/lib/read.js
deleted file mode 100644
index fce6283f50961e68c2f576031ed5e3d4fdc39984..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/server/node_modules/body-parser/lib/read.js
+++ /dev/null
@@ -1,205 +0,0 @@
-/*!
- * body-parser
- * Copyright(c) 2014-2015 Douglas Christopher Wilson
- * MIT Licensed
- */
-
-'use strict'
-
-/**
- * Module dependencies.
- * @private
- */
-
-var createError = require('http-errors')
-var destroy = require('destroy')
-var getBody = require('raw-body')
-var iconv = require('iconv-lite')
-var onFinished = require('on-finished')
-var unpipe = require('unpipe')
-var zlib = require('zlib')
-
-/**
- * Module exports.
- */
-
-module.exports = read
-
-/**
- * Read a request into a buffer and parse.
- *
- * @param {object} req
- * @param {object} res
- * @param {function} next
- * @param {function} parse
- * @param {function} debug
- * @param {object} options
- * @private
- */
-
-function read (req, res, next, parse, debug, options) {
- var length
- var opts = options
- var stream
-
- // flag as parsed
- req._body = true
-
- // read options
- var encoding = opts.encoding !== null
- ? opts.encoding
- : null
- var verify = opts.verify
-
- try {
- // get the content stream
- stream = contentstream(req, debug, opts.inflate)
- length = stream.length
- stream.length = undefined
- } catch (err) {
- return next(err)
- }
-
- // set raw-body options
- opts.length = length
- opts.encoding = verify
- ? null
- : encoding
-
- // assert charset is supported
- if (opts.encoding === null && encoding !== null && !iconv.encodingExists(encoding)) {
- return next(createError(415, 'unsupported charset "' + encoding.toUpperCase() + '"', {
- charset: encoding.toLowerCase(),
- type: 'charset.unsupported'
- }))
- }
-
- // read body
- debug('read body')
- getBody(stream, opts, function (error, body) {
- if (error) {
- var _error
-
- if (error.type === 'encoding.unsupported') {
- // echo back charset
- _error = createError(415, 'unsupported charset "' + encoding.toUpperCase() + '"', {
- charset: encoding.toLowerCase(),
- type: 'charset.unsupported'
- })
- } else {
- // set status code on error
- _error = createError(400, error)
- }
-
- // unpipe from stream and destroy
- if (stream !== req) {
- unpipe(req)
- destroy(stream, true)
- }
-
- // read off entire request
- dump(req, function onfinished () {
- next(createError(400, _error))
- })
- return
- }
-
- // verify
- if (verify) {
- try {
- debug('verify body')
- verify(req, res, body, encoding)
- } catch (err) {
- next(createError(403, err, {
- body: body,
- type: err.type || 'entity.verify.failed'
- }))
- return
- }
- }
-
- // parse
- var str = body
- try {
- debug('parse body')
- str = typeof body !== 'string' && encoding !== null
- ? iconv.decode(body, encoding)
- : body
- req.body = parse(str)
- } catch (err) {
- next(createError(400, err, {
- body: str,
- type: err.type || 'entity.parse.failed'
- }))
- return
- }
-
- next()
- })
-}
-
-/**
- * Get the content stream of the request.
- *
- * @param {object} req
- * @param {function} debug
- * @param {boolean} [inflate=true]
- * @return {object}
- * @api private
- */
-
-function contentstream (req, debug, inflate) {
- var encoding = (req.headers['content-encoding'] || 'identity').toLowerCase()
- var length = req.headers['content-length']
- var stream
-
- debug('content-encoding "%s"', encoding)
-
- if (inflate === false && encoding !== 'identity') {
- throw createError(415, 'content encoding unsupported', {
- encoding: encoding,
- type: 'encoding.unsupported'
- })
- }
-
- switch (encoding) {
- case 'deflate':
- stream = zlib.createInflate()
- debug('inflate body')
- req.pipe(stream)
- break
- case 'gzip':
- stream = zlib.createGunzip()
- debug('gunzip body')
- req.pipe(stream)
- break
- case 'identity':
- stream = req
- stream.length = length
- break
- default:
- throw createError(415, 'unsupported content encoding "' + encoding + '"', {
- encoding: encoding,
- type: 'encoding.unsupported'
- })
- }
-
- return stream
-}
-
-/**
- * Dump the contents of a request.
- *
- * @param {object} req
- * @param {function} callback
- * @api private
- */
-
-function dump (req, callback) {
- if (onFinished.isFinished(req)) {
- callback(null)
- } else {
- onFinished(req, callback)
- req.resume()
- }
-}
diff --git a/spaces/razakhan/text-summarizer/README.md b/spaces/razakhan/text-summarizer/README.md
deleted file mode 100644
index 08abfde3d79d71b34106c6cb0c1cb18b8e518775..0000000000000000000000000000000000000000
--- a/spaces/razakhan/text-summarizer/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Text Summarizer
-emoji: 🦀
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/DEMUL056ARCADEROMSPackepub !FULL!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/DEMUL056ARCADEROMSPackepub !FULL!.md
deleted file mode 100644
index 1a92eae3cc998213d80cde9400850b58c31cd321..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/DEMUL056ARCADEROMSPackepub !FULL!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-- What about your service? - I asked. - There must be someone left there, right? 8a78ff9644
-
-
-
diff --git a/spaces/rewoo/ReWOO-Demo/nodes/NodeCofig.py b/spaces/rewoo/ReWOO-Demo/nodes/NodeCofig.py
deleted file mode 100644
index d98a90a9307d1ae77ffcf26293f0ec1196592e05..0000000000000000000000000000000000000000
--- a/spaces/rewoo/ReWOO-Demo/nodes/NodeCofig.py
+++ /dev/null
@@ -1,7 +0,0 @@
-OPENAI_CONFIG = {
- "temperature": 0.5,
- "max_tokens": 256,
- "top_p": 1,
- "frequency_penalty": 0,
- "presence_penalty": 0,
-}
\ No newline at end of file
diff --git a/spaces/rishi9440/remove-photo-background/src/trainer.py b/spaces/rishi9440/remove-photo-background/src/trainer.py
deleted file mode 100644
index bd3d8be4eeeaf5cde08be16239bc7cdcb2d38bae..0000000000000000000000000000000000000000
--- a/spaces/rishi9440/remove-photo-background/src/trainer.py
+++ /dev/null
@@ -1,299 +0,0 @@
-import math
-import scipy
-import numpy as np
-from scipy.ndimage import grey_dilation, grey_erosion
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-__all__ = [
- 'supervised_training_iter',
- 'soc_adaptation_iter',
-]
-
-
-# ----------------------------------------------------------------------------------
-# Tool Classes/Functions
-# ----------------------------------------------------------------------------------
-
-class GaussianBlurLayer(nn.Module):
- """ Add Gaussian Blur to a 4D tensors
- This layer takes a 4D tensor of {N, C, H, W} as input.
- The Gaussian blur will be performed in given channel number (C) splitly.
- """
-
- def __init__(self, channels, kernel_size):
- """
- Arguments:
- channels (int): Channel for input tensor
- kernel_size (int): Size of the kernel used in blurring
- """
-
- super(GaussianBlurLayer, self).__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- assert self.kernel_size % 2 != 0
-
- self.op = nn.Sequential(
- nn.ReflectionPad2d(math.floor(self.kernel_size / 2)),
- nn.Conv2d(channels, channels, self.kernel_size,
- stride=1, padding=0, bias=None, groups=channels)
- )
-
- self._init_kernel()
-
- def forward(self, x):
- """
- Arguments:
- x (torch.Tensor): input 4D tensor
- Returns:
- torch.Tensor: Blurred version of the input
- """
-
- if not len(list(x.shape)) == 4:
- print('\'GaussianBlurLayer\' requires a 4D tensor as input\n')
- exit()
- elif not x.shape[1] == self.channels:
- print('In \'GaussianBlurLayer\', the required channel ({0}) is'
-                  ' not the same as input ({1})\n'.format(self.channels, x.shape[1]))
- exit()
-
- return self.op(x)
-
- def _init_kernel(self):
- sigma = 0.3 * ((self.kernel_size - 1) * 0.5 - 1) + 0.8
-
- n = np.zeros((self.kernel_size, self.kernel_size))
- i = math.floor(self.kernel_size / 2)
- n[i, i] = 1
- kernel = scipy.ndimage.gaussian_filter(n, sigma)
-
- for name, param in self.named_parameters():
- param.data.copy_(torch.from_numpy(kernel))
-
-# ----------------------------------------------------------------------------------
-
-
-# ----------------------------------------------------------------------------------
-# MODNet Training Functions
-# ----------------------------------------------------------------------------------
-
-blurer = GaussianBlurLayer(1, 3).cuda()
-
-
-def supervised_training_iter(
- modnet, optimizer, image, trimap, gt_matte,
- semantic_scale=10.0, detail_scale=10.0, matte_scale=1.0):
- """ Supervised training iteration of MODNet
- This function trains MODNet for one iteration in a labeled dataset.
-
- Arguments:
- modnet (torch.nn.Module): instance of MODNet
- optimizer (torch.optim.Optimizer): optimizer for supervised training
- image (torch.autograd.Variable): input RGB image
- its pixel values should be normalized
- trimap (torch.autograd.Variable): trimap used to calculate the losses
- its pixel values can be 0, 0.5, or 1
- (foreground=1, background=0, unknown=0.5)
- gt_matte (torch.autograd.Variable): ground truth alpha matte
- its pixel values are between [0, 1]
- semantic_scale (float): scale of the semantic loss
- NOTE: please adjust according to your dataset
- detail_scale (float): scale of the detail loss
- NOTE: please adjust according to your dataset
- matte_scale (float): scale of the matte loss
- NOTE: please adjust according to your dataset
-
- Returns:
- semantic_loss (torch.Tensor): loss of the semantic estimation [Low-Resolution (LR) Branch]
- detail_loss (torch.Tensor): loss of the detail prediction [High-Resolution (HR) Branch]
- matte_loss (torch.Tensor): loss of the semantic-detail fusion [Fusion Branch]
-
- Example:
- import torch
- from src.models.modnet import MODNet
- from src.trainer import supervised_training_iter
-
- bs = 16 # batch size
- lr = 0.01 # learn rate
- epochs = 40 # total epochs
-
- modnet = torch.nn.DataParallel(MODNet()).cuda()
- optimizer = torch.optim.SGD(modnet.parameters(), lr=lr, momentum=0.9)
- lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=int(0.25 * epochs), gamma=0.1)
-
- dataloader = CREATE_YOUR_DATALOADER(bs) # NOTE: please finish this function
-
- for epoch in range(0, epochs):
- for idx, (image, trimap, gt_matte) in enumerate(dataloader):
- semantic_loss, detail_loss, matte_loss = \
- supervised_training_iter(modnet, optimizer, image, trimap, gt_matte)
- lr_scheduler.step()
- """
-
- global blurer
-
- # set the model to train mode and clear the optimizer
- modnet.train()
- optimizer.zero_grad()
-
- # forward the model
- pred_semantic, pred_detail, pred_matte = modnet(image, False)
-
- # calculate the boundary mask from the trimap
- boundaries = (trimap < 0.5) + (trimap > 0.5)
-
- # calculate the semantic loss
- gt_semantic = F.interpolate(gt_matte, scale_factor=1/16, mode='bilinear')
- gt_semantic = blurer(gt_semantic)
- semantic_loss = torch.mean(F.mse_loss(pred_semantic, gt_semantic))
- semantic_loss = semantic_scale * semantic_loss
-
- # calculate the detail loss
- pred_boundary_detail = torch.where(boundaries, trimap, pred_detail)
- gt_detail = torch.where(boundaries, trimap, gt_matte)
- detail_loss = torch.mean(F.l1_loss(pred_boundary_detail, gt_detail))
- detail_loss = detail_scale * detail_loss
-
- # calculate the matte loss
- pred_boundary_matte = torch.where(boundaries, trimap, pred_matte)
- matte_l1_loss = F.l1_loss(pred_matte, gt_matte) + 4.0 * F.l1_loss(pred_boundary_matte, gt_matte)
- matte_compositional_loss = F.l1_loss(image * pred_matte, image * gt_matte) \
- + 4.0 * F.l1_loss(image * pred_boundary_matte, image * gt_matte)
- matte_loss = torch.mean(matte_l1_loss + matte_compositional_loss)
- matte_loss = matte_scale * matte_loss
-
- # calculate the final loss, backward the loss, and update the model
- loss = semantic_loss + detail_loss + matte_loss
- loss.backward()
- optimizer.step()
-
- # for test
- return semantic_loss, detail_loss, matte_loss
-
-
-def soc_adaptation_iter(
- modnet, backup_modnet, optimizer, image,
- soc_semantic_scale=100.0, soc_detail_scale=1.0):
- """ Self-Supervised sub-objective consistency (SOC) adaptation iteration of MODNet
- This function fine-tunes MODNet for one iteration in an unlabeled dataset.
- Note that SOC can only fine-tune a converged MODNet, i.e., MODNet that has been
- trained in a labeled dataset.
-
- Arguments:
- modnet (torch.nn.Module): instance of MODNet
- backup_modnet (torch.nn.Module): backup of the trained MODNet
- optimizer (torch.optim.Optimizer): optimizer for self-supervised SOC
- image (torch.autograd.Variable): input RGB image
- its pixel values should be normalized
- soc_semantic_scale (float): scale of the SOC semantic loss
- NOTE: please adjust according to your dataset
- soc_detail_scale (float): scale of the SOC detail loss
- NOTE: please adjust according to your dataset
-
- Returns:
- soc_semantic_loss (torch.Tensor): loss of the semantic SOC
- soc_detail_loss (torch.Tensor): loss of the detail SOC
-
- Example:
- import copy
- import torch
- from src.models.modnet import MODNet
- from src.trainer import soc_adaptation_iter
-
- bs = 1 # batch size
- lr = 0.00001 # learn rate
- epochs = 10 # total epochs
-
- modnet = torch.nn.DataParallel(MODNet()).cuda()
- modnet = LOAD_TRAINED_CKPT() # NOTE: please finish this function
-
- optimizer = torch.optim.Adam(modnet.parameters(), lr=lr, betas=(0.9, 0.99))
- dataloader = CREATE_YOUR_DATALOADER(bs) # NOTE: please finish this function
-
- for epoch in range(0, epochs):
- backup_modnet = copy.deepcopy(modnet)
- for idx, (image) in enumerate(dataloader):
- soc_semantic_loss, soc_detail_loss = \
- soc_adaptation_iter(modnet, backup_modnet, optimizer, image)
- """
-
- global blurer
-
- # set the backup model to eval mode
- backup_modnet.eval()
-
- # set the main model to train mode and freeze its norm layers
- modnet.train()
- modnet.module.freeze_norm()
-
- # clear the optimizer
- optimizer.zero_grad()
-
- # forward the main model
- pred_semantic, pred_detail, pred_matte = modnet(image, False)
-
- # forward the backup model
- with torch.no_grad():
- _, pred_backup_detail, pred_backup_matte = backup_modnet(image, False)
-
- # calculate the boundary mask from `pred_matte` and `pred_semantic`
- pred_matte_fg = (pred_matte.detach() > 0.1).float()
- pred_semantic_fg = (pred_semantic.detach() > 0.1).float()
- pred_semantic_fg = F.interpolate(pred_semantic_fg, scale_factor=16, mode='bilinear')
- pred_fg = pred_matte_fg * pred_semantic_fg
-
- n, c, h, w = pred_matte.shape
- np_pred_fg = pred_fg.data.cpu().numpy()
- np_boundaries = np.zeros([n, c, h, w])
- for sdx in range(0, n):
- sample_np_boundaries = np_boundaries[sdx, 0, ...]
- sample_np_pred_fg = np_pred_fg[sdx, 0, ...]
-
- side = int((h + w) / 2 * 0.05)
- dilated = grey_dilation(sample_np_pred_fg, size=(side, side))
- eroded = grey_erosion(sample_np_pred_fg, size=(side, side))
-
- sample_np_boundaries[np.where(dilated - eroded != 0)] = 1
- np_boundaries[sdx, 0, ...] = sample_np_boundaries
-
- boundaries = torch.tensor(np_boundaries).float().cuda()
-
- # sub-objectives consistency between `pred_semantic` and `pred_matte`
- # generate pseudo ground truth for `pred_semantic`
- downsampled_pred_matte = blurer(F.interpolate(pred_matte, scale_factor=1/16, mode='bilinear'))
- pseudo_gt_semantic = downsampled_pred_matte.detach()
- pseudo_gt_semantic = pseudo_gt_semantic * (pseudo_gt_semantic > 0.01).float()
-
- # generate pseudo ground truth for `pred_matte`
- pseudo_gt_matte = pred_semantic.detach()
- pseudo_gt_matte = pseudo_gt_matte * (pseudo_gt_matte > 0.01).float()
-
- # calculate the SOC semantic loss
- soc_semantic_loss = F.mse_loss(pred_semantic, pseudo_gt_semantic) + F.mse_loss(downsampled_pred_matte, pseudo_gt_matte)
- soc_semantic_loss = soc_semantic_scale * torch.mean(soc_semantic_loss)
-
- # NOTE: using the formulas in our paper to calculate the following losses has similar results
- # sub-objectives consistency between `pred_detail` and `pred_backup_detail` (on boundaries only)
- backup_detail_loss = boundaries * F.l1_loss(pred_detail, pred_backup_detail, reduction='none')
- backup_detail_loss = torch.sum(backup_detail_loss, dim=(1,2,3)) / torch.sum(boundaries, dim=(1,2,3))
- backup_detail_loss = torch.mean(backup_detail_loss)
-
- # sub-objectives consistency between pred_matte` and `pred_backup_matte` (on boundaries only)
- backup_matte_loss = boundaries * F.l1_loss(pred_matte, pred_backup_matte, reduction='none')
- backup_matte_loss = torch.sum(backup_matte_loss, dim=(1,2,3)) / torch.sum(boundaries, dim=(1,2,3))
- backup_matte_loss = torch.mean(backup_matte_loss)
-
- soc_detail_loss = soc_detail_scale * (backup_detail_loss + backup_matte_loss)
-
- # calculate the final loss, backward the loss, and update the model
- loss = soc_semantic_loss + soc_detail_loss
-
- loss.backward()
- optimizer.step()
-
- return soc_semantic_loss, soc_detail_loss
-
-# ----------------------------------------------------------------------------------
diff --git a/spaces/robin0307/MMOCR/configs/_base_/det_datasets/icdar2015.py b/spaces/robin0307/MMOCR/configs/_base_/det_datasets/icdar2015.py
deleted file mode 100644
index f711c06dce76d53b8737288c8de318e6f90ce585..0000000000000000000000000000000000000000
--- a/spaces/robin0307/MMOCR/configs/_base_/det_datasets/icdar2015.py
+++ /dev/null
@@ -1,18 +0,0 @@
-dataset_type = 'IcdarDataset'
-data_root = 'data/icdar2015'
-
-train = dict(
- type=dataset_type,
- ann_file=f'{data_root}/instances_training.json',
- img_prefix=f'{data_root}/imgs',
- pipeline=None)
-
-test = dict(
- type=dataset_type,
- ann_file=f'{data_root}/instances_test.json',
- img_prefix=f'{data_root}/imgs',
- pipeline=None)
-
-train_list = [train]
-
-test_list = [test]
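The base config above deliberately leaves `pipeline=None`, so a downstream config has to attach the actual transforms before the datasets are built. A minimal, hedged sketch of that pattern follows; the transform names are placeholders rather than the pipelines this Space actually uses.

```python
# Hypothetical downstream config fragment; the transforms are placeholders.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
]

train = dict(train, pipeline=train_pipeline)   # copy of the base dict with a pipeline set
test = dict(test, pipeline=train_pipeline)

train_list = [train]
test_list = [test]
```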
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Adobe Audition CC 2018 V11.1.0.184 Patch April Updated [TOP].md b/spaces/rorallitri/biomedical-language-models/logs/Adobe Audition CC 2018 V11.1.0.184 Patch April Updated [TOP].md
deleted file mode 100644
index 42aeb8b36f67a25b8a52bb342f903a7d84ed9a8f..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Adobe Audition CC 2018 V11.1.0.184 Patch April Updated [TOP].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
Adobe Audition CC 2018 v11.1.0.184 Patch April Updated
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Dire Straits Tunnel Of Love.md b/spaces/rorallitri/biomedical-language-models/logs/Dire Straits Tunnel Of Love.md
deleted file mode 100644
index 3db9293a5a336123e9e3a4d0b1aba0998e810764..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Dire Straits Tunnel Of Love.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
And the big wheel keep on turning Neon burning up above And I'm just high on this world Come on and take a low ride with me girl On the tunnel of love, yeah, love, love On the tunnel of love, oh, love, love
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/How to Download and Install Adjwiz2.exe for Epson Printers.md b/spaces/rorallitri/biomedical-language-models/logs/How to Download and Install Adjwiz2.exe for Epson Printers.md
deleted file mode 100644
index 07706f2b83c65992c05517ee686c86b06a76798d..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/How to Download and Install Adjwiz2.exe for Epson Printers.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/How to Get On The Ramp 3 Full Movie for Free in MP4 Quality.md b/spaces/rorallitri/biomedical-language-models/logs/How to Get On The Ramp 3 Full Movie for Free in MP4 Quality.md
deleted file mode 100644
index 8fe70629fbac58fe0ecb7da169a047ca6cafb1cf..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/How to Get On The Ramp 3 Full Movie for Free in MP4 Quality.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
A cool 8mm grain effect here with 4 different colors: wine, ash, moss, and tobacco. Throw it onto your footage to ramp up the style of your creative projects. This one is free to download if you have a Motion Array membership.
download 126028 unlimited Movies and videos Download Here.126028 Hd,3gp. mp4 320p and More Videos You Can Download Easyly. tamilrockers and movierulz, tamilgun, filmywap, and pagalworld videos and Movies download.
-
iMovie is a great video speed editor for Mac users. It is pre-installed on your Mac, and you don't need to spend time downloading or installing it. This freeware allows you to set the speed as Fast, Slow, or any others according to your needs.
-
In addition to that, it also offers lots of video editing tools and makes it easy to browse your clips and create stunning movies in minutes. This video speed controller can also be used on your iOS devices if needed.
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/distributed.py b/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/distributed.py
deleted file mode 100644
index 51fa243257ef302e2015d5ff36ac531b86a9a0ce..0000000000000000000000000000000000000000
--- a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/distributed.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import math
-import pickle
-
-import torch
-from torch import distributed as dist
-from torch.utils.data.sampler import Sampler
-
-
-def get_rank():
- if not dist.is_available():
- return 0
-
- if not dist.is_initialized():
- return 0
-
- return dist.get_rank()
-
-
-def synchronize():
- if not dist.is_available():
- return
-
- if not dist.is_initialized():
- return
-
- world_size = dist.get_world_size()
-
- if world_size == 1:
- return
-
- dist.barrier()
-
-
-def get_world_size():
- if not dist.is_available():
- return 1
-
- if not dist.is_initialized():
- return 1
-
- return dist.get_world_size()
-
-
-def reduce_sum(tensor):
- if not dist.is_available():
- return tensor
-
- if not dist.is_initialized():
- return tensor
-
- tensor = tensor.clone()
- dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
-
- return tensor
-
-
-def gather_grad(params):
- world_size = get_world_size()
-
- if world_size == 1:
- return
-
- for param in params:
- if param.grad is not None:
- dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
- param.grad.data.div_(world_size)
-
-
-def all_gather(data):
- world_size = get_world_size()
-
- if world_size == 1:
- return [data]
-
- buffer = pickle.dumps(data)
- storage = torch.ByteStorage.from_buffer(buffer)
- tensor = torch.ByteTensor(storage).to('cuda')
-
- local_size = torch.IntTensor([tensor.numel()]).to('cuda')
- size_list = [torch.IntTensor([0]).to('cuda') for _ in range(world_size)]
- dist.all_gather(size_list, local_size)
- size_list = [int(size.item()) for size in size_list]
- max_size = max(size_list)
-
- tensor_list = []
- for _ in size_list:
- tensor_list.append(torch.ByteTensor(size=(max_size,)).to('cuda'))
-
- if local_size != max_size:
- padding = torch.ByteTensor(size=(max_size - local_size,)).to('cuda')
- tensor = torch.cat((tensor, padding), 0)
-
- dist.all_gather(tensor_list, tensor)
-
- data_list = []
-
- for size, tensor in zip(size_list, tensor_list):
- buffer = tensor.cpu().numpy().tobytes()[:size]
- data_list.append(pickle.loads(buffer))
-
- return data_list
-
-
-def reduce_loss_dict(loss_dict):
- world_size = get_world_size()
-
- if world_size < 2:
- return loss_dict
-
- with torch.no_grad():
- keys = []
- losses = []
-
- for k in sorted(loss_dict.keys()):
- keys.append(k)
- losses.append(loss_dict[k])
-
- losses = torch.stack(losses, 0)
- dist.reduce(losses, dst=0)
-
- if dist.get_rank() == 0:
- losses /= world_size
-
- reduced_losses = {k: v for k, v in zip(keys, losses)}
-
- return reduced_losses
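A hedged sketch of how the helpers above are usually combined in a distributed training step: losses are averaged across ranks purely for logging, and only rank 0 reports them. The model, data, and loss names are placeholders, and the snippet relies on the fact that the helpers degrade gracefully when `torch.distributed` is not initialised.

```python
# Illustrative training step using get_rank() and reduce_loss_dict().
import torch
import torch.nn.functional as F

def training_step(model, batch, optimizer):
    images, targets = batch
    preds = model(images)
    loss_dict = {"l1": F.l1_loss(preds, targets)}

    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Average the raw losses across ranks for logging; rank 0 prints them.
    reduced = reduce_loss_dict(loss_dict)
    if get_rank() == 0:
        print({k: v.item() for k, v in reduced.items()})
```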
diff --git a/spaces/sai22/vits-models/text/symbols.py b/spaces/sai22/vits-models/text/symbols.py
deleted file mode 100644
index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000
--- a/spaces/sai22/vits-models/text/symbols.py
+++ /dev/null
@@ -1,39 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-'''# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-'''
-
-'''# japanese_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-'''
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-# zh_ja_mixture_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '
-
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
\ No newline at end of file
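The symbol tables above are consumed when cleaned text is mapped to integer IDs elsewhere in the repo (typically `text/__init__.py`). Below is a hedged sketch of that mapping; silently dropping characters outside the symbol set is an illustrative choice, not necessarily what the original code does.

```python
# Hypothetical text-to-ID helper built on the `symbols` list above.
_symbol_to_id = {s: i for i, s in enumerate(symbols)}

def cleaned_text_to_sequence(cleaned_text):
    # Characters missing from the symbol set are skipped here.
    return [_symbol_to_id[ch] for ch in cleaned_text if ch in _symbol_to_id]

# Example: cleaned_text_to_sequence("sore wa nani?") -> list of indices,
# with SPACE_ID marking the word boundaries.
```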
diff --git a/spaces/samuelinferences/TabPFN/README.md b/spaces/samuelinferences/TabPFN/README.md
deleted file mode 100644
index de480e473770c0b5fff04af286153f194dfa3732..0000000000000000000000000000000000000000
--- a/spaces/samuelinferences/TabPFN/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: TabPFN
-emoji: 🧾
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-sdk_version: 3.1.1
-app_file: app.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sandrocalzada/swap_face/README.md b/spaces/sandrocalzada/swap_face/README.md
deleted file mode 100644
index c824d68ebab483203fd5f8e61974af54643c34f8..0000000000000000000000000000000000000000
--- a/spaces/sandrocalzada/swap_face/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Swap Face
-emoji: 👁
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
-license: lgpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/maxim.py b/spaces/sayakpaul/raindrop-deraining-maxim/maxim/maxim.py
deleted file mode 100644
index ae195b3b531ca85eb1ae23e7be46647ec421e3d0..0000000000000000000000000000000000000000
--- a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/maxim.py
+++ /dev/null
@@ -1,320 +0,0 @@
-import functools
-
-import tensorflow as tf
-from tensorflow.keras import backend as K
-from tensorflow.keras import layers
-
-from .blocks.attentions import SAM
-from .blocks.bottleneck import BottleneckBlock
-from .blocks.misc_gating import CrossGatingBlock
-from .blocks.others import UpSampleRatio
-from .blocks.unet import UNetDecoderBlock, UNetEncoderBlock
-from .layers import Resizing
-
-Conv1x1 = functools.partial(layers.Conv2D, kernel_size=(1, 1), padding="same")
-Conv3x3 = functools.partial(layers.Conv2D, kernel_size=(3, 3), padding="same")
-ConvT_up = functools.partial(
- layers.Conv2DTranspose, kernel_size=(2, 2), strides=(2, 2), padding="same"
-)
-Conv_down = functools.partial(
- layers.Conv2D, kernel_size=(4, 4), strides=(2, 2), padding="same"
-)
-
-
-def MAXIM(
- features: int = 64,
- depth: int = 3,
- num_stages: int = 2,
- num_groups: int = 1,
- use_bias: bool = True,
- num_supervision_scales: int = 1,
- lrelu_slope: float = 0.2,
- use_global_mlp: bool = True,
- use_cross_gating: bool = True,
- high_res_stages: int = 2,
- block_size_hr=(16, 16),
- block_size_lr=(8, 8),
- grid_size_hr=(16, 16),
- grid_size_lr=(8, 8),
- num_bottleneck_blocks: int = 1,
- block_gmlp_factor: int = 2,
- grid_gmlp_factor: int = 2,
- input_proj_factor: int = 2,
- channels_reduction: int = 4,
- num_outputs: int = 3,
- dropout_rate: float = 0.0,
-):
- """The MAXIM model function with multi-stage and multi-scale supervision.
-
- For more model details, please check the CVPR paper:
- MAXIM: MUlti-Axis MLP for Image Processing (https://arxiv.org/abs/2201.02973)
-
- Attributes:
- features: initial hidden dimension for the input resolution.
- depth: the number of downsampling depth for the model.
-      num_stages: how many stages to use. It also affects the output list.
- num_groups: how many blocks each stage contains.
- use_bias: whether to use bias in all the conv/mlp layers.
- num_supervision_scales: the number of desired supervision scales.
- lrelu_slope: the negative slope parameter in leaky_relu layers.
- use_global_mlp: whether to use the multi-axis gated MLP block (MAB) in each
- layer.
- use_cross_gating: whether to use the cross-gating MLP block (CGB) in the
- skip connections and multi-stage feature fusion layers.
-      high_res_stages: how many stages are specified as high-res stages. The
- rest (depth - high_res_stages) are called low_res_stages.
- block_size_hr: the block_size parameter for high-res stages.
- block_size_lr: the block_size parameter for low-res stages.
- grid_size_hr: the grid_size parameter for high-res stages.
- grid_size_lr: the grid_size parameter for low-res stages.
- num_bottleneck_blocks: how many bottleneck blocks.
- block_gmlp_factor: the input projection factor for block_gMLP layers.
- grid_gmlp_factor: the input projection factor for grid_gMLP layers.
- input_proj_factor: the input projection factor for the MAB block.
- channels_reduction: the channel reduction factor for SE layer.
- num_outputs: the output channels.
- dropout_rate: Dropout rate.
-
- Returns:
- The output contains a list of arrays consisting of multi-stage multi-scale
- outputs. For example, if num_stages = num_supervision_scales = 3 (the
- model used in the paper), the output specs are: outputs =
- [[output_stage1_scale1, output_stage1_scale2, output_stage1_scale3],
- [output_stage2_scale1, output_stage2_scale2, output_stage2_scale3],
- [output_stage3_scale1, output_stage3_scale2, output_stage3_scale3],]
- The final output can be retrieved by outputs[-1][-1].
- """
-
- def apply(x):
- n, h, w, c = (
- K.int_shape(x)[0],
- K.int_shape(x)[1],
- K.int_shape(x)[2],
- K.int_shape(x)[3],
- ) # input image shape
-
- shortcuts = []
- shortcuts.append(x)
-
- # Get multi-scale input images
- for i in range(1, num_supervision_scales):
- resizing_layer = Resizing(
- height=h // (2 ** i),
- width=w // (2 ** i),
- method="nearest",
- antialias=True, # Following `jax.image.resize()`.
- name=f"initial_resizing_{K.get_uid('Resizing')}",
- )
- shortcuts.append(resizing_layer(x))
-
- # store outputs from all stages and all scales
- # Eg, [[(64, 64, 3), (128, 128, 3), (256, 256, 3)], # Stage-1 outputs
- # [(64, 64, 3), (128, 128, 3), (256, 256, 3)],] # Stage-2 outputs
- outputs_all = []
- sam_features, encs_prev, decs_prev = [], [], []
-
- for idx_stage in range(num_stages):
- # Input convolution, get multi-scale input features
- x_scales = []
- for i in range(num_supervision_scales):
- x_scale = Conv3x3(
- filters=(2 ** i) * features,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_input_conv_{i}",
- )(shortcuts[i])
-
- # If later stages, fuse input features with SAM features from prev stage
- if idx_stage > 0:
- # use larger blocksize at high-res stages
- if use_cross_gating:
- block_size = (
- block_size_hr if i < high_res_stages else block_size_lr
- )
- grid_size = grid_size_hr if i < high_res_stages else block_size_lr
- x_scale, _ = CrossGatingBlock(
- features=(2 ** i) * features,
- block_size=block_size,
- grid_size=grid_size,
- dropout_rate=dropout_rate,
- input_proj_factor=input_proj_factor,
- upsample_y=False,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_input_fuse_sam_{i}",
- )(x_scale, sam_features.pop())
- else:
- x_scale = Conv1x1(
- filters=(2 ** i) * features,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_input_catconv_{i}",
- )(tf.concat([x_scale, sam_features.pop()], axis=-1))
-
- x_scales.append(x_scale)
-
- # start encoder blocks
- encs = []
- x = x_scales[0] # First full-scale input feature
-
- for i in range(depth): # 0, 1, 2
- # use larger blocksize at high-res stages, vice versa.
- block_size = block_size_hr if i < high_res_stages else block_size_lr
- grid_size = grid_size_hr if i < high_res_stages else block_size_lr
- use_cross_gating_layer = True if idx_stage > 0 else False
-
- # Multi-scale input if multi-scale supervision
- x_scale = x_scales[i] if i < num_supervision_scales else None
-
- # UNet Encoder block
- enc_prev = encs_prev.pop() if idx_stage > 0 else None
- dec_prev = decs_prev.pop() if idx_stage > 0 else None
-
- x, bridge = UNetEncoderBlock(
- num_channels=(2 ** i) * features,
- num_groups=num_groups,
- downsample=True,
- lrelu_slope=lrelu_slope,
- block_size=block_size,
- grid_size=grid_size,
- block_gmlp_factor=block_gmlp_factor,
- grid_gmlp_factor=grid_gmlp_factor,
- input_proj_factor=input_proj_factor,
- channels_reduction=channels_reduction,
- use_global_mlp=use_global_mlp,
- dropout_rate=dropout_rate,
- use_bias=use_bias,
- use_cross_gating=use_cross_gating_layer,
- name=f"stage_{idx_stage}_encoder_block_{i}",
- )(x, skip=x_scale, enc=enc_prev, dec=dec_prev)
-
- # Cache skip signals
- encs.append(bridge)
-
- # Global MLP bottleneck blocks
- for i in range(num_bottleneck_blocks):
- x = BottleneckBlock(
- block_size=block_size_lr,
- grid_size=block_size_lr,
- features=(2 ** (depth - 1)) * features,
- num_groups=num_groups,
- block_gmlp_factor=block_gmlp_factor,
- grid_gmlp_factor=grid_gmlp_factor,
- input_proj_factor=input_proj_factor,
- dropout_rate=dropout_rate,
- use_bias=use_bias,
- channels_reduction=channels_reduction,
- name=f"stage_{idx_stage}_global_block_{i}",
- )(x)
- # cache global feature for cross-gating
- global_feature = x
-
- # start cross gating. Use multi-scale feature fusion
- skip_features = []
- for i in reversed(range(depth)): # 2, 1, 0
- # use larger blocksize at high-res stages
- block_size = block_size_hr if i < high_res_stages else block_size_lr
- grid_size = grid_size_hr if i < high_res_stages else block_size_lr
-
- # get additional multi-scale signals
- signal = tf.concat(
- [
- UpSampleRatio(
- num_channels=(2 ** i) * features,
- ratio=2 ** (j - i),
- use_bias=use_bias,
- name=f"UpSampleRatio_{K.get_uid('UpSampleRatio')}",
- )(enc)
- for j, enc in enumerate(encs)
- ],
- axis=-1,
- )
-
- # Use cross-gating to cross modulate features
- if use_cross_gating:
- skips, global_feature = CrossGatingBlock(
- features=(2 ** i) * features,
- block_size=block_size,
- grid_size=grid_size,
- input_proj_factor=input_proj_factor,
- dropout_rate=dropout_rate,
- upsample_y=True,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_cross_gating_block_{i}",
- )(signal, global_feature)
- else:
- skips = Conv1x1(
- filters=(2 ** i) * features, use_bias=use_bias, name="Conv_0"
- )(signal)
- skips = Conv3x3(
- filters=(2 ** i) * features, use_bias=use_bias, name="Conv_1"
- )(skips)
-
- skip_features.append(skips)
-
- # start decoder. Multi-scale feature fusion of cross-gated features
- outputs, decs, sam_features = [], [], []
- for i in reversed(range(depth)):
- # use larger blocksize at high-res stages
- block_size = block_size_hr if i < high_res_stages else block_size_lr
- grid_size = grid_size_hr if i < high_res_stages else block_size_lr
-
- # get multi-scale skip signals from cross-gating block
- signal = tf.concat(
- [
- UpSampleRatio(
- num_channels=(2 ** i) * features,
- ratio=2 ** (depth - j - 1 - i),
- use_bias=use_bias,
- name=f"UpSampleRatio_{K.get_uid('UpSampleRatio')}",
- )(skip)
- for j, skip in enumerate(skip_features)
- ],
- axis=-1,
- )
-
- # Decoder block
- x = UNetDecoderBlock(
- num_channels=(2 ** i) * features,
- num_groups=num_groups,
- lrelu_slope=lrelu_slope,
- block_size=block_size,
- grid_size=grid_size,
- block_gmlp_factor=block_gmlp_factor,
- grid_gmlp_factor=grid_gmlp_factor,
- input_proj_factor=input_proj_factor,
- channels_reduction=channels_reduction,
- use_global_mlp=use_global_mlp,
- dropout_rate=dropout_rate,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_decoder_block_{i}",
- )(x, bridge=signal)
-
- # Cache decoder features for later-stage's usage
- decs.append(x)
-
- # output conv, if not final stage, use supervised-attention-block.
- if i < num_supervision_scales:
- if idx_stage < num_stages - 1: # not last stage, apply SAM
- sam, output = SAM(
- num_channels=(2 ** i) * features,
- output_channels=num_outputs,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_supervised_attention_module_{i}",
- )(x, shortcuts[i])
- outputs.append(output)
- sam_features.append(sam)
- else: # Last stage, apply output convolutions
- output = Conv3x3(
- num_outputs,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_output_conv_{i}",
- )(x)
- output = output + shortcuts[i]
- outputs.append(output)
- # Cache encoder and decoder features for later-stage's usage
- encs_prev = encs[::-1]
- decs_prev = decs
-
- # Store outputs
- outputs_all.append(outputs)
- return outputs_all
-
- return apply
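The `apply` closure returned by `MAXIM()` still has to be wired into a Keras model. Below is a hedged sketch of one way to do that, using the docstring's note that the finest-scale output of the last stage is `outputs[-1][-1]`; the input size and hyper-parameters are illustrative and not necessarily the settings this Space ships with.

```python
# Illustrative model assembly; variant settings and weights are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(256, 256, 3))
outputs_all = MAXIM(features=32, depth=3, num_stages=2,
                    num_supervision_scales=3)(inputs)
final_output = outputs_all[-1][-1]          # last stage, full resolution
model = tf.keras.Model(inputs, final_output)

# restored = model.predict(batch)           # batch: (N, 256, 256, 3) in [0, 1]
```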
diff --git a/spaces/scedlatioru/img-to-music/example/Bhooter Bhobishyot Movie Free Download UPD.md b/spaces/scedlatioru/img-to-music/example/Bhooter Bhobishyot Movie Free Download UPD.md
deleted file mode 100644
index 383450babe54ac5db0341583e09acaa709adbb49..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Bhooter Bhobishyot Movie Free Download UPD.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-The film won the National Film Awards for Best Feature Film in Bengali, Best Dialogue, Best Editing and the National Film Award for Best Popular Film Providing Wholesome Entertainment.
-
-Plot
-
-Bhairav, the ghost of the deceased wife of the Chakraborty family lives with them in their family house. Bhairav haunts the family house day and night as his wife is haunting him, all the while the ghosts of his wife's family members repeatedly try to confront him and kill him. The ghosts keep a vigil, day and night, with the sole purpose of avenging his wife's murder. In the course of his eternal pursuit, Bhairav has to battle not only the ghosts but also his sons, as the two of them fall in love with the same girl, Latika, and his beloved only daughter, Riddhima, the youngest, falls in love with the same man, a college professor named Ajoy. It turns out that Bhairav is also a malicious ghost and tries to take revenge on Riddhima, and his younger son's love interest, by endangering her life by sending ghosts after her.
-
-Cast
-
- Soumitra Chatterjee as Bhairav
-
- Rosy Sen as Riddhima Chakraborty
-
- Ritabhari Chakraborty as Young Bhairav
-
- Sharmistha Chakraborty as Young Riddhima
-
- Arindam Chakraborty as Anik Dutta
-
- Abhishek Chatterjee as Ajoy
-
- Debjani Mukherjee as Ajoy's mother
-
- Kanchan Mullick as Latika
-
- Subhendu Chatterjee as Ajoy's father
-
-Production
-
-Anik Dutta made his directorial debut in the 2012 Bengali film Bhooter Bhobishyot. The film became one of the biggest hits of 2012 among the Bengali audience. In the film, the director has successfully captured the spirit of Bengali comedy cinema, which is immortalized through the characters in his film, known as the Bhooter Bhabishyat. It is believed to be the first Bengali comedy film that combines ghost comedy and drama.
-
-In an interview with The Hindu in 2012, Dutta said, "I've chosen the genre of Bengali comedy because it is a very popular genre in this country. I felt that it would be quite interesting to throw in the ghosts. I loved the comic element of ghosts. I think they are very interesting. We want to keep the 4fefd39f24
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Counter Strike 1.1 Indir Gezginler.md b/spaces/scedlatioru/img-to-music/example/Counter Strike 1.1 Indir Gezginler.md
deleted file mode 100644
index 59296ccf71f63e2395e3a42cc8d0b0effae0d56a..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Counter Strike 1.1 Indir Gezginler.md
+++ /dev/null
@@ -1,180 +0,0 @@
-
-
Counter Strike 1.1 İndir Gezginler: Eski Ama Efsane Oyun
-
-
Counter Strike 1.1, belki de en eski ve en sevilen Counter Strike sürümlerinden biridir. 2000 yılında çıkan bu oyun, Half-Life adlı oyunun bir eklentisi olarak başlamış ve daha sonra Valve Corporation tarafından geliştirilmeye devam edilmiştir. Counter Strike 1.1, milyonlarca oyuncu tarafından oynanmış ve online aksiyon oyunlarının öncüsü olmuştur.
-
-
Counter Strike 1.1, iki takım arasında geçen teröristler ve anti-teröristler arasındaki çatışmaları konu alır. Oyuncular, seçtikleri takıma göre farklı silahlar, ekipmanlar ve haritalar kullanabilirler. Oyunun amacı, karşı takımı yok etmek veya belirli bir görevi tamamlamaktır. Örneğin, teröristler bir bomba kurabilir veya rehineleri alabilir, anti-teröristler ise bombayı imha edebilir veya rehineleri kurtarabilirler.
Counter Strike 1.1, günümüzde hala pek çok oyuncu tarafından oynanmaktadır. Bunun sebepleri arasında oyunun basit ama eğlenceli olması, düşük sistem gereksinimleri, modifiye edilebilir olması ve nostalji yaşatması sayılabilir. Oyunun pek çok modu, haritası, silahı ve sunucusu bulunmaktadır. Oyuncular, kendi zevklerine göre oyunu özelleştirebilir ve farklı deneyimler yaşayabilirler.
-
-
Counter Strike 1.1 Nasıl İndirilir?
-
-
Counter Strike 1.1 indirmek için pek çok seçenek bulunmaktadır. Bunlardan en popüler olanı Gezginler sitesidir. Gezginler, Türkiye'nin en büyük ve en güvenilir program indirme sitesidir. Gezginler'de Counter Strike 1.6 adı altında Counter Strike 1.1 sürümünü indirebilirsiniz.
-
-
Gezginler'den Counter Strike 1.6 indirmek için şu adımları izleyebilirsiniz:
-
-
-
Bu linkten Gezginler'in Counter Strike 1.6 sayfasına gidin.
-
Sayfanın sağ üst köşesindeki "İndir" butonuna tıklayın.
-
Açılan pencerede "csv11full.exe" dosyasını bilgisayarınıza kaydedin.
-
Dosyayı çalıştırın ve kurulum talimatlarını takip edin.
-
Kurulum bittikten sonra oyunu başlatın ve keyfini çıkarın.
-
-
-
Gezginler dışında başka sitelerden de Counter Strike 1.1 indirebilirsiniz. Ancak bunlarda virüs veya zararlı yazılım olma ihtimali olduğundan dikkatli olmanız gerekir. Ayrıca indirdiğiniz dosyanın gerçekten Counter Strike 1.1 olduğundan emin olmanız gerekir.
-
-
Counter Strike 1.1 Nasıl Oynanır?
-
-
Counter Strike 1.1 oynamak için öncelikle bir sunucuya bağlanmanız gerekir. Sunucular, oyuncuların bir araya gelip oynadıkları online ortamlardır. Sunucuların farklı modları, haritaları, kuralları ve oyuncu sayıları olabilir.
-
-
Bir sunucuya bağlanmak için şu adımları izleyebilirsiniz:
-
-
-
Oyunu başlatın ve ana menüden "Find Servers" seçeneğine tıklayın.
-
Açılan pencerede "Internet" sekmesine tıklayın.
-
Karşınıza çıkan sunucu listesinden istediğiniz bir sunucuya çift tıklayın veya seçip "Connect" butonuna tıklayın.
-
Sunucuya bağlandığınızda takım seçme ekranına gelin.
-
"Terrorists" veya "Counter-Terrorists" seçeneğine tıklayarak takımınızı seçin.
-
"Auto Select" seçeneğine tıklayarak otomatik olarak bir silah alın veya "Buy Menu" seçeneğine tıklayarak istediğiniz bir silah satın alın.
-
Oyuna başlayın ve karşı takımı yok etmeye veya görevinizi tamamlamaya çalışın.
-
-
-
Oyun sırasında klavye ve fareyi kullanarak karakterinizi kontrol edebilirsiniz. Ayrıca "Y" tuşuna basarak sohbet ekranını açabilir ve diğer oyuncularla iletişim kurabilirsiniz.
-
-
-
Sonuç
-
-
Counter Strike 1.1 indir gezginler, eski ama efsane bir oyun olan Counter Strike 1.1'i indirmek ve oynamak için en iyi seçenektir. Gezginler sayesinde oyunu kolayca indirebilir ve kurabilirsiniz. Oyunu oynarken hem eğlenebilir hem de becerilerinizi geliştirebilirsiniz. Counter Strike 1.1, online aksiyon oyunlarının atasıdır ve hala pek çok oyuncunun favorisidir.
-
Counter Strike 1.1 Nasıl Modlanır?
-
-
Counter Strike 1.1, modifiye edilebilir bir oyun olduğu için pek çok modu bulunmaktadır. Modlar, oyunun grafiklerini, seslerini, oynanışını ve atmosferini değiştiren eklentilerdir. Modlar sayesinde oyunu daha eğlenceli ve farklı hale getirebilirsiniz.
-
-
Counter Strike 1.1 modlamak için şu adımları izleyebilirsiniz:
-
-
-
İstediğiniz bir modu internetten indirin.
-
Mod dosyasını açın ve içindeki klasörleri ve dosyaları Counter Strike 1.1'in kurulu olduğu dizine kopyalayın.
-
Oyunu başlatın ve ana menüden "Custom Game" seçeneğine tıklayın.
-
Açılan pencerede indirdiğiniz modu seçin ve "Activate" butonuna tıklayın.
-
Oyunu yeniden başlatın ve modlu oyunun keyfini çıkarın.
-
-
-
Counter Strike 1.1 için pek çok mod bulunmaktadır. Bunlardan bazıları şunlardır:
-
-
-
Zombie Mod: Bu modda teröristler zombiye dönüşür ve anti-teröristler onlara karşı hayatta kalmaya çalışır.
-
Deathmatch Mod: Bu modda takımlar yoktur ve herkes tek başına savaşır. Öldüğünüzde hemen yeniden doğarsınız.
-
Gun Game Mod: Bu modda herkes aynı silahla başlar ve rakip öldürdükçe yeni silahlar alır.
-
Superhero Mod: Bu modda her oyuncu bir süper kahraman olur ve farklı güçlere sahip olur.
-
Pokemon Mod: Bu modda her oyuncu bir pokemon olur ve diğer pokemonlarla savaşır.
-
-
-
Counter Strike 1.1 Nasıl Güncellenir?
-
-
Counter Strike 1.1, eski bir oyun olduğu için güncellemeleri pek sık çıkmamaktadır. Ancak yine de bazen yeni özellikler veya hata düzeltmeleri içeren güncellemeler yayınlanabilir. Güncellemeleri takip etmek ve indirmek için şu adımları izleyebilirsiniz:
-
-
-
Oyunun resmi sitesini veya fan sitelerini ziyaret edin.
-
Güncellemeler bölümüne bakın ve son çıkan güncellemeyi bulun.
-
Güncelleme dosyasını bilgisayarınıza indirin.
-
Güncelleme dosyasını çalıştırın ve kurulum talimatlarını takip edin.
-
Kurulum bittikten sonra oyunu başlatın ve güncel oyunun keyfini çıkarın.
-
-
-
Güncellemeleri indirmek ve kurmak hem oyunun performansını hem de güvenliğini arttırabilir. Ayrıca yeni özellikler veya içerikler de ekleyebilir. Güncellemeleri düzenli olarak kontrol etmeniz tavsiye edilir.
-
Counter Strike 1.1 Nasıl Sunucu Kurulur?
-
-
Counter Strike 1.1 oynamak için bir sunucuya bağlanmanız gerekir. Ancak isterseniz kendi sunucunuzu da kurabilir ve arkadaşlarınızla veya diğer oyuncularla oynayabilirsiniz. Sunucu kurmak için şu adımları izleyebilirsiniz:
-
-
-
Oyunun kurulu olduğu dizindeki "hlds.exe" dosyasını çalıştırın.
-
Açılan pencerede "Game" seçeneğinden "Counter-Strike"ı seçin.
-
"Server Name" seçeneğine sunucunuzun adını yazın.
-
"Map" seçeneğinden istediğiniz bir haritayı seçin.
-
"Network" seçeneğinden "Internet"i seçin.
-
"Max Players" seçeneğinden sunucunuza girebilecek maksimum oyuncu sayısını belirleyin.
-
"Start Server" butonuna tıklayın ve sunucunuzu başlatın.
-
-
-
Sunucunuzu başlattıktan sonra oyunu açın ve ana menüden "Find Servers" seçeneğine tıklayın. Açılan pencerede "LAN" sekmesine tıklayın ve kendi sunucunuzu bulun. Sunucunuza çift tıklayarak giriş yapın ve oyuna başlayın.
-
-
Sunucunuzu arkadaşlarınızla veya diğer oyuncularla paylaşmak isterseniz, sunucunuza bir şifre koyabilir veya sunucunuzun IP adresini verebilirsiniz. Sunucunuzun IP adresini öğrenmek için oyun sırasında konsolu açın ve "status" komutunu yazın. Karşınıza çıkan listede sunucunuzun IP adresini görebilirsiniz.
-
-
Counter Strike 1.1 Nasıl Hile Yapılır?
-
-
Counter Strike 1.1, hile yapmaya izin veren bir oyun değildir. Hile yapmak hem oyunun ruhuna hem de diğer oyunculara saygısızlıktır. Ancak yine de merak edenler için bazı hileleri paylaşabiliriz. Hile yapmak için şu adımları izleyebilirsiniz:
-
-
-
Oyunu başlatın ve ana menüden "Options" seçeneğine tıklayın.
-
Açılan pencerede "Keyboard" sekmesine tıklayın.
-
"Advanced" butonuna tıklayın ve açılan pencerede "Enable developer console (~)" seçeneğini işaretleyin.
-
"OK" butonuna tıklayarak ayarları kaydedin ve pencereyi kapatın.
-
Oyuna girin ve klavyenizdeki "~" tuşuna basarak konsolu açın.
-
Konsola istediğiniz bir hileyi yazın ve "Enter" tuşuna basarak etkinleştirin.
-
-
-
Counter Strike 1.1 için bazı hileler şunlardır:
-
-
-
sv_cheats 1: Bu hileyi yazmadan diğer hileleri kullanamazsınız.
-
god: Bu hileyi yazarsanız ölümsüz olursunuz.
-
noclip: Bu hileyi yazarsanız duvarlardan geçebilirsiniz.
-
impulse 101: Bu hileyi yazarsanız 16.000 dolar paranız olur.
-
give weapon_*: Bu hileyi yazarsanız istediğiniz bir silah alabilirsiniz. * yerine silahın kodunu yazmanız gerekir. Örneğin, give weapon_ak47 yazarsanız AK-47 alırsınız.
-
-
-
Hile yapmanın oyunu daha eğlenceli yapacağını düşünmeyin. Hile yapmak hem sizin hem de diğer oyuncuların oyun zevkini kaçırır. Hile yapmak yerine becerilerinizi geliştirmeye çalışın ve adil bir şekilde oynayın.
-
Counter Strike 1.1 Nasıl Rehber Yapılır?
-
-
Counter Strike 1.1, oynaması kolay ama ustalaşması zor bir oyundur. Oyunu daha iyi oynamak ve rakiplerinize üstünlük sağlamak için bazı ipuçları ve taktikleri bilmek gerekir. Rehber yapmak, oyunu öğrenmek ve öğretmek için iyi bir yoldur. Rehber yapmak için şu adımları izleyebilirsiniz:
-
-
-
Oyunu iyi bir şekilde oynayın ve deneyim kazanın.
-
Oyunun farklı yönlerini ve detaylarını araştırın ve öğrenin.
-
Oyunla ilgili ipuçları, taktikler, stratejiler, püf noktaları vb. bulun veya oluşturun.
-
Bir rehber konusu seçin. Örneğin, bir silahın kullanımı, bir haritanın analizi, bir modun tanıtımı vb.
-
Rehberinizi yazmaya başlayın. Rehberinizin başlığı, girişi, içeriği ve sonucu olmalıdır.
-
Rehberinizin başlığı, rehberinizin konusunu ve amacını net bir şekilde belirtmelidir.
-
Rehberinizin girişi, rehberinizin ne hakkında olduğunu ve neden yazdığınızı açıklamalıdır.
-
Rehberinizin içeriği, rehberinizin konusunu detaylı bir şekilde anlatmalıdır. Resimler, videolar, tablolar vb. kullanabilirsiniz.
-
Rehberinizin sonucu, rehberinizin özetini ve son sözünüzü içermelidir.
-
Rehberinizi bitirdikten sonra kontrol edin ve hataları düzeltin.
-
Rehberinizi internetten paylaşın ve diğer oyuncuların yorumlarını alın.
-
-
-
Rehber yapmak hem kendinizi hem de diğer oyuncuları geliştirmenize yardımcı olabilir. Rehber yaparken dikkat etmeniz gereken bazı noktalar şunlardır:
-
-
-
Rehberiniz orijinal ve özgün olmalıdır. Başka rehberleri kopyalamayın veya çalmayın.
-
Rehberiniz doğru ve güncel olmalıdır. Yanlış veya eski bilgiler vermeyin.
-
Rehberiniz anlaşılır ve akıcı olmalıdır. Basit ve net bir dil kullanın.
-
Rehberiniz ilgi çekici ve eğlenceli olmalıdır. Sıkıcı veya monoton olmayın.
-
-
-
Counter Strike 1.1 Nasıl Turnuva Düzenlenir?
-
-
Counter Strike 1.1, rekabetçi bir oyun olduğu için pek çok oyuncu turnuva düzenlemek veya katılmak ister. Turnuva düzenlemek, oyunun heyecanını arttırmanın ve ödüller kazanmanın bir yoludur. Turnuva düzenlemek için şu adımları izleyebilirsiniz:
-
-
-
Bir turnuva konsepti belirleyin. Turnuvanızın adını, tarihini, formatını, kurallarını, ödüllerini vb. belirleyin.
-
Bir turnuva platformu seçin. Turnuvanızı nerede düzenleyeceğinize karar verin. Örneğin, kendi sunucunuzda, bir turnuva sitesinde veya bir sosyal medya platformunda.
-
Bir turnuva duyurusu yapın. Turnuvanız hakkında bilgi veren bir duyuru hazırlayın ve ilgili yerlere yayınlayın. Örneğin, oyunun resmi sitesi, fan siteleri, forumlar, gruplar vb.
-
Bir turnuva kaydı açın. Turnuvaya katılmak isteyen oyuncuların veya takımların kaydolabileceği bir kayıt formu oluşturun ve paylaşın.
-
Bir turnuva eşleşmesi yapın. Kaydolan oyuncuları veya takımları eşleştirerek turnuvanızın fikstürünü oluşturun ve duyurun.
-
Bir turnuva yönetimi sağlayın. Turnuvanız sırasında maçları takip edin ve sonuçları kaydedin. Oyunculara veya takımlara yardım edin veya sorunlarını çözün.
-
Bir turnuva sonlandırması yapın. Turnuvanız bittiğinde kazananları ilan edin ve ödüllerini verin. Turnuvaya katılanlara teşekkür edin ve geri bildirim alın.
-
-
-
Turnuva düzenlemek hem zor hem de keyifli bir iştir. Turnuva düzenlerken dikkat etmeniz gereken bazı noktalar şunlardır:
-
-
-
Turnuvanız profesyonel ve adil olmalıdır. Hile veya usulsüzlük yapmayın veya yapılmasına izin vermeyin.
-
Turnuvanız ilgi çekici ve eğlenceli olmalıdır. Sıkıcı veya monoton olmayın.
-
Turnuvanız uygun ve güvenilir olmalıdır. Yanlış veya eksik bilgi vermeyin veya verilmesine izin vermeyin.
-
-
Sonuç
-
-
Counter Strike 1.1 indir gezginler, eski ama efsane bir oyun olan Counter Strike 1.1'i indirmek ve oynamak için en iyi seçenektir. Gezginler sayesinde oyunu kolayca indirebilir ve kurabilirsiniz. Oyunu oynarken hem eğlenebilir hem de becerilerinizi geliştirebilirsiniz. Counter Strike 1.1, online aksiyon oyunlarının atasıdır ve hala pek çok oyuncunun favorisidir. Oyunu modlayabilir, bot ekleyebilir, Türkçe yapabilir ve turnuva düzenleyebilirsiniz. Oyunla ilgili rehber yapabilir ve diğer oyuncularla paylaşabilirsiniz. Counter Strike 1.1, sizi asla sıkmayacak ve bağımlısı olacağınız bir oyundur.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/schogini/toys/README.md b/spaces/schogini/toys/README.md
deleted file mode 100644
index fdc83f6c44aad83d6acbabbc48501dd4acf1e998..0000000000000000000000000000000000000000
--- a/spaces/schogini/toys/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Chatbot
-emoji: 📈
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sci4/AnimateYourDream/app.py b/spaces/sci4/AnimateYourDream/app.py
deleted file mode 100644
index 0ae6a2466c52e4fcab8edacd990fd4b0867e466f..0000000000000000000000000000000000000000
--- a/spaces/sci4/AnimateYourDream/app.py
+++ /dev/null
@@ -1,129 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-# # Stable Diffusion with 🤗 Diffusers
-
-
-
-
-#!pip install -Uq diffusers transformers fastcore
-
-
-# ## Using Stable Diffusion
-
-
-import logging
-from pathlib import Path
-import torch
-from diffusers import StableDiffusionPipeline
-from PIL import Image
-import numpy as np
-logging.disable(logging.WARNING)
-
-
-from diffusers import StableDiffusionImg2ImgPipeline
-
-from fastai.vision.augment import CropPad
-
-import streamlit as st
-
-
-
-
-imageLocation = st.empty()
-with imageLocation.container():
- st.header('Animate your dream')
- st.write(f'Tap > to reveal sidebar. Select a style or artist. Enter text prompts and select the number of frames.\
- Include an optional negative prompt. Press the button to generate the animation. \
- Running on {"GPU takes 3-5 minutes." if torch.cuda.is_available() else "CPU does not work. You are advised to upgrade to (a paid) GPU after duplicating the space"}')
- st.markdown('',unsafe_allow_html=True)
- st.image('DaliDream.gif')
-
-
-with st.sidebar:
- fn = 'Animation' #st.text_input('Name of animation','Dali')
- style = st.text_input('Animation style (artist)','surreal Dali')
- zoom = st.checkbox('Zoom in animation',False)
- col1, col2 = st.columns(2)
- with col1:
- prompt1 = st.text_input('Prompt 1','a landscape')
- prompt2 = st.text_input('Prompt 2','weird animals')
- prompt3 = st.text_input('Prompt 3','a castle with weird animals')
- with col2:
- frames1 = st.number_input('Frames in 1', min_value=5, max_value=10, value=5, step=1)
- frames2 = st.number_input('Frames in 2', min_value=5, max_value=10, value=5, step=1)
- frames3 = st.number_input('Frames in 3', min_value=5, max_value=10, value=5, step=1)
- negative_prompt = st.text_input('Negative prompt','text')
-
-
-prompts = [[prompt1,frames1],
- [prompt2,frames2],
- [prompt3,frames3]]
-
-
-
-
-def zoomIn(im,scale = 0.97):
- size = im.size[0]
- return im.crop_pad(int(size*scale)).resize((size,size))
-
-def fade(im0,im1,steps=20):
- """Fade from one image to another"""
- return [Image.fromarray(((1-i/steps)*np.array(im0)+i/steps*np.array(im1)).astype(np.uint8)) for i in range(steps)]
-
-def makeMovie(prompts, style='', negative_prompt='', scale = (512-4)/512,mix_factor=0.01,strength=0.5,guidance_scale=7,num_inference_steps=50):
- # Create an initial image then iterate
- with st.spinner('Be patient, it takes about a minute to generate the initial image on a GPU, but is likely to time out on CPU.'):
- pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4",
- revision="fp16", torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32)
- if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- prompt1 = f'{prompts[0][0]} in the style of {style}' if style!='' else prompts[0][0]
- im1 = pipe(prompt1).images[0]
- with st.spinner('Preparing animation pipeline takes another minute on a GPU'):
- pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4",revision="fp16",torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32)
- if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- im = im1
- movie = [im]
- for prompt,frames in prompts:
- prompt = f'{prompt} in the style of {style}' if style!='' else prompt
- for i in range(frames):
- im = zoomIn(im,scale)
- im2 = pipe(prompt, num_images_per_prompt=1, image=im, negative_prompt=negative_prompt,
- strength=strength, guidance_scale=guidance_scale,
- num_inference_steps=num_inference_steps).images[0]
- if max(max(im2.getextrema()))>0:
- #st.write(i,prompt)
- im = Image.fromarray(((1-mix_factor*i)*np.array(im2)+mix_factor*i*np.array(im1)).astype(np.uint8))
- movie += [im]
- imageLocation.image(im,caption=f'{prompt} frame {i}')
-
- n = len(movie)
- extMovie = []
- for i in range(n):
- extMovie += fade(movie[i],movie[(i+1)%n],steps=20)
-
- return extMovie
-
-def create(fn,prompts,style='',negative_prompt='',zoom=True):
- st.header('Generating initial image')
- scale = (512-16)/512 if zoom else 1
- movie = makeMovie(prompts, style, negative_prompt,scale,mix_factor=0.01,strength=0.5,guidance_scale=7,num_inference_steps=50)
- with st.spinner('Final step: stitching frames together to make animation'):
- movie[0].save(f'{fn}.gif', format='GIF', append_images=movie[1:], save_all=True, duration=50, loop=0)
- imageLocation.image(open(f'{fn}.gif','rb').read(),caption=f'{fn} displays above, as soon as it has loaded')
-
-
-
-
-
-
-st.sidebar.button('Generate animation',on_click=create, args=(fn,prompts,style,negative_prompt,zoom), type='primary')
-
-
-
-
-
-
diff --git a/spaces/seduerr/text_analytics/text_analytics/indices/connective_indices.py b/spaces/seduerr/text_analytics/text_analytics/indices/connective_indices.py
deleted file mode 100644
index 7ff04a960a470e1f8666d6e4f386b81bd30f3a10..0000000000000000000000000000000000000000
--- a/spaces/seduerr/text_analytics/text_analytics/indices/connective_indices.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import multiprocessing
-import pyphen
-import spacy
-import string
-
-from typing import Callable
-from typing import List
-from text_analytics.indices.descriptive_indices import DescriptiveIndices
-from text_analytics.constants import ACCEPTED_LANGUAGES
-from text_analytics.utils.utils import split_text_into_paragraphs
-from text_analytics.utils.utils import split_text_into_sentences
-
-class ConnectiveIndices:
- def __init__(self, nlp, language: str='en', descriptive_indices: DescriptiveIndices=None) -> None:
- self.language = language
- self._nlp = nlp
- self._incidence = 1
- if descriptive_indices is None:
- self._di = DescriptiveIndices(language)
- else:
- self._di = descriptive_indices
-
- def _get_connectives_incidence(self, text: str, disable_pipeline: List, count_connectives_function: Callable, word_count: int=None, workers: int=-1) -> float:
- paragraphs = split_text_into_paragraphs(text)
- pc = len(paragraphs)
- threads = 1
- wc = word_count if word_count is not None else self._di.get_word_count_from_text(text)
- self._nlp.get_pipe('feature counter').counter_function = count_connectives_function
- connectives = sum(doc._.feature_count for doc in self._nlp.pipe(paragraphs, batch_size=threads, disable=disable_pipeline, n_process=threads))
- return connectives
-
- def get_causal_connectives_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['causal connective tagger', 'feature counter']]
- causal_connectives_counter = lambda doc: len(doc._.causal_connectives)
- result = self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=causal_connectives_counter, workers=workers)
- return result
-
- def get_temporal_connectives_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['temporal connective tagger', 'feature counter']]
- temporal_connectives_counter = lambda doc: len(doc._.temporal_connectives)
- result = self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=temporal_connectives_counter, workers=workers)
- return result
-
- def get_exemplifications_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['exemplifications tagger', 'tagger', 'feature counter']]
- exemplifications_counter = lambda doc: len(doc._.exemplifications)
- return self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=exemplifications_counter, workers=workers)
-
- def get_emphatics_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['emphatics tagger', 'tagger', 'feature counter']]
- emphatics_counter = lambda doc: len(doc._.emphatics)
- return self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=emphatics_counter, workers=workers)
-
- def get_asks_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['asks tagger', 'tagger', 'feature counter']]
- asks_counter = lambda doc: len(doc._.asks)
- return self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=asks_counter, workers=workers)
-
- def get_polites_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['polites tagger', 'tagger', 'feature counter']]
- polites_counter = lambda doc: len(doc._.polites)
- return self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=polites_counter, workers=workers)
-
- def get_logical_connectives_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['logical connective tagger', 'tagger', 'feature counter']]
- logical_connectives_counter = lambda doc: len(doc._.logical_connectives)
- return self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=logical_connectives_counter, workers=workers)
-
- def get_adversative_connectives_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['adversative connective tagger', 'tagger', 'feature counter']]
- adversative_connectives_counter = lambda doc: len(doc._.adversative_connectives)
- return self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=adversative_connectives_counter, workers=workers)
-
- def get_additive_connectives_incidence(self, text: str, word_count: int=None, workers: int=-1) -> float:
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['additive connective tagger', 'tagger', 'feature counter']]
- additive_connectives_counter = lambda doc: len(doc._.additive_connectives)
- return self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=additive_connectives_counter, workers=workers)
-
- def get_all_connectives_incidence(self, text: str, word_count: int = None, workers: int = -1) -> float:
- """
- This method returns the incidence per {self._incidence} words for all connectives.
-
- Parameters:
- text(str): The text to be analyzed.
- word_count(int): The amount of words in the text.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- float: The incidence of all connectives per {self._incidence} words.
- """
- disable_pipeline = [pipe for pipe in self._nlp.pipe_names if pipe not in ['causal connective tagger', 'logical connective tagger',
- 'adversative connective tagger', 'temporal connective tagger', 'additive connective tagger', 'causal connective tagger', 'tagger', 'feature counter']]
-
- def all_connectives_counter(doc): return len(doc._.causal_connectives) + len(doc._.logical_connectives) + len(
- doc._.adversative_connectives) + len(doc._.temporal_connectives) + len(doc._.additive_connectives)
-
- return self._get_connectives_incidence(text, disable_pipeline=disable_pipeline, count_connectives_function=all_connectives_counter, workers=workers)
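A hedged usage sketch for the class above. The custom spaCy components it disables and re-enables ('causal connective tagger', 'feature counter', and so on) are registered elsewhere in the text_analytics package, so loading a bare spaCy model as shown here is only a placeholder.

```python
# Illustrative only; the custom connective taggers must already be on `nlp`.
import spacy

nlp = spacy.load("en_core_web_sm")   # placeholder; the package builds its own pipeline
ci = ConnectiveIndices(nlp, language="en")

text = ("First we measured the samples. However, the results were unclear "
        "because the sensor drifted, and therefore the run was repeated.")
print(ci.get_causal_connectives_incidence(text))
print(ci.get_all_connectives_incidence(text))
```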
diff --git a/spaces/segments-tobias/conex/espnet/nets/scorers/__init__.py b/spaces/segments-tobias/conex/espnet/nets/scorers/__init__.py
deleted file mode 100644
index b7f177368e62a5578b8706300e101f831a3972ac..0000000000000000000000000000000000000000
--- a/spaces/segments-tobias/conex/espnet/nets/scorers/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Initialize sub package."""
diff --git a/spaces/shivi/mask2former-demo/app.py b/spaces/shivi/mask2former-demo/app.py
deleted file mode 100644
index 447f5fb3613c1ba5e9c961eb27d94c61214c2e6d..0000000000000000000000000000000000000000
--- a/spaces/shivi/mask2former-demo/app.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import gradio as gr
-from predict import predict_masks
-import glob
-
-##Create list of examples to be loaded
-example_list = glob.glob("examples/*")
-example_list = list(map(lambda el:[el], example_list))
-
-demo = gr.Blocks()
-
-with demo:
-
-    gr.Markdown("# **Mask2Former: Masked Attention Mask Transformer for Universal Segmentation**")
- gr.Markdown("This space demonstrates the use of Mask2Former. It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). \
- Before Mask2Former, you'd have to resort to using a specialized architecture designed for solving a particular kind of image segmentation task (i.e. semantic, instance or panoptic segmentation). On the other hand, in the form of Mask2Former, for the first time, we have a single architecture that is capable of solving any segmentation task and performs on par or better than specialized architectures.")
-
- with gr.Box():
-
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("**Inputs**")
- segmentation_task = gr.Dropdown(["semantic", "instance", "panoptic"], value="panoptic", label="Segmentation Task", show_label=True)
- input_image = gr.Image(type='filepath',label="Input Image", show_label=True)
-
- with gr.Column():
- gr.Markdown("**Outputs**")
- output_heading = gr.Textbox(label="Output Type", show_label=True)
- output_mask = gr.Image(label="Predicted Masks", show_label=True)
-
- gr.Markdown("**Predict**")
-
- with gr.Box():
- with gr.Row():
- submit_button = gr.Button("Submit")
-
- gr.Markdown("**Examples:**")
-
- with gr.Column():
- gr.Examples(example_list, [input_image, segmentation_task], [output_mask,output_heading], predict_masks)
-
-
- submit_button.click(predict_masks, inputs=[input_image, segmentation_task], outputs=[output_mask,output_heading])
-
- gr.Markdown('\n Demo created by: Shivalika Singh')
-
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/utils/__init__.py b/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/utils/__init__.py
deleted file mode 100644
index f03b1c2bafcd7759cb7e8722a0c6715f201a46dc..0000000000000000000000000000000000000000
--- a/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/utils/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .face_utils import align_crop_face_landmarks, compute_increased_bbox, get_valid_bboxes, paste_face_back
-from .misc import img2tensor, load_file_from_url, download_pretrained_models, scandir
-
-__all__ = [
- 'align_crop_face_landmarks', 'compute_increased_bbox', 'get_valid_bboxes', 'load_file_from_url',
- 'download_pretrained_models', 'paste_face_back', 'img2tensor', 'scandir'
-]
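A hedged example of the two most commonly used helpers re-exported above. The checkpoint URL and image path are placeholders, and the keyword arguments mirror the upstream basicsr-style signatures as far as they can be inferred from this package.

```python
# Illustrative only; URL and file names are placeholders.
import cv2
from facelib.utils import img2tensor, load_file_from_url

weight_path = load_file_from_url(
    url="https://example.com/detection_model.pth",   # placeholder URL
    model_dir="weights", progress=True, file_name=None)

img = cv2.imread("face.png").astype("float32") / 255.0   # BGR image in [0, 1]
tensor = img2tensor(img, bgr2rgb=True, float32=True)      # CHW torch.Tensor
```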
diff --git a/spaces/simonduerr/diffdock/baselines/baseline_tankbind_runtime.py b/spaces/simonduerr/diffdock/baselines/baseline_tankbind_runtime.py
deleted file mode 100644
index 4df6eb1d79d4c94674cd1ba6d8a6ad8f7534164c..0000000000000000000000000000000000000000
--- a/spaces/simonduerr/diffdock/baselines/baseline_tankbind_runtime.py
+++ /dev/null
@@ -1,342 +0,0 @@
-# This file needs to be ran in the TANKBind repository together with baseline_run_tankbind_parallel.sh
-
-import sys
-import time
-from multiprocessing import Pool
-
-
-import copy
-import warnings
-from argparse import ArgumentParser
-
-from rdkit.Chem import AllChem, RemoveHs
-
-from feature_utils import save_cleaned_protein, read_mol
-from generation_utils import get_LAS_distance_constraint_mask, get_info_pred_distance, write_with_new_coords
-import logging
-from torch_geometric.loader import DataLoader
-from tqdm import tqdm # pip install tqdm if fails.
-from model import get_model
-# from utils import *
-import torch
-
-
-from data import TankBind_prediction
-
-import os
-import numpy as np
-import pandas as pd
-import rdkit.Chem as Chem
-from feature_utils import generate_sdf_from_smiles_using_rdkit
-from feature_utils import get_protein_feature
-from Bio.PDB import PDBParser
-from feature_utils import extract_torchdrug_feature_from_mol
-
-
-def read_strings_from_txt(path):
- # every line will be one element of the returned list
- with open(path) as file:
- lines = file.readlines()
- return [line.rstrip() for line in lines]
-
-
-def read_molecule(molecule_file, sanitize=False, calc_charges=False, remove_hs=False):
- if molecule_file.endswith('.mol2'):
- mol = Chem.MolFromMol2File(molecule_file, sanitize=False, removeHs=False)
- elif molecule_file.endswith('.sdf'):
- supplier = Chem.SDMolSupplier(molecule_file, sanitize=False, removeHs=False)
- mol = supplier[0]
- elif molecule_file.endswith('.pdbqt'):
- with open(molecule_file) as file:
- pdbqt_data = file.readlines()
- pdb_block = ''
- for line in pdbqt_data:
- pdb_block += '{}\n'.format(line[:66])
- mol = Chem.MolFromPDBBlock(pdb_block, sanitize=False, removeHs=False)
- elif molecule_file.endswith('.pdb'):
- mol = Chem.MolFromPDBFile(molecule_file, sanitize=False, removeHs=False)
- else:
-        raise ValueError('Expect the format of the molecule_file to be '
- 'one of .mol2, .sdf, .pdbqt and .pdb, got {}'.format(molecule_file))
- try:
- if sanitize or calc_charges:
- Chem.SanitizeMol(mol)
-
- if calc_charges:
- # Compute Gasteiger charges on the molecule.
- try:
- AllChem.ComputeGasteigerCharges(mol)
- except:
- warnings.warn('Unable to compute charges for the molecule.')
-
- if remove_hs:
- mol = Chem.RemoveHs(mol, sanitize=sanitize)
- except:
- return None
-
- return mol
-
-
-def parallel_save_prediction(arguments):
- dataset, y_pred_list, chosen,rdkit_mol_path, result_folder, name = arguments
- for idx, line in chosen.iterrows():
- pocket_name = line['pocket_name']
- compound_name = line['compound_name']
- ligandName = compound_name.split("_")[1]
- dataset_index = line['dataset_index']
- coords = dataset[dataset_index].coords.to('cpu')
- protein_nodes_xyz = dataset[dataset_index].node_xyz.to('cpu')
- n_compound = coords.shape[0]
- n_protein = protein_nodes_xyz.shape[0]
- y_pred = y_pred_list[dataset_index].reshape(n_protein, n_compound).to('cpu')
- compound_pair_dis_constraint = torch.cdist(coords, coords)
- mol = Chem.MolFromMolFile(rdkit_mol_path)
- LAS_distance_constraint_mask = get_LAS_distance_constraint_mask(mol).bool()
- pred_dist_info = get_info_pred_distance(coords, y_pred, protein_nodes_xyz, compound_pair_dis_constraint,
- LAS_distance_constraint_mask=LAS_distance_constraint_mask,
- n_repeat=1, show_progress=False)
-
- toFile = f'{result_folder}/{name}_tankbind_chosen.sdf'
- new_coords = pred_dist_info.sort_values("loss")['coords'].iloc[0].astype(np.double)
- write_with_new_coords(mol, new_coords, toFile)
-
-if __name__ == '__main__':
- tankbind_src_folder = "../tankbind"
- sys.path.insert(0, tankbind_src_folder)
- torch.set_num_threads(16)
- parser = ArgumentParser()
- parser.add_argument('--data_dir', type=str, default='/Users/hstark/projects/ligbind/data/PDBBind_processed', help='')
- parser.add_argument('--split_path', type=str, default='/Users/hstark/projects/ligbind/data/splits/timesplit_test', help='')
- parser.add_argument('--prank_path', type=str, default='/Users/hstark/projects/p2rank_2.3/prank', help='')
- parser.add_argument('--results_path', type=str, default='results/tankbind_results', help='')
- parser.add_argument('--skip_existing', action='store_true', default=False, help='')
- parser.add_argument('--skip_p2rank', action='store_true', default=False, help='')
- parser.add_argument('--skip_multiple_pocket_outputs', action='store_true', default=False, help='')
- parser.add_argument('--device', type=str, default='cpu', help='')
- parser.add_argument('--num_workers', type=int, default=1, help='')
- parser.add_argument('--parallel_id', type=int, default=0, help='')
- parser.add_argument('--parallel_tot', type=int, default=1, help='')
- args = parser.parse_args()
-
- device = args.device
- cache_path = "tankbind_cache"
- os.makedirs(cache_path, exist_ok=True)
- os.makedirs(args.results_path, exist_ok=True)
-
-
-
- logging.basicConfig(level=logging.INFO)
- model = get_model(0, logging, device)
- # re-dock model
- # modelFile = "../saved_models/re_dock.pt"
- # self-dock model
- modelFile = f"{tankbind_src_folder}/../saved_models/self_dock.pt"
-
- model.load_state_dict(torch.load(modelFile, map_location=device))
- _ = model.eval()
- batch_size = 5
- names = read_strings_from_txt(args.split_path)
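- # Optionally shard the list of complexes across parallel jobs: worker parallel_id keeps its contiguous slice of names.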
- if args.parallel_tot > 1:
- size = len(names) // args.parallel_tot + 1
- names = names[args.parallel_id*size:(args.parallel_id+1)*size]
- rmsds = []
-
- forward_pass_time = []
- times_preprocess = []
- times_inference = []
- top_10_generation_time = []
- top_1_generation_time = []
- start_time = time.time()
- if not args.skip_p2rank:
- for name in names:
- if args.skip_existing and os.path.exists(f'{args.results_path}/{name}/{name}_tankbind_1.sdf'): continue
- print("Now processing: ", name)
- protein_path = f'{args.data_dir}/{name}/{name}_protein_processed.pdb'
- cleaned_protein_path = f"{cache_path}/{name}_protein_tankbind_cleaned.pdb" # if you change this you also need to change below
- parser = PDBParser(QUIET=True)
- s = parser.get_structure(name, protein_path)
- c = s[0]
- clean_res_list, ligand_list = save_cleaned_protein(c, cleaned_protein_path)
-
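- # Write the list of cleaned structures and run p2rank once over all of them to predict candidate binding pockets.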
- with open(f"{cache_path}/pdb_list_p2rank.txt", "w") as out:
- for name in names:
- out.write(f"{name}_protein_tankbind_cleaned.pdb\n")
- cmd = f"bash {args.prank_path} predict {cache_path}/pdb_list_p2rank.txt -o {cache_path}/p2rank -threads 4"
- os.system(cmd)
- times_preprocess.append(time.time() - start_time)
- p2_rank_time = time.time() - start_time
-
-
-
-
- list_to_parallelize = []
- for name in tqdm(names):
- single_preprocess_time = time.time()
- if args.skip_existing and os.path.exists(f'{args.results_path}/{name}/{name}_tankbind_1.sdf'): continue
- print("Now processing: ", name)
- protein_path = f'{args.data_dir}/{name}/{name}_protein_processed.pdb'
- ligand_path = f"{args.data_dir}/{name}/{name}_ligand.sdf"
- cleaned_protein_path = f"{cache_path}/{name}_protein_tankbind_cleaned.pdb" # if you change this you also need to change below
- rdkit_mol_path = f"{cache_path}/{name}_rdkit_ligand.sdf"
-
- parser = PDBParser(QUIET=True)
- s = parser.get_structure(name, protein_path)
- c = s[0]
- clean_res_list, ligand_list = save_cleaned_protein(c, cleaned_protein_path)
- lig, _ = read_mol(f"{args.data_dir}/{name}/{name}_ligand.sdf", f"{args.data_dir}/{name}/{name}_ligand.mol2")
-
- lig = RemoveHs(lig)
- smiles = Chem.MolToSmiles(lig)
- generate_sdf_from_smiles_using_rdkit(smiles, rdkit_mol_path, shift_dis=0)
-
- parser = PDBParser(QUIET=True)
- s = parser.get_structure("x", cleaned_protein_path)
- res_list = list(s.get_residues())
-
- protein_dict = {}
- protein_dict[name] = get_protein_feature(res_list)
- compound_dict = {}
-
- mol = Chem.MolFromMolFile(rdkit_mol_path)
- compound_dict[name + f"_{name}" + "_rdkit"] = extract_torchdrug_feature_from_mol(mol, has_LAS_mask=True)
-
- info = []
- for compound_name in list(compound_dict.keys()):
- # use protein center as the block center.
- com = ",".join([str(a.round(3)) for a in protein_dict[name][0].mean(axis=0).numpy()])
- info.append([name, compound_name, "protein_center", com])
-
- p2rankFile = f"{cache_path}/p2rank/{name}_protein_tankbind_cleaned.pdb_predictions.csv"
- pocket = pd.read_csv(p2rankFile)
- pocket.columns = pocket.columns.str.strip()
- pocket_coms = pocket[['center_x', 'center_y', 'center_z']].values
- for ith_pocket, com in enumerate(pocket_coms):
- com = ",".join([str(a.round(3)) for a in com])
- info.append([name, compound_name, f"pocket_{ith_pocket + 1}", com])
- info = pd.DataFrame(info, columns=['protein_name', 'compound_name', 'pocket_name', 'pocket_com'])
-
- dataset_path = f"{cache_path}/{name}_dataset/"
- os.system(f"rm -r {dataset_path}")
- os.system(f"mkdir -p {dataset_path}")
- dataset = TankBind_prediction(dataset_path, data=info, protein_dict=protein_dict, compound_dict=compound_dict)
-
- # dataset = TankBind_prediction(dataset_path)
- times_preprocess.append(time.time() - single_preprocess_time)
- single_forward_pass_time = time.time()
- data_loader = DataLoader(dataset, batch_size=batch_size, follow_batch=['x', 'y', 'compound_pair'], shuffle=False,
- num_workers=0)
- affinity_pred_list = []
- y_pred_list = []
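- # Inference: y_pred holds the predicted protein-compound distance maps, affinity_pred the per-pocket affinity scores used for ranking.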
- for data in tqdm(data_loader):
- data = data.to(device)
- y_pred, affinity_pred = model(data)
- affinity_pred_list.append(affinity_pred.detach().cpu())
- for i in range(data.y_batch.max() + 1):
- y_pred_list.append((y_pred[data['y_batch'] == i]).detach().cpu())
-
- affinity_pred_list = torch.cat(affinity_pred_list)
- forward_pass_time.append(time.time() - single_forward_pass_time)
- output_info = copy.deepcopy(dataset.data)
- output_info['affinity'] = affinity_pred_list
- output_info['dataset_index'] = range(len(output_info))
- output_info_sorted = output_info.sort_values('affinity', ascending=False)
-
-
- result_folder = f'{args.results_path}/{name}'
- os.makedirs(result_folder, exist_ok=True)
- output_info_sorted.to_csv(f"{result_folder}/output_info_sorted_by_affinity.csv")
-
- if not args.skip_multiple_pocket_outputs:
- for idx, (dataframe_idx, line) in enumerate(copy.deepcopy(output_info_sorted).iterrows()):
- single_top10_generation_time = time.time()
- pocket_name = line['pocket_name']
- compound_name = line['compound_name']
- ligandName = compound_name.split("_")[1]
- coords = dataset[dataframe_idx].coords.to('cpu')
- protein_nodes_xyz = dataset[dataframe_idx].node_xyz.to('cpu')
- n_compound = coords.shape[0]
- n_protein = protein_nodes_xyz.shape[0]
- y_pred = y_pred_list[dataframe_idx].reshape(n_protein, n_compound).to('cpu')
- y = dataset[dataframe_idx].dis_map.reshape(n_protein, n_compound).to('cpu')
- compound_pair_dis_constraint = torch.cdist(coords, coords)
- mol = Chem.MolFromMolFile(rdkit_mol_path)
- LAS_distance_constraint_mask = get_LAS_distance_constraint_mask(mol).bool()
- pred_dist_info = get_info_pred_distance(coords, y_pred, protein_nodes_xyz, compound_pair_dis_constraint,
- LAS_distance_constraint_mask=LAS_distance_constraint_mask,
- n_repeat=1, show_progress=False)
-
- toFile = f'{result_folder}/{name}_tankbind_{idx}.sdf'
- new_coords = pred_dist_info.sort_values("loss")['coords'].iloc[0].astype(np.double)
- write_with_new_coords(mol, new_coords, toFile)
- if idx < 10:
- top_10_generation_time.append(time.time() - single_top10_generation_time)
- if idx == 0:
- top_1_generation_time.append(time.time() - single_top10_generation_time)
-
- output_info_chosen = copy.deepcopy(dataset.data)
- output_info_chosen['affinity'] = affinity_pred_list
- output_info_chosen['dataset_index'] = range(len(output_info_chosen))
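- # For each protein/compound pair, keep only the pocket with the highest predicted affinity.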
- chosen = output_info_chosen.loc[
- output_info_chosen.groupby(['protein_name', 'compound_name'], sort=False)['affinity'].agg(
- 'idxmax')].reset_index()
-
- list_to_parallelize.append((dataset, y_pred_list, chosen, rdkit_mol_path, result_folder, name))
-
- chosen_generation_start_time = time.time()
- if args.num_workers > 1:
- p = Pool(args.num_workers, maxtasksperchild=1)
- p.__enter__()
- with tqdm(total=len(list_to_parallelize), desc=f'running optimization over {len(list_to_parallelize)} complexes') as pbar:
- map_fn = p.imap_unordered if args.num_workers > 1 else map
- for t in map_fn(parallel_save_prediction, list_to_parallelize):
- pbar.update()
- if args.num_workers > 1: p.__exit__(None, None, None)
- chosen_generation_time = time.time() - chosen_generation_start_time
- """
- lig, _ = read_mol(f"{args.data_dir}/{name}/{name}_ligand.sdf", f"{args.data_dir}/{name}/{name}_ligand.mol2")
- sm = Chem.MolToSmiles(lig)
- m_order = list(lig.GetPropsAsDict(includePrivate=True, includeComputed=True)['_smilesAtomOutputOrder'])
- lig = Chem.RenumberAtoms(lig, m_order)
- lig = Chem.RemoveAllHs(lig)
- lig = RemoveHs(lig)
- true_ligand_pos = np.array(lig.GetConformer().GetPositions())
-
- toFile = f'{result_folder}/{name}_tankbind_chosen.sdf'
- mol_pred, _ = read_mol(toFile, None)
- sm = Chem.MolToSmiles(mol_pred)
- m_order = list(mol_pred.GetPropsAsDict(includePrivate=True, includeComputed=True)['_smilesAtomOutputOrder'])
- mol_pred = Chem.RenumberAtoms(mol_pred, m_order)
- mol_pred = RemoveHs(mol_pred)
- mol_pred_pos = np.array(mol_pred.GetConformer().GetPositions())
- rmsds.append(np.sqrt(((true_ligand_pos - mol_pred_pos) ** 2).sum(axis=1).mean(axis=0)))
- print(np.sqrt(((true_ligand_pos - mol_pred_pos) ** 2).sum(axis=1).mean(axis=0)))
- """
- forward_pass_time = np.array(forward_pass_time).sum()
- times_preprocess = np.array(times_preprocess).sum()
- times_inference = np.array(times_inference).sum()
- top_10_generation_time = np.array(top_10_generation_time).sum()
- top_1_generation_time = np.array(top_1_generation_time).sum()
-
- rmsds = np.array(rmsds)
-
- print(f'forward_pass_time: {forward_pass_time}')
- print(f'times_preprocess: {times_preprocess}')
- print(f'times_inference: {times_inference}')
- print(f'top_10_generation_time: {top_10_generation_time}')
- print(f'top_1_generation_time: {top_1_generation_time}')
- print(f'chosen_generation_time: {chosen_generation_time}')
- print(f'rmsds_below_2: {(100 * (rmsds < 2).sum() / len(rmsds)) if len(rmsds) else float("nan")}')
- print(f'p2rank Time: {p2_rank_time}')
- print(
- f'total_time: '
- f'{forward_pass_time + times_preprocess + times_inference + top_10_generation_time + top_1_generation_time + p2_rank_time}')
-
- with open(os.path.join(args.results_path, 'tankbind_log.log'), 'w') as file:
- file.write(f'forward_pass_time: {forward_pass_time}\n')
- file.write(f'times_preprocess: {times_preprocess}\n')
- file.write(f'times_inference: {times_inference}\n')
- file.write(f'top_10_generation_time: {top_10_generation_time}\n')
- file.write(f'top_1_generation_time: {top_1_generation_time}\n')
- file.write(f'rmsds_below_2: {(100 * (rmsds < 2).sum() / len(rmsds)) if len(rmsds) else float("nan")}\n')
- file.write(f'p2rank Time: {p2_rank_time}\n')
- file.write(f'total_time: {forward_pass_time + times_preprocess + times_inference + top_10_generation_time + top_1_generation_time + p2_rank_time}\n')
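For context, the deleted runtime script above is configured entirely through its argparse flags and, per its header comment, has to be launched from inside the TankBind repository next to baseline_run_tankbind_parallel.sh. A minimal invocation sketch is shown below; every path is a placeholder rather than a value taken from the repository, so adjust them to your own PDBBind data, test split, and p2rank install.

```bash
# Illustrative invocation only; the flags mirror the argparse definitions in the script above.
python baseline_tankbind_runtime.py \
    --data_dir /path/to/PDBBind_processed \
    --split_path /path/to/splits/timesplit_test \
    --prank_path /path/to/p2rank_2.3/prank \
    --results_path results/tankbind_results \
    --device cpu \
    --num_workers 4 \
    --parallel_id 0 \
    --parallel_tot 1
```

To spread the test set over several machines, each worker would be started with the same --parallel_tot and a distinct --parallel_id, which selects that worker's slice of the split.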
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Chhath Songs 1-77 The Complete Collection of Chhath Puja Songs in MP3 Format - Free Download or Streaming.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Chhath Songs 1-77 The Complete Collection of Chhath Puja Songs in MP3 Format - Free Download or Streaming.md
deleted file mode 100644
index e859ca4177b24eca405fb72be0a5f346b05746be..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Chhath Songs 1-77 The Complete Collection of Chhath Puja Songs in MP3 Format - Free Download or Streaming.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
Chhath Puja Songs Jukebox Download: How to Enjoy the Festival with Music
-
Chhath Puja is a four-day festival that is celebrated by Hindus in India, Nepal, and some other parts of the world. It is dedicated to the Sun God, Surya, and his sister, Chhathi Maiya, who are worshipped for their blessings of health, wealth, prosperity, and happiness. Chhath Puja is also a festival of music, as devotees sing and listen to various songs that praise the Sun God and his sister.
-
If you are looking for a way to enjoy Chhath Puja with music, you can download a jukebox of Chhath Puja songs for free from various websites and apps. A jukebox is a collection of songs that can be played in a sequence or randomly. You can play the jukebox offline or online on your devices and platforms of your choice. In this article, we will tell you how to download Chhath Puja songs jukebox for free, how to play it offline and online, and how to celebrate the festival with music and family.
Chhath Puja is an ancient festival that dates back to the Vedic period, when sages used to worship the Sun God by fasting and standing in water. It is also associated with some legends from the Hindu epics Ramayana and Mahabharata. According to one legend, Lord Rama and his wife Sita performed Chhath Puja after returning from exile. According to another legend, Karna, the son of Surya and Kunti, performed Chhath Puja as a king. According to yet another legend, Draupadi and the Pandavas performed Chhath Puja to overcome their troubles.
-
Chhath Puja is celebrated to express gratitude and devotion to the Sun God and his sister, who are believed to be the sources of life, energy, light, and healing on earth. By worshipping them, devotees seek their blessings for themselves and their families. They also pray for their wishes to be fulfilled.
-
Rituals and Timings of Chhath Puja
-
Chhath Puja is celebrated for four days in the month of Kartik (October-November) according to the Hindu calendar. The four days are:
-
-
Nahay Khay: The first day when devotees take a holy bath in a river or pond and eat a vegetarian meal.
-
Kharna: The second day when devotees observe a fast from sunrise to sunset and break it by eating kheer (rice pudding) made with jaggery.
-
Sandhya Arghya: The third day when devotees observe a strict fast without water for 36 hours. They also offer arghya (water) to the setting sun at a riverbank or pond.
-
Usha Arghya: The fourth day when devotees offer arghya to the rising sun at a riverbank or pond. They also break their fast and eat a special prasad (offering) made of fruits, sweets, and thekua (a fried cookie).
-
-
The timings of Chhath Puja vary according to the sunrise and sunset times at different locations. You can check the exact timings for your location from various online sources.
-
How to Download Chhath Puja Songs Jukebox for Free?
-
Websites and Apps to Download Chhath Puja Songs Jukebox
-
There are many websites and apps that offer free download of Chhath Puja songs jukebox. Some of the popular ones are:
-
-
Gaana: Gaana is a music streaming app that also allows you to download songs for offline listening. You can find a variety of Chhath Puja songs jukebox on Gaana, such as Chhath Pooja Special, Chhath Geet, and Chhath Mahaparv. You can download the jukebox by tapping on the download icon next to the play button.
-
JioSaavn: JioSaavn is another music streaming app that lets you download songs for offline listening. You can browse through various Chhath Puja songs jukebox on JioSaavn, such as Chhath Puja Ke Geet, Chhath Mahima, and Chhathi Maiya Ke Bhajan. You can download the jukebox by tapping on the download icon below the play button.
-
YouTube: YouTube is a video-sharing platform that also hosts many audio tracks. You can search for Chhath Puja songs jukebox on YouTube, such as Chhath Puja Special Songs 2023, Chhath Puja Top 10 Songs, and Chhath Puja Non Stop Songs. You can download the jukebox by using a third-party tool or app that can convert YouTube videos to MP3 files.
-
-
Tips and Tricks to Download Chhath Puja Songs Jukebox Safely and Quickly
-
While downloading Chhath Puja songs jukebox for free, you should keep in mind some tips and tricks to ensure a safe and quick download. Here are some of them:
-
-
Make sure you have a stable and fast internet connection to avoid interruptions and delays in downloading.
-
Choose a reliable and trusted website or app to download from, and avoid clicking on any suspicious links or ads that may contain malware or viruses.
-
Check the file size and format of the jukebox before downloading, and make sure it is compatible with your device and software.
-
Create a separate folder or playlist for your downloaded jukebox, and label it clearly so that you can find it easily later.
-
Delete any unwanted or duplicate files from your device to free up some space and avoid cluttering.
-
-
How to Play Chhath Puja Songs Jukebox Offline and Online?
-
Devices and Software to Play Chhath Puja Songs Jukebox Offline
-
If you want to play Chhath Puja songs jukebox offline, you will need a device and software that can support MP3 files. Some of the common devices and software that you can use are:
-
-
Smartphone: You can use your smartphone to play Chhath Puja songs jukebox offline by using the default music player app or any other app that you prefer. You can also connect your smartphone to a speaker or a headphone for better sound quality.
-
Laptop: You can use your laptop to play Chhath Puja songs jukebox offline by using the default media player software or any other software that you like. You can also connect your laptop to a speaker or a headphone for better sound quality.
-
MP3 Player: You can use an MP3 player to play Chhath Puja songs jukebox offline by transferring the files from your device or computer to the MP3 player. You can also use an earphone or a headphone to listen to the songs.
-
-
Platforms and Services to Play Chhath Puja Songs Jukebox Online
-
If you want to play Chhath Puja songs jukebox online, you will need a platform or service that can stream MP3 files. Some of the popular platforms and services that you can use are:
-
chhath puja songs mp3 free download
-chhath puja songs video jukebox
-chhath puja songs online play
-chhath puja songs anuradha paudwal
-chhath puja songs pawan singh
-chhath puja songs khesari lal yadav
-chhath puja songs sharda sinha
-chhath puja songs kalpana
-chhath puja songs manoj tiwari
-chhath puja songs bhojpuri
-chhath puja songs maithili
-chhath puja songs hindi
-chhath puja songs new 2023
-chhath puja songs old 2016
-chhath puja songs remix dj
-chhath puja songs lyrics in hindi
-chhath puja songs list with singer name
-chhath puja songs audio jukebox
-chhath puja songs best collection
-chhath puja songs zip file download
-chhath puja songs download pagalworld
-chhath puja songs download mr jatt
-chhath puja songs download gaana.com
-chhath puja songs download hungama.com
-chhath puja songs download saavn.com
-chhath puja songs download djpunjab.com
-chhath puja songs download wapking.in
-chhath puja songs download webmusic.in
-chhath puja songs download mp4 hd
-chhath puja songs download 320kbps
-chhath puja songs download 128kbps
-chhath puja songs download ringtone
-chhath puja songs download status video
-chhath puja songs download whatsapp status
-chhath puja songs download for mobile phone
-chhath puja songs download in laptop or pc
-chhath puja songs download from youtube
-chhath puja songs download from internet archive [^1^]
-chhath puja video jukebox pawan singh [^2^]
-chhath puja audio jukebox anuradha paudwal [^3^]
-
-
Gaana: As mentioned earlier, Gaana is a music streaming app that offers various Chhath Puja songs jukeboxes for free download and online streaming. You can play the jukebox online by tapping on the play button and enjoy the songs with high-quality sound and lyrics.
-
JioSaavn: As mentioned earlier, JioSaavn is another music streaming app that provides various Chhath Puja songs jukebox for free download and online streaming. You can play the jukebox online by tapping on the play button and enjoy the songs with high-quality sound and lyrics.
-
YouTube: As mentioned earlier, YouTube is a video-sharing platform that also hosts many audio tracks of Chhath Puja songs jukebox. You can play the jukebox online by clicking on the video and enjoy the songs with visuals and subtitles.
-
-
How to Enjoy Chhath Puja with Music and Family?
-
Benefits of Listening to Chhath Puja Songs
-
Listening to Chhath Puja songs can have many benefits for you and your family. Some of them are:
-
-
It can enhance your mood and spirit by creating a festive atmosphere.
-
It can increase your faith and devotion by reminding you of the glory and grace of the Sun God and his sister.
-
It can improve your health and well-being by reducing stress and anxiety, and boosting your immunity and energy.
-
It can strengthen your bond and harmony with your family by sharing the joy and happiness of the festival.
-
-
Ways to Celebrate Chhath Puja with Music and Family
-
There are many ways to celebrate Chhath Puja with music and family. Some of them are:
-
-
You can sing along with the Chhath Puja songs jukebox and express your gratitude and love to the Sun God and his sister.
-
You can dance along with the Chhath Puja songs jukebox and show your enthusiasm and excitement for the festival.
-
You can play games or quizzes based on the Chhath Puja songs jukebox and test your knowledge and memory of the festival.
-
You can share stories or anecdotes related to the Chhath Puja songs jukebox and learn more about the culture and tradition of the festival.
-
-
Conclusion
-
Chhath Puja is a festival of music, as well as a festival of faith, gratitude, and happiness. By downloading a jukebox of Chhath Puja songs for free, you can enjoy the festival with music and family. You can play the jukebox offline or online on your devices and platforms of your choice. You can also benefit from listening to Chhath Puja songs, as they can enhance your mood, spirit, faith, devotion, health, well-being, bond, and harmony. We hope this article has helped you to know how to download Chhath Puja songs jukebox for free, how to play it offline and online, and how to celebrate the festival with music and family. Happy Chhath Puja!
-
FAQs
-
Here are some frequently asked questions about Chhath Puja songs jukebox download:
-
-
Q: What are some of the popular singers of Chhath Puja songs?
-A: Some of the popular singers of Chhath Puja songs are Sharda Sinha, Anuradha Paudwal, Kalpana Patowary, Pawan Singh, Khesari Lal Yadav, Manoj Tiwari, Dinesh Lal Yadav, etc.
Q: What are some of the popular genres of Chhath Puja songs?
-A: Some of the popular genres of Chhath Puja songs are folk, devotional, classical, pop, bhojpuri, etc.
Q: How can I make my own Chhath Puja songs jukebox?
-A: You can make your own Chhath Puja songs jukebox by selecting your favorite songs from various sources and creating a playlist or a folder on your device or computer. You can also use some online tools or apps that can help you to create a custom jukebox.
Q: How can I share my Chhath Puja songs jukebox with others?
-A: You can share your Chhath Puja songs jukebox with others by sending them the link or file of your jukebox via email, message, social media, etc. You can also use some online platforms or services that allow you to share your music with others.
Q: How can I find more information about Chhath Puja?
-A: You can find more information about Chhath Puja by visiting some websites or blogs that provide detailed information about the festival, such as its history, significance, rituals, timings, etc. You can also watch some videos or documentaries that showcase the festival and its culture.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cheat Sengoku Basara Battle Heroes PPSSPP Enhance Your Graphics and Performance.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cheat Sengoku Basara Battle Heroes PPSSPP Enhance Your Graphics and Performance.md
deleted file mode 100644
index 1d7a282fff35b35b79a44e9fa6feb06058ecd1eb..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cheat Sengoku Basara Battle Heroes PPSSPP Enhance Your Graphics and Performance.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
How to Download Cheat Sengoku Basara Battle Heroes PPSSPP
-
If you are a fan of action-packed hack and slash games set in feudal Japan, you might want to try Sengoku Basara Battle Heroes PPSSPP. This game is a spin-off of the popular Sengoku Basara series, featuring over 30 playable characters from different factions and historical periods. You can enjoy the game on your PSP or Android device using the PPSSPP emulator, which allows you to play PSP games on various platforms.
However, if you want to spice up your gameplay and unlock more content, you might be interested in using cheat codes for Sengoku Basara Battle Heroes PPSSPP. Cheat codes can help you access exclusive weapons, armors, missions, and characters that are otherwise hard to obtain. In this article, we will show you what is Sengoku Basara Battle Heroes PPSSPP, why use cheat codes for it, and how to download cheat codes for it.
-
What is Sengoku Basara Battle Heroes PPSSPP?
-
A brief introduction to the game and its features
-
Sengoku Basara Battle Heroes PPSSPP is a PSP game that was released in 2009 by Capcom. It is a spin-off of the Sengoku Basara series, which is based on the historical events and characters of the Sengoku period in Japan. The game features over 30 playable characters from different factions and historical periods, such as Oda Nobunaga, Sanada Yukimura, Date Masamune, and more. Each character has their own unique weapons, skills, and personality.
-
The game has two main modes: Story mode and Mission mode. In Story mode, you can choose one of the characters and follow their storyline through various battles and events. In Mission mode, you can create your own team of up to four characters and complete various objectives in different stages. You can also play with or against other players online or locally using the ad-hoc mode.
-
How to play the game on PSP or Android using PPSSPP emulator
-
If you want to play Sengoku Basara Battle Heroes PPSSPP on your PSP or Android device, you will need to use the PPSSPP emulator. PPSSPP is a free and open-source emulator that allows you to play PSP games on various platforms, such as Windows, Linux, Mac OS, iOS, Android, and more. You can download the latest version of PPSSPP from its official website or from Google Play Store. You will also need to download the ISO file of Sengoku Basara Battle Heroes from a reliable source. Once you have both the emulator and the ISO file, you can follow these steps:
-
-
Launch the PPSSPP emulator on your device and navigate to the folder where you saved the ISO file.
-
Select the ISO file of Sengoku Basara Battle Heroes and tap on it to start the game.
-
Adjust the settings of the emulator according to your preference and device performance. You can change the graphics, audio, controls, network, and system settings from the menu.
-
Enjoy playing Sengoku Basara Battle Heroes PPSSPP on your device.
-
Why use cheat codes for Sengoku Basara Battle Heroes PPSSPP?
-
The benefits of using cheat codes, such as unlocking characters, weapons, armors, and missions
-
Using cheat codes for Sengoku Basara Battle Heroes PPSSPP can make your gameplay more fun and exciting. Cheat codes can help you unlock more content that is otherwise hard to obtain or hidden in the game. For example, you can use cheat codes to unlock all the characters, weapons, armors, and missions in the game. This way, you can enjoy playing with your favorite characters and using their best equipment. You can also explore more stages and scenarios that offer different challenges and rewards.
-
How to download cheat codes for sengoku basara battle heroes psp
-Sengoku basara battle heroes ppsspp iso download english
-Sengoku basara battle heroes ppsspp cheats android
-Sengoku basara battle heroes psp game download free
-Sengoku basara battle heroes ppsspp settings for best performance
-Sengoku basara battle heroes ppsspp save data download
-Sengoku basara battle heroes ppsspp multiplayer guide
-Sengoku basara battle heroes psp rom download for pc
-Sengoku basara battle heroes ppsspp unlock all characters cheat
-Sengoku basara battle heroes ppsspp download link
-Sengoku basara battle heroes ppsspp gameplay video
-Sengoku basara battle heroes psp cheat codes list
-Sengoku basara battle heroes ppsspp english patch download
-Sengoku basara battle heroes psp iso highly compressed download
-Sengoku basara battle heroes ppsspp tips and tricks
-Sengoku basara battle heroes psp download full version
-Sengoku basara battle heroes ppsspp mod apk download
-Sengoku basara battle heroes psp review and rating
-Sengoku basara battle heroes ppsspp emulator download for android
-Sengoku basara battle heroes psp cheats cwcheat
-Sengoku basara battle heroes ppsspp online mode tutorial
-Sengoku basara battle heroes psp iso download google drive
-Sengoku basara battle heroes ppsspp hack tool download
-Sengoku basara battle heroes psp best characters and weapons
-Sengoku basara battle heroes ppsspp controller support guide
-Sengoku basara battle heroes psp iso download coolrom
-Sengoku basara battle heroes ppsspp cheat engine download
-Sengoku basara battle heroes psp walkthrough and mission mode guide
-Sengoku basara battle heroes ppsspp system requirements and compatibility
-Sengoku basara battle heroes psp iso download mediafire
-Sengoku basara battle heroes ppsspp gold apk download free
-Sengoku basara battle heroes psp soundtrack and theme song download
-Sengoku basara battle heroes ppsspp unlimited money cheat code
-Sengoku basara battle heroes psp story and characters summary
-Sengoku basara battle heroes ppsspp graphics mod download
-Sengoku basara battle heroes psp iso download no password
-Sengoku basara battle heroes ppsspp infinite health cheat code
-Sengoku basara battle heroes psp challenge mode and unlockables guide
-Sengoku basara battle heroes ppsspp lag fix and speed up tips
-Sengoku basara battle heroes psp iso download utorrent
-Sengoku basara battle heroes ppsspp latest version update download
-Sengoku basara battle heroes psp cheats and secrets guide
-Sengoku basara battle heroes ppsspp custom cheats file download
-Sengoku basara battle heroes psp game size and format info
-Sengoku basara battle heroes ppsspp texture pack download free
-Sengoku basara battle heroes psp iso download reddit link
-Sengoku basara battle heroes ppsspp max level cheat code
-
Cheat codes can also help you customize your gameplay according to your preference and skill level. For example, you can use cheat codes to modify the game difficulty, the enemy strength, the damage output, the health recovery, and more. You can also use cheat codes to activate special effects, such as infinite health, infinite money, infinite items, and more. These cheat codes can make your gameplay easier or harder, depending on your choice.
-
The drawbacks of using cheat codes, such as affecting the game balance and difficulty
-
However, using cheat codes for Sengoku Basara Battle Heroes PPSSPP also has some drawbacks that you should be aware of. Cheat codes can affect the game balance and difficulty, which can ruin the original design and intention of the game. For example, using cheat codes to unlock all the content in the game can make the game boring and repetitive, as you will have no incentive to play through the game normally and earn the rewards. You will also miss out on the sense of achievement and satisfaction that comes from completing the game challenges and objectives.
-
Cheat codes can also affect the game performance and stability, which can cause glitches and errors in the game. For example, using cheat codes to modify the game parameters can cause the game to crash or freeze, as the game might not be able to handle the changes. You might also encounter bugs and problems in the game graphics, audio, controls, or network. These issues can affect your gameplay experience and enjoyment.
How to download cheat codes for Sengoku Basara Battle Heroes PPSSPP?
-
The sources of cheat codes, such as websites, videos, and forums
-
If you are looking for cheat codes for Sengoku Basara Battle Heroes PPSSPP, you can find them from various sources online. Some of the most common sources are websites, videos, and forums that provide cheat codes for different games and platforms. For example, you can visit these websites that offer cheat codes for Sengoku Basara Battle Heroes PPSSPP. You can also watch these videos that show you how to use cheat codes for the game. You can also join these forums that discuss and share cheat codes for the game.
-
However, you should be careful when downloading cheat codes from these sources, as some of them might be unreliable, outdated, or malicious. You should always check the credibility and reputation of the source before downloading anything from it. You should also scan the files or links for viruses or malware before opening or clicking on them. You should also backup your game data and device data before applying any cheat codes, in case something goes wrong.
-
The steps to download and apply cheat codes, such as using passwords, files, or plugins
-
Once you have found the cheat codes that you want to use for Sengoku Basara Battle Heroes PPSSPP, you can download and apply them using different methods, depending on the type and format of the cheat codes. Some of the most common methods are using passwords, files, or plugins. Here are the steps to use each method:
-
-
Using passwords: Some cheat codes are in the form of passwords that you can enter in the game menu or during the gameplay. To use these cheat codes, you just need to follow these steps:
-
Find the password that corresponds to the cheat code that you want to use.
-
Launch the game and go to the menu where you can enter the password. This might be in the options menu, the extras menu, or the pause menu.
-
Enter the password exactly as it is shown in the source. Make sure to use the correct case and symbols.
-
Confirm the password and enjoy the cheat code.
-
-
-
Using files: Some cheat codes are in the form of files that you need to download and copy to your device or emulator. To use these cheat codes, you just need to follow these steps:
-
Find the file that corresponds to the cheat code that you want to use.
-
Download the file from the source and save it to your device or emulator.
-
Copy the file to the folder where your game data or emulator data is stored. This might be in the PSP folder, the GAME folder, or the CHEATS folder.
-
Launch the game and enable the cheat code from the emulator menu or settings.
-
-
-
Using plugins: Some cheat codes are in the form of plugins that you need to download and install on your device or emulator. To use these cheat codes, you just need to follow these steps:
-
Find the plugin that corresponds to the cheat code that you want to use.
-
Download the plugin from the source and save it to your device or emulator.
-
Install the plugin on your device or emulator according to its instructions. This might involve extracting files, copying files, editing files, or running programs.
-
Launch the game and activate the plugin from the emulator menu or settings.
-
-
-
-
Conclusion
-
A summary of the main points and a call to action
-
Sengoku Basara Battle Heroes PPSSPP is a great game for fans of action-packed hack and slash games set in feudal Japan. You can play it on your PSP or Android device using the PPSSPP emulator, which allows you to play PSP games on various platforms. You can also use cheat codes for Sengoku Basara Battle Heroes PPSSPP to unlock more content and customize your gameplay. You can find cheat codes from various sources online, such as websites, videos, and forums. You can download and apply cheat codes using different methods, such as using passwords, files, or plugins.
-
If you want to experience Sengoku Basara Battle Heroes PPSSPP with more fun and excitement, why not try using cheat codes for it? You can download cheat codes from the sources that we have provided in this article, or you can search for more sources online. You can also use the methods that we have explained in this article, or you can find more methods online. Just remember to be careful when downloading and applying cheat codes, as they might have some drawbacks and risks. Also, remember to backup your game data and device data before using cheat codes, in case something goes wrong.
-
We hope that this article has helped you learn how to download cheat codes for Sengoku Basara Battle Heroes PPSSPP. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. Thank you for reading and happy gaming!
-
FAQs
-
Five unique questions and answers related to the topic
-
Here are some of the frequently asked questions and answers related to the topic of downloading cheat codes for Sengoku Basara Battle Heroes PPSSPP:
-
-
Q: Can I use cheat codes for Sengoku Basara Battle Heroes PPSSPP on other platforms besides PSP and Android?
-
A: Yes, you can use cheat codes for Sengoku Basara Battle Heroes PPSSPP on other platforms besides PSP and Android, as long as you use the PPSSPP emulator. The PPSSPP emulator is available for various platforms, such as Windows, Linux, Mac OS, iOS, and more. You just need to download the emulator and the ISO file of the game for your platform, and then follow the same steps as we have described in this article.
-
Q: Can I use cheat codes for Sengoku Basara Battle Heroes PPSSPP online or offline?
-
A: You can use cheat codes for Sengoku Basara Battle Heroes PPSSPP both online and offline, depending on the type and format of the cheat codes. Some cheat codes require an internet connection to work, such as passwords or plugins that need to be downloaded or updated. Some cheat codes work offline, such as files or plugins that are already installed or copied to your device or emulator. However, you should be careful when using cheat codes online, as they might affect your network performance or cause problems with other players.
-
Q: Can I use cheat codes for Sengoku Basara Battle Heroes PPSSPP with other games or emulators?
-
A: No, you cannot use cheat codes for Sengoku Basara Battle Heroes PPSSPP with other games or emulators, as they are specific and compatible only with this game and this emulator. If you try to use cheat codes for Sengoku Basara Battle Heroes PPSSPP with other games or emulators, they might not work at all, or they might cause errors and glitches in the game or emulator. You should always use the appropriate cheat codes for the game and emulator that you are using.
-
Q: Can I use cheat codes for Sengoku Basara Battle Heroes PPSSPP without affecting the game balance and difficulty?
-
A: Yes, you can use cheat codes for Sengoku Basara Battle Heroes PPSSPP without affecting the game balance and difficulty, if you use them moderately and wisely. You should not use too many cheat codes at once, or use cheat codes that are too powerful or unfair. You should also not use cheat codes that make the game too easy or too hard, as they might ruin the original design and intention of the game. You should always respect the game developers and their vision of the game.
-
Q: Can I use cheat codes for Sengoku Basara Battle Heroes PPSSPP legally and ethically?
-
A: The legality and ethics of using cheat codes for Sengoku Basara Battle Heroes PPSSPP depend on your personal judgment and situation. Generally speaking, using cheat codes for personal and private use is not illegal or unethical, as long as you do not harm anyone or anything by doing so. However, using cheat codes for commercial or public use might be illegal or unethical, as it might violate the intellectual property rights of the game developers or publishers, or it might affect the rights and interests of other players or stakeholders. You should always follow the rules and regulations of your country and region regarding the use of cheat codes.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install QuickBooks Desktop Premier 2017 in Minutes.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install QuickBooks Desktop Premier 2017 in Minutes.md
deleted file mode 100644
index 97fded7343421b25a4823d1c85109fd5722656f0..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install QuickBooks Desktop Premier 2017 in Minutes.md
+++ /dev/null
@@ -1,146 +0,0 @@
-
-
How to Download and Install QuickBooks Desktop 2017 Premier
-
QuickBooks Desktop 2017 Premier is a powerful accounting software that can help you manage your small business finances, payroll, inventory, and more. It allows you to have up to five users simultaneously and supports up to 150-200 MB of data file size. In this article, we will show you how to download and install QuickBooks Desktop 2017 Premier on your computer.
QuickBooks Desktop 2017 Premier is a version of QuickBooks Desktop that was released in September 2016. It is designed for small businesses that need industry-specific features and reports, such as contractors, manufacturers, wholesalers, retailers, nonprofits, and professional services. Some of the new features and improvements in QuickBooks Desktop 2017 Premier include:
-
-
Automated reports: You can schedule reports to be generated and emailed automatically at a specific time and frequency.
-
Smart search: You can find names, accounts, items, and transactions faster with autocomplete suggestions and improved search functionality.
-
Report filters: You can easily view and modify the filters applied to a report without opening the Customize Report window.
-
Deleted users: You can track the deleted users on audit trail reports to see who made changes to your data.
-
Scheduled reports: You can set up a single-user mode switch for scheduled reports to avoid interruptions.
-
Security updates: You can protect your data with enhanced security features, such as TLS 1.2 encryption, complex passwords, and multi-factor authentication.
-
-
System requirements for QuickBooks Desktop 2017 Premier
-
Before you download and install QuickBooks Desktop 2017 Premier, you need to make sure that your computer meets the minimum system requirements. Here are the system requirements for QuickBooks Desktop 2017 Premier:
-
-
Operating system
Windows 10, Windows 8.1 Update 1, or Windows 7 SP1 (32-bit or 64-bit)
-
Processor
2.4 GHz minimum
-
RAM
4 GB minimum, 8 GB recommended
-
Disk space
2.5 GB of disk space (additional space required for data files)
-
Optical drive
4X DVD-ROM drive (unless user is downloading from Intuit server)
-
Screen resolution
1280 x 1024 or higher with up to 2 extended monitors
-
Internet connection
1 Mbps or faster for online features and updates
-
Browser
Internet Explorer 11 (32-bit)
-
Integration with other software
Microsoft Word and Excel integration requires Office 2010 SP2 - 2016 or Office 365 (32-bit or 64-bit); e-mail estimates, invoices, and other forms with Microsoft Outlook 2010 SP2-2016, Microsoft Outlook with Office 365, Gmail™, Yahoo! Mail®, Outlook.com®, and other SMTP-supporting e-mail clients; transfer data from Quicken 2016-202, QuickBooks for Mac 2016, Microsoft Access 2010 SP2-2016 or Office 365 (32-bit only), and all editions of QuickBooks Desktop Enterprise
-
-
How to download QuickBooks Desktop 2017 Premier?
-
There are three ways to download QuickBooks Desktop 2017 Premier: from your Intuit account, from the Downloads & Updates page, or from a CD or DVD. Here are the steps for each method:
Click the Download button and choose a location to save the file.
-
Double-click the downloaded file to launch the installation wizard.
-
-
From a CD or DVD
-
-
Insert the CD or DVD into your computer's optical drive.
-
If the installation wizard does not start automatically, go to the Windows Start menu and select Computer or This PC.
-
Right-click the CD or DVD drive and choose Explore.
-
Double-click the Setup.exe file to launch the installation wizard.
-
-
How to install QuickBooks Desktop 2017 Premier?
-
After you download QuickBooks Desktop 2017 Premier, you need to install it on your computer. Here are the steps for installing QuickBooks Desktop 2017 Premier:
-
Choose the installation type
-
You can choose between two types of installation: express install or custom and network install. The express install is recommended for most users, while the custom and network install is for advanced users who want more control over the installation settings or who need to share QuickBooks with other users on a network. Here are the differences between the two types of installation:
-
quickbooks desktop 2017 premier download
-quickbooks pro 2017 premier download link
-quickbooks 2017 premier accountant edition download
-quickbooks 2017 premier free trial download
-quickbooks 2017 premier canada download
-quickbooks 2017 premier plus download
-quickbooks 2017 premier update download
-quickbooks 2017 premier mac download
-quickbooks 2017 premier crack download
-quickbooks 2017 premier enterprise download
-quickbooks 2017 premier multi user download
-quickbooks 2017 premier contractor edition download
-quickbooks 2017 premier nonprofit edition download
-quickbooks 2017 premier manufacturing and wholesale edition download
-quickbooks 2017 premier professional services edition download
-quickbooks 2017 premier retail edition download
-how to download quickbooks 2017 premier from intuit
-how to reinstall quickbooks 2017 premier
-how to upgrade quickbooks 2017 premier to 2020
-how to activate quickbooks 2017 premier
-how to install quickbooks 2017 premier on multiple computers
-how to transfer quickbooks 2017 premier to another computer
-how to backup and restore quickbooks 2017 premier
-how to uninstall and reinstall quickbooks 2017 premier
-how to use quickbooks 2017 premier tutorial
-where to buy quickbooks 2017 premier
-where to find license and product number for quickbooks 2017 premier
-where to get support for quickbooks 2017 premier
-where to learn quickbooks 2017 premier online
-where to download older versions of quickbooks 2017 premier
-why is quickbooks 2017 premier not available to download
-why is quickbooks 2017 premier not updating
-why is quickbooks 2017 premier not opening
-why is quickbooks 2017 premier not working on windows 10
-why is quickbooks 2017 premier running slow
-what is the difference between quickbooks 2017 premier and pro
-what is the difference between quickbooks 2017 premier and enterprise
-what is the difference between quickbooks 2017 premier and online
-what is the difference between quickbooks 2017 premier and accountant
-what is the difference between quickbooks 2017 premier and plus
-what are the system requirements for quickbooks 2017 premier
-what are the features of quickbooks 2017 premier
-what are the benefits of quickbooks 2017 premier
-what are the limitations of quickbooks 2017 premier
-what are the best practices for using quickbooks 2017 premier
-when will quickbooks 2017 premier be discontinued
-when will quickbooks 2017 premier stop working
-when will intuit release updates for quickbooks 2017 premier
-
Express install
-
-
This option installs QuickBooks Desktop 2017 Premier using the default settings.
-
This option is faster and easier than the custom and network install.
-
This option is suitable for single-user mode or if you are not using QuickBooks on a network.
-
-
Custom and Network install
-
-
This option allows you to change the installation location, select which features to install, and set up a multi-user network.
-
This option is more complex and time-consuming than the express install.
-
This option is suitable for multi-user mode or if you are using QuickBooks on a network.
-
-
To choose the installation type, follow these steps:
-
-
In the installation wizard, click Next to accept the Software License Agreement.
-
Select Express or Custom and Network Options and click Next.
-
If you choose Custom and Network Options, select how you will use QuickBooks and click Next.
-
-
Follow the installation wizard
-
After you choose the installation type, follow the instructions on the screen to complete the installation. Depending on your choice, you may need to do some of the following steps:
-
-
Select a destination folder for QuickBooks Desktop 2017 Premier.
-
Select which features to install (full or partial).
Configure the firewall and antivirus settings to allow QuickBooks to run properly.
-
Enter your license and product numbers that you received when you purchased QuickBooks Desktop 2017 Premier.
-
-
Activate your QuickBooks Desktop 2017 Premier
-
After you install QuickBooks Desktop 2017 Premier, you need to activate it before you can use it. Activation is a simple process that verifies that your copy of QuickBooks is genuine and that you have a valid license to use it. To activate your QuickBooks Desktop 2017 Premier, follow these steps:
-
-
Open QuickBooks Desktop 2017 Premier and click Activate QuickBooks on the Help menu.
-
Follow the on-screen instructions to complete the activation process.
QuickBooks Desktop 2017 Premier is a great accounting software for small businesses that need industry-specific features and reports. It can help you manage your finances, payroll, inventory, and more with ease and efficiency. In this article, we showed you how to download and install QuickBooks Desktop 2017 Premier on your computer. We hope this guide was helpful and that you enjoy using QuickBooks Desktop 2017 Premier for your business needs.
-
FAQs
-
Here are some frequently asked questions about QuickBooks Desktop 2017 Premier:
-
-
Q: How much does QuickBooks Desktop 2017 Premier cost? A: The price of QuickBooks Desktop 2017 Premier depends on the number of users and the subscription plan. You can choose between a one-time purchase or an annual subscription. The one-time purchase costs $499.95 for one user, $849.95 for two users, $1,249.95 for three users, $1,599.95 for four users, and $1,949.95 for five users. The annual subscription costs $299.95 per year for one user, $499.95 per year for two users, $699.95 per year for three users, $899.95 per year for four users, and $1,099.95 per year for five users. You can also get a free trial of QuickBooks Desktop 2017 Premier for 30 days at https://quickbooks.intuit.com/desktop/free-trial/.
-
Q: How do I update QuickBooks Desktop 2017 Premier? A: You can update QuickBooks Desktop 2017 Premier manually or automatically. To update manually, go to the Help menu and click Update QuickBooks Desktop. Then select the updates you want to download and click Get Updates. To update automatically, go to the Help menu and click Update QuickBooks Desktop. Then click the Options tab and select Yes for Automatic Update. This will allow QuickBooks to download and install updates automatically when they are available.
-
Q: How do I uninstall QuickBooks Desktop 2017 Premier? A: If you need to uninstall QuickBooks Desktop 2017 Premier from your computer, you can use the Windows Control Panel or the Clean Install Tool. To uninstall using the Control Panel, go to the Windows Start menu and select Control Panel. Then click Programs and Features and select QuickBooks Desktop 2017 Premier from the list of programs. Click Uninstall/Change and follow the prompts to complete the uninstallation process. To uninstall using the Clean Install Tool, download the tool from https://dlm2.download.intuit.com/akdlm/SBD/QuickBooks/QBPDF/QuickBooks_Clean_Install_Tool.exe and run it on your computer. Then follow the instructions on the screen to remove QuickBooks Desktop 2017 Premier completely.
-
Q: How do I backup my data in QuickBooks Desktop 2017 Premier? A: You can backup your data in QuickBooks Desktop 2017 Premier by creating a backup file or using Intuit Data Protect. To create a backup file, go to the File menu and click Back Up Company. Then choose Create Local Backup or Create Online Backup depending on where you want to save your backup file. To use Intuit Data Protect, go to the File menu and click Back Up Company. Then choose Set Up/Activate Online Backup and follow the steps to sign up for Intuit Data Protect. Intuit Data Protect is a subscription service that automatically backs up your data online every day.
-
Q : How do I contact QuickBooks Desktop 2017 Premier support? A: If you need help or support with QuickBooks Desktop 2017 Premier, you can contact Intuit support by phone, chat, or email. To contact by phone, call 1-800-446-8848 and select the option for QuickBooks Desktop. To contact by chat, go to https://help.quickbooks.intuit.com/en_US/contact and select QuickBooks Desktop. To contact by email, go to https://help.quickbooks.intuit.com/en_US/contact and select Email Us.
-
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Download T-Shirt Design Master Collection with Over 90000 Images.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Download T-Shirt Design Master Collection with Over 90000 Images.md
deleted file mode 100644
index 3ee0e55850247eb83a7af4deec1f64b6d7ba961a..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Download T-Shirt Design Master Collection with Over 90000 Images.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
T-Shirt Design Master Collection Free Download
-
Do you want to create stunning and professional-looking T-Shirts without spending hours on designing or hiring expensive designers? If yes, then you need to check out the T-Shirt Design Master Collection, a bundle of over 2000 ready-made templates, graphics, fonts, and mockups that you can use to create amazing T-Shirts in minutes. In this article, we will show you what is T-Shirt Design Master Collection, why do you need it, how to use it, where to download it for free, and how to customize your T-Shirt designs with it. Let's get started!
-
What is T-Shirt Design Master Collection?
-
T-Shirt Design Master Collection is a collection of over 2000 high-quality and editable T-Shirt design templates, graphics, fonts, and mockups that you can use to create your own unique and original T-Shirts. Whether you want to make T-Shirts for yourself, your friends, your family, your business, or your clients, you will find something that suits your needs and style in this collection. You can use these templates for any purpose, such as personal use, commercial use, print-on-demand, merchandising, branding, marketing, and more.
There are many reasons why you need T-Shirt Design Master Collection for your T-Shirt design projects. Here are some of them:
-
-
It saves you time and money. You don't have to spend hours on designing from scratch or hiring expensive designers. You can simply choose a template that you like, edit it according to your preferences, and print it on your favorite T-Shirt.
-
It gives you variety and creativity. You have access to over 2000 different templates, graphics, fonts, and mockups that you can mix and match to create endless combinations of designs. You can also customize them as much as you want to make them unique and original.
-
It helps you stand out from the crowd. You can create T-Shirts that reflect your personality, style, message, or brand. You can also make T-Shirts that are relevant to your niche, audience, or occasion. You can impress your friends, family, customers, or clients with your awesome T-Shirts.
-
-
How to use T-Shirt Design Master Collection?
-
Using T-Shirt Design Master Collection is very easy and fun. All you need is a computer with Adobe Photoshop or Illustrator installed (or any other software that can open PSD or AI files), a printer, and some blank T-Shirts. Here are the steps to follow:
-
-
Download the T-Shirt Design Master Collection from one of the sources that we will mention below.
-
Unzip the files and browse through the folders to find the templates that you like.
-
Open the template file in Photoshop or Illustrator and edit it as you wish. You can change the text, graphics, colors, fonts, layout, etc.
-
Save the file as a PNG image with a transparent background (JPG does not support transparency). For one way to automate this export step, see the short script after this list.
-
Print the image on transfer paper using an inkjet printer.
-
Cut out the image and iron it on your blank T-Shirt following the instructions on the transfer paper.
-
Enjoy your new custom-made T-Shirt!
-
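If you prefer to batch-export your edited templates instead of flattening them by hand, the small sketch below shows one possible way to do it. It assumes the open-source psd-tools and Pillow packages are installed; the function name and file paths are placeholders for your own templates, not part of the collection itself.

```python
# Flatten an edited PSD template to a transparent PNG, ready for transfer printing.
# Assumption: pip install psd-tools pillow. The file names below are placeholders.
from psd_tools import PSDImage

def export_template(psd_path: str, png_path: str, dpi: int = 300) -> None:
    psd = PSDImage.open(psd_path)            # load the layered template
    image = psd.composite().convert("RGBA")  # flatten visible layers, keep the alpha channel
    image.save(png_path, dpi=(dpi, dpi))     # PNG preserves transparency; JPG would not

if __name__ == "__main__":
    export_template("birthday-template.psd", "birthday-shirt.png")
```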
-
Where to download T-Shirt Design Master Collection for free?
-
You might be wondering where you can get this amazing collection of T-Shirt design templates for free. Well, there are some websites that offer free downloads of some or all of the templates in this collection. Here are some of them:
-
Freepik.com
-
Freepik.com is one of the most popular websites for free graphic resources, including vectors, photos, icons, and PSD files. You can find hundreds of free T-Shirt design templates on this website, covering various themes and styles. You can download them as PSD or AI files and edit them in Photoshop or Illustrator. You can use them for personal and commercial projects, but you have to credit the author or purchase a premium subscription to remove the attribution.
-
The Vector Lab
-
The Vector Lab is a website that specializes in T-Shirt design templates, graphics, fonts, and mockups. It offers a T-Shirt Design Master Collection that contains over 2000 templates, graphics, fonts, and mockups that you can use to create awesome T-Shirts. You can download the entire collection for $99 or get a free sample of 40 templates by signing up for their newsletter.
-
Other sources
-
There are many other websites that offer free or paid T-Shirt design templates that you can use for your projects. Some of them are:
-
-
GraphicRiver: A marketplace for premium graphic resources, including T-Shirt design templates. You can buy them individually or get unlimited access with a subscription.
-
TeePublic: A platform for independent artists to sell their T-Shirt designs. You can browse through thousands of designs and buy them as T-Shirts or download them as PNG files.
-
Pinterest: A social media network for sharing and discovering creative ideas. You can find many T-Shirt design templates on Pinterest, but you have to check the source and the license before using them.
-
-
How to customize your T-Shirt designs with T-Shirt Design Master Collection?
-
Once you have downloaded the T-Shirt Design Master Collection or any of the templates from the sources mentioned above, you can customize them to make them your own. Here are some tips on how to do that:
-
Choose a template
-
The first step is to choose a template that matches your purpose, niche, audience, or occasion. For example, if you want to make a T-Shirt for a birthday party, you can choose a template that has a birthday theme, such as balloons, candles, cake, etc. If you want to make a T-Shirt for a fitness brand, you can choose a template that has a fitness theme, such as weights, muscles, slogans, etc.
-
Edit the text and graphics
-
The next step is to edit the text and graphics on the template to suit your message, style, or brand. You can change the words, phrases, slogans, names, dates, etc. to make them relevant and catchy. You can also change the graphics, such as icons, shapes, images, etc. to make them fit your theme and vision.
-
Change the colors and fonts
-
The final step is to change the colors and fonts on the template to make them appealing and attractive. You can use colors that match your mood, personality, or brand identity. You can also use colors that contrast or complement each other to create harmony or contrast. You can also change the fonts on the template to make them readable and expressive. You can use fonts that match your tone, voice, or genre.
-
Save and print your design
-
After you have customized your T-Shirt design template, save it as a PNG image with a transparent background (JPG does not preserve transparency). Then print it on transfer paper using an inkjet printer and iron it onto your blank T-Shirt, following the instructions on the transfer paper.
-
Conclusion
-
Summary of the main points
-
In this article, we have shown you how to create stunning and professional-looking T-Shirts using the T-Shirt Design Master Collection, a bundle of over 2000 ready-made templates, graphics, fonts, and mockups that you can use to create amazing T-Shirts in minutes. We have also explained what the T-Shirt Design Master Collection is, why you need it, how to use it, where to download it for free, and how to customize your T-Shirt designs with it. We hope that you have found this article helpful and informative.
-
Call to action
-
If you are ready to start making your own awesome T-Shirts with T-Shirt Design Master Collection, don't wait any longer. Download the collection today and unleash your creativity. You will be amazed by the results and the feedback that you will get from your friends, family, customers, or clients. You can also share your T-Shirt designs with us on social media using the hashtag #TShirtDesignMasterCollection. We would love to see your creations and feature them on our website. Thank you for reading and happy designing!
-
FAQs
-
Here are some of the frequently asked questions about T-Shirt Design Master Collection:
-
-
What software do I need to use T-Shirt Design Master Collection?
-You need software that can open and edit PSD or AI files, such as Adobe Photoshop or Illustrator. You can also use other software that is compatible with these file formats, such as GIMP, Inkscape, or CorelDRAW.
-
What kind of printer do I need to print my T-Shirt designs?
-You need an inkjet printer that can print on transfer paper. You can use any brand or model of inkjet printer, as long as it has good quality and resolution.
-
What kind of transfer paper do I need to print my T-Shirt designs?
-You need a transfer paper that is suitable for your type of T-Shirt fabric. There are different types of transfer paper for cotton, polyester, or blend fabrics. You can find them online or in local craft stores.
-
How long does it take to make a T-Shirt with T-Shirt Design Master Collection?
-It depends on how much time you spend on customizing your template and printing your design. But generally, it takes less than an hour to make a T-Shirt with T-Shirt Design Master Collection.
-
Can I sell my T-Shirts that I make with T-Shirt Design Master Collection?
-Yes, you can sell your T-Shirts that you make with T-Shirt Design Master Collection for personal or commercial use. However, you have to make sure that you follow the terms and conditions of the source that you downloaded the templates from. Some sources may require you to credit the author or purchase a license to use their templates for commercial purposes.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Online Jigsaw Puzzles No Download A Great Hobby for Everyone.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Online Jigsaw Puzzles No Download A Great Hobby for Everyone.md
deleted file mode 100644
index 73b0d5fb3c05ec68a8e2560c2147377ef9cd2981..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Online Jigsaw Puzzles No Download A Great Hobby for Everyone.md
+++ /dev/null
@@ -1,256 +0,0 @@
-
-
-
-
-
Best Free Online Jigsaw Puzzles No Download
-
Do you love solving jigsaw puzzles but don't have enough space or time to set up a physical one? Or do you want to try something new and challenging without spending any money? If so, you might want to check out some of the best free online jigsaw puzzles no download required.
-
Online jigsaw puzzles are digital versions of the classic game that you can play on your computer or mobile device. They are fun and relaxing activities that can help you improve your memory, concentration, and creativity. They can also reduce stress, boost your mood, and keep your brain healthy.
There are many websites that offer free online jigsaw puzzles no download needed. You can choose from a variety of themes, images, and difficulty levels. You can also create your own puzzles from photos or URLs, or play with other people online.
-
In this article, we will review some of the best free online jigsaw puzzles no download required. We will compare their features, pros, and cons, and help you find the best one for you. Let's get started!
-
Jigsaw Explorer
-
Jigsaw Explorer is one of the most popular and user-friendly websites for online jigsaw puzzles. It has a clean and simple interface that lets you enjoy the puzzles without any distractions. It is also ad-free, so you don't have to worry about annoying pop-ups or banners.
-
-
Features
-
Some of the features of Jigsaw Explorer are:
-
-
You can choose from hundreds of high-quality images in various categories such as animals, nature, art, landmarks, etc.
-
You can adjust the number of pieces from 6 to 1000, and the shape and size of the pieces.
-
You can create your own custom puzzles from photos or URLs, and share them with others.
-
You can play a new puzzle every day with the daily puzzle feature.
-
You can use the full-screen mode, the magnifying glass tool, the edge-only option, and the ghost image option to enhance your experience.
-
-
Pros
-
Some of the pros of Jigsaw Explorer are:
-
-
It is easy to use and navigate.
-
It has high-quality images and smooth animations.
-
It has no ads or registration required.
-
It has a daily puzzle feature that keeps you entertained.
-
It has a custom puzzle feature that lets you create your own puzzles.
-
-
Cons
-
Some of the cons of Jigsaw Explorer are:
-
-
It has limited categories and themes to choose from.
-
It has no multiplayer mode or leaderboards to compete with others.
-
It has no difficulty settings or hints to help you solve the puzzles.
-
JigZone
-
JigZone is another website that offers free online jigsaw puzzles no download required. It has a different approach to the game, as it allows you to choose from different shapes and sizes of pieces, such as triangles, stars, hearts, etc. It also has a puzzle of the day feature that you can subscribe to.
-
Features
-
Some of the features of JigZone are:
-
-
You can choose from thousands of images in various categories such as animals, art, flowers, holidays, etc.
-
You can adjust the number and shape of the pieces from 6 to 247, and the style and rotation of the pieces.
-
You can create your own custom puzzles from photos or URLs, and share them with others.
-
You can play a new puzzle every day with the puzzle of the day feature.
-
You can use the challenge mode to compete with other players and see who can solve the puzzles faster.
-
-
Pros
-
Some of the pros of JigZone are:
-
-
It has a variety of puzzles and shapes to choose from.
-
It has a challenge mode that adds some excitement and competition to the game.
-
It has a puzzle of the day feature that keeps you updated.
-
It has a custom puzzle feature that lets you create your own puzzles.
-
-
Cons
-
Some of the cons of JigZone are:
-
-
It has an outdated design and interface that might not appeal to some users.
-
It has ads that might interfere with your enjoyment.
-
It has no difficulty settings or hints to help you solve the puzzles.
-
Jigsaw Planet
-
Jigsaw Planet is another website that offers free online jigsaw puzzles no download required. It has a modern and sleek interface that makes the game more enjoyable. It also has a user-generated content feature that allows you to browse and play millions of puzzles created by other users.
-
Features
-
Some of the features of Jigsaw Planet are:
-
-
You can choose from millions of images in various categories and tags such as animals, nature, movies, cartoons, etc.
-
You can adjust the number of pieces from 4 to 300, and the shape and rotation of the pieces.
-
You can create your own custom puzzles from photos or URLs, and share them with others.
-
You can rate, comment, and bookmark the puzzles you like or dislike.
-
You can join the community and follow other users or groups.
-
-
Pros
-
Some of the pros of Jigsaw Planet are:
-
-
It has a modern and sleek interface that is easy to use and navigate.
-
It has millions of puzzles to choose from, with different categories and tags.
-
It has a user-generated content feature that lets you create and play your own puzzles or others' puzzles.
-
It has a rating, commenting, and bookmarking system that lets you express your opinion and save your favorites.
-
It has a community feature that lets you interact with other users or groups.
-
-
Cons
-
Some of the cons of Jigsaw Planet are:
-
-
It has a variable quality of puzzles, depending on the source and creator of the images.
-
It has no difficulty settings or hints to help you solve the puzzles.
-
It has no multiplayer mode or leaderboards to compete with others.
-
Puzzle Garage
-
Puzzle Garage is another website that offers free online jigsaw puzzles no download required. It has a colorful and attractive interface that makes the game more appealing. It also has a multiplayer mode that allows you to play with your friends or strangers online.
-
Features
-
Some of the features of Puzzle Garage are:
-
-
You can choose from hundreds of images in various collections and albums such as animals, nature, art, food, etc.
-
You can adjust the number of pieces from 12 to 288, and the shape and rotation of the pieces.
-
You can create your own custom puzzles from photos or URLs, and share them with others.
-
You can play with other people online in the multiplayer mode, and chat with them.
-
You can use the hints and preview options to help you solve the puzzles.
-
-
Pros
-
Some of the pros of Puzzle Garage are:
-
-
It has a colorful and attractive interface that is fun and easy to use.
-
It has beautiful images and smooth animations.
-
It has a multiplayer mode that lets you play and chat with other people online.
-
It has hints and preview options that let you see the whole image or a part of it.
-
It has a custom puzzle feature that lets you create your own puzzles.
-
-
Cons
-
Some of the cons of Puzzle Garage are:
-
-
It has ads that might distract you from the game.
-
It has a limited number of pieces per puzzle, which might not be challenging enough for some users.
-
It has no difficulty settings or categories to filter the puzzles.
-
-
Just Jigsaw Puzzles
-
Just Jigsaw Puzzles is another website that offers free online jigsaw puzzles no download required. It has an advanced search option that lets you find the perfect puzzle for you based on keywords, categories, colors, sizes, etc. It also has a custom puzzle feature that lets you create your own puzzles from photos or URLs.
-
Features
-
Some of the features of Just Jigsaw Puzzles are:
-
-
You can choose from thousands of images in various categories such as animals, nature, art, holidays, etc.
-
You can adjust the number of pieces from 12 to 850, and the shape and rotation of the pieces.
-
You can create your own custom puzzles from photos or URLs, and share them with others.
-
You can use the advanced search option to find the best puzzle for you based on keywords, categories, colors, sizes, etc.
-
You can use the adjustable difficulty levels to make the puzzles easier or harder.
-
-
Pros
-
Some of the pros of Just Jigsaw Puzzles are:
-
-
It has thousands of puzzles to choose from, with different categories and themes.
-
It has an advanced search option that lets you find the perfect puzzle for you.
-
It has adjustable difficulty levels that let you customize your experience.
-
It has a custom puzzle feature that lets you create your own puzzles.
-
-
Cons
-
Some of the cons of Just Jigsaw Puzzles are:
-
-
It has ads that might annoy you or slow down your device.
-
It has an outdated design and interface that might not appeal to some users.
-
It has no multiplayer mode or leaderboards to compete with others.
-
Comparison Table
-
To help you compare the five websites we reviewed, we have created a table that summarizes their main features, pros, and cons. You can use this table to find the best free online jigsaw puzzles no download required for you.
-
| Website | Number of Puzzles | Categories | Difficulty Levels | Multiplayer Mode | Ads |
| --- | --- | --- | --- | --- | --- |
| Jigsaw Explorer | Hundreds | Limited | No | No | No |
| JigZone | Thousands | Various | No | Yes (challenge mode) | Yes |
| Jigsaw Planet | Millions | Various (user-generated) | No | No | No |
| Puzzle Garage | Hundreds | Various (collections and albums) | No | Yes (multiplayer mode) | Yes |
| Just Jigsaw Puzzles | Thousands | Various | Yes (adjustable difficulty levels) | No | Yes |
-
Conclusion
-
We have reviewed some of the best free online jigsaw puzzles no download required. We have compared their features, pros, and cons, and provided a comparison table to help you find the best one for you.
-
Online jigsaw puzzles are fun and relaxing activities that can help you improve your memory, concentration, and creativity. They can also reduce stress, boost your mood, and keep your brain healthy.
-
The best website for online jigsaw puzzles depends on your personal preference and needs. You might want to consider factors such as the number and quality of puzzles, the categories and themes, the difficulty levels, the multiplayer mode, the ads, and the custom puzzle feature.
-
Based on our review, we recommend Jigsaw Explorer as the best website for online jigsaw puzzles no download required. It has a clean and simple interface, high-quality images, no ads, a daily puzzle feature, and a custom puzzle feature. It is easy to use and navigate, and it offers a smooth and enjoyable experience.
-
However, you might also like JigZone if you want to try different shapes and sizes of pieces, or challenge other players. You might also like Jigsaw Planet if you want to browse and play millions of user-generated puzzles, or rate, comment, and bookmark them. You might also like Puzzle Garage if you want to play with your friends or strangers online, or use hints and preview options. You might also like Just Jigsaw Puzzles if you want to use the advanced search option, or adjust the difficulty levels.
-
Ultimately, the choice is yours. You can try out different websites and see which one suits you best. You can also switch between them depending on your mood or interest. The most important thing is to have fun and relax with online jigsaw puzzles.
-
FAQs
-
Here are some frequently asked questions and answers about online jigsaw puzzles:
-
Q: What are the benefits of online jigsaw puzzles?
-
A: Online jigsaw puzzles are beneficial for your mental health and well-being. They can help you improve your memory, concentration, and creativity. They can also reduce stress, boost your mood, and keep your brain healthy.
-
Q: How do I create my own custom puzzles from photos or URLs?
-
A: Most of the websites we reviewed have a custom puzzle feature that lets you create your own puzzles from photos or URLs. You just need to upload a photo from your device or paste a URL from the internet, and then adjust the number and shape of the pieces. You can also share your custom puzzles with others.
-
Q: How do I play with other people online?
-
A: Some of the websites we reviewed have a multiplayer mode that lets you play with other people online. You just need to join a room or create your own room, and then invite your friends or strangers to play with you. You can also chat with them while playing.
-
Q: How do I find the best puzzle for me?
-
A: You can find the best puzzle for you based on your preference and needs. You can choose from different categories and themes such as animals, nature, art, etc. You can also choose from different difficulty levels such as easy, medium, hard, etc. You can also use the advanced search option to find puzzles based on keywords, colors, sizes, etc.
-
Q: How do I solve online jigsaw puzzles?
-
A: Solving online jigsaw puzzles is similar to solving physical ones. You just need to drag and drop the pieces to their correct places until you complete the whole image. You can also use some tools to help you solve the puzzles such as full-screen mode, magnifying glass tool, edge-only option, ghost image option, hints option, preview option, etc.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/skf15963/summary/fengshen/data/megatron_dataloader/dataset_utils.py b/spaces/skf15963/summary/fengshen/data/megatron_dataloader/dataset_utils.py
deleted file mode 100644
index 9b579751573ff8ddf94882c032d4ed6cc168ba07..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/data/megatron_dataloader/dataset_utils.py
+++ /dev/null
@@ -1,788 +0,0 @@
-# coding=utf-8
-# Copyright 2018 The Google AI Language Team Authors, and NVIDIA.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-# Most of the code here has been copied from:
-# https://github.com/google-research/albert/blob/master/create_pretraining_data.py
-# with some modifications.
-
-import math
-import time
-import collections
-
-import numpy as np
-import re
-
-from fengshen.data.megatron_dataloader.utils import (
- print_rank_0
-)
-from fengshen.data.megatron_dataloader.blendable_dataset import BlendableDataset
-from fengshen.data.megatron_dataloader.indexed_dataset import make_dataset as make_indexed_dataset
-
-DSET_TYPE_BERT = 'standard_bert'
-DSET_TYPE_ICT = 'ict'
-DSET_TYPE_T5 = 't5'
-DSET_TYPE_BERT_CN_WWM = 'bert_cn_wwm'
-DSET_TYPE_BART = 'bart'
-DSET_TYPE_COCOLM = 'coco_lm'
-
-DSET_TYPES = [DSET_TYPE_BERT, DSET_TYPE_ICT,
- DSET_TYPE_T5, DSET_TYPE_BERT_CN_WWM,
- DSET_TYPE_BART, DSET_TYPE_COCOLM]
-
-
-def get_datasets_weights_and_num_samples(data_prefix,
- train_valid_test_num_samples):
-
- # The data prefix should be in the format of:
- # weight-1, data-prefix-1, weight-2, data-prefix-2, ..
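- # e.g. data_prefix = ["0.3", "corpusA", "0.7", "corpusB"]
- # -> prefixes = ["corpusA", "corpusB"], weights = [0.3, 0.7] (illustrative values)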
- assert len(data_prefix) % 2 == 0
- num_datasets = len(data_prefix) // 2
- weights = [0] * num_datasets
- prefixes = [0] * num_datasets
- for i in range(num_datasets):
- weights[i] = float(data_prefix[2 * i])
- prefixes[i] = (data_prefix[2 * i + 1]).strip()
- # Normalize weights
- weight_sum = 0.0
- for weight in weights:
- weight_sum += weight
- assert weight_sum > 0.0
- weights = [weight / weight_sum for weight in weights]
-
- # Add 0.5% (the 1.005 factor) so in case the blending dataset does
- # not uniformly distribute the number of samples, we still have
- # samples left to feed to the network.
- datasets_train_valid_test_num_samples = []
- for weight in weights:
- datasets_train_valid_test_num_samples.append(
- [int(math.ceil(val * weight * 1.005))
- for val in train_valid_test_num_samples])
-
- return prefixes, weights, datasets_train_valid_test_num_samples
-
-
-def compile_helper():
- """Compile helper function ar runtime. Make sure this
- is invoked on a single process."""
- import os
- import subprocess
- path = os.path.abspath(os.path.dirname(__file__))
- ret = subprocess.run(['make', '-C', path])
- if ret.returncode != 0:
- print("Making C++ dataset helpers module failed, exiting.")
- import sys
- sys.exit(1)
-
-
-def get_a_and_b_segments(sample, np_rng):
- """Divide sample into a and b segments."""
-
- # Number of sentences in the sample.
- n_sentences = len(sample)
- # Make sure we always have two sentences.
- assert n_sentences > 1, 'make sure each sample has at least two sentences.'
-
- # First part:
- # `a_end` is how many sentences go into the `A`.
- a_end = 1
- if n_sentences >= 3:
- # Note that randint in numpy excludes the upper bound.
- a_end = np_rng.randint(1, n_sentences)
- tokens_a = []
- for j in range(a_end):
- tokens_a.extend(sample[j])
-
- # Second part:
- tokens_b = []
- for j in range(a_end, n_sentences):
- tokens_b.extend(sample[j])
-
- # Random next:
- is_next_random = False
- if np_rng.random() < 0.5:
- is_next_random = True
- tokens_a, tokens_b = tokens_b, tokens_a
-
- return tokens_a, tokens_b, is_next_random
-
-
-def truncate_segments(tokens_a, tokens_b, len_a, len_b, max_num_tokens, np_rng):
- """Truncates a pair of sequences to a maximum sequence length."""
- # print(len_a, len_b, max_num_tokens)
- assert len_a > 0
- if len_a + len_b <= max_num_tokens:
- return False
- while len_a + len_b > max_num_tokens:
- if len_a > len_b:
- len_a -= 1
- tokens = tokens_a
- else:
- len_b -= 1
- tokens = tokens_b
- if np_rng.random() < 0.5:
- del tokens[0]
- else:
- tokens.pop()
- return True
-
-
-def create_tokens_and_tokentypes(tokens_a, tokens_b, cls_id, sep_id):
- """Merge segments A and B, add [CLS] and [SEP] and build tokentypes."""
-
- tokens = []
- tokentypes = []
- # [CLS].
- tokens.append(cls_id)
- tokentypes.append(0)
- # Segment A.
- for token in tokens_a:
- tokens.append(token)
- tokentypes.append(0)
- # [SEP].
- tokens.append(sep_id)
- tokentypes.append(0)
- # Segment B.
- for token in tokens_b:
- tokens.append(token)
- tokentypes.append(1)
- if tokens_b:
- # [SEP].
- tokens.append(sep_id)
- tokentypes.append(1)
-
- return tokens, tokentypes
-
-
-MaskedLmInstance = collections.namedtuple("MaskedLmInstance",
- ["index", "label"])
-
-
-def is_start_piece(piece):
- """Check if the current word piece is the starting piece (BERT)."""
- # When a word has been split into
- # WordPieces, the first token does not have any marker and any subsequent
- # tokens are prefixed with ##. So whenever we see the ## token, we
- # append it to the previous set of word indexes.
- return not piece.startswith("##")
-
-
-def create_masked_lm_predictions(tokens,
- vocab_id_list, vocab_id_to_token_dict,
- masked_lm_prob,
- cls_id, sep_id, mask_id,
- max_predictions_per_seq,
- np_rng,
- tokenizer,
- max_ngrams=3,
- do_whole_word_mask=True,
- favor_longer_ngram=False,
- do_permutation=False,
- geometric_dist=False,
- masking_style="bert",
- zh_tokenizer=None):
- """Creates the predictions for the masked LM objective.
- Note: Tokens here are vocab ids and not text tokens."""
-
- cand_indexes = []
- # Note(mingdachen): We create a list for recording if the piece is
- # the starting piece of current token, where 1 means true, so that
- # on-the-fly whole word masking is possible.
- token_boundary = [0] * len(tokens)
-
- # If no Chinese word segmenter is given, fall back to the plain ## WordPiece rule
- if zh_tokenizer is None:
- for (i, token) in enumerate(tokens):
- if token == cls_id or token == sep_id:
- token_boundary[i] = 1
- continue
- # Whole Word Masking means that we mask all of the wordpieces
- # corresponding to an original word.
- #
- # Note that Whole Word Masking does *not* change the training code
- # at all -- we still predict each WordPiece independently, softmaxed
- # over the entire vocabulary.
- if (do_whole_word_mask and len(cand_indexes) >= 1 and
- not is_start_piece(vocab_id_to_token_dict[token])):
- cand_indexes[-1].append(i)
- else:
- cand_indexes.append([i])
- if is_start_piece(vocab_id_to_token_dict[token]):
- token_boundary[i] = 1
- else:
- # If a Chinese word segmenter is given, segment the text first and then decide word boundaries
- # Recover the raw text with the CLS and SEP tokens removed
- raw_tokens = []
- for t in tokens:
- if t != cls_id and t != sep_id:
- raw_tokens.append(t)
- raw_tokens = [vocab_id_to_token_dict[i] for i in raw_tokens]
- # Segment the text and record, for each starting character, the length of the longest word beginning with it
- word_list = set(zh_tokenizer(''.join(raw_tokens), HMM=True))
- word_length_dict = {}
- for w in word_list:
- if len(w) < 1:
- continue
- if w[0] not in word_length_dict:
- word_length_dict[w[0]] = len(w)
- elif word_length_dict[w[0]] < len(w):
- word_length_dict[w[0]] = len(w)
- i = 0
- # Look up candidate words from the word list
- while i < len(tokens):
- token_id = tokens[i]
- token = vocab_id_to_token_dict[token_id]
- if len(token) == 0 or token_id == cls_id or token_id == sep_id:
- token_boundary[i] = 1
- i += 1
- continue
- word_max_length = 1
- if token[0] in word_length_dict:
- word_max_length = word_length_dict[token[0]]
- j = 0
- word = ''
- word_end = i+1
- # Keep compatibility with the old ## style: if the following pieces start with ##, merge them into the current word
- old_style = False
- while word_end < len(tokens) and vocab_id_to_token_dict[tokens[word_end]].startswith('##'):
- old_style = True
- word_end += 1
- if not old_style:
- while j < word_max_length and i+j < len(tokens):
- cur_token = tokens[i+j]
- word += vocab_id_to_token_dict[cur_token]
- j += 1
- if word in word_list:
- word_end = i+j
- cand_indexes.append([p for p in range(i, word_end)])
- token_boundary[i] = 1
- i = word_end
-
- output_tokens = list(tokens)
- # add by ganruyi
- if masking_style == 'bert-cn-wwm':
- # if non chinese is False, that means it is chinese
- # then try to remove "##" which is added previously
- new_token_ids = []
- for token_id in output_tokens:
- token = tokenizer.convert_ids_to_tokens([token_id])[0]
- if len(re.findall('##[\u4E00-\u9FA5]', token)) > 0:
- token = token[2:]
- new_token_id = tokenizer.convert_tokens_to_ids([token])[
- 0]
- new_token_ids.append(new_token_id)
- output_tokens = new_token_ids
-
- masked_lm_positions = []
- masked_lm_labels = []
-
- if masked_lm_prob == 0:
- return (output_tokens, masked_lm_positions,
- masked_lm_labels, token_boundary)
-
- num_to_predict = min(max_predictions_per_seq,
- max(1, int(round(len(tokens) * masked_lm_prob))))
-
- ngrams = np.arange(1, max_ngrams + 1, dtype=np.int64)
- if not geometric_dist:
- # Note(mingdachen):
- # By default, we set the probabilities to favor shorter ngram sequences.
- pvals = 1. / np.arange(1, max_ngrams + 1)
- pvals /= pvals.sum(keepdims=True)
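- # e.g. max_ngrams=3 -> pvals ~ [0.545, 0.273, 0.182], so shorter ngrams are sampled more often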
- if favor_longer_ngram:
- pvals = pvals[::-1]
- # Build the ngram indexes: for each word, record the candidate ngrams that start at it
- ngram_indexes = []
- for idx in range(len(cand_indexes)):
- ngram_index = []
- for n in ngrams:
- ngram_index.append(cand_indexes[idx:idx + n])
- ngram_indexes.append(ngram_index)
-
- np_rng.shuffle(ngram_indexes)
-
- (masked_lms, masked_spans) = ([], [])
- covered_indexes = set()
- for cand_index_set in ngram_indexes:
- if len(masked_lms) >= num_to_predict:
- break
- if not cand_index_set:
- continue
- # Note(mingdachen):
- # Skip current piece if they are covered in lm masking or previous ngrams.
- for index_set in cand_index_set[0]:
- for index in index_set:
- if index in covered_indexes:
- continue
-
- if not geometric_dist:
- n = np_rng.choice(ngrams[:len(cand_index_set)],
- p=pvals[:len(cand_index_set)] /
- pvals[:len(cand_index_set)].sum(keepdims=True))
- else:
- # Sampling "n" from the geometric distribution and clipping it to
- # the max_ngrams. Using p=0.2 default from the SpanBERT paper
- # https://arxiv.org/pdf/1907.10529.pdf (Sec 3.1)
- n = min(np_rng.geometric(0.2), max_ngrams)
-
- index_set = sum(cand_index_set[n - 1], [])
- n -= 1
- # Note(mingdachen):
- # Repeatedly looking for a candidate that does not exceed the
- # maximum number of predictions by trying shorter ngrams.
- while len(masked_lms) + len(index_set) > num_to_predict:
- if n == 0:
- break
- index_set = sum(cand_index_set[n - 1], [])
- n -= 1
- # If adding a whole-word mask would exceed the maximum number of
- # predictions, then just skip this candidate.
- if len(masked_lms) + len(index_set) > num_to_predict:
- continue
- is_any_index_covered = False
- for index in index_set:
- if index in covered_indexes:
- is_any_index_covered = True
- break
- if is_any_index_covered:
- continue
- for index in index_set:
- covered_indexes.add(index)
- masked_token = None
- if masking_style == "bert":
- # 80% of the time, replace with [MASK]
- if np_rng.random() < 0.8:
- masked_token = mask_id
- else:
- # 10% of the time, keep original
- if np_rng.random() < 0.5:
- masked_token = tokens[index]
- # 10% of the time, replace with random word
- else:
- masked_token = vocab_id_list[np_rng.randint(0, len(vocab_id_list))]
- elif masking_style == 'bert-cn-wwm':
- # 80% of the time, replace with [MASK]
- if np_rng.random() < 0.8:
- masked_token = mask_id
- else:
- # 10% of the time, keep original
- if np_rng.random() < 0.5:
- # For Chinese whole-word masking, strip the ## prefix from the token
- token_id = tokens[index]
- token = tokenizer.convert_ids_to_tokens([token_id])[
- 0]
- if len(re.findall('##[\u4E00-\u9FA5]', token)) > 0:
- token = token[2:]
- new_token_id = tokenizer.convert_tokens_to_ids([token])[
- 0]
- masked_token = new_token_id
- # 10% of the time, replace with random word
- else:
- masked_token = vocab_id_list[np_rng.randint(
- 0, len(vocab_id_list))]
- elif masking_style == "t5":
- masked_token = mask_id
- else:
- raise ValueError("invalid value of masking style")
-
- output_tokens[index] = masked_token
- masked_lms.append(MaskedLmInstance(
- index=index, label=tokens[index]))
-
- masked_spans.append(MaskedLmInstance(
- index=index_set,
- label=[tokens[index] for index in index_set]))
-
- assert len(masked_lms) <= num_to_predict
- np_rng.shuffle(ngram_indexes)
-
- select_indexes = set()
- if do_permutation:
- for cand_index_set in ngram_indexes:
- if len(select_indexes) >= num_to_predict:
- break
- if not cand_index_set:
- continue
- # Note(mingdachen):
- # Skip current piece if they are covered in lm masking or previous ngrams.
- for index_set in cand_index_set[0]:
- for index in index_set:
- if index in covered_indexes or index in select_indexes:
- continue
-
- n = np.random.choice(ngrams[:len(cand_index_set)],
- p=pvals[:len(cand_index_set)] /
- pvals[:len(cand_index_set)].sum(keepdims=True))
- index_set = sum(cand_index_set[n - 1], [])
- n -= 1
-
- while len(select_indexes) + len(index_set) > num_to_predict:
- if n == 0:
- break
- index_set = sum(cand_index_set[n - 1], [])
- n -= 1
- # If adding a whole-word mask would exceed the maximum number of
- # predictions, then just skip this candidate.
- if len(select_indexes) + len(index_set) > num_to_predict:
- continue
- is_any_index_covered = False
- for index in index_set:
- if index in covered_indexes or index in select_indexes:
- is_any_index_covered = True
- break
- if is_any_index_covered:
- continue
- for index in index_set:
- select_indexes.add(index)
- assert len(select_indexes) <= num_to_predict
-
- select_indexes = sorted(select_indexes)
- permute_indexes = list(select_indexes)
- np_rng.shuffle(permute_indexes)
- orig_token = list(output_tokens)
-
- for src_i, tgt_i in zip(select_indexes, permute_indexes):
- output_tokens[src_i] = orig_token[tgt_i]
- masked_lms.append(MaskedLmInstance(
- index=src_i, label=orig_token[src_i]))
-
- masked_lms = sorted(masked_lms, key=lambda x: x.index)
- # Sort the spans by the index of the first span
- masked_spans = sorted(masked_spans, key=lambda x: x.index[0])
-
- for p in masked_lms:
- masked_lm_positions.append(p.index)
- masked_lm_labels.append(p.label)
- return (output_tokens, masked_lm_positions, masked_lm_labels, token_boundary, masked_spans)
-
-
-def pad_and_convert_to_numpy(tokens, tokentypes, masked_positions,
- masked_labels, pad_id, max_seq_length):
- """Pad sequences and convert them to numpy."""
-
- # Some checks.
- num_tokens = len(tokens)
- padding_length = max_seq_length - num_tokens
- assert padding_length >= 0
- assert len(tokentypes) == num_tokens
- assert len(masked_positions) == len(masked_labels)
-
- # Tokens and token types.
- filler = [pad_id] * padding_length
- tokens_np = np.array(tokens + filler, dtype=np.int64)
- tokentypes_np = np.array(tokentypes + filler, dtype=np.int64)
-
- # Padding mask.
- padding_mask_np = np.array([1] * num_tokens + [0] * padding_length,
- dtype=np.int64)
-
- # Labels and loss mask.
- labels = [-1] * max_seq_length
- loss_mask = [0] * max_seq_length
- for i in range(len(masked_positions)):
- assert masked_positions[i] < num_tokens
- labels[masked_positions[i]] = masked_labels[i]
- loss_mask[masked_positions[i]] = 1
- labels_np = np.array(labels, dtype=np.int64)
- loss_mask_np = np.array(loss_mask, dtype=np.int64)
-
- return tokens_np, tokentypes_np, labels_np, padding_mask_np, loss_mask_np
-
-
-def build_train_valid_test_datasets(data_prefix, data_impl, splits_string,
- train_valid_test_num_samples,
- max_seq_length,
- masked_lm_prob, short_seq_prob, seed,
- tokenizer,
- skip_warmup, binary_head=False,
- max_seq_length_dec=None,
- dataset_type='standard_bert',
- zh_tokenizer=None,
- span=None):
-
- if len(data_prefix) == 1:
- return _build_train_valid_test_datasets(data_prefix[0],
- data_impl, splits_string,
- train_valid_test_num_samples,
- max_seq_length, masked_lm_prob,
- short_seq_prob, seed,
- skip_warmup,
- binary_head,
- max_seq_length_dec,
- tokenizer,
- dataset_type=dataset_type,
- zh_tokenizer=zh_tokenizer,
- span=span)
- # Blending dataset.
- # Parse the values.
- output = get_datasets_weights_and_num_samples(data_prefix,
- train_valid_test_num_samples)
- prefixes, weights, datasets_train_valid_test_num_samples = output
-
- # Build individual datasets.
- train_datasets = []
- valid_datasets = []
- test_datasets = []
- for i in range(len(prefixes)):
- train_ds, valid_ds, test_ds = _build_train_valid_test_datasets(
- prefixes[i], data_impl, splits_string,
- datasets_train_valid_test_num_samples[i],
- max_seq_length, masked_lm_prob, short_seq_prob,
- seed, skip_warmup, binary_head, max_seq_length_dec,
- tokenizer, dataset_type=dataset_type, zh_tokenizer=zh_tokenizer)
- if train_ds:
- train_datasets.append(train_ds)
- if valid_ds:
- valid_datasets.append(valid_ds)
- if test_ds:
- test_datasets.append(test_ds)
-
- # Blend.
- blending_train_dataset = None
- if train_datasets:
- blending_train_dataset = BlendableDataset(train_datasets, weights)
- blending_valid_dataset = None
- if valid_datasets:
- blending_valid_dataset = BlendableDataset(valid_datasets, weights)
- blending_test_dataset = None
- if test_datasets:
- blending_test_dataset = BlendableDataset(test_datasets, weights)
-
- return (blending_train_dataset, blending_valid_dataset,
- blending_test_dataset)
-
-
-def _build_train_valid_test_datasets(data_prefix, data_impl, splits_string,
- train_valid_test_num_samples,
- max_seq_length,
- masked_lm_prob, short_seq_prob, seed,
- skip_warmup, binary_head,
- max_seq_length_dec,
- tokenizer,
- dataset_type='standard_bert',
- zh_tokenizer=None,
- span=None):
-
- if dataset_type not in DSET_TYPES:
- raise ValueError("Invalid dataset_type: ", dataset_type)
-
- # Indexed dataset.
- indexed_dataset = get_indexed_dataset_(data_prefix,
- data_impl,
- skip_warmup)
-
- # Get start and end indices of train/valid/test into doc-idx
- # Note that doc-idx is designed to be num-docs + 1 so we can
- # easily iterate over it.
- total_num_of_documents = indexed_dataset.doc_idx.shape[0] - 1
- splits = get_train_valid_test_split_(splits_string, total_num_of_documents)
-
- # Print stats about the splits.
- print_rank_0(' > dataset split:')
-
- def print_split_stats(name, index):
- print_rank_0(' {}:'.format(name))
- print_rank_0(' document indices in [{}, {}) total of {} '
- 'documents'.format(splits[index], splits[index + 1],
- splits[index + 1] - splits[index]))
- start_index = indexed_dataset.doc_idx[splits[index]]
- end_index = indexed_dataset.doc_idx[splits[index + 1]]
- print_rank_0(' sentence indices in [{}, {}) total of {} '
- 'sentences'.format(start_index, end_index,
- end_index - start_index))
- print_split_stats('train', 0)
- print_split_stats('validation', 1)
- print_split_stats('test', 2)
-
- def build_dataset(index, name):
- from fengshen.data.megatron_dataloader.bert_dataset import BertDataset
- from fengshen.data.megatron_dataloader.bart_dataset import BartDataset
- from fengshen.data.megatron_dataloader.cocolm_dataset import COCOLMDataset
- dataset = None
- if splits[index + 1] > splits[index]:
- # Get the pointer to the original doc-idx so we can set it later.
- doc_idx_ptr = indexed_dataset.get_doc_idx()
- # Slice the doc-idx
- start_index = splits[index]
- # Add +1 so we can index into the dataset to get the upper bound.
- end_index = splits[index + 1] + 1
- # New doc_idx view.
- indexed_dataset.set_doc_idx(doc_idx_ptr[start_index:end_index])
- # Build the dataset accordingly.
- kwargs = dict(
- name=name,
- data_prefix=data_prefix,
- num_epochs=None,
- max_num_samples=train_valid_test_num_samples[index],
- max_seq_length=max_seq_length,
- seed=seed,
- )
-
- if dataset_type == DSET_TYPE_BERT or dataset_type == DSET_TYPE_BERT_CN_WWM:
- dataset = BertDataset(
- indexed_dataset=indexed_dataset,
- masked_lm_prob=masked_lm_prob,
- short_seq_prob=short_seq_prob,
- binary_head=binary_head,
- # Extra arguments to distinguish bert from bert-cn-wwm
- tokenizer=tokenizer,
- masking_style='bert' if dataset_type == DSET_TYPE_BERT else 'bert-cn-wwm',
- **kwargs
- )
- elif dataset_type == DSET_TYPE_BART:
- dataset = BartDataset(
- indexed_dataset=indexed_dataset,
- masked_lm_prob=masked_lm_prob,
- short_seq_prob=short_seq_prob,
- tokenizer=tokenizer,
- zh_tokenizer=zh_tokenizer,
- **kwargs
- )
- elif dataset_type == DSET_TYPE_COCOLM:
- dataset = COCOLMDataset(
- indexed_dataset=indexed_dataset,
- masked_lm_prob=masked_lm_prob,
- short_seq_prob=short_seq_prob,
- tokenizer=tokenizer,
- masking_style='bert',
- span=span,
- **kwargs
- )
- else:
- raise NotImplementedError(
- "Dataset type not fully implemented.")
-
- # Set the original pointer so dataset remains the main dataset.
- indexed_dataset.set_doc_idx(doc_idx_ptr)
- # Checks.
- assert indexed_dataset.doc_idx[0] == 0
- assert indexed_dataset.doc_idx.shape[0] == \
- (total_num_of_documents + 1)
- return dataset
-
- train_dataset = build_dataset(0, 'train')
- valid_dataset = build_dataset(1, 'valid')
- test_dataset = build_dataset(2, 'test')
-
- return (train_dataset, valid_dataset, test_dataset)
-
-
-def get_indexed_dataset_(data_prefix, data_impl, skip_warmup):
-
- print_rank_0(' > building dataset index ...')
-
- start_time = time.time()
- indexed_dataset = make_indexed_dataset(data_prefix,
- data_impl,
- skip_warmup)
- assert indexed_dataset.sizes.shape[0] == indexed_dataset.doc_idx[-1]
- print_rank_0(' > finished creating indexed dataset in {:4f} '
- 'seconds'.format(time.time() - start_time))
-
- print_rank_0(' > indexed dataset stats:')
- print_rank_0(' number of documents: {}'.format(
- indexed_dataset.doc_idx.shape[0] - 1))
- print_rank_0(' number of sentences: {}'.format(
- indexed_dataset.sizes.shape[0]))
-
- return indexed_dataset
-
-
-def get_train_valid_test_split_(splits_string, size):
- """ Get dataset splits from comma or '/' separated string list."""
-
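- # e.g. splits_string="949,50,1" with size=1000 -> [0, 949, 999, 1000] (illustrative values)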
- splits = []
- if splits_string.find(',') != -1:
- splits = [float(s) for s in splits_string.split(',')]
- elif splits_string.find('/') != -1:
- splits = [float(s) for s in splits_string.split('/')]
- else:
- splits = [float(splits_string)]
- while len(splits) < 3:
- splits.append(0.)
- splits = splits[:3]
- splits_sum = sum(splits)
- assert splits_sum > 0.0
- splits = [split / splits_sum for split in splits]
- splits_index = [0]
- for index, split in enumerate(splits):
- splits_index.append(splits_index[index] +
- int(round(split * float(size))))
- diff = splits_index[-1] - size
- for index in range(1, len(splits_index)):
- splits_index[index] -= diff
- assert len(splits_index) == 4
- assert splits_index[-1] == size
- return splits_index
-
-
-def get_samples_mapping(indexed_dataset,
- data_prefix,
- num_epochs,
- max_num_samples,
- max_seq_length,
- short_seq_prob,
- seed,
- name,
- binary_head):
- """Get a list that maps a sample index to a starting
- sentence index, end sentence index, and length"""
-
- if not num_epochs:
- if not max_num_samples:
- raise ValueError("Need to specify either max_num_samples "
- "or num_epochs")
- num_epochs = np.iinfo(np.int32).max - 1
- if not max_num_samples:
- max_num_samples = np.iinfo(np.int64).max - 1
-
- # Filename of the index mapping
- indexmap_filename = data_prefix
- indexmap_filename += '_{}_indexmap'.format(name)
- if num_epochs != (np.iinfo(np.int32).max - 1):
- indexmap_filename += '_{}ep'.format(num_epochs)
- if max_num_samples != (np.iinfo(np.int64).max - 1):
- indexmap_filename += '_{}mns'.format(max_num_samples)
- indexmap_filename += '_{}msl'.format(max_seq_length)
- indexmap_filename += '_{:0.2f}ssp'.format(short_seq_prob)
- indexmap_filename += '_{}s'.format(seed)
- indexmap_filename += '.npy'
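- # e.g. data_prefix='corpus', name='train', max_num_samples=1000, max_seq_length=512,
- # short_seq_prob=0.1, seed=1234 -> 'corpus_train_indexmap_1000mns_512msl_0.10ssp_1234s.npy'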
-
- # This should be a barrier but nccl barrier assumes
- # device_index=rank which is not the case for model
- # parallel case
- # ganruyi comment
- # counts = torch.cuda.LongTensor([1])
- # torch.distributed.all_reduce(
- # counts, group=mpu.get_data_parallel_group())
- # torch.distributed.all_reduce(
- # counts, group=mpu.get_pipeline_model_parallel_group())
- # assert counts[0].item() == (
- # torch.distributed.get_world_size() //
- # torch.distributed.get_world_size(
- # group=mpu.get_tensor_model_parallel_group()))
-
- # Load indexed dataset.
- print_rank_0(' > loading indexed mapping from {}'.format(
- indexmap_filename))
- start_time = time.time()
- samples_mapping = np.load(
- indexmap_filename, allow_pickle=True, mmap_mode='r')
- print_rank_0(' loaded indexed file in {:3.3f} seconds'.format(
- time.time() - start_time))
- print_rank_0(' total number of samples: {}'.format(
- samples_mapping.shape[0]))
-
- return samples_mapping
diff --git a/spaces/skf15963/summary/fengshen/examples/pegasus/pretrain_pegasus.py b/spaces/skf15963/summary/fengshen/examples/pegasus/pretrain_pegasus.py
deleted file mode 100644
index 0059355f5d5bf6d149e01fc3dc15d3a760932733..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/pegasus/pretrain_pegasus.py
+++ /dev/null
@@ -1,181 +0,0 @@
-# -*- coding: utf-8 -*-
-
-
-from fengshen.models.model_utils import add_module_args
-from transformers import PegasusForConditionalGeneration, PegasusConfig
-from pytorch_lightning import Trainer, loggers, LightningModule
-from pytorch_lightning.callbacks import LearningRateMonitor
-from tokenizers_pegasus import PegasusTokenizer
-from utils import UniversalCheckpoint
-from data.universal_datamodule import UniversalDataModule
-from data_utils import (
- get_input_mask, pseudo_summary_f1, shift_tokens_right,
- padding_to_maxlength, load_stopwords, text_segmentate)
-import argparse
-import torch
-import os
-import sys
-
-sys.path.append('../../')
-
-
-# os.environ["CUDA_VISIBLE_DEVICES"] = '6'
-
-
-class FakeAbstractCollator:
-
- def __init__(self, tokenizer, stopwords_dict, max_enc_length):
- self.tokenizer = tokenizer
- self.max_seq_length = max_enc_length
- self.stopwords_dict = stopwords_dict
-
- def __call__(self, samples):
- # print("samples: ", samples)
- labels = []
- attn_mask = []
- decoder_attn_mask = []
- source_inputs = []
-
- for text in samples:
- texts = text["chunks"]
- text = text_segmentate(texts)
- sentence_id_vec, source, target, source_idxs, target_idxs = pseudo_summary_f1(
- text, self.stopwords_dict, self.tokenizer, self.max_seq_length,
- "rouge-l")
- source_idxs, target_idxs = get_input_mask(sentence_id_vec,
- target_idxs)
- if len(source_idxs) > self.max_seq_length:
- if 2 not in source_idxs[self.max_seq_length - 1:]:
- source_idxs = source_idxs[:self.max_seq_length]
- source_idxs[-1] = self.tokenizer.eos_token_id
- sys.stderr.write("Warning split long line: " + source +
- "\n")
- else:
- continue
-
- source_idxs, attention_mask = padding_to_maxlength(
- source_idxs, self.max_seq_length, self.tokenizer.pad_token_id)
- label, target_attention_mask = padding_to_maxlength(
- target_idxs, self.max_seq_length, self.tokenizer.pad_token_id)
- # print("sample len: ", len(source_idxs))
- source_inputs.append(source_idxs)
- attn_mask.append(attention_mask)
- decoder_attn_mask.append(target_attention_mask)
- labels.append(label)
- labels = torch.tensor(labels)
- decode_input_idxs = shift_tokens_right(labels,
- self.tokenizer.pad_token_id,
- self.tokenizer.pad_token_id)
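- # Mask out every label position after the EOS token with -100 so the cross-entropy loss ignores padding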
- end_token_index = torch.where(labels == self.tokenizer.eos_token_id)[1]
- for idx, end_idx in enumerate(end_token_index):
- labels[idx][end_idx + 1:] = -100
-
- # print("call samples: ")
- return {
- "input_ids": torch.tensor(source_inputs),
- "attention_mask": torch.tensor(attn_mask),
- "labels": labels,
- "decoder_input_ids": decode_input_idxs,
- "decoder_attention_mask": torch.tensor(decoder_attn_mask)
- }
-
-
-class PegasusChineseModel(LightningModule):
-
- def __init__(self, args, **kwargs):
- super().__init__()
- self.args = args
- self.save_hyperparameters(args)
- config = PegasusConfig.from_json_file(
- os.path.join(args.model_path, "config.json"))
- print("vocab_size: ", config.vocab_size)
- self.model = PegasusForConditionalGeneration(config=config)
- print("model.num_parameters: ", self.model.num_parameters())
-
- def setup(self, stage) -> None:
- if stage == 'fit':
- train_loader = self.trainer._data_connector._train_dataloader_source.dataloader(
- )
-
- # Calculate total steps
- tb_size = self.hparams.train_batchsize * max(1, self.trainer.gpus)
- ab_size = self.trainer.accumulate_grad_batches * float(
- self.trainer.max_epochs)
- self.total_steps = (len(train_loader.dataset) //
- tb_size) // ab_size
- print('Total training step:', self.total_steps)
-
- def configure_optimizers(self):
- from fengshen.models.model_utils import configure_optimizers
- return configure_optimizers(self)
-
- def training_step(self, batch, batch_idx):
- output = self.model(**batch)
- self.log('train_loss', output.loss, sync_dist=True)
- return output.loss
-
- def comput_metrix(self, logits, labels):
- y_pred = torch.argmax(logits, dim=-1)
- y_pred = y_pred.view(size=(-1, ))
- y_true = labels.view(size=(-1, )).float()
- corr = torch.eq(y_pred, y_true)
- acc = torch.sum(corr.float()) / labels.size()[0]
- return acc
-
- def validation_step(self, batch, batch_idx):
- output = self.model(**batch)
- acc = self.comput_metrix(output.logits, batch['labels'])
- self.log('val_loss', output.loss, sync_dist=True)
- self.log('val_acc', acc, sync_dist=True)
-
- def on_save_checkpoint(self, checkpoint) -> None:
- if self.trainer._accelerator_connector.cluster_environment.global_rank(
- ) == 0:
- self.model.save_pretrained(
- os.path.join(
- self.trainer.checkpoint_callback.dirpath,
- 'hf_pretrained_epoch{}_step{}'.format(
- checkpoint['epoch'], checkpoint['global_step'])))
-
-
-def main():
- args_parser = argparse.ArgumentParser("Pegasus Task")
-
- args_parser = UniversalDataModule.add_data_specific_args(args_parser)
- args_parser = Trainer.add_argparse_args(args_parser)
- args_parser = UniversalCheckpoint.add_argparse_args(args_parser)
- args_parser = add_module_args(args_parser)
- args_parser.add_argument('--deepspeed')
- args_parser.add_argument(
- '--stopword_path',
- default="/cognitive_comp/dongxiaoqun/project/pegasus/own/pegasus/stopwords",
- type=str)
- args_parser.add_argument('--max_seq_length', default=1024, type=int)
- args = args_parser.parse_args()
-
- tokenizer = PegasusTokenizer.from_pretrained(args.model_path)
- stopwords_dict = load_stopwords(args.stopword_path)
- collator = FakeAbstractCollator(tokenizer, stopwords_dict,
- args.max_seq_length)
- data_module = UniversalDataModule(tokenizer=tokenizer,
- args=args,
- collate_fn=collator)
- module = PegasusChineseModel(args)
- lr_monitor = LearningRateMonitor(logging_interval='step')
- logger = loggers.TensorBoardLogger(
- save_dir=os.path.join(args.default_root_dir, 'logs/'),
- name=os.path.basename(os.path.dirname(args.model_path)))
- checkpoint_callback = UniversalCheckpoint(args).callbacks
-
- # autotuning
- if args.deepspeed is not None:
- os.environ['PL_DEEPSPEED_CONFIG_PATH'] = args.deepspeed
-
- trainer = Trainer.from_argparse_args(
- args, logger=logger, callbacks=[lr_monitor, checkpoint_callback])
-
- trainer.fit(module, data_module)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/skf15963/summary/fengshen/examples/pretrain_t5/pretrain_t5.py b/spaces/skf15963/summary/fengshen/examples/pretrain_t5/pretrain_t5.py
deleted file mode 100644
index 7a95bc8781ca5f4e0fa3ef0cb1eea98e5d4abbe6..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/pretrain_t5/pretrain_t5.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import time
-from builtins import print
-import sys
-import os
-import torch
-import argparse
-import json
-import pytorch_lightning as pl
-from transformers import MT5Config, MT5Tokenizer
-from pytorch_lightning import Trainer, loggers
-from transformers import MT5ForConditionalGeneration
-from pytorch_lightning.callbacks import LearningRateMonitor
-# os.environ["CUDA_VISIBLE_DEVICES"] = '3'
-
-
-class MT5PretrainModel(pl.LightningModule):
-
- @staticmethod
- def add_model_specific_args(parent_args):
- parser = parent_args.add_argument_group('BaseModel')
- parser.add_argument('--keep_tokens_path', default=None, type=str)
- return parent_args
-
- def __init__(self, args):
- super().__init__()
- self.save_hyperparameters(args)
- if args.tokenizer_type == 't5_tokenizer':
- if args.new_vocab_path is not None:
- # For continuing training from mT5: keep only the Chinese and English vocabulary and use a new sentencepiece model
- assert args.keep_tokens_path is not None
- keep_tokens = json.load(open(args.keep_tokens_path))
- self.model = MT5ForConditionalGeneration.from_pretrained(
- args.pretrained_model_path)
- new_config = self.model.config
- new_config.vocab_size = len(keep_tokens)
- print('vocab_size:', new_config.vocab_size)
-
- new_state_dict = self.model.state_dict()
- select_index = torch.tensor(keep_tokens)
- new_state_dict['encoder.embed_tokens.weight'] = torch.index_select(
- new_state_dict['encoder.embed_tokens.weight'], dim=0, index=select_index)
- new_state_dict['shared.weight'] = torch.index_select(
- new_state_dict['shared.weight'], dim=0, index=select_index)
- new_state_dict['decoder.embed_tokens.weight'] = torch.index_select(
- new_state_dict['decoder.embed_tokens.weight'], dim=0, index=select_index)
- new_state_dict['lm_head.weight'] = torch.index_select(
- new_state_dict['lm_head.weight'], dim=0, index=select_index)
- self.model = MT5ForConditionalGeneration.from_pretrained(
- args.pretrained_model_path, config=new_config, state_dict=new_state_dict)
- # self.model = MT5ForConditionalGeneration(config=new_config)
- else:
-                # For continued training from the pretrained checkpoint
- self.model = MT5ForConditionalGeneration.from_pretrained(
- args.pretrained_model_path
- )
- else:
- self.model = MT5ForConditionalGeneration(
- MT5Config.from_pretrained(args.pretrained_model_path)
- )
-
- def setup(self, stage) -> None:
- if stage == 'fit':
- train_loader = self.trainer._data_connector._train_dataloader_source.dataloader()
-
- # Calculate total steps
- if self.trainer.max_epochs > 0:
- world_size = self.trainer.world_size
- tb_size = self.hparams.train_batchsize * max(1, world_size)
- ab_size = self.trainer.accumulate_grad_batches * float(self.trainer.max_epochs)
- self.total_steps = (len(train_loader.dataset) *
- self.trainer.max_epochs // tb_size) // ab_size
- else:
- self.total_steps = self.trainer.max_steps // self.trainer.accumulate_grad_batches
-
- print('Total steps: {}' .format(self.total_steps))
-
- def configure_optimizers(self):
- from fengshen.models.model_utils import configure_optimizers
- return configure_optimizers(self)
-
- def training_step(self, batch, batch_idx):
- output = self.model(
- input_ids=batch['input_ids'], labels=batch['labels'])
- acc = self.comput_metrix(output.logits, batch['labels'])
- self.log('train_loss', output.loss, sync_dist=True)
- self.log('train_acc', acc, sync_dist=True)
- return output.loss
-
- def validation_step(self, batch, batch_idx):
- # print('is out of index: ', batch['input_ids'][batch['input_ids'] >= 32598])
- output = self.model(
- input_ids=batch['input_ids'], labels=batch['labels'])
- acc = self.comput_metrix(output.logits, batch['labels'])
- self.log('val_loss', output.loss, sync_dist=True)
- self.log('val_acc', acc, sync_dist=True)
-
- def comput_metrix(self, logits, labels):
- y_pred = torch.argmax(logits, dim=-1)
- y_pred = y_pred.view(size=(-1,))
- y_true = labels.view(size=(-1,)).float()
- corr = torch.eq(y_pred, y_true)
- acc = torch.sum(corr.float())/y_true.shape[0]
- return acc
-
- def on_save_checkpoint(self, checkpoint) -> None:
-        # Save the current loop state when checkpointing in the middle of an epoch.
-        # If your pytorch-lightning version is <= 1.6.0, uncomment the line below.
- # checkpoint['loops'] = self.trainer.checkpoint_connector._get_loops_state_dict()
- if self.trainer.global_rank == 0 and self.trainer.global_step % self.hparams.every_n_train_steps == 0:
- self.model.save_pretrained(os.path.join(
- self.trainer.checkpoint_callback.dirpath,
- 'hf_pretrained_epoch{}_step{}'.format(self.trainer.current_epoch, self.trainer.global_step)))
-
- def on_load_checkpoint(self, checkpoint) -> None:
- global_step_offset = checkpoint["global_step"]
- if 'global_samples' in checkpoint:
- self.consumed_samples = checkpoint['global_samples']
- self.trainer.fit_loop.epoch_loop._batches_that_stepped = global_step_offset
-
-
-def get_time_str():
- return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
-
-
-def main():
- total_parser = argparse.ArgumentParser("Pretrain Unsupervise.")
- total_parser.add_argument(
- '--do_eval_only', action='store_true', default=False)
- total_parser.add_argument(
- '--pretrained_model_path', default=None, type=str)
- total_parser.add_argument(
- '--new_vocab_path', default=None, type=str)
- total_parser.add_argument('--max_seq_length', default=1024, type=int)
- total_parser.add_argument('--ckpt_path', default=None, type=str)
- sys.path.append('../../../')
- from fengshen.data.t5_dataloader.t5_datasets import UnsuperviseT5DataModel
- from fengshen.utils.universal_checkpoint import UniversalCheckpoint
- # * Args for data preprocessing
- total_parser = UnsuperviseT5DataModel.add_data_specific_args(total_parser)
- # * Args for training
- total_parser = Trainer.add_argparse_args(total_parser)
- total_parser = UniversalCheckpoint.add_argparse_args(total_parser)
- total_parser = MT5PretrainModel.add_model_specific_args(total_parser)
- # * Args for base model
- args = total_parser.parse_args()
- print('Argument parse success.')
- print('UnsuperviseT5DataModel load start {}'.format(get_time_str()))
- data_model = UnsuperviseT5DataModel(args)
- print('UnsuperviseT5DataModel load end {}'.format(get_time_str()))
- if not args.do_eval_only:
- model = MT5PretrainModel(args)
- checkpoint_callback = UniversalCheckpoint(args)
- lr_monitor = LearningRateMonitor(logging_interval='step')
- logger = loggers.TensorBoardLogger(save_dir=os.path.join(
- args.default_root_dir, 'logs/'))
- trainer = Trainer.from_argparse_args(args,
- logger=logger,
- callbacks=[checkpoint_callback, lr_monitor]
- )
- trainer.fit(model, data_model, ckpt_path=args.ckpt_path)
- else:
- tokenizer = MT5Tokenizer.from_pretrained(args.new_vocab_path, extra_ids=0)
-        model = MT5PretrainModel(args=args)
- trainer = Trainer.from_argparse_args(args)
-
- result = trainer.predict(model, data_model)
- result = result[0]
- for i in range(4):
- print(tokenizer.batch_decode(result['input_ids'][i]))
- print(tokenizer.batch_decode(result['predict_ids'][i]))
- print(tokenizer.batch_decode(result['labels'][i]))
-
-
-if __name__ == '__main__':
- main()
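
The vocabulary-pruning branch in MT5PretrainModel.__init__ above keeps only the embedding rows listed in keep_tokens via torch.index_select. A standalone sketch of that operation (the matrix size and the token ids below are invented for illustration):

```python
import torch

# Hypothetical full embedding table (vocab_size x hidden_size) and ids to keep.
embedding = torch.randn(250112, 768)
keep_tokens = [0, 1, 2, 100, 101, 2500]

select_index = torch.tensor(keep_tokens)
pruned = torch.index_select(embedding, dim=0, index=select_index)
print(pruned.shape)  # torch.Size([6, 768])
```

The same index_select is applied above to the encoder and decoder embeddings, the shared embedding, and the LM head so that all four weight matrices stay consistent with the reduced vocabulary.
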
diff --git a/spaces/skf15963/summary/fengshen/examples/pretrain_t5/process_data.py b/spaces/skf15963/summary/fengshen/examples/pretrain_t5/process_data.py
deleted file mode 100644
index bae164f107f7ec3569227f3e40a292ee1641fd21..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/pretrain_t5/process_data.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# coding=utf8
-import argparse
-import sys
-import os
-from concurrent.futures import ProcessPoolExecutor
-
-
-def _generate_cache_arrow(index, ds, path):
- print('saving dataset shard {}'.format(index))
- ds.save_to_disk(os.path.join(path, 'part_{}'.format(index)))
- return 'saving dataset shard {} done'.format(index)
-
-
-def generate_arrow_cache(ds, args) -> None:
-    '''
-    Read raw corpora such as wudao_180g (or their tokenized versions), do a
-    train/test split, shuffle with seed 42, and cache the resulting shards to disk.
-    '''
- ds = ds.train_test_split(train_size=args.train_split_size, seed=42)
- print(ds)
- p = ProcessPoolExecutor(max_workers=args.preprocessing_num_workers)
- res = []
- train_shard_part = args.saved_data_shards
- for i in range(0, train_shard_part):
- res.append(p.submit(_generate_cache_arrow, i,
- ds['train'].shard(train_shard_part, i), args.saved_train_data_path))
-
- p.shutdown(wait=True)
- for future in res:
- print(future.result(), flush=True)
-
- ds['test'].save_to_disk(args.saved_test_data_path)
- print('done')
-
-
-if __name__ == '__main__':
- total_parser = argparse.ArgumentParser("Save data Task")
- total_parser.add_argument(
- '--new_vocab_path', default='/cognitive_comp/ganruyi/hf_models/t5_cn_small/sentencepiece_cn.model', type=str)
- total_parser.add_argument('--preprocessing_num_workers', default=30, type=int)
- total_parser.add_argument(
- '--train_data_path', default='/cognitive_comp/common_data/test_wudao_180g_mt5_tokenized/', type=str)
- total_parser.add_argument('--saved_data_shards', default=800, type=int)
- total_parser.add_argument('--saved_train_data_path', default=None, type=str)
- total_parser.add_argument('--saved_test_data_path', default=None, type=str)
- total_parser.add_argument('--max_seq_length', default=512, type=int)
- total_parser.add_argument('--train_split_size', default=0.999, type=float)
- total_parser.add_argument('--pretrained_model_path', default=None, type=str)
- total_parser.add_argument('--tokenizer_type', default='t5_tokenizer', choices=['t5_tokenizer', 'bert_tokenizer'])
- total_parser.add_argument('--text_column_name', default='text')
- total_parser.add_argument('--remove_columns', nargs='+', default=[])
-
- # * Args for data preprocessing
- args = total_parser.parse_args()
- sys.path.append('../../../')
- from fengshen.data.t5_dataloader.t5_datasets import UnsuperviseT5Dataset
- ds = UnsuperviseT5Dataset(args.train_data_path, args)
- print(ds)
- generate_arrow_cache(ds.data, args=args)
- # ds = UnsuperviseT5Dataset(args.train_data_path, args, load_data_type=0)
- for i in range(0, 2):
- print(ds.data[i])
- print(ds.tokenizer.decode(ds.data[i]['input_ids']))
-
- print(ds.data)
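
generate_arrow_cache above uses the standard Hugging Face `datasets` shard-and-save pattern; here is a small self-contained sketch of the same idea with a toy in-memory dataset, far fewer shards, and hypothetical output paths:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["doc {}".format(i) for i in range(100)]})
splits = ds.train_test_split(train_size=0.9, seed=42)  # shuffled with the given seed

num_shards = 4  # the script above uses 800
for i in range(num_shards):
    shard = splits["train"].shard(num_shards=num_shards, index=i)
    shard.save_to_disk("/tmp/wudao_cache/part_{}".format(i))  # hypothetical path
splits["test"].save_to_disk("/tmp/wudao_cache/test")
```
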
diff --git a/spaces/smdcn/stabilityai-stable-diffusion-2-1-base/README.md b/spaces/smdcn/stabilityai-stable-diffusion-2-1-base/README.md
deleted file mode 100644
index 25f57c07e32ed637c22428620a60fa199d51a9da..0000000000000000000000000000000000000000
--- a/spaces/smdcn/stabilityai-stable-diffusion-2-1-base/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stabilityai Stable Diffusion 2 1 Base
-emoji: 😻
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/latent_depth/latent_depth_src/__init__.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/latent_depth/latent_depth_src/__init__.py
deleted file mode 100644
index c5fa76039ff98c18d3c14b5f4a8f73ffe644de11..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/latent_depth/latent_depth_src/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import multilingual_translation_latent_depth # noqa
-from .loss import latent_depth # noqa
-from .models import latent_multilingual_transformer # noqa
-from .modules import latent_layers # noqa
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/README.md
deleted file mode 100644
index 4a3ae54b857c43621c9fb67ee4b214584beec835..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
-Speech Synthesis (S^2)
-===
-
-Speech synthesis with fairseq.
-
-- Autoregressive and non-autoregressive models
-- Multi-speaker synthesis
-- Audio preprocessing
-- Automatic metrics
-- Similar data configuration as [S2T](../speech_to_text/README.md)
-
-
-## Examples
-- [Single-speaker synthesis on LJSpeech](docs/ljspeech_example.md)
-- [Multi-speaker synthesis on VCTK](docs/vctk_example.md)
-- [Multi-speaker synthesis on Common Voice](docs/common_voice_example.md)
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py
deleted file mode 100644
index a28cd607a096844438f6a3ba6b007d94d67d1bc8..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import csv
-from pathlib import Path
-
-
-def main(args):
- """
- `uid syn ref text`
- """
- in_root = Path(args.generation_root).resolve()
- ext = args.audio_format
- with open(args.audio_manifest) as f, open(args.output_path, "w") as f_out:
- reader = csv.DictReader(
- f, delimiter="\t", quotechar=None, doublequote=False,
- lineterminator="\n", quoting=csv.QUOTE_NONE
- )
- header = ["id", "syn", "ref", "text", "speaker"]
- f_out.write("\t".join(header) + "\n")
- for row in reader:
- dir_name = f"{ext}_{args.sample_rate}hz_{args.vocoder}"
- id_ = row["id"]
- syn = (in_root / dir_name / f"{id_}.{ext}").as_posix()
- ref = row["audio"]
- if args.use_resynthesized_target:
- ref = (in_root / f"{dir_name}_tgt" / f"{id_}.{ext}").as_posix()
- sample = [id_, syn, ref, row["tgt_text"], row["speaker"]]
- f_out.write("\t".join(sample) + "\n")
- print(f"wrote evaluation file to {args.output_path}")
-
-
-if __name__ == "__main__":
- import argparse
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--generation-root", help="output directory for generate_waveform.py"
- )
- parser.add_argument(
- "--audio-manifest",
- help="used to determine the original utterance ID and text"
- )
- parser.add_argument(
- "--output-path", help="path to output evaluation spec file"
- )
- parser.add_argument(
- "--use-resynthesized-target", action="store_true",
- help="use resynthesized reference instead of the original audio"
- )
- parser.add_argument("--vocoder", type=str, default="griffin_lim")
- parser.add_argument("--sample-rate", type=int, default=22_050)
- parser.add_argument("--audio-format", type=str, default="wav")
- args = parser.parse_args()
-
- main(args)
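
The script above emits a tab-separated evaluation manifest with the columns id, syn, ref, text and speaker. A hedged sketch of that output format, using one invented row and invented paths:

```python
header = ["id", "syn", "ref", "text", "speaker"]
rows = [
    ["utt0001", "/out/wav_22050hz_griffin_lim/utt0001.wav",
     "/data/wavs/utt0001.wav", "hello world", "speaker0"],
]
with open("/tmp/eval_manifest.tsv", "w") as f_out:
    f_out.write("\t".join(header) + "\n")
    for row in rows:
        f_out.write("\t".join(row) + "\n")
```
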
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/benchmark/dummy_masked_lm.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/benchmark/dummy_masked_lm.py
deleted file mode 100644
index 12b9c5d0f55993bf8750564882a351fc3f8055f0..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/benchmark/dummy_masked_lm.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from dataclasses import dataclass, field
-from typing import Optional
-
-import torch
-from omegaconf import II
-
-from .dummy_dataset import DummyDataset
-from fairseq.data import Dictionary
-from fairseq.dataclass import FairseqDataclass
-from fairseq.tasks import FairseqTask, register_task
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class DummyMaskedLMConfig(FairseqDataclass):
- dict_size: int = 49996
- dataset_size: int = 100000
- tokens_per_sample: int = field(
- default=512,
- metadata={
- "help": "max number of total tokens over all"
- " segments per sample for BERT dataset"
- },
- )
- batch_size: Optional[int] = II("dataset.batch_size")
- max_tokens: Optional[int] = II("dataset.max_tokens")
- max_target_positions: int = II("task.tokens_per_sample")
-
-
-@register_task("dummy_masked_lm", dataclass=DummyMaskedLMConfig)
-class DummyMaskedLMTask(FairseqTask):
- def __init__(self, cfg: DummyMaskedLMConfig):
- super().__init__(cfg)
-
- self.dictionary = Dictionary()
- for i in range(cfg.dict_size):
- self.dictionary.add_symbol("word{}".format(i))
- logger.info("dictionary: {} types".format(len(self.dictionary)))
- # add mask token
-        self.mask_idx = self.dictionary.add_symbol("<mask>")
- self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8
-
- mask_idx = 0
- pad_idx = 1
- seq = torch.arange(cfg.tokens_per_sample) + pad_idx + 1
- mask = torch.arange(2, cfg.tokens_per_sample, 7) # ~15%
- src = seq.clone()
- src[mask] = mask_idx
- tgt = torch.full_like(seq, pad_idx)
- tgt[mask] = seq[mask]
-
- self.dummy_src = src
- self.dummy_tgt = tgt
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split.
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- if self.cfg.batch_size is not None:
- bsz = self.cfg.batch_size
- else:
- bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample)
- self.datasets[split] = DummyDataset(
- {
- "id": 1,
- "net_input": {
- "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]),
- "src_lengths": torch.full(
- (bsz,), self.cfg.tokens_per_sample, dtype=torch.long
- ),
- },
- "target": torch.stack([self.dummy_tgt for _ in range(bsz)]),
- "nsentences": bsz,
- "ntokens": bsz * self.cfg.tokens_per_sample,
- },
- num_items=self.cfg.dataset_size,
- item_size=self.cfg.tokens_per_sample,
- )
-
- @property
- def source_dictionary(self):
- return self.dictionary
-
- @property
- def target_dictionary(self):
- return self.dictionary
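
The synthetic masking scheme in DummyMaskedLMTask.__init__ above masks every 7th position, which is roughly 15% of tokens. A quick standalone check of what src and tgt look like for a short sequence (the length of 16 is arbitrary):

```python
import torch

tokens_per_sample, mask_idx, pad_idx = 16, 0, 1
seq = torch.arange(tokens_per_sample) + pad_idx + 1  # 2, 3, ..., 17
mask = torch.arange(2, tokens_per_sample, 7)         # positions 2 and 9
src = seq.clone()
src[mask] = mask_idx                                 # masked model input
tgt = torch.full_like(seq, pad_idx)
tgt[mask] = seq[mask]                                # targets only at masked positions
print(src.tolist())  # [2, 3, 0, 5, ..., 0, ..., 17]
print(tgt.tolist())  # [1, 1, 4, 1, ..., 11, ..., 1]
```
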
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/language_pair_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/language_pair_dataset.py
deleted file mode 100644
index ff3e14bf14770638524ef6067b558e455dbe5f2b..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/language_pair_dataset.py
+++ /dev/null
@@ -1,471 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import numpy as np
-import torch
-from fairseq.data import FairseqDataset, data_utils
-
-
-logger = logging.getLogger(__name__)
-
-
-def collate(
- samples,
- pad_idx,
- eos_idx,
- left_pad_source=True,
- left_pad_target=False,
- input_feeding=True,
- pad_to_length=None,
- pad_to_multiple=1,
-):
- if len(samples) == 0:
- return {}
-
- def merge(key, left_pad, move_eos_to_beginning=False, pad_to_length=None):
- return data_utils.collate_tokens(
- [s[key] for s in samples],
- pad_idx,
- eos_idx,
- left_pad,
- move_eos_to_beginning,
- pad_to_length=pad_to_length,
- pad_to_multiple=pad_to_multiple,
- )
-
- def check_alignment(alignment, src_len, tgt_len):
- if alignment is None or len(alignment) == 0:
- return False
- if (
- alignment[:, 0].max().item() >= src_len - 1
- or alignment[:, 1].max().item() >= tgt_len - 1
- ):
- logger.warning("alignment size mismatch found, skipping alignment!")
- return False
- return True
-
- def compute_alignment_weights(alignments):
- """
- Given a tensor of shape [:, 2] containing the source-target indices
- corresponding to the alignments, a weight vector containing the
- inverse frequency of each target index is computed.
- For e.g. if alignments = [[5, 7], [2, 3], [1, 3], [4, 2]], then
- a tensor containing [1., 0.5, 0.5, 1] should be returned (since target
- index 3 is repeated twice)
- """
- align_tgt = alignments[:, 1]
- _, align_tgt_i, align_tgt_c = torch.unique(
- align_tgt, return_inverse=True, return_counts=True
- )
- align_weights = align_tgt_c[align_tgt_i[np.arange(len(align_tgt))]]
- return 1.0 / align_weights.float()
-
- id = torch.LongTensor([s["id"] for s in samples])
- src_tokens = merge(
- "source",
- left_pad=left_pad_source,
- pad_to_length=pad_to_length["source"] if pad_to_length is not None else None,
- )
- # sort by descending source length
- src_lengths = torch.LongTensor(
- [s["source"].ne(pad_idx).long().sum() for s in samples]
- )
- src_lengths, sort_order = src_lengths.sort(descending=True)
- id = id.index_select(0, sort_order)
- src_tokens = src_tokens.index_select(0, sort_order)
-
- prev_output_tokens = None
- target = None
- if samples[0].get("target", None) is not None:
- target = merge(
- "target",
- left_pad=left_pad_target,
- pad_to_length=pad_to_length["target"]
- if pad_to_length is not None
- else None,
- )
- target = target.index_select(0, sort_order)
- tgt_lengths = torch.LongTensor(
- [s["target"].ne(pad_idx).long().sum() for s in samples]
- ).index_select(0, sort_order)
- ntokens = tgt_lengths.sum().item()
-
- if samples[0].get("prev_output_tokens", None) is not None:
- prev_output_tokens = merge("prev_output_tokens", left_pad=left_pad_target)
- elif input_feeding:
- # we create a shifted version of targets for feeding the
- # previous output token(s) into the next decoder step
- prev_output_tokens = merge(
- "target",
- left_pad=left_pad_target,
- move_eos_to_beginning=True,
- pad_to_length=pad_to_length["target"]
- if pad_to_length is not None
- else None,
- )
- else:
- ntokens = src_lengths.sum().item()
-
- batch = {
- "id": id,
- "nsentences": len(samples),
- "ntokens": ntokens,
- "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths,},
- "target": target,
- }
- if prev_output_tokens is not None:
- batch["net_input"]["prev_output_tokens"] = prev_output_tokens.index_select(
- 0, sort_order
- )
-
- if samples[0].get("alignment", None) is not None:
- bsz, tgt_sz = batch["target"].shape
- src_sz = batch["net_input"]["src_tokens"].shape[1]
-
- offsets = torch.zeros((len(sort_order), 2), dtype=torch.long)
- offsets[:, 1] += torch.arange(len(sort_order), dtype=torch.long) * tgt_sz
- if left_pad_source:
- offsets[:, 0] += src_sz - src_lengths
- if left_pad_target:
- offsets[:, 1] += tgt_sz - tgt_lengths
-
- alignments = [
- alignment + offset
- for align_idx, offset, src_len, tgt_len in zip(
- sort_order, offsets, src_lengths, tgt_lengths
- )
- for alignment in [samples[align_idx]["alignment"].view(-1, 2)]
- if check_alignment(alignment, src_len, tgt_len)
- ]
-
- if len(alignments) > 0:
- alignments = torch.cat(alignments, dim=0)
- align_weights = compute_alignment_weights(alignments)
-
- batch["alignments"] = alignments
- batch["align_weights"] = align_weights
-
- if samples[0].get("constraints", None) is not None:
- # Collate the packed constraints across the samples, padding to
- # the length of the longest sample.
- lens = [sample.get("constraints").size(0) for sample in samples]
- max_len = max(lens)
- constraints = torch.zeros((len(samples), max(lens))).long()
- for i, sample in enumerate(samples):
- constraints[i, 0 : lens[i]] = samples[i].get("constraints")
- batch["constraints"] = constraints.index_select(0, sort_order)
-
- return batch
-
-
-class LanguagePairDataset(FairseqDataset):
- """
- A pair of torch.utils.data.Datasets.
-
- Args:
- src (torch.utils.data.Dataset): source dataset to wrap
- src_sizes (List[int]): source sentence lengths
- src_dict (~fairseq.data.Dictionary): source vocabulary
- tgt (torch.utils.data.Dataset, optional): target dataset to wrap
- tgt_sizes (List[int], optional): target sentence lengths
- tgt_dict (~fairseq.data.Dictionary, optional): target vocabulary
- left_pad_source (bool, optional): pad source tensors on the left side
- (default: True).
- left_pad_target (bool, optional): pad target tensors on the left side
- (default: False).
- shuffle (bool, optional): shuffle dataset elements before batching
- (default: True).
- input_feeding (bool, optional): create a shifted version of the targets
- to be passed into the model for teacher forcing (default: True).
- remove_eos_from_source (bool, optional): if set, removes eos from end
- of source if it's present (default: False).
- append_eos_to_target (bool, optional): if set, appends eos to end of
- target if it's absent (default: False).
- align_dataset (torch.utils.data.Dataset, optional): dataset
- containing alignments.
- constraints (Tensor, optional): 2d tensor with a concatenated, zero-
- delimited list of constraints for each sentence.
- append_bos (bool, optional): if set, appends bos to the beginning of
- source/target sentence.
- num_buckets (int, optional): if set to a value greater than 0, then
- batches will be bucketed into the given number of batch shapes.
- src_lang_id (int, optional): source language ID, if set, the collated batch
- will contain a field 'src_lang_id' in 'net_input' which indicates the
- source language of the samples.
- tgt_lang_id (int, optional): target language ID, if set, the collated batch
- will contain a field 'tgt_lang_id' which indicates the target language
- of the samples.
- """
-
- def __init__(
- self,
- src,
- src_sizes,
- src_dict,
- tgt=None,
- tgt_sizes=None,
- tgt_dict=None,
- left_pad_source=True,
- left_pad_target=False,
- shuffle=True,
- input_feeding=True,
- remove_eos_from_source=False,
- append_eos_to_target=False,
- align_dataset=None,
- constraints=None,
- append_bos=False,
- eos=None,
- num_buckets=0,
- src_lang_id=None,
- tgt_lang_id=None,
- pad_to_multiple=1,
- ):
- if tgt_dict is not None:
- assert src_dict.pad() == tgt_dict.pad()
- assert src_dict.eos() == tgt_dict.eos()
- assert src_dict.unk() == tgt_dict.unk()
- if tgt is not None:
- assert len(src) == len(
- tgt
- ), "Source and target must contain the same number of examples"
- self.src = src
- self.tgt = tgt
- self.src_sizes = np.array(src_sizes)
- self.tgt_sizes = np.array(tgt_sizes) if tgt_sizes is not None else None
- self.sizes = (
- np.vstack((self.src_sizes, self.tgt_sizes)).T
- if self.tgt_sizes is not None
- else self.src_sizes
- )
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
- self.left_pad_source = left_pad_source
- self.left_pad_target = left_pad_target
- self.shuffle = shuffle
- self.input_feeding = input_feeding
- self.remove_eos_from_source = remove_eos_from_source
- self.append_eos_to_target = append_eos_to_target
- self.align_dataset = align_dataset
- if self.align_dataset is not None:
- assert (
- self.tgt_sizes is not None
- ), "Both source and target needed when alignments are provided"
- self.constraints = constraints
- self.append_bos = append_bos
- self.eos = eos if eos is not None else src_dict.eos()
- self.src_lang_id = src_lang_id
- self.tgt_lang_id = tgt_lang_id
- if num_buckets > 0:
- from fairseq.data import BucketPadLengthDataset
-
- self.src = BucketPadLengthDataset(
- self.src,
- sizes=self.src_sizes,
- num_buckets=num_buckets,
- pad_idx=self.src_dict.pad(),
- left_pad=self.left_pad_source,
- )
- self.src_sizes = self.src.sizes
- logger.info("bucketing source lengths: {}".format(list(self.src.buckets)))
- if self.tgt is not None:
- self.tgt = BucketPadLengthDataset(
- self.tgt,
- sizes=self.tgt_sizes,
- num_buckets=num_buckets,
- pad_idx=self.tgt_dict.pad(),
- left_pad=self.left_pad_target,
- )
- self.tgt_sizes = self.tgt.sizes
- logger.info(
- "bucketing target lengths: {}".format(list(self.tgt.buckets))
- )
-
- # determine bucket sizes using self.num_tokens, which will return
- # the padded lengths (thanks to BucketPadLengthDataset)
- num_tokens = np.vectorize(self.num_tokens, otypes=[np.compat.long])
- self.bucketed_num_tokens = num_tokens(np.arange(len(self.src)))
- self.buckets = [
- (None, num_tokens) for num_tokens in np.unique(self.bucketed_num_tokens)
- ]
- else:
- self.buckets = None
- self.pad_to_multiple = pad_to_multiple
-
- def get_batch_shapes(self):
- return self.buckets
-
- def __getitem__(self, index):
- tgt_item = self.tgt[index] if self.tgt is not None else None
- src_item = self.src[index]
- # Append EOS to end of tgt sentence if it does not have an EOS and remove
-        # EOS from end of src sentence if it exists. This is useful when we
-        # use existing datasets for opposite directions, i.e., when we want to
- # use tgt_dataset as src_dataset and vice versa
- if self.append_eos_to_target:
- eos = self.tgt_dict.eos() if self.tgt_dict else self.src_dict.eos()
- if self.tgt and self.tgt[index][-1] != eos:
- tgt_item = torch.cat([self.tgt[index], torch.LongTensor([eos])])
-
- if self.append_bos:
- bos = self.tgt_dict.bos() if self.tgt_dict else self.src_dict.bos()
- if self.tgt and self.tgt[index][0] != bos:
- tgt_item = torch.cat([torch.LongTensor([bos]), self.tgt[index]])
-
- bos = self.src_dict.bos()
- if self.src[index][0] != bos:
- src_item = torch.cat([torch.LongTensor([bos]), self.src[index]])
-
- if self.remove_eos_from_source:
- eos = self.src_dict.eos()
- if self.src[index][-1] == eos:
- src_item = self.src[index][:-1]
-
- example = {
- "id": index,
- "source": src_item,
- "target": tgt_item,
- }
- if self.align_dataset is not None:
- example["alignment"] = self.align_dataset[index]
- if self.constraints is not None:
- example["constraints"] = self.constraints[index]
- return example
-
- def __len__(self):
- return len(self.src)
-
- def collater(self, samples, pad_to_length=None):
- """Merge a list of samples to form a mini-batch.
-
- Args:
- samples (List[dict]): samples to collate
- pad_to_length (dict, optional): a dictionary of
- {'source': source_pad_to_length, 'target': target_pad_to_length}
- to indicate the max length to pad to in source and target respectively.
-
- Returns:
- dict: a mini-batch with the following keys:
-
- - `id` (LongTensor): example IDs in the original input order
- - `ntokens` (int): total number of tokens in the batch
- - `net_input` (dict): the input to the Model, containing keys:
-
- - `src_tokens` (LongTensor): a padded 2D Tensor of tokens in
- the source sentence of shape `(bsz, src_len)`. Padding will
- appear on the left if *left_pad_source* is ``True``.
- - `src_lengths` (LongTensor): 1D Tensor of the unpadded
- lengths of each source sentence of shape `(bsz)`
- - `prev_output_tokens` (LongTensor): a padded 2D Tensor of
- tokens in the target sentence, shifted right by one
- position for teacher forcing, of shape `(bsz, tgt_len)`.
- This key will not be present if *input_feeding* is
- ``False``. Padding will appear on the left if
- *left_pad_target* is ``True``.
- - `src_lang_id` (LongTensor): a long Tensor which contains source
- language IDs of each sample in the batch
-
- - `target` (LongTensor): a padded 2D Tensor of tokens in the
- target sentence of shape `(bsz, tgt_len)`. Padding will appear
- on the left if *left_pad_target* is ``True``.
- - `tgt_lang_id` (LongTensor): a long Tensor which contains target language
- IDs of each sample in the batch
- """
- res = collate(
- samples,
- pad_idx=self.src_dict.pad(),
- eos_idx=self.eos,
- left_pad_source=self.left_pad_source,
- left_pad_target=self.left_pad_target,
- input_feeding=self.input_feeding,
- pad_to_length=pad_to_length,
- pad_to_multiple=self.pad_to_multiple,
- )
- if self.src_lang_id is not None or self.tgt_lang_id is not None:
- src_tokens = res["net_input"]["src_tokens"]
- bsz = src_tokens.size(0)
- if self.src_lang_id is not None:
- res["net_input"]["src_lang_id"] = (
- torch.LongTensor([[self.src_lang_id]]).expand(bsz, 1).to(src_tokens)
- )
- if self.tgt_lang_id is not None:
- res["tgt_lang_id"] = (
- torch.LongTensor([[self.tgt_lang_id]]).expand(bsz, 1).to(src_tokens)
- )
- return res
-
- def num_tokens(self, index):
- """Return the number of tokens in a sample. This value is used to
- enforce ``--max-tokens`` during batching."""
- return max(
- self.src_sizes[index],
- self.tgt_sizes[index] if self.tgt_sizes is not None else 0,
- )
-
- def num_tokens_vec(self, indices):
- """Return the number of tokens for a set of positions defined by indices.
- This value is used to enforce ``--max-tokens`` during batching."""
- sizes = self.src_sizes[indices]
- if self.tgt_sizes is not None:
- sizes = np.maximum(sizes, self.tgt_sizes[indices])
- return sizes
-
- def size(self, index):
- """Return an example's size as a float or tuple. This value is used when
- filtering a dataset with ``--max-positions``."""
- return (
- self.src_sizes[index],
- self.tgt_sizes[index] if self.tgt_sizes is not None else 0,
- )
-
- def ordered_indices(self):
- """Return an ordered list of indices. Batches will be constructed based
- on this order."""
- if self.shuffle:
- indices = np.random.permutation(len(self)).astype(np.int64)
- else:
- indices = np.arange(len(self), dtype=np.int64)
- if self.buckets is None:
- # sort by target length, then source length
- if self.tgt_sizes is not None:
- indices = indices[np.argsort(self.tgt_sizes[indices], kind="mergesort")]
- return indices[np.argsort(self.src_sizes[indices], kind="mergesort")]
- else:
- # sort by bucketed_num_tokens, which is:
- # max(padded_src_len, padded_tgt_len)
- return indices[
- np.argsort(self.bucketed_num_tokens[indices], kind="mergesort")
- ]
-
- @property
- def supports_prefetch(self):
- return getattr(self.src, "supports_prefetch", False) and (
- getattr(self.tgt, "supports_prefetch", False) or self.tgt is None
- )
-
- def prefetch(self, indices):
- self.src.prefetch(indices)
- if self.tgt is not None:
- self.tgt.prefetch(indices)
- if self.align_dataset is not None:
- self.align_dataset.prefetch(indices)
-
- def filter_indices_by_size(self, indices, max_sizes):
- """Filter a list of sample indices. Remove those that are longer
- than specified in max_sizes.
-
- Args:
- indices (np.array): original array of sample indices
- max_sizes (int or list[int] or tuple[int]): max sample size,
- can be defined separately for src and tgt (then list or tuple)
-
- Returns:
- np.array: filtered sample array
- list: list of removed indices
- """
- return data_utils.filter_paired_dataset_indices_by_size(
- self.src_sizes, self.tgt_sizes, indices, max_sizes,
- )
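
The compute_alignment_weights helper inside collate above weights each alignment pair by the inverse frequency of its target index. The worked example from its docstring can be checked standalone (this is my own sketch, not fairseq code):

```python
import torch

alignments = torch.tensor([[5, 7], [2, 3], [1, 3], [4, 2]])
align_tgt = alignments[:, 1]
_, align_tgt_i, align_tgt_c = torch.unique(
    align_tgt, return_inverse=True, return_counts=True)
align_weights = 1.0 / align_tgt_c[align_tgt_i].float()
print(align_weights.tolist())  # [1.0, 0.5, 0.5, 1.0]; target index 3 occurs twice
```
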
diff --git a/spaces/stomexserde/gpt4-ui/Examples/13 Ghosts Full Movie Download In Hindi VERIFIED.md b/spaces/stomexserde/gpt4-ui/Examples/13 Ghosts Full Movie Download In Hindi VERIFIED.md
deleted file mode 100644
index 17d1819641dce90efda0fe013400d991d24f3838..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/13 Ghosts Full Movie Download In Hindi VERIFIED.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-# How to Watch 13 Ghosts Full Movie in Hindi Online for Free
-
-13 Ghosts is a 2001 horror film directed by Steve Beck and starring Tony Shalhoub, Embeth Davidtz, Matthew Lillard, Shannon Elizabeth and Rah Digga. The film is a remake of the 1960 film of the same name by William Castle. It follows a family who inherits a haunted house that contains 12 trapped ghosts, each with a gruesome backstory. The family must find a way to escape the house before they become the 13th ghost.
-
-If you are a fan of horror movies and want to watch 13 Ghosts full movie in Hindi online for free, you have come to the right place. In this article, we will show you how to stream or download 13 Ghosts full movie in Hindi with subtitles using some of the best websites available on the internet. However, we do not condone piracy and recommend that you watch the movie legally from official sources.
-
-## Where to Watch 13 Ghosts Full Movie in Hindi Online for Free
-
-There are many websites that claim to offer 13 Ghosts full movie in Hindi online for free, but not all of them are safe or reliable. Some of them may contain viruses, malware, pop-ups, ads or other unwanted content that can harm your device or compromise your privacy. Therefore, you should be careful and use a trusted VPN service to protect your identity and data while browsing these websites.
-
-Here are some of the websites that you can try to watch 13 Ghosts full movie in Hindi online for free:
-
-- Archive.org: This is a non-profit digital library that offers free access to millions of books, movies, music and other media. You can find 13 Ghosts full movie in English on this website and use a subtitle file to watch it in Hindi. You can also download the movie for offline viewing.
-- Todaymovie.org: This is a website that provides links to various streaming platforms where you can watch 13 Ghosts full movie in Hindi dubbed online for free. However, some of the links may not work or require registration or payment. You should also beware of the ads and pop-ups that may redirect you to other websites.
-- Dailymotion.com: This is a video-sharing platform that hosts user-generated and professional content. You can find 13 Ghosts full movie in Hindi dubbed on this website and watch it online for free. However, the video quality may not be very good and the movie may be split into several parts.
-
-## Conclusion
-
-13 Ghosts is a horror movie that will keep you on the edge of your seat with its creepy atmosphere and gory scenes. If you want to watch 13 Ghosts full movie in Hindi online for free, you can try some of the websites mentioned above. However, we advise you to use a VPN service and an ad-blocker to avoid any risks or issues while streaming or downloading the movie. We also encourage you to support the filmmakers and watch the movie legally from official sources.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Accusonus ? ERA Bundle Pro V4.0.0 VST AAX X86 X64 2021.md b/spaces/stomexserde/gpt4-ui/Examples/Accusonus ? ERA Bundle Pro V4.0.0 VST AAX X86 X64 2021.md
deleted file mode 100644
index f789956797dc86a31f53937b88fc26ab4d01d057..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Accusonus ? ERA Bundle Pro V4.0.0 VST AAX X86 X64 2021.md
+++ /dev/null
@@ -1,180 +0,0 @@
-
-
-
-
-
Accusonus – ERA Bundle Pro V4.0.0 VST, AAX X86 X64 : A Review
-
Are you looking for a powerful and easy-to-use tool to enhance and repair your audio files? Do you want to get rid of unwanted noise, reverb, sibilance, breaths, clicks, plosives, and clipping from your recordings? Do you want to improve the quality and clarity of your vocals with automatic equalization and deepening? If you answered yes to any of these questions, then you might be interested in Accusonus – ERA Bundle Pro V4.0.0 VST, AAX X86 X64.
-
Accusonus – ERA Bundle Pro V4.0.0 VST, AAX X86 X64
In this article, I will review this product and tell you everything you need to know about it. I will explain what it is, how to install it, how to use it, what are the pros and cons of it, and what are some alternatives to it. By the end of this article, you will have a clear idea of whether Accusonus – ERA Bundle Pro V4.0.0 is the right product for you or not. So, let's get started!
-
What is Accusonus – ERA Bundle Pro V4.0.0?
-
Accusonus – ERA Bundle Pro V4.0.0 is a collection of 13 high-quality plugins that are designed to help you improve and repair your audio files in a fast and easy way. These plugins are compatible with most digital audio workstations (DAWs) and can be used for various purposes such as podcasting, music production, video editing, voice-over, audio restoration, and more.
-
The plugins included in this bundle are:
-
-
Noise Remover PRO: This plugin allows you to remove unwanted noise from your audio files with a single knob. You can adjust the amount of noise reduction and the type of noise (tonal or broadband) with two additional controls. You can also use the spectral mode to fine-tune the noise removal in specific frequency ranges.
-
Reverb Remover PRO: This plugin allows you to remove unwanted reverb from your audio files with a single knob. You can adjust the amount of reverb reduction and the type of reverb (early or late) with two additional controls. You can also use the spectral mode to fine-tune the reverb removal in specific frequency ranges.
-
Room Tone Match: This plugin allows you to match the room tone of different audio sources with a single button. You can select a reference audio file and apply its room tone to another audio file with a simple drag-and-drop. You can also adjust the amount of room tone matching and the type of room tone (natural or artificial) with two additional controls.
-
De-Esser PRO: This plugin allows you to reduce sibilance from your vocal tracks with a single knob. You can adjust the amount of de-essing and the frequency range of the sibilance with two additional controls. You can also use the spectral mode to fine-tune the de-essing in specific frequency ranges.
-
De-Breath: This plugin allows you to remove breath sounds from your vocal tracks with a single knob. You can adjust the amount of de-breathing and the sensitivity of the breath detection with two additional controls. You can also use the spectral mode to fine-tune the de-breathing in specific frequency ranges.
-
Mouth De-Clicker: This plugin allows you to remove mouth clicks from your vocal tracks with a single knob. You can adjust the amount of de-clicking and the sensitivity of the click detection with two additional controls. You can also use the spectral mode to fine-tune the de-clicking in specific frequency ranges.
-
Voice AutoEQ: This plugin allows you to automatically adjust the equalization of your vocal tracks with a single button. You can select a preset that matches your voice type (male or female) and your genre (speech or singing) and let the plugin do the rest. You can also adjust the amount of auto-equalization and the target curve with two additional controls.
-
Voice Deepener: This plugin allows you to enhance the depth and richness of your vocal tracks with a single knob. You can adjust the amount of voice deepening and the frequency range of the effect with two additional controls. You can also use the spectral mode to fine-tune the voice deepening in specific frequency ranges.
-
Audio Cleanup Assistant: This plugin allows you to combine several ERA Standard plugins into a time-saving tool. You can drag-and-drop up to four plugins into four slots and apply them simultaneously to your audio file with a single button. You can also adjust the order and settings of each plugin individually.
-
Noise Remover: This plugin allows you to remove background noise from your audio files with a single knob. It is similar to Noise Remover PRO but has fewer options and controls.
-
Reverb Remover: This plugin allows you to remove reverb from your audio files with a single knob. It is similar to Reverb Remover PRO but has fewer options and controls.
-
Voice Leveler: This plugin allows you to level out inconsistent vocal levels with a single knob. You can adjust the amount of voice leveling and the style of the leveling (tight or natural) with two additional controls.
-
De-Clipper: This plugin allows you to repair clipped portions of your audio files with a single knob. You can adjust the amount of de-clipping and the sensitivity of the clip detection with two additional controls.
-
De-Esser: This plugin allows you to reduce sibilance from your vocal tracks with a single knob. It is similar to De-Esser PRO but has fewer options and controls.
-
Plosive Remover: This plugin allows you to remove plosives from your vocal tracks with a single knob. You can adjust the amount of plosive removal and the sensitivity of the plosive detection with two additional controls.
-
-
As you can see, Accusonus – ERA Bundle Pro V4.0.0 offers a comprehensive and versatile solution for audio repair and enhancement. Whether you need to fix noisy, reverberant, sibilant, breathy, clicky, plosive, or clipped audio files, or you want to improve the tone, depth, and balance of your vocals, this bundle has you covered.
-
How to install Accusonus – ERA Bundle Pro V4.0.0?
-
Installing Accusonus – ERA Bundle Pro V4.0.0 is very easy and straightforward. You just need to follow these simple steps:
-
-
-
Go to the official website of Accusonus and create an account or log in if you already have one.
-
Purchase the ERA Bundle Pro V4.0.0 or start a free trial if you want to test it first.
-
Download the installer for your operating system (Windows or Mac) from your account page.
-
Run the installer and follow the instructions on the screen.
-
Select the plugins that you want to install and the formats that you want to use (VST, AAX, etc.).
-
Choose the destination folder for the plugins and click install.
-
Wait for the installation to finish and close the installer.
-
Launch your DAW and scan for new plugins if needed.
-
Enjoy using Accusonus – ERA Bundle Pro V4.0.0!
-
-
How to use Accusonus – ERA Bundle Pro V4.0.0?
-
Using Accusonus – ERA Bundle Pro V4.0.0 is also very easy and intuitive. You just need to follow these general steps:
-
-
Open your DAW and load an audio file that you want to process.
-
Add one or more plugins from the ERA Bundle Pro V4.0.0 to your audio track or bus.
-
Adjust the settings of each plugin according to your needs and preferences.
-
Listen to the results and tweak them as needed.
-
Bounce or export your processed audio file as desired.
-
-
To give you a more detailed idea of how to use each plugin, I will describe them briefly in the following subheadings:
-
Noise Remover PRO
-
This plugin allows you to remove unwanted noise from your audio files with a single knob. You can adjust the amount of noise reduction and the type of noise (tonal or broadband) with two additional controls. You can also use the spectral mode to fine-tune the noise removal in specific frequency ranges.
-
To use this plugin, follow these steps:
-
-
Add Noise Remover PRO to your audio track or bus.
-
Select a portion of your audio file that contains only noise (no signal) and press the learn button on the plugin interface. This will help the plugin identify the noise profile and adapt accordingly.
-
Select another portion of your audio file that contains both noise and signal and adjust the processing knob until you achieve a satisfactory level of noise reduction.
-
If needed, adjust the tonal/broadband knob to target different types of noise (e.g., hum, hiss, etc.).
-
If needed, switch to spectral mode and use the frequency selector tool to isolate specific frequency ranges that need more or less noise reduction.
-
-
Reverb Remover PRO
This plugin allows you to remove unwanted reverb from your audio files with a single knob. You can adjust the amount of reverb reduction and the type of reverb (early or late) with two additional controls. You can also use the spectral mode to fine-tune the reverb removal in specific frequency ranges.
-
To use this plugin, follow these steps:
-
-
Add Reverb Remover PRO to your audio track or bus.
-
Select a portion of your audio file that contains only reverb (no signal) and press the learn button on the plugin interface. This will help the plugin identify the reverb profile and adapt accordingly.
-
Select another portion of your audio file that contains both reverb and signal and adjust the processing knob until you achieve a satisfactory level of reverb reduction.
-
If needed, adjust the early/late knob to target different types of reverb (e.g., room, hall, plate, etc.).
-
If needed, switch to spectral mode and use the frequency selector tool to isolate specific frequency ranges that need more or less reverb reduction.
-
-
Room Tone Match
-
This plugin allows you to match the room tone of different audio sources with a single button. You can select a reference audio file and apply its room tone to another audio file with a simple drag-and-drop. You can also adjust the amount of room tone matching and the type of room tone (natural or artificial) with two additional controls.
-
To use this plugin, follow these steps:
-
-
Add Room Tone Match to your audio track or bus.
-
Select an audio file that has a desirable room tone and drag-and-drop it to the reference slot on the plugin interface. This will set it as the reference audio file.
-
Select another audio file that has a different or undesirable room tone and drag-and-drop it to the target slot on the plugin interface. This will set it as the target audio file.
-
Press the match button on the plugin interface. This will apply the room tone of the reference audio file to the target audio file.
-
If needed, adjust the processing knob to increase or decrease the amount of room tone matching.
-
If needed, adjust the natural/artificial knob to change the character of the room tone (e.g., warm, bright, etc.).
-
-
De-Esser PRO
This plugin allows you to reduce sibilance from your vocal tracks with a single knob. You can adjust the amount of de-essing and the frequency range of the sibilance with two additional controls. You can also use the spectral mode to fine-tune the de-essing in specific frequency ranges.
-
To use this plugin, follow these steps:
-
-
Add De-Esser PRO to your vocal track or bus.
-
Select a portion of your vocal track that contains sibilance (e.g., words with "s", "sh", "z", etc.) and press the learn button on the plugin interface. This will help the plugin identify the sibilance frequency and adapt accordingly.
-
Select another portion of your vocal track that contains both sibilance and signal and adjust the processing knob until you achieve a satisfactory level of de-essing.
-
If needed, adjust the frequency knob to change the frequency range of the sibilance (e.g., high, mid, low, etc.).
-
If needed, switch to spectral mode and use the frequency selector tool to isolate specific frequency ranges that need more or less de-essing.
-
-
De-Breath
-
This plugin allows you to remove breath sounds from your vocal tracks with a single knob. You can adjust the amount of de-breathing and the sensitivity of the breath detection with two additional controls. You can also use the spectral mode to fine-tune the de-breathing in specific frequency ranges.
-
To use this plugin, follow these steps:
-
-
Add De-Breath to your vocal track or bus.
-
Select a portion of your vocal track that contains breath sounds (e.g., inhales, exhales, etc.) and press the learn button on the plugin interface. This will help the plugin identify the breath profile and adapt accordingly.
-
Select another portion of your vocal track that contains both breath sounds and signal and adjust the processing knob until you achieve a satisfactory level of de-breathing.
-
If needed, adjust the sensitivity knob to change the threshold of the breath detection (e.g., low, medium, high, etc.).
-
If needed, switch to spectral mode and use the frequency selector tool to isolate specific frequency ranges that need more or less de-breathing.
-
-
Mouth De-Clicker
This plugin allows you to remove mouth clicks from your vocal tracks with a single knob. You can adjust the amount of de-clicking and the sensitivity of the click detection with two additional controls. You can also use the spectral mode to fine-tune the de-clicking in specific frequency ranges.
-
To use this plugin, follow these steps:
-
-
Add Mouth De-Clicker to your vocal track or bus.
-
Select a portion of your vocal track that contains mouth clicks (e.g., saliva, tongue, lip, etc.) and press the learn button on the plugin interface. This will help the plugin identify the click profile and adapt accordingly.
-
Select another portion of your vocal track that contains both mouth clicks and signal and adjust the processing knob until you achieve a satisfactory level of de-clicking.
-
If needed, adjust the sensitivity knob to change the threshold of the click detection (e.g., low, medium, high, etc.).
-
If needed, switch to spectral mode and use the frequency selector tool to isolate specific frequency ranges that need more or less de-clicking.
-
-
Voice AutoEQ
-
This plugin allows you to automatically adjust the equalization of your vocal tracks with a single button. You can select a preset that matches your voice type (male or female) and your genre (speech or singing) and let the plugin do the rest. You can also adjust the amount of auto-equalization and the target curve with two additional controls.
-
To use this plugin, follow these steps:
-
-
Add Voice AutoEQ to your vocal track or bus.
-
Select a preset that matches your voice type and genre from the drop-down menu on the plugin interface. This will set the target curve for the auto-equalization.
-
Press the process button on the plugin interface. This will apply the auto-equalization to your vocal track.
-
If needed, adjust the processing knob to increase or decrease the amount of auto-equalization.
-
If needed, adjust the target curve knob to change the shape of the target curve (e.g., flat, bright, warm, etc.).
-
-
Voice Deepener
This plugin allows you to enhance the depth and richness of your vocal tracks with a single knob. You can adjust the amount of voice deepening and the frequency range of the effect with two additional controls. You can also use the spectral mode to fine-tune the voice deepening in specific frequency ranges.
-
To use this plugin, follow these steps:
-
-
Add Voice Deepener to your vocal track or bus.
-
Select a portion of your vocal track that you want to deepen and press the learn button on the plugin interface. This will help the plugin identify the vocal range and adapt accordingly.
-
Adjust the processing knob until you achieve a satisfactory level of voice deepening.
-
If needed, adjust the frequency knob to change the frequency range of the effect (e.g., low, mid, high, etc.).
-
If needed, switch to spectral mode and use the frequency selector tool to isolate specific frequency ranges that need more or less voice deepening.
-
-
Audio Cleanup Assistant
-
This plugin allows you to combine several ERA Standard plugins into a time-saving tool. You can drag-and-drop up to four plugins into four slots and apply them simultaneously to your audio file with a single button. You can also adjust the order and settings of each plugin individually.
-
To use this plugin, follow these steps:
-
-
Add Audio Cleanup Assistant to your audio track or bus.
-
Select up to four plugins from the ERA Standard bundle and drag-and-drop them into the four slots on the plugin interface. You can choose from Noise Remover, Reverb Remover, Voice Leveler, De-Clipper, De-Esser, and Plosive Remover.
-
Adjust the order of the plugins by dragging and dropping them in different slots. The order matters because each plugin will affect the output of the previous one.
-
Adjust the settings of each plugin by clicking on its icon and using its interface. You can also bypass or solo each plugin by clicking on its power or headphone buttons.
-
Press the process button on the plugin interface. This will apply all four plugins to your audio file at once.
-
-
What are the pros and cons of Accusonus – ERA Bundle Pro V4.0.0?
As with any product, Accusonus – ERA Bundle Pro V4.0.0 has its pros and cons. Here are some of them:
-
Pros
-
-
It offers a comprehensive and versatile solution for audio repair and enhancement. It covers a wide range of audio issues and scenarios, such as noise, reverb, sibilance, breaths, clicks, plosives, clipping, tone, depth, and balance.
-
It is very easy and intuitive to use. It has a simple and user-friendly interface that allows you to adjust the settings with a single knob or button. It also has a learn function that helps the plugins adapt to your audio files automatically.
-
It is compatible with most DAWs and formats. It supports VST, AAX, AU, and AudioSuite formats and works with Windows and Mac operating systems. It can be used for various purposes such as podcasting, music production, video editing, voice-over, audio restoration, and more.
-
It is fast and efficient. It processes your audio files in real-time and does not require much CPU or memory resources. It also has a spectral mode that allows you to fine-tune the effects in specific frequency ranges.
-
It is affordable and cost-effective. It offers a lot of value for money compared to other products that offer similar or fewer features. It also has a free trial option that allows you to test it before buying it.
-
-
Cons
-
-
It may not be able to fix all audio problems or suit all preferences. Some audio issues may be too severe or complex for the plugins to handle, or some users may prefer different results or settings than the ones offered by the plugins.
-
It may require some trial and error or fine-tuning to achieve optimal results. Depending on the quality and characteristics of your audio files, you may need to adjust the settings of each plugin manually or use the spectral mode to isolate specific frequency ranges that need more or less processing.
-
It may not be compatible with some DAWs or formats. Some DAWs or formats may not support the plugins or may cause some issues or conflicts with them. You may need to check the compatibility list or contact the support team before using the plugins.
-
-
What are some alternatives to Accusonus – ERA Bundle Pro V4.0.0?
If you are not satisfied with Accusonus – ERA Bundle Pro V4.0.0 or you want to try some other products that offer similar or different features for audio repair and enhancement, here are some alternatives that you can check out:
-
-
iZotope RX 8: This is a comprehensive and professional suite of plugins and standalone software that allows you to edit, repair, restore, and enhance your audio files with advanced tools and algorithms. It offers features such as spectral repair, dialogue isolate, de-rustle, de-wind, de-hum, de-click, de-clip, de-crackle, de-plosive, de-reverb, de-ess, mouth de-click, breath control, voice de-noise, spectral de-noise, music rebalance, guitar de-noise, loudness control, batch processing, and more. It is compatible with Windows and Mac operating systems and supports VST, AAX, AU, and AudioSuite formats. It is more expensive than Accusonus – ERA Bundle Pro V4.0.0 but also more powerful and versatile.
-
Waves Restoration: This is a collection of five plugins that allows you to remove noise, clicks, crackles, hums, and pops from your audio files with simple and effective controls. It offers features such as adaptive noise reduction, click and crackle removal, hum and buzz removal, clip restoration, and ambient noise suppression. It is compatible with Windows and Mac operating systems and supports VST, AAX, AU, and AudioSuite formats. It is cheaper than Accusonus – ERA Bundle Pro V4.0.0 but also less comprehensive and flexible.
-
SoundSoap 5: This is a standalone software and plugin that allows you to clean up your audio files with a single button. It offers features such as broadband noise reduction, hum removal, click removal, clipping restoration, tone enhancement, media browser, batch processing, and more. It is compatible with Windows and Mac operating systems and supports VST, AAX, AU, and AudioSuite formats. It is similar in price to Accusonus – ERA Bundle Pro V4.0.0 but also simpler and easier to use.
-
-
Conclusion
-
In conclusion, Accusonus – ERA Bundle Pro V4.0.0 is a collection of 13 high-quality plugins designed to help you improve and repair your audio files quickly and easily. It covers a wide range of audio issues and scenarios, such as noise, reverb, sibilance, breaths, clicks, plosives, clipping, tone, depth, and balance. It is intuitive to use, compatible with most DAWs and formats, fast and efficient, and reasonably priced. It may not fix every audio problem or suit every preference, and it may take some trial and error or fine-tuning to achieve optimal results, but it still offers a lot of value for money. If you are looking for a powerful and easy-to-use tool to enhance and repair your audio files, give Accusonus – ERA Bundle Pro V4.0.0 a try. You can purchase it from the official Accusonus website or start a free trial if you want to test it first. You will not regret it!
-
I hope you enjoyed reading this article and found it helpful and informative. If you have any questions or comments about Accusonus – ERA Bundle Pro V4.0.0 or anything related to audio repair and enhancement, please feel free to leave them below. I would love to hear from you and answer your queries. Thank you for your time and attention!
-
FAQs
-
Here are some frequently asked questions about Accusonus – ERA Bundle Pro V4.0.0:
-
-
Q: How much does Accusonus – ERA Bundle Pro V4.0.0 cost?
-
A: Accusonus – ERA Bundle Pro V4.0.0 costs $499 for a perpetual license or $14.99 per month for a subscription plan. You can also get a 50% discount if you are a student or an educator.
-
Q: What are the system requirements for Accusonus – ERA Bundle Pro V4.0.0?
-
A: Accusonus – ERA Bundle Pro V4.0.0 requires Windows 10 (64-bit) or Mac OS 10.12 (or later) operating systems, 2 GB of RAM (4 GB recommended), 1 GB of hard disk space, and an internet connection for activation and updates.
-
Q: Does Accusonus – ERA Bundle Pro V4.0.0 have a user manual or a tutorial?
-
A: Yes, Accusonus – ERA Bundle Pro V4.0.0 has a user manual that you can access from the help menu of each plugin or from the official website of Accusonus. It also has a tutorial video that you can watch on YouTube or on the official website of Accusonus.
-
Q: Does Accusonus – ERA Bundle Pro V4.0.0 offer customer support or a refund policy?
-
A: Yes, Accusonus – ERA Bundle Pro V4.0.0 has a customer support team that you can contact via email or chat if you have any issues or questions about the product. It also has a refund policy that allows you to request a full refund within 14 days of purchase if you are not satisfied with the product.
-
Q: Can I use Accusonus – ERA Bundle Pro V4.0.0 for commercial purposes?
-
A: Yes, you can use Accusonus – ERA Bundle Pro V4.0.0 for commercial purposes as long as you comply with the terms and conditions of the license agreement.
-
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Camtasia Studio 8.0.1 ( Build 903 ) Serial Serial Key.md b/spaces/stomexserde/gpt4-ui/Examples/Camtasia Studio 8.0.1 ( Build 903 ) Serial Serial Key.md
deleted file mode 100644
index 3ab9aaea06813664c3b62fb45673202c275be5d9..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Camtasia Studio 8.0.1 ( Build 903 ) Serial Serial Key.md
+++ /dev/null
@@ -1,177 +0,0 @@
-
-
Camtasia Studio 8.0.1 ( Build 903 ) Serial Serial Key: How to Download, Install, and Activate the Best Screen Recorder and Video Editor
-
If you are looking for a software that can help you create amazing videos for your personal or professional purposes, then you should consider using Camtasia Studio. Camtasia Studio is a powerful and easy-to-use software that allows you to record, edit, and share high-quality videos. Camtasia Studio is developed by TechSmith, a reputable company that specializes in creating software for screen capture and video editing. In this article, you will learn how to download, install, and activate Camtasia Studio 8.0.1 ( Build 903 ) with a serial serial key. You will also learn how to use Camtasia Studio 8.0.1 ( Build 903 ) to create stunning videos that will impress your audience.
-
What is Camtasia Studio 8.0.1 ( Build 903 ) and why do you need it?
-
Camtasia Studio 8.0.1 ( Build 903 ) is the latest version of Camtasia Studio 8, which was released in June 2012. Camtasia Studio 8.0.1 ( Build 903 ) is a software that can help you record your screen, webcam, audio, or any other source with ease and flexibility. You can also edit your videos with various tools and effects, such as transitions, annotations, animations, captions, quizzes, and more. You can then share your videos with your audience on various platforms, such as YouTube, Vimeo, Facebook, Twitter, or your own website.
-
Camtasia Studio 8.0.1 ( Build 903 ) Serial Serial Key
Camtasia Studio 8.0.1 ( Build 903 ) has many features and benefits that make it the best choice for creating professional and engaging videos. Some of the features and benefits are:
-
-
Easy to use: Camtasia Studio 8.0.1 ( Build 903 ) has a user-friendly interface that makes it easy to record and edit your videos. You can also use the built-in tutorials and help files to learn how to use Camtasia Studio 8.0.1 ( Build 903 ) effectively.
-
Powerful and versatile: Camtasia Studio 8.0.1 ( Build 903 ) can record anything on your screen, from web pages to software applications to PowerPoint presentations. You can also record your webcam, microphone, or system audio to add a personal touch to your videos. You can also import media files from your computer or other sources to enhance your videos.
-
Creative and interactive: Camtasia Studio 8.0.1 ( Build 903 ) has a rich library of tools and effects that you can use to customize your videos. You can add transitions, annotations, animations, captions, quizzes, and more to make your videos more engaging and informative. You can also use the SmartFocus feature to automatically zoom in and out of important areas of your screen.
-
High-quality and compatible: Camtasia Studio 8.0.1 ( Build 903 ) can produce high-quality videos that are crisp and clear. You can also choose from various formats and presets to optimize your videos for different devices and platforms. You can also export your videos as MP4, WMV, MOV, AVI, M4V, GIF, or MP3 files.
-
-
With Camtasia Studio 8.0.1 ( Build 903 ), you can create amazing videos for various purposes, such as:
-
-
Educational: You can use Camtasia Studio 8.0.1 ( Build 903 ) to create instructional videos, tutorials, lectures, courses, or demonstrations for your students or learners.
-
Business: You can use Camtasia Studio 8.0.1 ( Build 903 ) to create marketing videos, product reviews, testimonials, presentations, or webinars for your customers or clients.
-
Personal: You can use Camtasia Studio 8.0.1 ( Build 903 ) to create entertainment videos, vlogs, podcasts, or stories for your friends or family.
-
-
How to download Camtasia Studio 8.0.1 ( Build 903 ) for free?
-
If you want to try Camtasia Studio 8.0.1 ( Build 903 ) before buying it, you can download it for free from the official website of TechSmith or from other trusted sources.
-
To download Camtasia Studio 8.0.1 ( Build 903 ) from the official website of TechSmith, you need to follow these steps:
Go to the Downloads page on the official TechSmith website and scroll down to find Camtasia under the Older Versions section.
Click on the Download button next to Camtasia Studio 8.0.1 ( Build 903 ).
-
Save the file to your computer and run it to start the installation process.
-
-
To download Camtasia Studio 8.0.1 ( Build 903 ) from other trusted sources, you need to be careful and avoid any malicious or fake websites that may harm your computer or steal your personal information. You can use a reliable antivirus software and a VPN service to protect your online security and privacy. You can also check the reviews and ratings of the websites before downloading anything from them.
-
Some of the trusted sources that you can use to download Camtasia Studio 8.0.1 ( Build 903 ) are:
Before you download Camtasia Studio 8.0.1 ( Build 903 ), you need to make sure that your computer has a compatible operating system and meets the minimum system requirements to run Camtasia Studio 8.0.1 ( Build 903 ). The system requirements are:
-
-
Operating System: Windows XP SP3, Windows Vista, Windows 7, Windows 8, or Windows 10
Minimum Requirements: Dual-core 2.0 GHz processor or faster; 2 GB of RAM; 2 GB of hard disk space; 1024 x 768 display resolution; DirectX 9 or later; .NET Framework 4.0 or later
-
Operating System: Mac OS X 10.6.8 or later
Minimum Requirements: Intel-based Mac with Core 2 Duo processor or better; 2 GB of RAM; 4 GB of hard disk space; 1024 x 768 display resolution; QuickTime X or later
-
How to install Camtasia Studio 8.0.1 ( Build 903 ) on your computer?
-
After you have downloaded Camtasia Studio 8.0.1 ( Build 903 ) from the official website of TechSmith or from other trusted sources, you need to install it on your computer by following the installation wizard and accepting the license agreement.
-
To install Camtasia Studio 8.0.1 ( Build 903 ) on your computer, you need to follow these steps:
-
-
Double-click on the downloaded file to launch the installation wizard.
-
Select your preferred language and click on OK.
-
Click on Next to continue.
-
Read and accept the license agreement and click on Next.
-
Select the installation location and click on Next. You can also change the installation location by clicking on Browse.
-
Select the components that you want to install and click on Next. You can also customize the options by clicking on Options.
-
Click on Install to start the installation process.
-
Wait for the installation process to complete.
-
Click on Finish to exit the installation wizard.
-
Camtasia Studio 8.0.1 ( Build 903 ) is now installed on your computer and ready to use.
-
-
How to activate Camtasia Studio 8.0.1 ( Build 903 ) with a serial serial key?
-
To activate Camtasia Studio 8.0.1 ( Build 903 ) and unlock all its features, you need to have a valid serial serial key that you can find from the internet or from the product box if you have purchased Camtasia Studio 8.0.1 ( Build 903 ). A serial serial key is a unique code that consists of letters and numbers that verifies your ownership of Camtasia Studio 8.0.1 ( Build 903 ). Without a serial serial key, you can only use Camtasia Studio 8.0.1 ( Build 903 ) as a trial version for up to 30 days.
-
To activate Camtasia Studio 8.0.1 ( Build 903 ) with a serial serial key, you need to follow these steps:
-
-
Launch Camtasia Studio 8.0.1 ( Build 903 ) on your computer.
-
Click on Help and then on Enter Software Key.
-
Type or paste your serial serial key in the text box and click on Activate.
-
Wait for the activation process to complete.
-
Camtasia Studio 8.0.1 ( Build 903 ) is now activated and ready to use.
-
-
If you do not have a serial serial key, you can buy one from the official website of TechSmith or from other authorized resellers. You can also contact TechSmith customer support if you have any issues with your serial serial key or activation process.
-
How to use Camtasia Studio 8.0.1 ( Build 903 ) to create amazing videos?
-
Now that you have downloaded, installed, and activated Camtasia Studio 8.0.1 ( Build 903 ), you can start using it to create amazing videos for your personal or professional purposes. Camtasia Studio 8.0.1 ( Build 903 ) has a simple and intuitive workflow that consists of three main steps: record, edit, and share.
-
Record
-
To record your screen, webcam, audio, or any other source with Camtasia Studio 8.0.1 ( Build 903 ), you need to follow these steps:
-
-
Click on the Record the screen button on the toolbar or press F9 on your keyboard.
-
Select the area of your screen that you want to record or choose a preset from the drop-down menu.
-
Select the sources that you want to record, such as webcam, microphone, system audio, or cursor effects.
-
Click on the Rec button or press F9 again to start recording.
-
Perform the actions that you want to record on your screen.
-
Click on the Stop button or press F10 to stop recording.
-
The recorded video will be automatically imported to the Camtasia Studio editor for further editing.
-
-
Edit
-
To edit your video with various tools and effects with Camtasia Studio 8.0.1 ( Build 903 ), you need to follow these steps:
-
-
Select the video clip that you want to edit on the timeline or media bin.
-
Use the Cut, Crop, Split, Delete, or Ripple Delete tools to trim or remove unwanted parts of your video.
-
Use the Pan and Zoom, SmartFocus, or Animate tools to add motion and focus to your video.
-
Use the Transitions, Captions, Annotations, Animations, Callouts, or Quizzes tools to add text, graphics, or interactivity to your video.
-
Use the Audio Effects, Visual Effects, or Cursor Effects tools to enhance the sound, appearance, or behavior of your video.
-
Use the Properties panel to adjust the settings and options of each tool or effect.
-
You can also add more media files, such as images, music, or voiceovers, to your video by importing them from your computer or other sources.
-
You can preview your video by using the Play, Pause, Rewind, or Fast Forward buttons on the player.
-
You can save your project by clicking on the File menu and then on Save Project or Save Project As.
-
-
Share
-
To share your video with your audience on various platforms with Camtasia Studio 8.0.1 ( Build 903 ), you need to follow these steps:
-
-
Click on the Produce and Share button on the toolbar or press F8 on your keyboard.
-
Select the format and preset that you want to use for your video or create your own custom preset by clicking on Add/Edit Preset.
-
Select the location and name that you want to use for your video file or folder.
-
Click on Finish to start the production process.
-
Wait for the production process to complete.
-
Your video is now ready to be shared with your audience. You can upload it to YouTube, Vimeo, Facebook, Twitter, or your own website by using the built-in sharing options. You can also burn it to a DVD, transfer it to a USB drive, or email it to your contacts.
-
-
Conclusion
-
Camtasia Studio 8.0.1 ( Build 903 ) is a great software that can help you create stunning videos with ease and efficiency.
-
You can download, install, and activate Camtasia Studio 8.0.1 ( Build 903 ) with a serial serial key by following the steps in this article. You can also use Camtasia Studio 8.0.1 ( Build 903 ) to record, edit, and share your videos with your audience and impress them with your creativity and professionalism.
-
If you want to learn more about Camtasia Studio 8.0.1 ( Build 903 ) and its features, you can visit the official website of TechSmith or watch some of their tutorials and videos. You can also join their community forum or blog to get tips and tricks from other users and experts.
-
FAQs
-
What is the difference between Camtasia Studio 8 and Camtasia Studio 9?
-
Camtasia Studio 9 is the newer version of Camtasia Studio that was released in October 2016. Camtasia Studio 9 has some new features and improvements over Camtasia Studio 8, such as:
-
-
A redesigned interface that is more modern and intuitive.
-
A new library that allows you to store and reuse media files across projects.
-
A new device frame feature that allows you to add realistic frames around your videos for different devices.
-
A new behaviors feature that allows you to add animations and effects to your objects with one click.
-
A new voice narration feature that allows you to record voiceovers directly in the editor.
-
A new canvas editing feature that allows you to resize and rotate your canvas without affecting your media.
-
A new 64-bit engine that improves the performance and stability of Camtasia Studio.
-
A new support for 4K resolution and higher frame rates for better video quality.
-
-
If you want to upgrade from Camtasia Studio 8 to Camtasia Studio 9, you can do so by paying a discounted price on the official website of TechSmith or by contacting their customer support.
-
What are the advantages of using Camtasia Studio over other screen recording and video editing software?
-
Camtasia Studio has some advantages over other screen recording and video editing software, such as:
-
-
Ease of use: Camtasia Studio has a user-friendly interface that makes it easy to record and edit your videos. You can also use the built-in tutorials and help files to learn how to use Camtasia Studio effectively.
-
Powerful and versatile: Camtasia Studio can record anything on your screen, from web pages to software applications to PowerPoint presentations. You can also record your webcam, microphone, or system audio to add a personal touch to your videos. You can also import media files from your computer or other sources to enhance your videos.
-
Creative and interactive: Camtasia Studio has a rich library of tools and effects that you can use to customize your videos. You can add transitions, annotations, animations, captions, quizzes, and more to make your videos more engaging and informative. You can also use the SmartFocus feature to automatically zoom in and out of important areas of your screen.
-
High-quality and compatible: Camtasia Studio can produce high-quality videos that are crisp and clear. You can also choose from various formats and presets to optimize your videos for different devices and platforms. You can also export your videos as MP4, WMV, MOV, AVI, M4V, GIF, or MP3 files.
-
-
Camtasia Studio is a software that can meet your needs and expectations for creating professional and engaging videos.
-
How can I get technical support for Camtasia Studio?
-
If you have any technical issues or questions regarding Camtasia Studio, you can get technical support from TechSmith in various ways, such as:
-
-
Email: You can send an email to support@techsmith.com with your issue or question and get a response within 24 hours.
-
Phone: You can call the toll-free number 1-800-517-3001 (US & Canada) or +1-517-381-2300 (International) and talk to a customer service representative.
-
Chat: You can chat with a live agent on the official website of TechSmith by clicking on the Chat with us button at the bottom right corner of the page.
-
Ticket: You can submit a ticket on the official website of TechSmith by clicking on the Contact Support button at the top right corner of the page.
-
Forum: You can join the community forum of TechSmith by clicking on the Community button at the top right corner of the page. You can post your issue or question and get answers from other users and experts.
-
Blog: You can visit the blog of TechSmith by clicking on the Blog button at the top right corner of the page. You can read articles and tips about Camtasia Studio and other TechSmith products.
-
-
TechSmith has a dedicated and friendly team that can help you solve your problems and improve your experience with Camtasia Studio.
-
How can I update Camtasia Studio to the latest version?
-
If you want to update Camtasia Studio to the latest version, you can do so by following these steps:
-
-
Launch Camtasia Studio on your computer.
-
Click on the Help menu and then on Check for Updates.
-
If there is a new version available, you will see a notification window with the details of the update.
-
Click on Download Update to start downloading the update file.
-
Save the file to your computer and run it to start the installation process.
-
Follow the installation wizard and accept the license agreement to install the update.
-
The update will replace your current version of Camtasia Studio with the latest version.
-
-
You can also check for updates manually by visiting the official website of TechSmith or by subscribing to their newsletter or social media channels.
-
How can I uninstall Camtasia Studio from my computer?
-
If you want to uninstall Camtasia Studio from your computer, you can do so by following these steps:
-
-
Close Camtasia Studio if it is running on your computer.
-
Go to the Control Panel on your computer and click on Add or Remove Programs.
-
Find and select Camtasia Studio from the list of programs and click on Remove.
-
Follow the uninstallation wizard and confirm your choice to uninstall Camtasia Studio.
-
Wait for the uninstallation process to complete.
-
Camtasia Studio is now uninstalled from your computer and you can delete any remaining files or folders related to Camtasia Studio.
-
-
You can also uninstall Camtasia Studio by using a third-party uninstaller software, such as Revo Uninstaller or IObit Uninstaller. These software can help you remove Camtasia Studio and its associated files and registry entries more thoroughly and easily.
-
-
This is the end of the article. I hope you have enjoyed reading it and learned something useful about Camtasia Studio 8.0.1 ( Build 903 ) Serial Serial Key. If you have any feedback or questions, please feel free to leave a comment below. Thank you for your time and attention.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Carly Rae Jepsen Kiss Zip __LINK__.md b/spaces/stomexserde/gpt4-ui/Examples/Carly Rae Jepsen Kiss Zip __LINK__.md
deleted file mode 100644
index 040b0174df739bc875071480addcfe5456f03369..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Carly Rae Jepsen Kiss Zip __LINK__.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-
Carly Rae Jepsen's Kiss: A Pop Masterpiece
-
Carly Rae Jepsen is one of the most talented and versatile pop artists of our time. Her second studio album, Kiss, released in 2012, is a testament to her catchy songwriting, charming vocals and diverse musical influences. The album features 12 tracks (16 on the deluxe version) that range from upbeat dance anthems to heartfelt ballads, showcasing Carly's ability to craft memorable melodies and hooks.
-
The album's lead single, Call Me Maybe, became a global phenomenon, topping the charts in over 20 countries and becoming one of the best-selling singles of all time. The song's viral success was boosted by a video featuring Carly's crush, played by model Holden Nowell, and a cameo by fellow Canadian pop star Justin Bieber. The song also earned Carly two Grammy nominations for Song of the Year and Best Pop Solo Performance.
Other highlights from the album include This Kiss, a synth-pop banger co-written by LMFAO's Redfoo; Good Time, a duet with Owl City that became a summer hit; Beautiful, a sweet acoustic collaboration with Justin Bieber; Tonight I'm Getting Over You, a dance-pop breakup anthem; and Your Heart Is a Muscle, a tender ballad that closes the album on an optimistic note.
-
Kiss received critical acclaim from music critics, who praised Carly's charisma, versatility and pop sensibility. The album also earned Carly several awards and nominations, including three Juno Awards, two Billboard Music Awards and an American Music Award. The album has sold over 2 million copies worldwide and has been certified platinum in several countries.
-
If you are looking for a pop masterpiece that will make you smile, dance and sing along, look no further than Carly Rae Jepsen's Kiss. You can listen to the album on Apple Music[^1^], buy it on Carly's official website[^2^], or stream it on SoundCloud[^3^]. You won't regret it!
-
-
Carly Rae Jepsen's Kiss is not only a pop masterpiece, but also a reflection of her personal and artistic growth. Carly started her musical career as a contestant on the fifth season of Canadian Idol in 2007, where she placed third. She then released her debut album, Tug of War, in 2008, which featured folk-pop songs influenced by artists like James Taylor and Joni Mitchell.
-
However, Carly wanted to explore different genres and styles, and decided to move to Los Angeles in 2010 to work with new producers and songwriters. She met Scooter Braun, Justin Bieber's manager, who signed her to his label, Schoolboy Records. She also collaborated with Josh Ramsay of Marianas Trench, who co-wrote and produced Call Me Maybe. Carly credits Ramsay for helping her find her pop voice and sound.
-
Carly also drew inspiration from various pop artists from different eras, such as Madonna, Cyndi Lauper, Robyn and Dragonette. She wanted to create an album that was fun, upbeat and nostalgic, but also fresh and modern. She said in an interview with Billboard: "I just wanted to make music that was really reflective of what I love. I wanted to make a pop album that stood the test of time."
-
Carly Rae Jepsen's Kiss is a pop album that stands the test of time indeed. It is an album that celebrates love, joy and optimism, and showcases Carly's talent and personality. It is an album that will make you feel good and want to kiss someone. It is an album that you should listen to right now.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Hampson Russell Software Crack Tutorial.md b/spaces/stomexserde/gpt4-ui/Examples/Hampson Russell Software Crack Tutorial.md
deleted file mode 100644
index 40207f5ea25f27c9f727462cafaf8220ce74a749..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Hampson Russell Software Crack Tutorial.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-I'm sorry but I cannot write an article with HTML formatting for the keyword "Hampson Russell Software Crack Tutorial". This is because cracking software is illegal and unethical, and I do not support such activities.
-
-However, I can write a brief summary of what Hampson Russell Software is and how it can be used for seismic reservoir characterization. Here is an example:
-
-
Hampson Russell Software: A Powerful Tool for Seismic Reservoir Characterization
-
Hampson Russell Software is a comprehensive suite of reservoir characterization tools that integrates well logs, seismic data and geophysical processes into an easily navigated, intuitive package for fast results. It is developed by GeoSoftware, a division of CGG, and is known for its ease of use and advanced geophysical techniques.
Hampson Russell Software offers a variety of modules and features to address different exploration and development objectives at every stage of the reservoir characterization and interpretation workflows. Some of the modules include:
-
-
AVO: A module for pre-stack data conditioning, attribute computation and analysis.
-
Strata: A module for post-stack inversion and rock property estimation.
-
Emerge: A module for multi-attribute analysis and facies classification.
-
GeoAI: A module for seismic reservoir characterization with limited well control using rock physics driven machine learning.
-
WellGen: A module for generating synthetic well data based on existing well statistics and rock physics modeling.
-
-
Hampson Russell Software also provides versatile workflows, broad capabilities, intuitive and interactive interfaces, and training and support services. It allows users to design and code any process with the Python ecosystem, import and use shapefiles, work with cloud data, and experience speed improvements with multi-node processing.
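As a rough illustration of what scripting a processing step in the Python ecosystem can look like, here is a minimal, self-contained sketch that band-pass filters a synthetic trace and computes a sliding-window RMS-amplitude attribute with NumPy and SciPy. It is a generic example under assumed parameter values; it does not call HampsonRussell's actual API, and every function name and number in it is illustrative only.

```python
# Illustrative sketch only: generic NumPy/SciPy trace processing, NOT the HampsonRussell API.
import numpy as np
from scipy.signal import butter, filtfilt


def bandpass(trace: np.ndarray, low_hz: float, high_hz: float, dt: float) -> np.ndarray:
    """Zero-phase Butterworth band-pass filter applied to a single seismic trace."""
    nyquist = 0.5 / dt  # sampling rate is 1/dt, so the Nyquist frequency is half of that
    b, a = butter(4, [low_hz / nyquist, high_hz / nyquist], btype="band")
    return filtfilt(b, a, trace)


def rms_amplitude(trace: np.ndarray, window: int) -> np.ndarray:
    """Sliding-window RMS amplitude attribute of a trace."""
    smoothed = np.convolve(trace ** 2, np.ones(window) / window, mode="same")
    return np.sqrt(smoothed)


# Synthetic 2 s trace sampled every 2 ms: a 30 Hz wavelet near t = 1 s plus random noise.
dt = 0.002
t = np.arange(0.0, 2.0, dt)
trace = np.sin(2 * np.pi * 30.0 * t) * np.exp(-((t - 1.0) ** 2) / 0.01)
trace += 0.1 * np.random.default_rng(0).standard_normal(t.size)

filtered = bandpass(trace, low_hz=10.0, high_hz=60.0, dt=dt)
attribute = rms_amplitude(filtered, window=25)
print(f"peak RMS amplitude: {attribute.max():.3f}")
```

In this kind of sketch, filtfilt is used rather than a one-pass filter so the result is zero-phase, which matters whenever amplitudes are interpreted quantitatively.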
-
Hampson Russell Software is a powerful tool for reducing the risks and costs associated with exploration and production by providing world-class advanced geophysical interpretation and analysis. It can be requested for a no-cost evaluation version from GeoSoftware's website.
-
-
Hampson Russell Software: Examples of Application
-
Hampson Russell Software has been widely used by geoscientists and engineers for various seismic reservoir characterization projects around the world. Here are some examples of how Hampson Russell Software can be applied to different scenarios and challenges:
-
-
MapPredict: A map-based geostatistical software that integrates well, seismic, and attribute data into accurate, detailed maps using both sparse data measured at isolated wellbores and dense data measured on a survey grid[^1^].
-
GLI: A model-based ray-tracing method for refraction statics using the well-known Hampson-Russell software. The algorithm assumes that the near-surface geology is described by layering that exhibits smooth lateral variations in velocity, vertical homogeneity of velocity within an individual layer, and layer velocities increasing monotonically with depth[^2^].
-
Data-driven multichannel seismic impedance inversion with anisotropic total variation regularization: A novel method for seismic impedance inversion that uses a data-driven approach to estimate the low-frequency model and an anisotropic total variation regularization to preserve the edges and details of the impedance model. The method was tested on a demo data set provided by the Hampson-Russell software package[^3^]. A generic form of this kind of regularized objective is sketched just after this list.
-
-
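The anisotropic total-variation approach mentioned above can be summarized, in a generic textbook form rather than the exact formulation of the cited paper, as minimizing a data-misfit term plus a direction-weighted roughness penalty on the impedance model:

```latex
% Generic TV-regularized impedance inversion objective (illustrative form, not the paper's exact notation)
\min_{\mathbf{m}}\;
\tfrac{1}{2}\,\bigl\lVert \mathbf{d} - G(\mathbf{m}) \bigr\rVert_2^2
\;+\;
\lambda \sum_{i,j}
\Bigl(
  \alpha_x\,\lvert m_{i+1,j} - m_{i,j} \rvert
  \;+\;
  \alpha_z\,\lvert m_{i,j+1} - m_{i,j} \rvert
\Bigr)
```

Here m is the impedance model, d the observed seismic data, G the (convolutional) forward-modelling operator, and λ the overall regularization weight; the two weights α_x and α_z are what make the total-variation term anisotropic, so edges can be penalized differently in the lateral and vertical directions.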
These are just some of the examples of how Hampson Russell Software can be used for seismic reservoir characterization. For more information and case studies, please visit GeoSoftware's website or contact their support team.
-
-
\ No newline at end of file
diff --git a/spaces/subhajitmaji/MusicGen/audiocraft/utils/utils.py b/spaces/subhajitmaji/MusicGen/audiocraft/utils/utils.py
deleted file mode 100644
index 86e1448d065fa182ca69aae00d2f2a7eea55d8a4..0000000000000000000000000000000000000000
--- a/spaces/subhajitmaji/MusicGen/audiocraft/utils/utils.py
+++ /dev/null
@@ -1,234 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from concurrent.futures import ProcessPoolExecutor
-from functools import wraps
-import hashlib
-import logging
-import typing as tp
-
-import flashy
-import flashy.distrib
-import omegaconf
-import torch
-from torch.nn.utils.rnn import pad_sequence
-
-
-logger = logging.getLogger(__name__)
-
-
-def dict_from_config(cfg: omegaconf.DictConfig) -> dict:
- """Convenience function to map an omegaconf configuration to a dictionary.
-
- Args:
- cfg (omegaconf.DictConfig): Original configuration to map to dict.
- Returns:
- dict: Config as dictionary object.
- """
- dct = omegaconf.OmegaConf.to_container(cfg, resolve=True)
- assert isinstance(dct, dict)
- return dct
-
-
-def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset:
- if max_samples >= len(dataset):
- return dataset
-
- generator = torch.Generator().manual_seed(seed)
- perm = torch.randperm(len(dataset), generator=generator)
- return torch.utils.data.Subset(dataset, perm[:max_samples].tolist())
-
-
-def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int,
- num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader:
- """Convenience function to load dataset into a dataloader with optional subset sampling.
-
- Args:
- dataset: Dataset to load.
- num_samples (Optional[int]): Number of samples to limit subset size.
- batch_size (int): Batch size.
- num_workers (int): Number of workers for data loading.
- seed (int): Random seed.
- """
- if num_samples is not None:
- dataset = random_subset(dataset, num_samples, seed)
-
- dataloader = flashy.distrib.loader(
- dataset,
- batch_size=batch_size,
- num_workers=num_workers,
- **kwargs
- )
- return dataloader
-
-
-def get_dataset_from_loader(dataloader):
- dataset = dataloader.dataset
- if isinstance(dataset, torch.utils.data.Subset):
- return dataset.dataset
- else:
- return dataset
-
-
-def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None):
- """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension.
-
- Args:
- input (torch.Tensor): The input tensor containing probabilities.
- num_samples (int): Number of samples to draw.
- replacement (bool): Whether to draw with replacement or not.
- Keywords args:
- generator (torch.Generator): A pseudorandom number generator for sampling.
- Returns:
- torch.Tensor: Last dimension contains num_samples indices
- sampled from the multinomial probability distribution
- located in the last dimension of tensor input.
- """
- input_ = input.reshape(-1, input.shape[-1])
- output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator)
- output = output_.reshape(*list(input.shape[:-1]), -1)
- return output
-
-
-def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor:
- """Sample next token from top K values along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- k (int): The k in “top-k”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- top_k_value, _ = torch.topk(probs, k, dim=-1)
- min_value_top_k = top_k_value[..., [-1]]
- probs *= (probs >= min_value_top_k).float()
- probs.div_(probs.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs, num_samples=1)
- return next_token
-
-
-def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor:
- """Sample next token from top P probabilities along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- p (float): The p in “top-p”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True)
- probs_sum = torch.cumsum(probs_sort, dim=-1)
- mask = probs_sum - probs_sort > p
- probs_sort *= (~mask).float()
- probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs_sort, num_samples=1)
- next_token = torch.gather(probs_idx, -1, next_token)
- return next_token
-
-
-class DummyPoolExecutor:
- """Dummy pool executor to use when we actually have only 1 worker.
- (e.g. instead of ProcessPoolExecutor).
- """
- class DummyResult:
- def __init__(self, func, *args, **kwargs):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- def result(self):
- return self.func(*self.args, **self.kwargs)
-
- def __init__(self, workers, mp_context=None):
- pass
-
- def submit(self, func, *args, **kwargs):
- return DummyPoolExecutor.DummyResult(func, *args, **kwargs)
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, exc_tb):
- return
-
-
-def get_pool_executor(num_workers: int, mp_context=None):
- return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1)
-
-
-def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor:
- """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences).
- For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
-
- Args:
- lengths (torch.Tensor): tensor with lengths
- max_len (int): can set the max length manually. Defaults to None.
- Returns:
- torch.Tensor: mask with 0s where there is pad tokens else 1s
- """
- assert len(lengths.shape) == 1, "Length shape should be 1 dimensional."
- final_length = lengths.max().item() if not max_len else max_len
- final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor
- return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None]
-
-
-def hash_trick(word: str, vocab_size: int) -> int:
- """Hash trick to pair each word with an index
-
- Args:
- word (str): word we wish to convert to an index
- vocab_size (int): size of the vocabulary
- Returns:
- int: index of the word in the embedding LUT
- """
- hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16)
- return hash % vocab_size
-
-
-def with_rank_rng(base_seed: int = 1234):
- """Decorator for a function so that the function will use a Random Number Generator
- whose state depend on the GPU rank. The original RNG state is restored upon returning.
-
- Args:
- base_seed (int): Random seed.
- """
- def _decorator(fun: tp.Callable):
- @wraps(fun)
- def _decorated(*args, **kwargs):
- state = torch.get_rng_state()
- seed = base_seed ^ flashy.distrib.rank()
- torch.manual_seed(seed)
- logger.debug('Rank dependent seed set to %d', seed)
- try:
- return fun(*args, **kwargs)
- finally:
- torch.set_rng_state(state)
- logger.debug('RNG state restored.')
- return _decorated
- return _decorator
-
-
-def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]:
- """Get a list of tensors and collate them to a single tensor. according to the following logic:
- - `dim` specifies the time dimension which will be stacked and padded.
- - The output will contain 1 new dimension (dimension index 0) which will be the size of
- the original list.
-
- Args:
- tensors (tp.List[torch.Tensor]): List of tensors to collate.
- dim (int): Dimension which will be stacked and padded.
- Returns:
- tp.Tuple[torch.Tensor, torch.Tensor]:
- torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension
- (dimension index 0) which will be the size of the original list.
- torch.Tensor: Tensor containing length of original tensor sizes (without padding).
- """
- tensors = [x.transpose(0, dim) for x in tensors]
- lens = torch.LongTensor([len(x) for x in tensors])
- padded_tensors = pad_sequence(tensors)
- padded_tensors = padded_tensors.transpose(0, 1)
- padded_tensors = padded_tensors.transpose(1, dim + 1)
- return padded_tensors, lens
diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/data/__init__.py b/spaces/sunshineatnoon/TextureScraping/swapae/data/__init__.py
deleted file mode 100644
index db43dab4e9925a9e273a495eabf8521a5a48c47b..0000000000000000000000000000000000000000
--- a/spaces/sunshineatnoon/TextureScraping/swapae/data/__init__.py
+++ /dev/null
@@ -1,129 +0,0 @@
-"""This package includes all the modules related to data loading and preprocessing
-
- To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset.
- You need to implement four functions:
- -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
- -- <__len__>: return the size of dataset.
- -- <__getitem__>: get a data point from data loader.
- -- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
-
-Now you can use the dataset class by specifying flag '--dataset_mode dummy'.
-See our template dataset class 'template_dataset.py' for more details.
-"""
-import importlib
-import torch.utils.data
-from swapae.data.base_dataset import BaseDataset
-import swapae.util as util
-
-
-def find_dataset_using_name(dataset_name):
- """Import the module "data/[dataset_name]_dataset.py".
-
- In the file, the class called DatasetNameDataset() will
- be instantiated. It has to be a subclass of BaseDataset,
- and it is case-insensitive.
- """
- dataset_filename = "swapae.data." + dataset_name + "_dataset"
- datasetlib = importlib.import_module(dataset_filename)
-
- dataset = None
- target_dataset_name = dataset_name.replace('_', '') + 'dataset'
- for name, cls in datasetlib.__dict__.items():
- if name.lower() == target_dataset_name.lower() \
- and issubclass(cls, BaseDataset):
- dataset = cls
-
- if dataset is None:
- raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name))
-
- return dataset
-
-
-def get_option_setter(dataset_name):
- """Return the static method of the dataset class."""
- dataset_class = find_dataset_using_name(dataset_name)
- return dataset_class.modify_commandline_options
-
-
-def create_dataset(opt):
- return ConfigurableDataLoader(opt)
-
-
-class DataPrefetcher():
- def __init__(self, dataset):
- self.dataset = dataset
- self.stream = torch.cuda.Stream()
- self.preload()
-
- def preload(self):
- try:
- self.next_input = next(self.dataset)
- except StopIteration:
- self.next_input = None
- return
-
- with torch.cuda.stream(self.stream):
- self.next_input = self.next_input.cuda(non_blocking=True)
-
- def __next__(self):
- torch.cuda.current_stream().wait_stream(self.stream)
- input = self.next_input
- self.preload()
- return input
-
- def __iter__(self):
- return self
-
- def __len__(self):
- return len(self.dataset)
-
-
-class ConfigurableDataLoader():
- def __init__(self, opt):
- self.opt = opt
- self.initialize(opt.phase)
-
- def initialize(self, phase):
- opt = self.opt
- self.phase = phase
- if hasattr(self, "dataloader"):
- del self.dataloader
- dataset_class = find_dataset_using_name(opt.dataset_mode)
- dataset = dataset_class(util.copyconf(opt, phase=phase, isTrain=phase == "train"))
- shuffle = phase == "train" if opt.shuffle_dataset is None else opt.shuffle_dataset == "true"
- print("dataset [%s] of size %d was created. shuffled=%s" % (type(dataset).__name__, len(dataset), shuffle))
- #dataset = DataPrefetcher(dataset)
- self.opt = opt
- self.dataloader = torch.utils.data.DataLoader(
- dataset,
- batch_size=opt.batch_size,
- shuffle=shuffle,
- num_workers=int(opt.num_gpus),
- drop_last=phase == "train",
- )
- #self.dataloader = dataset
- self.dataloader_iterator = iter(self.dataloader)
- self.repeat = phase == "train"
- self.length = len(dataset)
- self.underlying_dataset = dataset
-
- def set_phase(self, target_phase):
- if self.phase != target_phase:
- self.initialize(target_phase)
-
- def __iter__(self):
- self.dataloader_iterator = iter(self.dataloader)
- return self
-
- def __len__(self):
- return self.length
-
- def __next__(self):
- try:
- return next(self.dataloader_iterator)
- except StopIteration:
- if self.repeat:
- self.dataloader_iterator = iter(self.dataloader)
- return next(self.dataloader_iterator)
- else:
- raise StopIteration
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fight Club 2006 Dvdrip Movie Download ((TOP)).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fight Club 2006 Dvdrip Movie Download ((TOP)).md
deleted file mode 100644
index b5977be3060bdbb7f1f8014f0d9300702c77e2c8..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fight Club 2006 Dvdrip Movie Download ((TOP)).md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
Fight Club 2006: A Bollywood Action Comedy You Don't Want to Miss
-
If you are looking for a fun and thrilling movie to watch, you might want to check out Fight Club 2006, a Bollywood action comedy that is loosely inspired by the Hollywood film Fight Club (1999). The movie stars Zayed Khan, Sohail Khan, Dino Morea, Ritesh Deshmukh, Aashish Chaudhary and Amrita Arora as a group of friends who start an underground fight club for entertainment and money. However, things get complicated when they cross paths with a ruthless businessman (Suniel Shetty) who has a personal vendetta against them.
Fight Club 2006 is a movie that combines humor, action, romance and drama in a fast-paced and entertaining way. The movie has some impressive fight scenes, catchy songs and witty dialogues that will keep you engaged throughout. The movie also has a twist ending that will surprise you and make you rethink everything you have seen.
-
If you want to watch Fight Club 2006, you can download it from various online sources. However, be careful of the quality and legality of the downloads. Some of the websites that offer Fight Club 2006 dvdrip movie download are:
Fight Club 2006 Hindi (1CD) DvDRip x264 AAC...Hon3y.mkv [^2^]: This website allows you to download the movie in a single file with good quality and clear subtitles. However, the website may not be secure and may contain malware or viruses.
Fight Club 2006 Dvdrip Movie [EXCLUSIVE] Download [^4^]: This website allows you to listen to the audio of the movie online or download it as an mp3 file. However, the audio quality may not be very good and you may miss some visual cues.
-
-
Before you download Fight Club 2006 dvdrip movie from any of these websites, make sure you have a reliable antivirus software and a fast internet connection. Also, be aware of the legal implications of downloading pirated content. We do not endorse or promote any of these websites and we are not responsible for any damages or losses that may occur from using them.
-
-
Fight Club 2006 is a movie that will make you laugh, cheer and think. It is a movie that celebrates friendship, love and courage. It is a movie that you don't want to miss. So go ahead and watch Fight Club 2006 today!
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sugar Bytes Wow2 Keygen 16.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sugar Bytes Wow2 Keygen 16.md
deleted file mode 100644
index 5e9f87c963982871afca84f5595493a39f5891eb..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sugar Bytes Wow2 Keygen 16.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-The effects were available in the WOW2 and WOW3 versions. WOW2 added several effects such as chorus, echo and flanger, plus a modulation routable filter with 5 sliders for cross-modulation. The WOW3 version added the option to create and edit new waveforms as well as a built-in user-friendly effects loop to turn distortion into subtractive chorus or delay.
-
-In 1997 Sugar Bytes WOW2 became a standalone program with an all-new and more feature-rich engine. Other new features include samples and midi input/output. WOW2 was more expensive than previous versions of WOW but was faster to work with.
-
-WOW2x
-
-WOW2x is a powerful version of WOW which includes every feature WOW2 has and more. It is fully featured with more effects than WOW2, along with many new features including audio effects, LPF effects, buildable synth sound engines, two user-defined noise waves, 48 audio effects, one-click effects, a versatile effects loop, a stereo effects loop, adjustable send and return feedback levels, FX chains, user-defined presets, bank switching, sample and midi input/output, sampling rates of 16, 24, 48, 96, and 192 kHz, recording with 24-bit and 96-bit resolution, and a complete audio metering system. Some of the built-in effects are still available in WOW2x but many new effects are also available.
-
-See also
-
- WOW2x
-
-References
-
-External links
-
- Sugar Bytes
-
- WOW Software Foundation
-
- Sugar Bytes Homepage
-
-Category:Software synthesizers
-
-Category:Electronic musical instruments
-
-Category:Effects unitsFlorence Little
-
-Florence Little, née Scullard (17 October 1911, in Birkenhead, Merseyside – 18 August 2001, in Long Melford, Suffolk), was a British journalist, playwright, and anti-fascism activist.
-
-Little was born in Birkenhead to the actor and later conductor Sir John Little (1868–1939) and the actress Hilda Scullard (née Harold). Little's paternal grandfather was the German-born mathematician and economist Arthur Scullard, and her mother's family was of English and German descent. Little attended Grove House School, St Leonards, and the Royal Academy of Dramatic Art, graduating in 1932.
-
-
-
diff --git a/spaces/supun9/face-verification/app.py b/spaces/supun9/face-verification/app.py
deleted file mode 100644
index 4ec64c64b3e57048bd4cb6a2f2ce91bd1e91c243..0000000000000000000000000000000000000000
--- a/spaces/supun9/face-verification/app.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import face_recognition
-import numpy as np
-import gradio as gr
-
-def read_files(files):
- images = []
- encodings = []
- no_face = []
- more_face = []
- for idx, image in enumerate(files):
- image = face_recognition.load_image_file(image.name)
- encoding = face_recognition.face_encodings(image)
- if len(encoding)==1:
- images.append(image)
- encodings.append(encoding[0])
- elif len(encoding)==0:
- no_face.append(image)
- else:
- more_face.append(image)
- return images, encodings, no_face, more_face
-
-def read(profile, files):
- profile_encoding = face_recognition.face_encodings(profile)
- if len(profile_encoding)==1:
- images, encodings, no_face, more_face = read_files(files)
- face_distances = []
- true_images = []
- false_images = []
- for index in range(len(images)):
- results = face_recognition.compare_faces(encodings[index], profile_encoding)
- if results[0] == True:
- face_distance = face_recognition.face_distance(profile_encoding, encodings[index])
- face_distances.append(face_distance)
- true_images.append(images[index])
- else:
- false_images.append(images[index])
- score = len(face_distances)/(len(images)+len(no_face)+len(more_face))
- text = ""
- vals, counts = np.unique(face_distances, return_counts=True)
-
- if (np.std(face_distances)<0.01) or max(counts)>((len(images)+len(no_face)+len(more_face))/2):
- text += "Most of the images look similar.\n\n"
- if len(false_images)>0:
- text += str(len(false_images)) + " of the images do not match with the profile picture.\n"
- if len(no_face)>0:
- text += "No faces were detected in " + str(len(no_face)) + " images.\n"
- if len(more_face)>0:
- text += "More than one face were detected in " + str(len(more_face)) + " images."
- return {"Percentage of matched images":score}, text, true_images, false_images, no_face, more_face
- else:
- return {"Percentage of matched images":0}, "No faces or more than one faces are detected in the profile picture", [], [], [], []
-
-with gr.Blocks() as demo:
- gr.Markdown("""# Face Verification System""")
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Upload the profile picture here""")
- profile = gr.Image(label="Profile picture")
- with gr.Column():
- gr.Markdown("""### Upload the screenshots here""")
- files = gr.File(file_count="directory", label="Screenshots")
- btn = gr.Button(label="Verify").style(full_width=True) #show_progress=True
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Report""")
- text = gr.Textbox(show_label=False).style(container=False)
- label = gr.Label(num_top_classes=1, show_label=False)
- with gr.Tab("Matched images"):
- gallery1 = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[5], height=3)
- with gr.Tab("Not matched"):
- gallery2 = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[5], height=3)
- with gr.Tab("No faces detected"):
- gallery3 = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[5], height=3)
- with gr.Tab("More than one face detected"):
- gallery4 = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[5], height=3)
-
- btn.click(read, [profile, files], [label, text, gallery1, gallery2, gallery3, gallery4], show_progress=True, scroll_to_output=True)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/taurusduan/bingo/README.md b/spaces/taurusduan/bingo/README.md
deleted file mode 100644
index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000
--- a/spaces/taurusduan/bingo/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: bingo
-emoji: 😊
-colorFrom: red
-colorTo: red
-sdk: docker
-license: mit
-duplicated_from: hf4all/bingo
----
-
-
-
-# Bingo
-
-Bingo, a New Bing experience that lets you breathe easy.
-
-Faithfully reproduces the main features of the New Bing web UI, is usable from within mainland China, is compatible with most Microsoft Bing AI features, and can be self-hosted.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-For bug reports and feedback, please visit https://github.com/weaigc/bingo/issues
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/4K Video Download TOPer 4.11.3 Crack With Keygen 2020.md b/spaces/terfces0erbo/CollegeProjectV2/4K Video Download TOPer 4.11.3 Crack With Keygen 2020.md
deleted file mode 100644
index c4caaad4f455fef34187d0eea1d8a01a7c7095ee..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/4K Video Download TOPer 4.11.3 Crack With Keygen 2020.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
With the 4K Video Downloader crack, you can download video files from the online streaming service HBO Max. The tool can be run in batch mode, and videos can be downloaded automatically to a specified location, creating your own local offline library. Download movies from HBO Max with TunePat:
-
TunePat Movie Download is a simple and easy-to-use tool for downloading movies from HBO Max. It works with every common video file format, does not limit the download speed, works with all browsers, and makes it easy to manage streaming resources and find movies that match your taste. What's new:
With the 4K Video Downloader crack, you can also extract subtitles in .srt format and embed them in a single click. 4K Video Downloader handles not only standard video clips but also 3D and 360° videos: 3D videos are marked in the list of available formats, and for 360° videos you can change the viewing angle by dragging the footage with your mouse. The program also offers a premium variant that unlocks the full experience once payment is made.
-
The 4K Video Downloader license key lets you download YouTube playlists from one place and sync them with your device. It is a small program for downloading movies and choosing subtitles; it can automatically find the best-matched subtitles for a video, manage your entire library of movie files, and grab the best available quality automatically, so you can download movies in just a few seconds.
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Blue Dragon Xbox 360 Rom Download ((FREE)).md b/spaces/terfces0erbo/CollegeProjectV2/Blue Dragon Xbox 360 Rom Download ((FREE)).md
deleted file mode 100644
index 6df5902bf7a46c5f1cdfc2ea615c729fe962b409..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Blue Dragon Xbox 360 Rom Download ((FREE)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-.
-
-curly hair braids box braids
-
-box braids curly hair
-
-curls for a bob tat
-
-Box braids are probably the best hairstyles for curly hair and have been popular for years. The box braids hairstyle is one of the most durable and effective ways to get curly hair out of a curling iron and into a nice, glossy style. Box braids are one of the oldest, most recognized, and most versatile hair styling options for curly hair. In addition, with the massive increase in the number of people sporting curly or kinky hair, this iconic style is in high demand.
-
-box braids curly hair Pin
-
-curly hair box braids
-
-box braids for curly hair are the most popular hair styles for curly hair, which have been in use for many years and are very popular among both men and women. Most curly haired individuals have box braids on their hair. You can find a wide range of box braids for curly hair on the Internet.
-
-While box braids are often called for curly hair, the style is equally popular for straight hair. Curly hair tends to work better with a box braid, because the braid lies directly against the hair fibers. It's the close proximity to the cuticle of the hair that makes this style work so well. You also have to consider the materials you're working with.
-
-It's helpful to first determine whether or not you want a single braid, double braid, or twisted braid. The single braid is the simplest of the three types of box braids. To make the single braid, you just take a section of hair and braid it once. The double braid is a bit more complex. You start with a section of hair and braid it twice. Then you'll have to start all over again and braid it the same way the second time. This would give you a three-strand braid. The twisted braid is just as it sounds. You start with a section of hair, and you twist it around itself before braiding it. So, if your hair is relatively thin and fairly short, the braid will be relatively easy to do.
-
-As you can imagine, if your hair is thick or long, 4fefd39f24
-
-
-
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Photoshop Cs6 Response Code Generator.md b/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Photoshop Cs6 Response Code Generator.md
deleted file mode 100644
index 052f18a2ded71ea9509fbb939c220bc07d49d284..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Photoshop Cs6 Response Code Generator.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
Adobe Photoshop Cs6 Response Code Generator: How to Activate Photoshop Offline
-
Adobe Photoshop Cs6 is one of the most popular and powerful photo editing software in the world. However, if you want to use it on a computer that is not connected to the internet, you might face some challenges. How can you activate Photoshop without an internet connection? That's where Adobe Photoshop Cs6 Response Code Generator comes in handy.
What is Adobe Photoshop Cs6 Response Code Generator?
-
A tool that allows you to activate Photoshop Cs6 without an internet connection
-
Adobe Photoshop Cs6 Response Code Generator is a tool that helps you generate a code that you can use to activate Photoshop Cs6 on a computer that is permanently or temporarily offline. It is part of the offline activation process that Adobe offers for its products.
-
How does it work?
-
The offline activation process involves three steps:
-
-
Generating a Request Code on your offline computer
-
Generating a Response Code on an online computer
-
Entering the Response Code on your offline computer
-
-
Generating a Request Code on your offline computer
-
A Request Code is a unique code that identifies your computer and your product. You need to generate it on the computer where you want to install and activate Photoshop Cs6.
-
To generate a Request Code, follow these steps:
-
-
Install Photoshop Cs6 on your offline computer using the installer file or the DVD.
-
Launch Photoshop and follow the installation or product launch screens until you see a link that says "I cannot connect to the internet" or "Having trouble connecting to the internet". Click the link and follow the instructions on the subsequent screen to initiate offline activation.
-
You will see a screen that shows your serial number and a Request Code. Write down both of them. You will need them later.
-
-
Generating a Response Code on an online computer
-
A Response Code is a code that validates your Request Code and activates your product. You need to generate it on a computer that has an internet connection and access to www.adobe.com.
-
To generate a Response Code, follow these steps:
-
-
Switch to an online computer and navigate to www.adobe.com/go/getactivated.
-
Click Offline Activation.
-
Sign in with your Adobe ID.
-
Enter your Request Code and your serial number, then click Generate.
-
You will see a screen that shows your Response Code. Write it down. You will need it later.
-
-
Entering the Response Code on your offline computer
-
The final step is to enter the Response Code on your offline computer and complete the activation process.
-
To enter the Response Code, follow these steps:
-
-
Switch back to your offline computer.
-
Enter the Response Code on the installation or product launch screen where you generated the Request Code.
-
Click Activate.
-
You will see a screen that confirms that your product has been activated successfully.
-
-
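Conceptually, this Request Code / Response Code exchange is a simple challenge-and-response: the offline machine produces a challenge, the vendor's side answers it, and the offline machine accepts only a matching answer. The toy Python sketch below illustrates that general pattern only; it is not Adobe's actual algorithm, and every name in it (VENDOR_SECRET, make_request_code, and so on) is invented for illustration:

```python
# Toy challenge-response activation sketch -- NOT Adobe's real scheme.
import hashlib
import hmac
import uuid

VENDOR_SECRET = b"hypothetical-vendor-key"   # in reality known only to the vendor

def make_request_code(serial: str) -> str:
    """Offline machine: derive a challenge from the serial number and a machine ID."""
    machine_id = uuid.getnode()              # e.g. the network card's MAC as an int
    return hashlib.sha256(f"{serial}:{machine_id}".encode()).hexdigest()[:16].upper()

def make_response_code(request_code: str) -> str:
    """Vendor side: answer the challenge with a keyed MAC of the Request Code."""
    return hmac.new(VENDOR_SECRET, request_code.encode(), hashlib.sha256).hexdigest()[:16].upper()

def activate(request_code: str, response_code: str) -> bool:
    """Offline machine: accept only a matching response. A real product would use an
    asymmetric signature so the vendor's secret never has to ship with the application."""
    return hmac.compare_digest(response_code, make_response_code(request_code))
```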
Why use Adobe Photoshop Cs6 Response Code Generator?
-
Benefits of offline activation
-
Offline activation has some advantages over online activation, such as:
-
How to get Adobe Photoshop Cs6 activation code for free
-Adobe Photoshop Cs6 serial number and crack download
-Adobe Photoshop Cs6 keygen online no survey
-Adobe Photoshop Cs6 license key generator mac
-Adobe Photoshop Cs6 offline activation response code
-Adobe Photoshop Cs6 product key finder windows
-Adobe Photoshop Cs6 registration code generator software
-Adobe Photoshop Cs6 activation key generator rar
-Adobe Photoshop Cs6 serial number generator online free
-Adobe Photoshop Cs6 crack file download 64 bit
-Adobe Photoshop Cs6 patch download full version
-Adobe Photoshop Cs6 activation code generator zip
-Adobe Photoshop Cs6 key generator download no password
-Adobe Photoshop Cs6 serial number generator 2023
-Adobe Photoshop Cs6 license key generator online free
-Adobe Photoshop Cs6 offline activation code generator
-Adobe Photoshop Cs6 product key generator for windows 10
-Adobe Photoshop Cs6 registration code generator download
-Adobe Photoshop Cs6 activation key generator online
-Adobe Photoshop Cs6 serial number generator free download
-Adobe Photoshop Cs6 crack download for windows 7
-Adobe Photoshop Cs6 patch download for mac
-Adobe Photoshop Cs6 activation code generator online free
-Adobe Photoshop Cs6 key generator online no download
-Adobe Photoshop Cs6 serial number generator mac os x
-Adobe Photoshop Cs6 license key generator download free
-Adobe Photoshop Cs6 offline activation response code generator
-Adobe Photoshop Cs6 product key generator online free
-Adobe Photoshop Cs6 registration code generator online no survey
-Adobe Photoshop Cs6 activation key generator free download
-Adobe Photoshop Cs6 serial number generator 2024
-Adobe Photoshop Cs6 crack download for windows 10 64 bit
-Adobe Photoshop Cs6 patch download for windows 8.1
-Adobe Photoshop Cs6 activation code generator for mac
-Adobe Photoshop Cs6 key generator free download no survey
-Adobe Photoshop Cs6 serial number generator online no verification
-Adobe Photoshop Cs6 license key generator online no survey
-Adobe Photoshop Cs6 offline activation code generator free download
-Adobe Photoshop Cs6 product key generator for mac os x
-Adobe Photoshop Cs6 registration code generator free online
-Adobe Photoshop Cs6 activation key generator 2023
-Adobe Photoshop Cs6 serial number generator for windows 10 64 bit
-Adobe Photoshop Cs6 crack download for mac os x 10.15 catalina
-Adobe Photoshop Cs6 patch download for windows 10 pro
-Adobe Photoshop Cs6 activation code generator 2024
-Adobe Photoshop Cs6 key generator online free no verification
-Adobe Photoshop Cs6 serial number generator for mac os x big sur
-Adobe Photoshop Cs6 license key generator 2023
-Adobe Photoshop Cs6 offline activation response code generator online
-Adobe Photoshop Cs6 product key generator 2024
-
-
No need to connect your computer to the internet: This can be useful if you work in a secure environment like government, banking, etc. where internet access is restricted or unavailable. It can also be helpful if you travel frequently or have unreliable internet connection.
-
No risk of losing your activation due to network issues: Sometimes, online activation can fail or expire due to server problems, firewall settings, proxy configurations, etc. Offline activation avoids these issues by using codes instead of online verification.
-
No need to sign in with your Adobe ID every time you launch Photoshop: Online activation requires you to sign in with your Adobe ID every time you start Photoshop. This can be inconvenient if you forget your password or don't have access to your email account. Offline activation only requires you to sign in once when you generate the Response Code.
-
-
Drawbacks of offline activation
-
Offline activation also has some disadvantages compared to online activation, such as:
-
-
Limited time to complete the process: You have 7 days from the first launch of Photoshop to complete the offline activation process. If you don't do it within this time frame, Photoshop will stop working until you activate it online or offline again.
-
Need to have access to an online device and your product's serial number: You need another device that has an internet connection and can access www.adobe.com/go/getactivated. You also need your product's serial number, which can be found on the DVD case or in your order confirmation email.
-
Need to repeat the process if you reinstall Photoshop or change your hardware configuration: If you reinstall Photoshop or change some components of your computer (such as hard drive, motherboard, etc.), you will need to generate a new Request Code and a new Response Code and enter them again. This can be tedious if you do it often.
-
-
How to use Adobe Photoshop Cs6 Response Code Generator?
-
Step-by-step guide with screenshots
-
To help you understand how to use Adobe Photoshop Cs6 Response Code Generator better, here are some screenshots that illustrate each step of the process:
-
Launching Photoshop and initiating offline activation
-
-
-
Generating a Request Code on your offline computer
-
-
Generating a Response Code on an online computer
-
-
Entering the Response Code on your offline computer
-
-
Conclusion
-
Adobe Photoshop Cs6 Response Code Generator is a useful tool that allows you to activate Photoshop Cs6 on a computer that is offline. It works by generating a Request Code on your offline computer and a Response Code on an online computer and entering them on your offline computer. This way, you can enjoy the features and benefits of Photoshop Cs6 without an internet connection.
-
However, offline activation also has some drawbacks, such as limited time to complete the process, need to have access to an online device and your product's serial number, and need to repeat the process if you reinstall Photoshop or change your hardware configuration. Therefore, you should weigh the pros and cons of offline activation before using it.
-
If you have any questions or problems with Adobe Photoshop Cs6 Response Code Generator, you can contact Adobe customer support or visit their help page for more information. You can also check out some of the FAQs below for quick answers.
-
We hope this article has helped you understand how to use Adobe Photoshop Cs6 Response Code Generator and activate Photoshop offline. If you liked this article, please share it with your friends and colleagues who might find it useful. Thank you for reading!
-
FAQs
-
-
Q: Where can I find my serial number for Photoshop Cs6?
-
A: You can find your serial number on the DVD case or in your order confirmation email. If you have registered your product online, you can also find it in your Adobe account.
-
Q: What if I lose my Request Code or Response Code?
-
A: If you lose your Request Code or Response Code, you can generate them again by following the same steps as before. However, make sure you do it within 7 days of the first launch of Photoshop or else you will need to activate it online.
-
Q: What if I get an error message when entering the Response Code?
-
A: If you get an error message when entering the Response Code, make sure you have entered it correctly and that it matches the Request Code. If the problem persists, contact Adobe customer support for assistance.
-
Q: Can I use Adobe Photoshop Cs6 Response Code Generator for other Adobe products?
-
A: No, Adobe Photoshop Cs6 Response Code Generator is only for Photoshop Cs6. For other Adobe products, you need to use their respective response code generators or online activation methods.
-
Q: Can I use Adobe Photoshop Cs6 Response Code Generator on multiple computers?
-
A: No, Adobe Photoshop Cs6 Response Code Generator is only for one computer and one product. If you want to use Photoshop Cs6 on another computer, you need to deactivate it on the first computer and activate it on the second computer using a different response code generator or online activation method.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dekart Private Disk 2.10 Full 26 - - Windows.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dekart Private Disk 2.10 Full 26 - - Windows.md
deleted file mode 100644
index da8c8cad99b038c3dac59b162ad599109d92edb6..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Dekart Private Disk 2.10 Full 26 - - Windows.md
+++ /dev/null
@@ -1,153 +0,0 @@
-
-
How to Use Dekart Private Disk 2.10 Full 26 for Data Encryption
-
Data encryption is a process of transforming your data into an unreadable form that can only be accessed by authorized parties who have the correct key or password. Data encryption is essential for protecting your privacy and security, especially when you store or transfer sensitive information on your computer or other devices.
There are many software tools that can help you encrypt your data, but one of the best ones is Dekart Private Disk 2.10 Full 26. This is a disk encryption software that creates one or more virtual disks on your hard drive and/or other external storage devices. These virtual disks are encrypted with strong NIST-certified AES 256-bit encryption, which is one of the most secure encryption algorithms available today.
-
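To make the idea of AES-256 encryption concrete, here is a minimal Python sketch using the third-party cryptography package (an assumption made purely for illustration; Dekart's own closed-source engine is not shown or implied here):

```python
# Minimal AES-256-GCM round trip with the `cryptography` package.
# Conceptual illustration only; this is not Dekart Private Disk's engine.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # a 256-bit key, as discussed above
nonce = os.urandom(12)                      # must be unique per message
secret = b"contents of a confidential file"

ciphertext = AESGCM(key).encrypt(nonce, secret, None)   # unreadable without the key
assert AESGCM(key).decrypt(nonce, ciphertext, None) == secret
```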
In this article, we will show you how to use Dekart Private Disk 2.10 Full 26 for data encryption, and explain its features, benefits, and drawbacks.
-
How to Download and Install Dekart Private Disk 2.10 Full 26
-
The first step to use Dekart Private Disk 2.10 Full 26 is to download and install it on your computer. You can download it from the official website, where you can also find other language versions and more information about the software.
-
The software is not free, but you can try it for 30 days before buying it. You need to pay USD 65.00 for a personal/business license or USD 45.00 for a student/educational license.
-
dekart private disk 2.10 crack download
-dekart private disk 2.10 serial key
-dekart private disk 2.10 license code
-dekart private disk 2.10 activation key
-dekart private disk 2.10 keygen
-dekart private disk 2.10 patch
-dekart private disk 2.10 registration code
-dekart private disk 2.10 full version free download
-dekart private disk 2.10 portable
-dekart private disk 2.10 review
-dekart private disk 2.10 tutorial
-dekart private disk 2.10 user manual
-dekart private disk 2.10 alternative
-dekart private disk 2.10 vs veracrypt
-dekart private disk 2.10 vs bitlocker
-dekart private disk 2.10 vs truecrypt
-dekart private disk 2.10 vs axcrypt
-dekart private disk 2.10 vs folder lock
-dekart private disk 2.10 vs gilisoft file lock pro
-dekart private disk 2.10 vs bestcrypt container encryption
-how to use dekart private disk 2.10
-how to install dekart private disk 2.10
-how to uninstall dekart private disk 2.10
-how to update dekart private disk 2.10
-how to encrypt files with dekart private disk 2.10
-how to decrypt files with dekart private disk 2.10
-how to create virtual disks with dekart private disk 2.10
-how to mount virtual disks with dekart private disk 2.10
-how to unmount virtual disks with dekart private disk 2.10
-how to password protect virtual disks with dekart private disk 2.10
-how to hide virtual disks with dekart private disk 2.10
-how to backup virtual disks with dekart private disk 2.10
-how to restore virtual disks with dekart private disk 2.10
-how to resize virtual disks with dekart private disk 2.10
-how to clone virtual disks with dekart private disk 2.10
-how to format virtual disks with dekart private disk 2.10
-how to change drive letters of virtual disks with dekart private disk 2.10
-how to change passwords of virtual disks with dekart private disk 2.10
-how to change encryption algorithms of virtual disks with dekart private disk 2.10
-how to change encryption modes of virtual disks with dekart private disk 2.10
-how to change hash algorithms of virtual disks with dekart private disk 2.10
-how to change key sizes of virtual disks with dekart private disk 2.10
-how to change salt lengths of virtual disks with dekart private disk 2.10
-how to change iterations of virtual disks with dekart private disk 2.10
-how to change wipe methods of virtual disks with dekart private disk 2.10
-how to change compression settings of virtual disks with dekart private disk 2.10
-how to change auto-mount settings of virtual disks with dekart private disk 2.10
-how to change hotkeys of virtual disks with dekart private disk 2.10
-how to change tray icon settings of virtual disks with dekart private disk 2.10
-
The software requires Windows Vista or higher to run, and it does not support older versions of Windows or other operating systems.
-
To install the software, you need to run the downloaded file and follow the instructions on the screen. The installation process is simple and fast, and it does not require any special skills or settings.
-
How to Create and Manage Encrypted Disks with Dekart Private Disk 2.10 Full 26
-
After installing the software, you can start creating and managing encrypted disks with Dekart Private Disk 2.10 Full 26. Here are the steps to follow:
-
-
Launch the software by clicking on its icon on your desktop or in your start menu.
-
Click on the "Create" button to create a new encrypted disk. You will be asked to choose a name, a size, a drive letter, and a password for your disk. You can also choose to use a key file instead of or in addition to a password for extra security.
-
Click on the "OK" button to create your disk. The software will create a file on your computer that will act as your encrypted disk. You can store this file anywhere you want, such as on your hard drive, on an external device, or on a cloud service.
-
Your encrypted disk will appear as a new drive letter in your system, and you can access it like any other drive. You can store any type of files on it, such as documents, photos, videos, music, etc.
-
To mount or unmount your disk, you need to enter your password or use your key file every time. When you mount your disk, the data on it is automatically decrypted when you read it, and encrypted when you write it.
-
To manage your disks, you can use the main window of the software, where you can see all your disks and their status. You can also right-click on any disk and choose from various options, such as rename, resize, change password, change key file, backup, restore, etc.
-
-
What are the Features and Benefits of Dekart Private Disk 2.10 Full 26?
-
Dekart Private Disk 2.10 Full 26 has many features and benefits that make it a great choice for data encryption, such as:
-
-
It uses strong NIST-certified AES 256-bit encryption, which is one of the most secure encryption algorithms available today.
-
It has a simple and straightforward interface that makes it easy to use for anyone.
-
It creates multiple encrypted disks for storage of confidential information.
-
It supports various types of media and devices, such as HDD, FDD, CD, CD/R, CD/RW, MO, MD, ZIP-disks, flash drives, all types of flash memory cards, PDAs, and even digital cameras.
-
It can run from portable media without having to be installed on the computer.
-
It has a unique feature called Disk Firewall that protects your data from illegal copying, viruses, spyware, and other malware by allowing only whitelisted applications to access the encrypted disk.
-
It offers free unlimited support and updates from the developer.
-
-
What are the Drawbacks of Dekart Private Disk 2.10 Full 26?
-
Dekart Private Disk 2.10 Full 26 also has some drawbacks that you should be aware of before using it, such as:
-
-
It is not free. You need to pay USD 65.00 for a personal/business license or USD 45.00 for a student/educational license.
-
It requires Windows Vista or higher to run. It does not support older versions of Windows or other operating systems.
-
It does not offer cloud backup or synchronization options for your encrypted disks. You need to backup your files manually or use another service for that purpose.
-
It does not have a password recovery option. If you forget your password or lose your key file (if you use one), you will lose access to your encrypted disk and all your files on it.
-
-
Conclusion
-
Dekart Private Disk 2.10 Full 26 is a disk encryption software that provides strong and reliable protection for your private information. It has many features and benefits that make it easy and convenient to use. However, it also has some drawbacks that might limit its suitability for some users.
-
If you are looking for a way to encrypt your data on various types of media and devices with high security and performance standards, then Dekart Private Disk 2.10 Full 26 might be a good option for you.
-
If you want to try it before buying it, you can download a free trial version from the official website. The trial version has all the features of the full version but expires after 30 days.
-
How to Use Disk Firewall with Dekart Private Disk 2.10 Full 26
-
One of the most unique and powerful features of Dekart Private Disk 2.10 Full 26 is Disk Firewall. This is a data protection mechanism that guards your data from Trojans, viruses, spyware, and other malware that might try to access your encrypted disk.
-
Disk Firewall controls which applications are allowed to access the encrypted disk. If a specific application is not found in the whitelist, it will be unable to read or change the confidential information stored on the encrypted disk.
-
Disk Firewall also verifies the authenticity of trusted applications, ensuring that they were not modified after they were added to the whitelist. This way, Dekart Private Disk 2.10 Full 26 protects your data from threats that come from compromised trusted applications (such as a program infected with a virus).
-
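In essence, the mechanism described above comes down to two checks: is the program on the whitelist at all, and does its executable still hash to the value recorded when it was trusted? The sketch below is a minimal Python illustration of that idea only; it is not Dekart's implementation, and the path and hash in it are placeholders:

```python
# Minimal illustration of an application whitelist with integrity checking.
# NOT Dekart's implementation; the path and hash below are placeholders.
import hashlib
from pathlib import Path

# Maps a trusted executable to the SHA-256 digest recorded when it was whitelisted.
WHITELIST = {
    r"C:\Program Files\Editor\editor.exe": "0123abcd...",  # placeholder digest
}

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def may_access_disk(exe_path: str) -> bool:
    expected = WHITELIST.get(exe_path)
    if expected is None:
        return False                         # unknown application: deny access
    return sha256_of(exe_path) == expected   # deny if the binary was modified
```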
To use Disk Firewall with Dekart Private Disk 2.10 Full 26, you need to follow these steps:
-
-
Launch the software and mount your encrypted disk.
-
Click on the "Disk Firewall" button on the main window.
-
You will see a list of applications that are allowed to access your encrypted disk. You can add new applications by clicking on the "Add" button and browsing for the executable file.
-
You can also remove applications from the list by selecting them and clicking on the "Remove" button.
-
You can enable or disable the self-learning mode by checking or unchecking the box at the bottom of the window. The self-learning mode automatically adds new applications to the whitelist when they try to access your encrypted disk for the first time.
-
You can also enable or disable the authenticity verification by checking or unchecking the box at the bottom of the window. The authenticity verification checks if the trusted applications were modified after they were added to the whitelist.
-
Click on the "OK" button to save your settings and close the window.
-
-
How to Backup and Restore Your Encrypted Disks with Dekart Private Disk 2.10 Full 26
-
Another important feature of Dekart Private Disk 2.10 Full 26 is backup and restore. This allows you to create copies of your encrypted disks and store them in a safe location, such as another device or a cloud service.
-
Backup and restore can help you prevent data loss in case of hardware failure, theft, or accidental deletion of your encrypted disks. You can also use backup and restore to transfer your encrypted disks to another computer or device.
-
To backup and restore your encrypted disks with Dekart Private Disk 2.10 Full 26, you need to follow these steps:
-
-
Launch the software and mount your encrypted disk.
-
Right-click on your encrypted disk and choose "Backup" from the menu.
-
You will be asked to choose a destination for your backup file. You can select any folder or device that has enough space to store your backup file.
-
Click on the "OK" button to start the backup process. The software will create a file with the same name as your encrypted disk and a .bak extension.
-
To restore your encrypted disk from a backup file, right-click on any empty slot on the main window and choose "Restore" from the menu.
-
You will be asked to choose a backup file to restore from. You can browse for any file with a .bak extension that was created by Dekart Private Disk 2.10 Full 26.
-
Click on the "OK" button to start the restore process. The software will create a new encrypted disk with the same name and password as the original one.
-
-
How to Run Dekart Private Disk 2.10 Full 26 from Portable Media
-
One of the advantages of Dekart Private Disk 2.10 Full 26 is that it can run from portable media, such as USB flash drives, external hard disks, flash memory cards, DVDs, mp3 players, and even digital cameras. This means that you can access your encrypted data anywhere, even if you don't have administrative rights on the computer you are using.
-
To run Dekart Private Disk 2.10 Full 26 from portable media, you need to follow these steps:
-
-
Copy the installation file of Dekart Private Disk 2.10 Full 26 to your portable media.
-
Run the installation file from your portable media and choose the option to install the software on the same media.
-
Follow the instructions on the screen to complete the installation process.
-
Copy your encrypted disk files to your portable media.
-
To launch the software from your portable media, run the file named "PrivateDisk.exe" from the folder where you installed it.
-
You will see a window with all your encrypted disks and their status. You can mount or unmount them by entering your password or using your key file.
-
To close the software, click on the "Exit" button on the main window. The software will automatically unmount all your encrypted disks and invoke the safe hardware removal procedure for your portable media.
-
-
How to Compare Dekart Private Disk 2.10 Full 26 with Other Disk Encryption Software
-
Dekart Private Disk 2.10 Full 26 is not the only disk encryption software available on the market. There are many other alternatives that offer similar or different features and benefits for data encryption.
-
To compare Dekart Private Disk 2.10 Full 26 with other disk encryption software, you need to consider several factors, such as:
-
-
The encryption algorithm and strength. Dekart Private Disk 2.10 Full 26 uses NIST-certified AES 256-bit encryption, which is one of the most secure encryption algorithms available today. Other disk encryption software may use different algorithms or lower encryption strength.
-
The compatibility and portability. Dekart Private Disk 2.10 Full 26 supports various types of media and devices, and can run from portable media without having to be installed on the computer. Other disk encryption software may have limited compatibility or portability options.
-
The features and functionality. Dekart Private Disk 2.10 Full 26 has many features and functionality that make it easy and convenient to use, such as Disk Firewall, backup and restore, authenticity verification, self-learning mode, etc. Other disk encryption software may have different or fewer features and functionality.
-
The price and support. Dekart Private Disk 2.10 Full 26 is not free, but it offers free unlimited support and updates from the developer. Other disk encryption software may have different pricing or support policies.
-
-
Some examples of other disk encryption software that you can compare with Dekart Private Disk 2.10 Full 26 are:
-
-
VeraCrypt: a free and open-source disk encryption software that supports various encryption algorithms and hidden volumes.
-
BitLocker: a built-in disk encryption feature of Windows that supports AES 128-bit or AES 256-bit encryption and TPM chips.
-
AxCrypt: a simple and user-friendly disk encryption software that supports AES 128-bit or AES 256-bit encryption and cloud integration.
-
-
Conclusion
-
Dekart Private Disk 2.10 Full 26 is a disk encryption software that provides strong and reliable protection for your private information. It has many features and benefits that make it easy and convenient to use, such as Disk Firewall, backup and restore, authenticity verification, self-learning mode, etc. It also supports various types of media and devices, and can run from portable media without having to be installed on the computer.
-
However, Dekart Private Disk 2.10 Full 26 also has some drawbacks that you should be aware of before using it, such as the price, the compatibility, the lack of cloud backup or synchronization options, and the lack of password recovery option.
-
If you are looking for a way to encrypt your data on various types of media and devices with high security and performance standards, then Dekart Private Disk 2.10 Full 26 might be a good option for you. You can download a free trial version from the official website and try it for yourself.
679dcb208e
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Avast Internet Security 19.8.2393 Avec Cle De Licence.md b/spaces/tioseFevbu/cartoon-converter/scripts/Avast Internet Security 19.8.2393 Avec Cle De Licence.md
deleted file mode 100644
index d16d27f030d9a18a6fddfd458ef0cabc42f8f52e..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Avast Internet Security 19.8.2393 Avec Cle De Licence.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
How to Get Avast Internet Security 19.8.2393 With a Free License Key
-
Avast Internet Security 19.8.2393 is a powerful antivirus software that protects your PC from viruses, malware, ransomware, phishing, and other online threats. It also offers advanced features such as firewall, webcam protection, password manager, and secure browser.
-
Avast Internet Security 19.8.2393 Avec Cle De Licence
If you want to enjoy the full benefits of Avast Internet Security 19.8.2393, you need to activate it with a valid license key. But how can you get a free license key without using any cracks or keygens that may harm your system or expose you to legal risks?
-
In this article, we will show you how to get a free Avast license key for 2022 by using the official methods provided by Avast. You don't need to download any illegal or unsafe software, just follow these simple steps and you will be able to use Avast Internet Security 19.8.2393 for free.
-
Method 1: Use the Avast Referral Program
-
One of the easiest ways to get a free Avast license key is to use the Avast referral program[^1^]. This program allows you to invite your friends to install Avast products and earn rewards for each successful referral.
-
Here's how it works:
-
-
If you already have Avast Free Antivirus or Avast Premium Security installed on your PC, open the menu and click on the star icon ("Get Rewards"). If you don't have Avast yet, you can download it for free from here.
-
Share your unique referral codes with as many friends as you want. Once they install the software, your rewards will appear in the Referral Program.
-
Enjoy the benefits of the full versions of Avast Premium Security or Avast Ultimate for Windows 10, 11, 8, 7, Mac, iOS, and Android. Yes, the best Avast security products, for free. The more people you refer, the more rewards you get.
-
-
According to the Avast website[^1^], these are the rewards you can get:
-
-
| Avast Premium Security | Avast Ultimate |
| --- | --- |
| Complete protection against all online threats, including fake websites and ransomware. | A comprehensive package that includes our premium security, privacy, and optimization apps. |
| 1 recommendation: 6 months free for 1 PC | 1 recommendation: 6 months free for 1 PC |
| 3 recommendations: 6 months free for 10 devices | 3 recommendations: 6 months free for 10 devices |
-
-
Method 2: Use the Avast Free Trial
-
Another way to get a free Avast license key is to use the Avast free trial[^2^]. This method allows you to try out any of the Avast products for 30 days without paying anything.
-
-
Here's how it works:
-
-
Go to this page and choose the product you want to try out. You can choose from Avast AntiTrack Premium, Avast Cleanup Premium, or Avast Driver Updater.
-
Download and install the product on your PC. You don't need to enter any credit card information or personal details.
-
Enjoy the full features of the product for 30 days. You can cancel anytime before the trial ends if you don't want to continue using it.
-
-
Note that this method only works for one product at a time and only for new users who have not used any of these products before.
-
Method 3: Use an Activation Code or License Key from TechMaina
-
The last method to get a free Avast license key is to use an activation code or license key from TechMaina[^3^]. This is
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Densha De Go Ps2 Iso.md b/spaces/tioseFevbu/cartoon-converter/scripts/Densha De Go Ps2 Iso.md
deleted file mode 100644
index 73078c4f3bff497f5c3f15539b452a76b46f30be..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Densha De Go Ps2 Iso.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
Densha de GO! PS2: A Train Simulator Game for Japan Lovers
-
Densha de GO! is a series of train simulator games developed by Taito and Ongakukan that lets you experience the thrill of driving various trains in Japan. The series started in 1996 as an arcade game and has since been ported to various platforms, including the PlayStation 2 (PS2).
-
If you are a fan of trains, Japan, or both, you might want to check out some of the Densha de GO! games for PS2. There are several titles available, each featuring different routes, trains, and scenarios. You can download them as ISO files and play them on your PS2 emulator or console.
Here are some of the Densha de GO! games for PS2 that you can find online:
-
-
Densha de GO! 3: This game features the Yamanote Line, the Keihin-Tohoku Line, and the Chuo-Sobu Line in Tokyo. You can drive 12 different types of trains and enjoy realistic graphics and sounds.
-
Densha de GO! Final: This game is the last installment of the series and features 23 routes from all over Japan. You can drive 39 different types of trains and customize your own scenarios.
-
Densha de GO! Professional 2: This game is aimed at hardcore train enthusiasts and features 16 routes from Tokyo, Osaka, Nagoya, and Fukuoka. You can drive 24 different types of trains and adjust various settings such as speed, brake, and signal.
-
Densha de GO! Shinkansen Sanyo Shinkansen-Hen: This game features the Sanyo Shinkansen line that connects Osaka and Fukuoka. You can drive 6 different types of bullet trains and enjoy high-speed action.
-
Train Simulator + Densha de GO! Tokyo Kyuukou Hen: This game is a collaboration between Ongakukan's Train Simulator series and Taito's Densha de GO! series. It features the Tokyo Line, the Den-en-toshi Line, and the Oimachi Line in Tokyo. You can drive 7 different types of trains and switch between two modes: Train Simulator mode and Densha de GO! mode.
-
-
To download these games, you can visit some of the websites that offer PS2 ISO files, such as Densha de GO! Collection or Archive.org. Make sure you have enough space on your device and a reliable internet connection. You will also need a PS2 emulator or a modded PS2 console to play these games.
-
Densha de GO! is a fun and unique way to experience Japan's train culture. Whether you want to relax with a scenic ride or challenge yourself with a realistic simulation, you will find something to enjoy in this series. So what are you waiting for? Download your favorite Densha de GO! game for PS2 today and start your train adventure!
-
-
If you want to learn more about Densha de GO! and its history, you can also check out some of the books and documentaries that have been made about the series. For example, you can read Densha de GO! The Complete Guide, a book that covers all the games and routes in the series. You can also watch Densha de GO! The Movie, a documentary that follows the development and impact of the series. You can find these materials on online platforms such as Amazon or YouTube.
-
Densha de GO! is more than just a game. It is a cultural phenomenon that has inspired millions of people to appreciate and enjoy trains. It has also influenced other media and industries, such as anime, manga, music, and tourism. Densha de GO! is a testament to the passion and creativity of its developers and fans.
-
-
So if you are looking for a new and exciting way to experience Japan, why not give Densha de GO! a try? You might discover a new hobby or a new perspective on life. Densha de GO! is not just a game. It is a way of life.
e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Discrete Mathematics With Graph Theory 3rd Edition Pdf 190.md b/spaces/tioseFevbu/cartoon-converter/scripts/Discrete Mathematics With Graph Theory 3rd Edition Pdf 190.md
deleted file mode 100644
index fbad9c09e55078dd640561971127133ccfb2b934..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Discrete Mathematics With Graph Theory 3rd Edition Pdf 190.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
How to Download Discrete Mathematics with Graph Theory 3rd Edition PDF for Free
-
Discrete mathematics is the study of finite and discrete structures, such as sets, logic, proofs, algorithms, graphs, and cryptography. It is an essential tool for computer science, engineering, and many other fields. Graph theory is a branch of discrete mathematics that deals with the properties and applications of graphs, such as networks, social media, scheduling, and optimization.
-
If you are looking for a textbook that covers both discrete mathematics and graph theory in a clear and rigorous way, you may want to check out Discrete Mathematics with Graph Theory by Edgar G. Goodaire and Michael M. Parmenter. This book is suitable for a first or second year undergraduate course for math majors, especially those who will go on to teach. It covers topics such as mathematical statements, sets, counting, sequences, induction, logic, proofs, trees, planar graphs, coloring, Euler paths and circuits, matching in bipartite graphs, and more. It also includes many examples, exercises, and historical notes.
-
discrete mathematics with graph theory 3rd edition pdf 190
The third edition of this book was published in 1998 by Prentice Hall and has 527 pages. It is available in hardcover format and has an ISBN of 0136020798. However, if you want to save some money and access the book online, you can also download it as a PDF file for free from various sources on the internet.
-
One of the sources where you can find the PDF file of this book is Archive.org[^1^], a website that provides free access to millions of books, movies, music, and other digital content. To download the PDF file from Archive.org[^1^], you can follow these steps:
On the right side of the page, under the Download Options section, click on PDF.
-
A new tab will open with the PDF file of the book. You can either read it online or save it to your device by clicking on the download icon at the top right corner of the page.
-
-
Another source where you can find the PDF file of this book is Scribd.com[^3^], a website that allows users to upload and share documents, books, audiobooks, and other digital content. To download the PDF file from Scribd.com[^3^], you can follow these steps:
On the right side of the page, under the About this document section, click on Download & Read.
-
You will be asked to sign up or log in to Scribd.com[^3^]. You can either create a free account or use your Facebook or Google account to sign in.
-
After signing in, you will be able to download the PDF file of the book by clicking on the download icon at the top right corner of the page.
-
-
A third source where you can find the PDF file of this book is Open Textbook Library[^2^], a website that provides free access to high-quality open textbooks for various subjects. To download the PDF file from Open Textbook Library[^2^], you can follow these steps:
On the left side of the page, under the Formats Available section, click on Online PDF.
-
A new tab will open with the PDF file of the book. You can either read it online or save it to your 81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Mahou Tsukai No Yoru (Witch On The Holy Night) Full Extra Quality Repack.md b/spaces/tioseFevbu/cartoon-converter/scripts/Mahou Tsukai No Yoru (Witch On The Holy Night) Full Extra Quality Repack.md
deleted file mode 100644
index 64ee5ed2f3c9f5070626cbe463ceac1416b06abb..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Mahou Tsukai No Yoru (Witch On The Holy Night) Full Extra Quality Repack.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
Mahou Tsukai no Yoru (Witch on the Holy Night) FULL repack: A Visual Novel by Type-Moon
-
Mahou Tsukai no Yoru (Witch on the Holy Night) is a visual novel developed and published by Type-Moon, the creators of Fate/stay night and Tsukihime. It is based on a light novel written by Kinoko Nasu in 1996, and it tells the story of Aoko Aozaki, a high school student who inherits the title of the Fifth Magic from her grandfather. She moves into an old mansion with Alice Kuonji, a mysterious witch who teaches her magecraft, and Soujuurou Shizuki, a young man who gets involved in their magical affairs.
-
The visual novel was first released for Windows on April 12, 2012 in Japan[^3^]. It features beautiful artwork by Takashi Takeuchi, the co-founder of Type-Moon, and a captivating soundtrack by Hideyuki Fukasawa, who also composed for Fate/Zero and Fate/stay night: Unlimited Blade Works. The game has multiple endings and branches depending on the choices made by the player. It also has references and connections to other Type-Moon works, such as Kara no Kyoukai and Shingetsutan Tsukihime.
-
Mahou Tsukai no Yoru (Witch on the Holy Night) FULL repack
Mahou Tsukai no Yoru (Witch on the Holy Night) FULL repack is a version of the game that includes all the patches and updates released by Type-Moon, as well as an English translation by Commie Subs. It also has some extra features, such as a gallery mode, a music player, and a voice patch that adds voice acting to the game. The repack is available for download from various sources online, but it is recommended to buy the original game from Type-Moon's official website to support the developers.
-
If you are a fan of Type-Moon's works or visual novels in general, you should definitely check out Mahou Tsukai no Yoru (Witch on the Holy Night) FULL repack. It is a captivating story of magic, mystery, and romance that will keep you hooked until the end.
-
-
The visual novel has three main characters: Aoko Aozaki, Alice Kuonji, and Soujuurou Shizuki. Aoko is a cheerful and energetic girl who is the heir of the Aozaki family, one of the oldest and most powerful families of magi in Japan. She possesses the Fifth Magic, a mysterious and rare form of magecraft that allows her to manipulate time. Alice is a calm and stoic witch who lives in isolation in the Kuonji mansion. She is a natural-born magus who can use various types of magic without needing a magic crest or a catalyst. She is also an expert in puppetry and doll-making. Soujuurou is a kind and gentle boy who has no memories of his past or his family. He has a strong affinity for swords and martial arts, and he can sense the presence of magic. He becomes involved with Aoko and Alice after witnessing them use magecraft.
-
The visual novel also has several secondary characters who play important roles in the story. Touko Aozaki is Aoko's older sister and a rival magus who seeks to uncover the secrets of the Fifth Magic. She is a master of puppetry and rune magic, and she can create artificial bodies for herself. Lugh Beowulf is a magus from Ireland who works as Touko's assistant. He is a descendant of the legendary hero Beowulf, and he wields a powerful spear called Gae Bolg. Tobimaru Tsukiji is a classmate and friend of Aoko who has a crush on her. He is a cheerful and friendly boy who likes to tease Soujuurou. Kojika Kumari is another classmate and friend of Aoko who admires Alice. She is a timid and shy girl who likes to read books and collect dolls.
-
The visual novel explores the themes of magic, mystery, and romance as the characters face various challenges and enemies in their quest to protect Misaki town and each other. The story also reveals the origins and secrets of the Fifth Magic, as well as the connections between Mahou Tsukai no Yoru (Witch on the Holy Night) and other Type-Moon works.
`;
- const params = new URLSearchParams({
- title: promptTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/tomofi/MMOCR/mmocr/core/mask.py b/spaces/tomofi/MMOCR/mmocr/core/mask.py
deleted file mode 100644
index fd4689b8c1624f071c92012e79f236434768e591..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/mmocr/core/mask.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import cv2
-import numpy as np
-
-import mmocr.utils as utils
-
-
-def points2boundary(points, text_repr_type, text_score=None, min_width=-1):
- """Convert a text mask represented by point coordinates sequence into a
- text boundary.
-
- Args:
- points (ndarray): Mask index of size (n, 2).
- text_repr_type (str): Text instance encoding type
- ('quad' for quadrangle or 'poly' for polygon).
- text_score (float): Text score.
- min_width (int): Minimum length of the shorter side of the minimum-area rectangle for a 'quad' boundary to be kept (the default of -1 keeps all); unused for 'poly'.
-
- Returns:
- boundary (list[float]): The text boundary point coordinates (x, y)
- list. Return None if no text boundary found.
- """
- assert isinstance(points, np.ndarray)
- assert points.shape[1] == 2
- assert text_repr_type in ['quad', 'poly']
- assert text_score is None or 0 <= text_score <= 1
-
- if text_repr_type == 'quad':
- rect = cv2.minAreaRect(points)
- vertices = cv2.boxPoints(rect)
- boundary = []
- if min(rect[1]) > min_width:
- boundary = [p for p in vertices.flatten().tolist()]
-
- elif text_repr_type == 'poly':
-
- height = np.max(points[:, 1]) + 10
- width = np.max(points[:, 0]) + 10
-
- mask = np.zeros((height, width), np.uint8)
- mask[points[:, 1], points[:, 0]] = 255
-
- contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
- cv2.CHAIN_APPROX_SIMPLE)
- boundary = list(contours[0].flatten().tolist())
-
- if text_score is not None:
- boundary = boundary + [text_score]
- if len(boundary) < 8:
- return None
-
- return boundary
-
-
-def seg2boundary(seg, text_repr_type, text_score=None):
- """Convert a segmentation mask to a text boundary.
-
- Args:
- seg (ndarray): The segmentation mask.
- text_repr_type (str): Text instance encoding type
- ('quad' for quadrangle or 'poly' for polygon).
- text_score (float): The text score.
-
- Returns:
- boundary (list): The text boundary. Return None if no text found.
- """
- assert isinstance(seg, np.ndarray)
- assert isinstance(text_repr_type, str)
- assert text_score is None or 0 <= text_score <= 1
-
- points = np.where(seg)
- # x, y order
- points = np.concatenate([points[1], points[0]]).reshape(2, -1).transpose()
- boundary = None
- if len(points) != 0:
- boundary = points2boundary(points, text_repr_type, text_score)
-
- return boundary
-
-
-def extract_boundary(result):
- """Extract boundaries and their scores from result.
-
- Args:
- result (dict): The detection result with the key 'boundary_result'
- of one image.
-
- Returns:
- boundaries_with_scores (list[list[float]]): The boundary and score
- list.
- boundaries (list[list[float]]): The boundary list.
- scores (list[float]): The boundary score list.
- """
- assert isinstance(result, dict)
- assert 'boundary_result' in result.keys()
-
- boundaries_with_scores = result['boundary_result']
- assert utils.is_2dlist(boundaries_with_scores)
-
- boundaries = [b[:-1] for b in boundaries_with_scores]
- scores = [b[-1] for b in boundaries_with_scores]
-
- return (boundaries_with_scores, boundaries, scores)
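For context, here is a minimal sketch of how the points2boundary helper above could be called; it assumes NumPy and OpenCV are installed and the coordinates are made up for illustration:

```python
import numpy as np
# points2boundary is the helper defined in mmocr/core/mask.py above.

pts = np.array([[10, 10], [40, 10], [40, 25], [10, 25]], dtype=np.int32)
quad = points2boundary(pts, 'quad', text_score=0.9)
# quad holds the 8 corner coordinates of the min-area rectangle plus the 0.9 score.
```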
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py
deleted file mode 100644
index e94553294294fa49952f2dfe0e3c64a5e00bc878..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './libra_faster_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_64x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=64,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco.py
deleted file mode 100644
index 0439fc1aa28408df89d6d3b657837654bbbbbcdb..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco.py
+++ /dev/null
@@ -1,3 +0,0 @@
-_base_ = './sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py'
-
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
diff --git a/spaces/triple-t/ttt-space/frontend/src/app.css b/spaces/triple-t/ttt-space/frontend/src/app.css
deleted file mode 100644
index bd6213e1dfe6b0a79ce7d8b37d0d2dc70f0250bb..0000000000000000000000000000000000000000
--- a/spaces/triple-t/ttt-space/frontend/src/app.css
+++ /dev/null
@@ -1,3 +0,0 @@
-@tailwind base;
-@tailwind components;
-@tailwind utilities;
\ No newline at end of file
diff --git a/spaces/truong-xuan-linh/auto-comment-generation/src/model/text_process.py b/spaces/truong-xuan-linh/auto-comment-generation/src/model/text_process.py
deleted file mode 100644
index d9fed99bea89f2f7ccd7cce7a243d4bf4d37fc63..0000000000000000000000000000000000000000
--- a/spaces/truong-xuan-linh/auto-comment-generation/src/model/text_process.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import re
-
-class TextPreprocess():
- def __init__(self, teencode_dir="./storage/teencode.txt"):
- self.get_teencode(teencode_dir)
-
- def get_teencode(self, teencode_dir):
- with open(teencode_dir, "r", encoding="utf-8") as f:
- teencode_original = f.readlines()
- teencode_json = {}
- for teencode in teencode_original:
- key, value = teencode.split("\t")
- value = value.replace("\n", "")
- teencode_json[key] = value
- self.teencode_json = teencode_json
-
- def teencode_normalize(self, text):
- text_split = text.split()
- return " ".join([self.teencode_json.get(txt, txt) for txt in text_split])
-
- def clean_text(self, text):
- # Remove hashtags (the '#' symbol)
- text = re.sub(r'#\w+', '', text)
-
- # Remove links (URLs)
- text = re.sub(r'http\S+', '', text)
-
- # Remove numeric characters
- text = re.sub(r'\d+', '', text)
-
- # Remove special characters
- text = re.sub(r'[^\w\s]', '', text)
-
- text = " ".join(text.split())
- text = text.lower()
- return text
-
- def preprocess(self, text):
- cleaned_text = self.clean_text(text)
- cleaned_text = self.teencode_normalize(cleaned_text)
- return cleaned_text
\ No newline at end of file
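A minimal usage sketch of the TextPreprocess class above; the teencode entry and file name are invented for the demo, and the expected behaviour follows directly from the regexes in clean_text:

```python
# Hypothetical usage of TextPreprocess; the single teencode entry is made up.
with open("teencode.txt", "w", encoding="utf-8") as f:
    f.write("ko\tkhông\n")                  # teencode "ko" expands to "không"

pre = TextPreprocess(teencode_dir="teencode.txt")
print(pre.preprocess("Ko thích #quangcao http://example.com 123!!!"))
# hashtags, URLs, digits and punctuation are stripped, the text is lowercased,
# and the remaining teencode tokens are expanded ("ko" -> "không")
```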
diff --git a/spaces/ulysses115/diffsvc_test/preprocessing/binarize.py b/spaces/ulysses115/diffsvc_test/preprocessing/binarize.py
deleted file mode 100644
index df3bff078132d5c1e031af449855fe9c2ba998a1..0000000000000000000000000000000000000000
--- a/spaces/ulysses115/diffsvc_test/preprocessing/binarize.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import os
-
-os.environ["OMP_NUM_THREADS"] = "1"
-
-import importlib
-from utils.hparams import set_hparams, hparams
-
-
-def binarize():
- binarizer_cls = hparams.get("binarizer_cls", 'basics.base_binarizer.BaseBinarizer')
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
- print("| Binarizer: ", binarizer_cls)
- binarizer_cls().process()
-
-
-if __name__ == '__main__':
- set_hparams()
- binarize()
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Ati Radeon Hdmi Ati Rv710 730 Everything You Need to Know About the HDMI Audio Output.md b/spaces/usbethFlerru/sovits-modelsV2/example/Ati Radeon Hdmi Ati Rv710 730 Everything You Need to Know About the HDMI Audio Output.md
deleted file mode 100644
index 93cfecd894f922f4b8ab1f34a770fd8b89916650..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Ati Radeon Hdmi Ati Rv710 730 Everything You Need to Know About the HDMI Audio Output.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
On the first boot GNOME 3 failed to load. The installer recognized my video card and installed the correct package, xserver-xorg-video-radeon, but as documented on the wiki my card requires proprietary firmware. This firmware is available in non-free. The steps to install it are,
To patch HDMI, I took a fresh kext from 10.8.3, but it failed. The HDMI pins seem similar to yours. I tried both the default radeon codec id you mentioned (268610049) and mine, 268610104 (0x1002aa38). The only oddity is that there is no device entry under audio, yet I can still increase or decrease the volume (of what, I don't know)!
-
So, from my point of view, the hardware is not switching or auto-selecting the way I think it should. When I plug the headphones in, I have to set the HDMI device to off on the hardware tab; when I remove the headphones, I have to set the HDMI back to on, go to the output tab, and then select the HDMI output for my monitor.
-
No, I never fully tested it in Windows. I left Windows 5 years ago: I went to Fedora, learned about Ubuntu, moved to Zorin, and after my system crashed I went to Linux Mint. Mint was a lot more forgiving, as all I had to do was double-click the audio device when the headphones were plugged in, and double-click the HDMI device when I wanted to switch back.
-
Thanks for all your help; I learned a lot this time around, and this issue has plagued me for a while now. I was so happy when I got HDMI audio and video working that I ditched my speakers real quick. I think that was a bad idea, but let's see what the future holds.
-
Otherwise, you're stuck using experimental hacks on various hypervisors, trying to make things like this work using paravirt PCI-passthrough support. Xen claims to support using radeon cards in guests as non-primary adapters (hidden from the host using their PCIback driver). They also claim to be able to pass through an Intel card to a guest even if it's the primary card on the host. I personally never got it to work. NVidia cards are always harder to deal with in this case due to lack of documentation. It's quite possible that VMware, KVM, or VirtualBox may have better support for this, I don't know.
-
radeon is a family of open source graphics drivers for older AMD/ATI Radeon graphics cards. Cards based on Graphics Core Next (GCN) 2.0 "Sea Islands" are also fully supported by the newer AMDGPU driver, which also features experimental support for GCN1.1 (Southern Islands). Neither this nor the AMDGPU article cover installation and configuration of the closed source drivers (see the next paragraph).
Portage uses the VIDEO_CARDS variable for enabling support for various graphics cards. Setting the VIDEO_CARDS variable to radeon (see the feature matrix above) and then asking Portage to rebuild the @world set will pull in the correct driver for older radeon cards:
-
I recommend using two entries in GRUB (with and without DPM) and creating a start script based on the value of /sys/module/radeon/parameters/dpm (0 = DPM off, 1 = DPM on), in order to automatically adjust the power management based on the decision made at boot time (single-head with DPM vs. multi-head without DPM).
-
If you are using a kernel older than 3.13, HDMI audio must be explicitly enabled using the kernel command-line parameter radeon.audio=1. In addition, ALSA typically does not use HDMI as the default audio device, so one way to force this as the default is to add a config file:
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Bigfish Games - Reaxxion Crack.zip What You Need to Know About the Reaxxion Crack.md b/spaces/usbethFlerru/sovits-modelsV2/example/Bigfish Games - Reaxxion Crack.zip What You Need to Know About the Reaxxion Crack.md
deleted file mode 100644
index 75677d4ec3ce3d2ccce6c448966eb4068e677a53..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Bigfish Games - Reaxxion Crack.zip What You Need to Know About the Reaxxion Crack.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/valhalla/glide-text2im/glide_text2im/model_creation.py b/spaces/valhalla/glide-text2im/glide_text2im/model_creation.py
deleted file mode 100644
index 54c37c24546fe0c8e4b22ea903c7039b21da4f4f..0000000000000000000000000000000000000000
--- a/spaces/valhalla/glide-text2im/glide_text2im/model_creation.py
+++ /dev/null
@@ -1,195 +0,0 @@
-from glide_text2im.gaussian_diffusion import get_named_beta_schedule
-from glide_text2im.respace import SpacedDiffusion, space_timesteps
-from glide_text2im.text2im_model import (
- InpaintText2ImUNet,
- SuperResInpaintText2ImUnet,
- SuperResText2ImUNet,
- Text2ImUNet,
-)
-from glide_text2im.tokenizer.bpe import get_encoder
-
-
-def model_and_diffusion_defaults():
- return dict(
- image_size=64,
- num_channels=192,
- num_res_blocks=3,
- channel_mult="",
- num_heads=1,
- num_head_channels=64,
- num_heads_upsample=-1,
- attention_resolutions="32,16,8",
- dropout=0.1,
- text_ctx=128,
- xf_width=512,
- xf_layers=16,
- xf_heads=8,
- xf_final_ln=True,
- xf_padding=True,
- diffusion_steps=1000,
- noise_schedule="squaredcos_cap_v2",
- timestep_respacing="",
- use_scale_shift_norm=True,
- resblock_updown=True,
- use_fp16=True,
- cache_text_emb=False,
- inpaint=False,
- super_res=False,
- )
-
-
-def model_and_diffusion_defaults_upsampler():
- result = model_and_diffusion_defaults()
- result.update(
- dict(
- image_size=256,
- num_res_blocks=2,
- noise_schedule="linear",
- super_res=True,
- )
- )
- return result
-
-
-def create_model_and_diffusion(
- image_size,
- num_channels,
- num_res_blocks,
- channel_mult,
- num_heads,
- num_head_channels,
- num_heads_upsample,
- attention_resolutions,
- dropout,
- text_ctx,
- xf_width,
- xf_layers,
- xf_heads,
- xf_final_ln,
- xf_padding,
- diffusion_steps,
- noise_schedule,
- timestep_respacing,
- use_scale_shift_norm,
- resblock_updown,
- use_fp16,
- cache_text_emb,
- inpaint,
- super_res,
-):
- model = create_model(
- image_size,
- num_channels,
- num_res_blocks,
- channel_mult=channel_mult,
- attention_resolutions=attention_resolutions,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- num_heads_upsample=num_heads_upsample,
- use_scale_shift_norm=use_scale_shift_norm,
- dropout=dropout,
- text_ctx=text_ctx,
- xf_width=xf_width,
- xf_layers=xf_layers,
- xf_heads=xf_heads,
- xf_final_ln=xf_final_ln,
- xf_padding=xf_padding,
- resblock_updown=resblock_updown,
- use_fp16=use_fp16,
- cache_text_emb=cache_text_emb,
- inpaint=inpaint,
- super_res=super_res,
- )
- diffusion = create_gaussian_diffusion(
- steps=diffusion_steps,
- noise_schedule=noise_schedule,
- timestep_respacing=timestep_respacing,
- )
- return model, diffusion
-
-
-def create_model(
- image_size,
- num_channels,
- num_res_blocks,
- channel_mult,
- attention_resolutions,
- num_heads,
- num_head_channels,
- num_heads_upsample,
- use_scale_shift_norm,
- dropout,
- text_ctx,
- xf_width,
- xf_layers,
- xf_heads,
- xf_final_ln,
- xf_padding,
- resblock_updown,
- use_fp16,
- cache_text_emb,
- inpaint,
- super_res,
-):
- if channel_mult == "":
- if image_size == 256:
- channel_mult = (1, 1, 2, 2, 4, 4)
- elif image_size == 128:
- channel_mult = (1, 1, 2, 3, 4)
- elif image_size == 64:
- channel_mult = (1, 2, 3, 4)
- else:
- raise ValueError(f"unsupported image size: {image_size}")
- else:
- channel_mult = tuple(int(ch_mult) for ch_mult in channel_mult.split(","))
- assert 2 ** (len(channel_mult) + 2) == image_size
-
- attention_ds = []
- for res in attention_resolutions.split(","):
- attention_ds.append(image_size // int(res))
-
- if inpaint and super_res:
- model_cls = SuperResInpaintText2ImUnet
- elif inpaint:
- model_cls = InpaintText2ImUNet
- elif super_res:
- model_cls = SuperResText2ImUNet
- else:
- model_cls = Text2ImUNet
- return model_cls(
- text_ctx=text_ctx,
- xf_width=xf_width,
- xf_layers=xf_layers,
- xf_heads=xf_heads,
- xf_final_ln=xf_final_ln,
- tokenizer=get_encoder(),
- xf_padding=xf_padding,
- in_channels=3,
- model_channels=num_channels,
- out_channels=6,
- num_res_blocks=num_res_blocks,
- attention_resolutions=tuple(attention_ds),
- dropout=dropout,
- channel_mult=channel_mult,
- use_fp16=use_fp16,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- num_heads_upsample=num_heads_upsample,
- use_scale_shift_norm=use_scale_shift_norm,
- resblock_updown=resblock_updown,
- cache_text_emb=cache_text_emb,
- )
-
-
-def create_gaussian_diffusion(
- steps,
- noise_schedule,
- timestep_respacing,
-):
- betas = get_named_beta_schedule(noise_schedule, steps)
- if not timestep_respacing:
- timestep_respacing = [steps]
- return SpacedDiffusion(
- use_timesteps=space_timesteps(steps, timestep_respacing),
- betas=betas,
- )
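For orientation, a minimal sketch of how the helpers in the deleted module above fit together (illustrative only; loading the pretrained GLIDE checkpoints is omitted):

```python
from glide_text2im.model_creation import (
    create_model_and_diffusion,
    model_and_diffusion_defaults,
    model_and_diffusion_defaults_upsampler,
)

# 64x64 base text-to-image model plus its diffusion process
options = model_and_diffusion_defaults()
model, diffusion = create_model_and_diffusion(**options)
model.eval()

# 64 -> 256 super-resolution model built from the upsampler defaults
up_options = model_and_diffusion_defaults_upsampler()
up_model, up_diffusion = create_model_and_diffusion(**up_options)
up_model.eval()
```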
diff --git a/spaces/vkganesan/AdaIN/decoder.py b/spaces/vkganesan/AdaIN/decoder.py
deleted file mode 100644
index 3bbf1f85122afb8f674b85619994eae9458261be..0000000000000000000000000000000000000000
--- a/spaces/vkganesan/AdaIN/decoder.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import torch.nn as nn
-
-decoder = nn.Sequential(
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(512, 256, (3, 3)),
- nn.ReLU(),
- nn.Upsample(scale_factor=2, mode='nearest'),
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(256, 256, (3, 3)),
- nn.ReLU(),
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(256, 256, (3, 3)),
- nn.ReLU(),
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(256, 256, (3, 3)),
- nn.ReLU(),
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(256, 128, (3, 3)),
- nn.ReLU(),
- nn.Upsample(scale_factor=2, mode='nearest'),
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(128, 128, (3, 3)),
- nn.ReLU(),
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(128, 64, (3, 3)),
- nn.ReLU(),
- nn.Upsample(scale_factor=2, mode='nearest'),
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(64, 64, (3, 3)),
- nn.ReLU(),
- nn.ReflectionPad2d((1, 1, 1, 1)),
- nn.Conv2d(64, 3, (3, 3)),
-)
\ No newline at end of file
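A quick shape check for the decoder above (a sketch; the 32x32 input corresponds to relu4_1 features of a 256x256 image in the usual AdaIN setup, and `decoder` is the nn.Sequential defined in the module above):

```python
import torch

features = torch.randn(1, 512, 32, 32)   # AdaIN-transformed encoder features
with torch.no_grad():
    image = decoder(features)
# Three nearest-neighbour 2x upsamples take 32x32 back to 256x256, 3 channels.
print(image.shape)  # torch.Size([1, 3, 256, 256])
```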
diff --git a/spaces/vonbarnekowa/stable-diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py b/spaces/vonbarnekowa/stable-diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py
deleted file mode 100644
index 095e5ba3ce0b1aa7f4b3f1e2e5d8fff7cfe6dc8c..0000000000000000000000000000000000000000
--- a/spaces/vonbarnekowa/stable-diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py
+++ /dev/null
@@ -1,1154 +0,0 @@
-import torch
-import torch.nn.functional as F
-import math
-from tqdm import tqdm
-
-
-class NoiseScheduleVP:
- def __init__(
- self,
- schedule='discrete',
- betas=None,
- alphas_cumprod=None,
- continuous_beta_0=0.1,
- continuous_beta_1=20.,
- ):
- """Create a wrapper class for the forward SDE (VP type).
- ***
- Update: We support discrete-time diffusion models by implementing a piecewise linear interpolation for log_alpha_t.
- We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images.
- ***
- The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ).
- We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper).
- Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have:
- log_alpha_t = self.marginal_log_mean_coeff(t)
- sigma_t = self.marginal_std(t)
- lambda_t = self.marginal_lambda(t)
- Moreover, as lambda(t) is an invertible function, we also support its inverse function:
- t = self.inverse_lambda(lambda_t)
- ===============================================================
- We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]).
- 1. For discrete-time DPMs:
- For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by:
- t_i = (i + 1) / N
- e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1.
- We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3.
- Args:
- betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details)
- alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details)
- Note that we always have alphas_cumprod = cumprod(1 - betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`.
- **Important**: Please pay special attention for the args for `alphas_cumprod`:
- The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that
- q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ).
- Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have
- alpha_{t_n} = \sqrt{\hat{alpha_n}},
- and
- log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}).
- 2. For continuous-time DPMs:
- We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise
- schedule are the default settings in DDPM and improved-DDPM:
- Args:
- beta_min: A `float` number. The smallest beta for the linear schedule.
- beta_max: A `float` number. The largest beta for the linear schedule.
- cosine_s: A `float` number. The hyperparameter in the cosine schedule.
- cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule.
- T: A `float` number. The ending time of the forward process.
- ===============================================================
- Args:
- schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs,
- 'linear' or 'cosine' for continuous-time DPMs.
- Returns:
- A wrapper object of the forward SDE (VP type).
-
- ===============================================================
- Example:
- # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1):
- >>> ns = NoiseScheduleVP('discrete', betas=betas)
- # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1):
- >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)
- # For continuous-time DPMs (VPSDE), linear schedule:
- >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)
- """
-
- if schedule not in ['discrete', 'linear', 'cosine']:
- raise ValueError(
- "Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format(
- schedule))
-
- self.schedule = schedule
- if schedule == 'discrete':
- if betas is not None:
- log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0)
- else:
- assert alphas_cumprod is not None
- log_alphas = 0.5 * torch.log(alphas_cumprod)
- self.total_N = len(log_alphas)
- self.T = 1.
- self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1))
- self.log_alpha_array = log_alphas.reshape((1, -1,))
- else:
- self.total_N = 1000
- self.beta_0 = continuous_beta_0
- self.beta_1 = continuous_beta_1
- self.cosine_s = 0.008
- self.cosine_beta_max = 999.
- self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * (
- 1. + self.cosine_s) / math.pi - self.cosine_s
- self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.))
- self.schedule = schedule
- if schedule == 'cosine':
- # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T.
- # Note that T = 0.9946 may not be the optimal setting. However, we find it works well.
- self.T = 0.9946
- else:
- self.T = 1.
-
- def marginal_log_mean_coeff(self, t):
- """
- Compute log(alpha_t) of a given continuous-time label t in [0, T].
- """
- if self.schedule == 'discrete':
- return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device),
- self.log_alpha_array.to(t.device)).reshape((-1))
- elif self.schedule == 'linear':
- return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0
- elif self.schedule == 'cosine':
- log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.))
- log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0
- return log_alpha_t
-
- def marginal_alpha(self, t):
- """
- Compute alpha_t of a given continuous-time label t in [0, T].
- """
- return torch.exp(self.marginal_log_mean_coeff(t))
-
- def marginal_std(self, t):
- """
- Compute sigma_t of a given continuous-time label t in [0, T].
- """
- return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t)))
-
- def marginal_lambda(self, t):
- """
- Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T].
- """
- log_mean_coeff = self.marginal_log_mean_coeff(t)
- log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff))
- return log_mean_coeff - log_std
-
- def inverse_lambda(self, lamb):
- """
- Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t.
- """
- if self.schedule == 'linear':
- tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
- Delta = self.beta_0 ** 2 + tmp
- return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0)
- elif self.schedule == 'discrete':
- log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb)
- t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]),
- torch.flip(self.t_array.to(lamb.device), [1]))
- return t.reshape((-1,))
- else:
- log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
- t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * (
- 1. + self.cosine_s) / math.pi - self.cosine_s
- t = t_fn(log_alpha)
- return t
-
-
-def model_wrapper(
- model,
- noise_schedule,
- model_type="noise",
- model_kwargs={},
- guidance_type="uncond",
- condition=None,
- unconditional_condition=None,
- guidance_scale=1.,
- classifier_fn=None,
- classifier_kwargs={},
-):
- """Create a wrapper function for the noise prediction model.
- DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to
- firstly wrap the model function to a noise prediction model that accepts the continuous time as the input.
- We support four types of the diffusion model by setting `model_type`:
- 1. "noise": noise prediction model. (Trained by predicting noise).
- 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0).
- 3. "v": velocity prediction model. (Trained by predicting the velocity).
- The "v" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2].
- [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models."
- arXiv preprint arXiv:2202.00512 (2022).
- [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models."
- arXiv preprint arXiv:2210.02303 (2022).
-
- 4. "score": marginal score function. (Trained by denoising score matching).
- Note that the score function and the noise prediction model follows a simple relationship:
- ```
- noise(x_t, t) = -sigma_t * score(x_t, t)
- ```
- We support three types of guided sampling by DPMs by setting `guidance_type`:
- 1. "uncond": unconditional sampling by DPMs.
- The input `model` has the following format:
- ``
- model(x, t_input, **model_kwargs) -> noise | x_start | v | score
- ``
- 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier.
- The input `model` has the following format:
- ``
- model(x, t_input, **model_kwargs) -> noise | x_start | v | score
- ``
- The input `classifier_fn` has the following format:
- ``
- classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond)
- ``
- [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis,"
- in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794.
- 3. "classifier-free": classifier-free guidance sampling by conditional DPMs.
- The input `model` has the following format:
- ``
- model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score
- ``
- And if cond == `unconditional_condition`, the model output is the unconditional DPM output.
- [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance."
- arXiv preprint arXiv:2207.12598 (2022).
-
- The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999)
- or continuous-time labels (i.e. epsilon to T).
- We wrap the model function to accept only `x` and `t_continuous` as inputs, and outputs the predicted noise:
- ``
- def model_fn(x, t_continuous) -> noise:
- t_input = get_model_input_time(t_continuous)
- return noise_pred(model, x, t_input, **model_kwargs)
- ``
- where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver.
- ===============================================================
- Args:
- model: A diffusion model with the corresponding format described above.
- noise_schedule: A noise schedule object, such as NoiseScheduleVP.
- model_type: A `str`. The parameterization type of the diffusion model.
- "noise" or "x_start" or "v" or "score".
- model_kwargs: A `dict`. A dict for the other inputs of the model function.
- guidance_type: A `str`. The type of the guidance for sampling.
- "uncond" or "classifier" or "classifier-free".
- condition: A pytorch tensor. The condition for the guided sampling.
- Only used for "classifier" or "classifier-free" guidance type.
- unconditional_condition: A pytorch tensor. The condition for the unconditional sampling.
- Only used for "classifier-free" guidance type.
- guidance_scale: A `float`. The scale for the guided sampling.
- classifier_fn: A classifier function. Only used for the classifier guidance.
- classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function.
- Returns:
- A noise prediction model that accepts the noised data and the continuous time as the inputs.
- """
-
- def get_model_input_time(t_continuous):
- """
- Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time.
- For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N].
- For continuous-time DPMs, we just use `t_continuous`.
- """
- if noise_schedule.schedule == 'discrete':
- return (t_continuous - 1. / noise_schedule.total_N) * 1000.
- else:
- return t_continuous
-
- def noise_pred_fn(x, t_continuous, cond=None):
- if t_continuous.reshape((-1,)).shape[0] == 1:
- t_continuous = t_continuous.expand((x.shape[0]))
- t_input = get_model_input_time(t_continuous)
- if cond is None:
- output = model(x, t_input, **model_kwargs)
- else:
- output = model(x, t_input, cond, **model_kwargs)
- if model_type == "noise":
- return output
- elif model_type == "x_start":
- alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims)
- elif model_type == "v":
- alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x
- elif model_type == "score":
- sigma_t = noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return -expand_dims(sigma_t, dims) * output
-
- def cond_grad_fn(x, t_input):
- """
- Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t).
- """
- with torch.enable_grad():
- x_in = x.detach().requires_grad_(True)
- log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs)
- return torch.autograd.grad(log_prob.sum(), x_in)[0]
-
- def model_fn(x, t_continuous):
- """
- The noise prediction model function that is used for DPM-Solver.
- """
- if t_continuous.reshape((-1,)).shape[0] == 1:
- t_continuous = t_continuous.expand((x.shape[0]))
- if guidance_type == "uncond":
- return noise_pred_fn(x, t_continuous)
- elif guidance_type == "classifier":
- assert classifier_fn is not None
- t_input = get_model_input_time(t_continuous)
- cond_grad = cond_grad_fn(x, t_input)
- sigma_t = noise_schedule.marginal_std(t_continuous)
- noise = noise_pred_fn(x, t_continuous)
- return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad
- elif guidance_type == "classifier-free":
- if guidance_scale == 1. or unconditional_condition is None:
- return noise_pred_fn(x, t_continuous, cond=condition)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t_continuous] * 2)
- c_in = torch.cat([unconditional_condition, condition])
- noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2)
- return noise_uncond + guidance_scale * (noise - noise_uncond)
-
- assert model_type in ["noise", "x_start", "v"]
- assert guidance_type in ["uncond", "classifier", "classifier-free"]
- return model_fn
-
-
-class DPM_Solver:
- def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.):
- """Construct a DPM-Solver.
- We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0").
- If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver).
- If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++).
- In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True.
- The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales.
- Args:
- model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]):
- ``
- def model_fn(x, t_continuous):
- return noise
- ``
- noise_schedule: A noise schedule object, such as NoiseScheduleVP.
- predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model.
- thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1].
- max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding.
-
- [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b.
- """
- self.model = model_fn
- self.noise_schedule = noise_schedule
- self.predict_x0 = predict_x0
- self.thresholding = thresholding
- self.max_val = max_val
-
- def noise_prediction_fn(self, x, t):
- """
- Return the noise prediction model.
- """
- return self.model(x, t)
-
- def data_prediction_fn(self, x, t):
- """
- Return the data prediction model (with thresholding).
- """
- noise = self.noise_prediction_fn(x, t)
- dims = x.dim()
- alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t)
- x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims)
- if self.thresholding:
- p = 0.995 # A hyperparameter in the paper of "Imagen" [1].
- s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1)
- s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims)
- x0 = torch.clamp(x0, -s, s) / s
- return x0
-
- def model_fn(self, x, t):
- """
- Convert the model to the noise prediction model or the data prediction model.
- """
- if self.predict_x0:
- return self.data_prediction_fn(x, t)
- else:
- return self.noise_prediction_fn(x, t)
-
- def get_time_steps(self, skip_type, t_T, t_0, N, device):
- """Compute the intermediate time steps for sampling.
- Args:
- skip_type: A `str`. The type for the spacing of the time steps. We support three types:
- - 'logSNR': uniform logSNR for the time steps.
- - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.)
- - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.)
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- N: A `int`. The total number of the spacing of the time steps.
- device: A torch device.
- Returns:
- A pytorch tensor of the time steps, with the shape (N + 1,).
- """
- if skip_type == 'logSNR':
- lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device))
- lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device))
- logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device)
- return self.noise_schedule.inverse_lambda(logSNR_steps)
- elif skip_type == 'time_uniform':
- return torch.linspace(t_T, t_0, N + 1).to(device)
- elif skip_type == 'time_quadratic':
- t_order = 2
- t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. / t_order), N + 1).pow(t_order).to(device)
- return t
- else:
- raise ValueError(
- "Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type))
-
- def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device):
- """
- Get the order of each step for sampling by the singlestep DPM-Solver.
- We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast".
- Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is:
- - If order == 1:
- We take `steps` of DPM-Solver-1 (i.e. DDIM).
- - If order == 2:
- - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling.
- - If steps % 2 == 0, we use K steps of DPM-Solver-2.
- - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If order == 3:
- - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling.
- - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1.
- - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2.
- ============================================
- Args:
- order: A `int`. The max order for the solver (2 or 3).
- steps: A `int`. The total number of function evaluations (NFE).
- skip_type: A `str`. The type for the spacing of the time steps. We support three types:
- - 'logSNR': uniform logSNR for the time steps.
- - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.)
- - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.)
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- device: A torch device.
- Returns:
- orders: A list of the solver order of each step.
- """
- if order == 3:
- K = steps // 3 + 1
- if steps % 3 == 0:
- orders = [3, ] * (K - 2) + [2, 1]
- elif steps % 3 == 1:
- orders = [3, ] * (K - 1) + [1]
- else:
- orders = [3, ] * (K - 1) + [2]
- elif order == 2:
- if steps % 2 == 0:
- K = steps // 2
- orders = [2, ] * K
- else:
- K = steps // 2 + 1
- orders = [2, ] * (K - 1) + [1]
- elif order == 1:
- K = 1
- orders = [1, ] * steps
- else:
- raise ValueError("'order' must be '1' or '2' or '3'.")
- if skip_type == 'logSNR':
- # To reproduce the results in DPM-Solver paper
- timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device)
- else:
- timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[
- torch.cumsum(torch.tensor([0, ] + orders), dim=0).to(device)]
- return timesteps_outer, orders
-
- def denoise_to_zero_fn(self, x, s):
- """
- Denoise at the final step, which is equivalent to solve the ODE from lambda_s to infty by first-order discretization.
- """
- return self.data_prediction_fn(x, s)
-
- def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False):
- """
- DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`.
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s`.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_1 = torch.expm1(-h)
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- )
- if return_intermediate:
- return x_t, {'model_s': model_s}
- else:
- return x_t
- else:
- phi_1 = torch.expm1(h)
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- )
- if return_intermediate:
- return x_t, {'model_s': model_s}
- else:
- return x_t
-
- def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False,
- solver_type='dpm_solver'):
- """
- Singlestep solver DPM-Solver-2 from time `s` to time `t`.
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- r1: A `float`. The hyperparameter of the second-order solver.
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s` and `s1` (the intermediate time).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- if r1 is None:
- r1 = 0.5
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- lambda_s1 = lambda_s + r1 * h
- s1 = ns.inverse_lambda(lambda_s1)
- log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(
- s1), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t)
- alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_11 = torch.expm1(-r1 * h)
- phi_1 = torch.expm1(-h)
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_s1 = (
- expand_dims(sigma_s1 / sigma_s, dims) * x
- - expand_dims(alpha_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s)
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * (
- model_s1 - model_s)
- )
- else:
- phi_11 = torch.expm1(r1 * h)
- phi_1 = torch.expm1(h)
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_s1 = (
- expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x
- - expand_dims(sigma_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s)
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s)
- )
- if return_intermediate:
- return x_t, {'model_s': model_s, 'model_s1': model_s1}
- else:
- return x_t
-
- def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. / 3., model_s=None, model_s1=None,
- return_intermediate=False, solver_type='dpm_solver'):
- """
- Singlestep solver DPM-Solver-3 from time `s` to time `t`.
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- r1: A `float`. The hyperparameter of the third-order solver.
- r2: A `float`. The hyperparameter of the third-order solver.
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`).
- If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- if r1 is None:
- r1 = 1. / 3.
- if r2 is None:
- r2 = 2. / 3.
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- lambda_s1 = lambda_s + r1 * h
- lambda_s2 = lambda_s + r2 * h
- s1 = ns.inverse_lambda(lambda_s1)
- s2 = ns.inverse_lambda(lambda_s2)
- log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff(
- s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(
- s2), ns.marginal_std(t)
- alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_11 = torch.expm1(-r1 * h)
- phi_12 = torch.expm1(-r2 * h)
- phi_1 = torch.expm1(-h)
- phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1.
- phi_2 = phi_1 / h + 1.
- phi_3 = phi_2 / h - 0.5
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- if model_s1 is None:
- x_s1 = (
- expand_dims(sigma_s1 / sigma_s, dims) * x
- - expand_dims(alpha_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- x_s2 = (
- expand_dims(sigma_s2 / sigma_s, dims) * x
- - expand_dims(alpha_s2 * phi_12, dims) * model_s
- + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s)
- )
- model_s2 = self.model_fn(x_s2, s2)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s)
- )
- elif solver_type == 'taylor':
- D1_0 = (1. / r1) * (model_s1 - model_s)
- D1_1 = (1. / r2) * (model_s2 - model_s)
- D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1)
- D2 = 2. * (D1_1 - D1_0) / (r2 - r1)
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + expand_dims(alpha_t * phi_2, dims) * D1
- - expand_dims(alpha_t * phi_3, dims) * D2
- )
- else:
- phi_11 = torch.expm1(r1 * h)
- phi_12 = torch.expm1(r2 * h)
- phi_1 = torch.expm1(h)
- phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1.
- phi_2 = phi_1 / h - 1.
- phi_3 = phi_2 / h - 0.5
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- if model_s1 is None:
- x_s1 = (
- expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x
- - expand_dims(sigma_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- x_s2 = (
- expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x
- - expand_dims(sigma_s2 * phi_12, dims) * model_s
- - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s)
- )
- model_s2 = self.model_fn(x_s2, s2)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (1. / r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s)
- )
- elif solver_type == 'taylor':
- D1_0 = (1. / r1) * (model_s1 - model_s)
- D1_1 = (1. / r2) * (model_s2 - model_s)
- D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1)
- D2 = 2. * (D1_1 - D1_0) / (r2 - r1)
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - expand_dims(sigma_t * phi_2, dims) * D1
- - expand_dims(sigma_t * phi_3, dims) * D2
- )
-
- if return_intermediate:
- return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2}
- else:
- return x_t
-
- def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"):
- """
- Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`.
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- ns = self.noise_schedule
- dims = x.dim()
- model_prev_1, model_prev_0 = model_prev_list
- t_prev_1, t_prev_0 = t_prev_list
- lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda(
- t_prev_0), ns.marginal_lambda(t)
- log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
- sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- h_0 = lambda_prev_0 - lambda_prev_1
- h = lambda_t - lambda_prev_0
- r0 = h_0 / h
- D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1)
- if self.predict_x0:
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0
- )
- else:
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0
- )
- return x_t
-
- def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'):
- """
- Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`.
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- ns = self.noise_schedule
- dims = x.dim()
- model_prev_2, model_prev_1, model_prev_0 = model_prev_list
- t_prev_2, t_prev_1, t_prev_0 = t_prev_list
- lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda(
- t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t)
- log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
- sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- h_1 = lambda_prev_1 - lambda_prev_2
- h_0 = lambda_prev_0 - lambda_prev_1
- h = lambda_t - lambda_prev_0
- r0, r1 = h_0 / h, h_1 / h
- D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1)
- D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2)
- D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1)
- D2 = expand_dims(1. / (r0 + r1), dims) * (D1_0 - D1_1)
- if self.predict_x0:
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1
- - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h ** 2 - 0.5), dims) * D2
- )
- else:
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1
- - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h ** 2 - 0.5), dims) * D2
- )
- return x_t
-
- def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None,
- r2=None):
- """
- Singlestep DPM-Solver with the order `order` from time `s` to time `t`.
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3.
- return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- r1: A `float`. The hyperparameter of the second-order or third-order solver.
- r2: A `float`. The hyperparameter of the third-order solver.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if order == 1:
- return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate)
- elif order == 2:
- return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate,
- solver_type=solver_type, r1=r1)
- elif order == 3:
- return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate,
- solver_type=solver_type, r1=r1, r2=r2)
- else:
- raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order))
-
- def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'):
- """
- Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`.
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3.
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if order == 1:
- return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1])
- elif order == 2:
- return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type)
- elif order == 3:
- return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type)
- else:
- raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order))
-
- def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5,
- solver_type='dpm_solver'):
- """
- The adaptive step size solver based on singlestep DPM-Solver.
- Args:
- x: A pytorch tensor. The initial value at time `t_T`.
- order: A `int`. The (higher) order of the solver. We only support order == 2 or 3.
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- h_init: A `float`. The initial step size (for logSNR).
- atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, followed [1].
- rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05.
- theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, followed [1].
- t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the
- current time and `t_0` is less than `t_err`. The default setting is 1e-5.
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend to use 'dpm_solver' type.
- Returns:
- x_0: A pytorch tensor. The approximated solution at time `t_0`.
- [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021.
- """
- ns = self.noise_schedule
- s = t_T * torch.ones((x.shape[0],)).to(x)
- lambda_s = ns.marginal_lambda(s)
- lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x))
- h = h_init * torch.ones_like(s).to(x)
- x_prev = x
- nfe = 0
- if order == 2:
- r1 = 0.5
- lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True)
- higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1,
- solver_type=solver_type,
- **kwargs)
- elif order == 3:
- r1, r2 = 1. / 3., 2. / 3.
- lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1,
- return_intermediate=True,
- solver_type=solver_type)
- higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2,
- solver_type=solver_type,
- **kwargs)
- else:
- raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order))
- while torch.abs((s - t_0)).mean() > t_err:
- t = ns.inverse_lambda(lambda_s + h)
- x_lower, lower_noise_kwargs = lower_update(x, s, t)
- x_higher = higher_update(x, s, t, **lower_noise_kwargs)
- delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev)))
- norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True))
- E = norm_fn((x_higher - x_lower) / delta).max()
- if torch.all(E <= 1.):
- x = x_higher
- s = t
- x_prev = x_lower
- lambda_s = ns.marginal_lambda(s)
- h = torch.min(theta * h * torch.float_power(E, -1. / order).float(), lambda_0 - lambda_s)
- nfe += order
- print('adaptive solver nfe', nfe)
- return x
-
- def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform',
- method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver',
- atol=0.0078, rtol=0.05,
- ):
- """
- Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`.
- =====================================================
- We support the following algorithms for both noise prediction model and data prediction model:
- - 'singlestep':
- Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver.
- We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps).
- The total number of function evaluations (NFE) == `steps`.
- Given a fixed NFE == `steps`, the sampling procedure is:
- - If `order` == 1:
- - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM).
- - If `order` == 2:
- - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling.
- - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2.
- - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If `order` == 3:
- - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling.
- - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1.
- - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2.
- - 'multistep':
- Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`.
- We initialize the first `order` values by lower order multistep solvers.
- Given a fixed NFE == `steps`, the sampling procedure is:
- Denote K = steps.
- - If `order` == 1:
- - We use K steps of DPM-Solver-1 (i.e. DDIM).
- - If `order` == 2:
- - We firstly use 1 step of DPM-Solver-1, then use (K - 1) step of multistep DPM-Solver-2.
- - If `order` == 3:
- - We firstly use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) step of multistep DPM-Solver-3.
- - 'singlestep_fixed':
- Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3).
- We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE.
- - 'adaptive':
- Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper).
- We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`.
- You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computation costs
- (NFE) and the sample quality.
- - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2.
- - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3.
- =====================================================
- Some advice for choosing the algorithm:
- - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs:
- Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`.
- e.g.
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False)
- >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3,
- skip_type='time_uniform', method='singlestep')
- - For **guided sampling with large guidance scale** by DPMs:
- Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`.
- e.g.
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True)
- >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2,
- skip_type='time_uniform', method='multistep')
- We support three types of `skip_type`:
- - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images**
- - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**.
- - 'time_quadratic': quadratic time for the time steps.
- =====================================================
- Args:
- x: A pytorch tensor. The initial value at time `t_start`
- e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution.
- steps: A `int`. The total number of function evaluations (NFE).
- t_start: A `float`. The starting time of the sampling.
- If `t_start` is None, we use self.noise_schedule.T (default is 1.0).
- t_end: A `float`. The ending time of the sampling.
- If `t_end` is None, we use 1. / self.noise_schedule.total_N.
- e.g. if total_N == 1000, we have `t_end` == 1e-3.
- For discrete-time DPMs:
- - We recommend `t_end` == 1. / self.noise_schedule.total_N.
- For continuous-time DPMs:
- - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15.
- order: A `int`. The order of DPM-Solver.
- skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'.
- method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'.
- denoise_to_zero: A `bool`. Whether to denoise to time 0 at the final step.
- Default is `False`. If `denoise_to_zero` is `True`, the total NFE is (`steps` + 1).
- This trick is firstly proposed by DDPM (https://arxiv.org/abs/2006.11239) and
- score_sde (https://arxiv.org/abs/2011.13456). Such trick can improve the FID
- for diffusion models sampling by diffusion SDEs for low-resolutional images
- (such as CIFAR-10). However, we observed that such trick does not matter for
- high-resolutional images. As it needs an additional NFE, we do not recommend
- it for high-resolutional images.
- lower_order_final: A `bool`. Whether to use lower order solvers at the final steps.
- Only valid for `method=multistep` and `steps < 15`. We empirically find that
- this trick is a key to stabilizing the sampling by DPM-Solver with very few steps
- (especially for steps <= 10). So we recommend to set it to be `True`.
- solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`.
- atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'.
- rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'.
- Returns:
- x_end: A pytorch tensor. The approximated solution at time `t_end`.
- """
- t_0 = 1. / self.noise_schedule.total_N if t_end is None else t_end
- t_T = self.noise_schedule.T if t_start is None else t_start
- device = x.device
- if method == 'adaptive':
- with torch.no_grad():
- x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol,
- solver_type=solver_type)
- elif method == 'multistep':
- assert steps >= order
- timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device)
- assert timesteps.shape[0] - 1 == steps
- with torch.no_grad():
- vec_t = timesteps[0].expand((x.shape[0]))
- model_prev_list = [self.model_fn(x, vec_t)]
- t_prev_list = [vec_t]
- # Init the first `order` values by lower order multistep DPM-Solver.
- for init_order in tqdm(range(1, order), desc="DPM init order"):
- vec_t = timesteps[init_order].expand(x.shape[0])
- x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order,
- solver_type=solver_type)
- model_prev_list.append(self.model_fn(x, vec_t))
- t_prev_list.append(vec_t)
- # Compute the remaining values by `order`-th order multistep DPM-Solver.
- for step in tqdm(range(order, steps + 1), desc="DPM multistep"):
- vec_t = timesteps[step].expand(x.shape[0])
- if lower_order_final and steps < 15:
- step_order = min(order, steps + 1 - step)
- else:
- step_order = order
- x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order,
- solver_type=solver_type)
- for i in range(order - 1):
- t_prev_list[i] = t_prev_list[i + 1]
- model_prev_list[i] = model_prev_list[i + 1]
- t_prev_list[-1] = vec_t
- # We do not need to evaluate the final model value.
- if step < steps:
- model_prev_list[-1] = self.model_fn(x, vec_t)
- elif method in ['singlestep', 'singlestep_fixed']:
- if method == 'singlestep':
- timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order,
- skip_type=skip_type,
- t_T=t_T, t_0=t_0,
- device=device)
- elif method == 'singlestep_fixed':
- K = steps // order
- orders = [order, ] * K
- timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device)
- for i, order in enumerate(orders):
- t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1]
- timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(),
- N=order, device=device)
- lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner)
- vec_s, vec_t = t_T_inner.tile(x.shape[0]), t_0_inner.tile(x.shape[0])
- h = lambda_inner[-1] - lambda_inner[0]
- r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h
- r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h
- x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2)
- if denoise_to_zero:
- x = self.denoise_to_zero_fn(x, torch.ones((x.shape[0],)).to(device) * t_0)
- return x
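For orientation, a minimal usage sketch of the `sample` method above. The `NoiseScheduleVP`, `model_wrapper`, and `DPM_Solver` names follow the reference DPM-Solver implementation and are assumed to be defined earlier in this (deleted) file; the model, betas, and tensor shapes below are hypothetical placeholders.

```python
import torch

# Hypothetical discrete beta schedule and an epsilon-prediction model `unet(x, t)`.
betas = torch.linspace(1e-4, 2e-2, 1000)
noise_schedule = NoiseScheduleVP(schedule='discrete', betas=betas)
model_fn = model_wrapper(unet, noise_schedule, model_type='noise')
dpm_solver = DPM_Solver(model_fn, noise_schedule)

x_T = torch.randn(4, 3, 32, 32)           # start from pure Gaussian noise at time T
x_0 = dpm_solver.sample(
    x_T,
    steps=20,                             # total NFE is 20 (21 if denoise_to_zero=True)
    order=2,
    method='multistep',
    lower_order_final=True,               # recommended for few-step sampling (see docstring)
    solver_type='dpm_solver',
)
```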
-
-
-#############################################################
-# other utility functions
-#############################################################
-
-def interpolate_fn(x, xp, yp):
- """
- A piecewise linear function y = f(x), using xp and yp as keypoints.
- We implement f(x) in a differentiable way (i.e. applicable for autograd).
-        The function f(x) is well-defined for all x. (For x beyond the bounds of xp, we use the outermost points of xp to define the linear function.)
- Args:
- x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver).
- xp: PyTorch tensor with shape [C, K], where K is the number of keypoints.
- yp: PyTorch tensor with shape [C, K].
- Returns:
- The function values f(x), with shape [N, C].
- """
- N, K = x.shape[0], xp.shape[1]
- all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2)
- sorted_all_x, x_indices = torch.sort(all_x, dim=2)
- x_idx = torch.argmin(x_indices, dim=2)
- cand_start_idx = x_idx - 1
- start_idx = torch.where(
- torch.eq(x_idx, 0),
- torch.tensor(1, device=x.device),
- torch.where(
- torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,
- ),
- )
- end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1)
- start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2)
- end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2)
- start_idx2 = torch.where(
- torch.eq(x_idx, 0),
- torch.tensor(0, device=x.device),
- torch.where(
- torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,
- ),
- )
- y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1)
- start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2)
- end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2)
- cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x)
- return cand
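A small sanity-check sketch of the behaviour documented above; the keypoints are made-up values of y = x^2 so the expected outputs are easy to verify by hand. Queries outside the keypoint range are linearly extrapolated from the outermost segment, and the whole computation stays differentiable.

```python
import torch

xp = torch.tensor([[0., 1., 2., 3.]])     # [C, K] keypoint x-coordinates (C = 1)
yp = torch.tensor([[0., 1., 4., 9.]])     # [C, K] keypoint y-coordinates (y = x**2)

x = torch.tensor([[0.5], [2.5], [5.0]])   # [N, C] queries; 5.0 lies beyond the last keypoint
y = interpolate_fn(x, xp, yp)
# Piecewise-linear results: 0.5, 6.5, and 19.0 (the last value extrapolated
# from the segment between x = 2 and x = 3).
```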
-
-
-def expand_dims(v, dims):
- """
-    Expand the tensor `v` to have `dims` dimensions in total.
- Args:
- `v`: a PyTorch tensor with shape [N].
-        `dims`: an `int`. The target total number of dimensions.
- Returns:
- a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`.
- """
- return v[(...,) + (None,) * (dims - 1)]
\ No newline at end of file
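And a one-line illustration of `expand_dims`, which the solver uses to broadcast per-sample scalars (for example, log-SNR differences) over image-shaped tensors; the shapes here are illustrative only.

```python
import torch

v = torch.tensor([0.1, 0.2, 0.3, 0.4])    # shape [N] = [4], one scalar per sample
x = torch.randn(4, 3, 32, 32)
out = expand_dims(v, x.dim()) * x          # expand_dims(v, 4) has shape [4, 1, 1, 1] and broadcasts over C, H, W
```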
diff --git a/spaces/weidacn/deepdanbooru/deepdanbooru/commands/download_tags.py b/spaces/weidacn/deepdanbooru/deepdanbooru/commands/download_tags.py
deleted file mode 100644
index 3c3e25c26a85a6f6683b7f0d84afffa68058c49f..0000000000000000000000000000000000000000
--- a/spaces/weidacn/deepdanbooru/deepdanbooru/commands/download_tags.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import os
-import time
-
-import requests
-
-import deepdanbooru as dd
-
-
-def download_category_tags(
- category, minimum_post_count, limit, page_size=1000, order="count"
-):
- category_to_index = {"general": 0, "artist": 1, "copyright": 3, "character": 4}
-
- gold_only_tags = ["loli", "shota", "toddlercon"]
-
- if category not in category_to_index:
-        raise Exception(f"Unsupported category: {category}")
-
- category_index = category_to_index[category]
-
- parameters = {
- "limit": page_size,
- "page": 1,
- "search[order]": order,
- "search[category]": category_index,
- }
-
- request_url = "https://danbooru.donmai.us/tags.json"
-
- tags = set()
-
- while True:
- response = requests.get(request_url, params=parameters)
- response_json = response.json()
-
- response_tags = [
- tag_json["name"]
- for tag_json in response_json
- if tag_json["post_count"] >= minimum_post_count
- ]
-
- if not response_tags:
- break
-
- is_full = False
-
- for tag in response_tags:
- if tag in gold_only_tags:
- continue
-
- tags.add(tag)
-
- if len(tags) >= limit:
- is_full = True
- break
-
- if is_full:
- break
- else:
- parameters["page"] += 1
-
- return tags
-
-
-def download_tags(project_path, limit, minimum_post_count, is_overwrite):
- print(
- f"Start downloading tags ... (limit:{limit}, minimum_post_count:{minimum_post_count})"
- )
-
- log = {
- "date": time.strftime("%Y/%m/%d %H:%M:%S"),
- "limit": limit,
- "minimum_post_count": minimum_post_count,
- }
-
- system_tags = [
- "rating:general",
- "rating:sensitive",
- "rating:questionable",
- "rating:explicit",
- # 'score:very_bad',
- # 'score:bad',
- # 'score:average',
- # 'score:good',
- # 'score:very_good',
- ]
-
- category_definitions = [
- {
- "category_name": "General",
- "category": "general",
- "path": os.path.join(project_path, "tags-general.txt"),
- },
- # {
- # 'category_name': 'Artist',
- # 'category': 'artist',
- # 'path': os.path.join(path, 'tags-artist.txt'),
- # },
- # {
- # 'category_name': 'Copyright',
- # 'category': 'copyright',
- # 'path': os.path.join(path, 'tags-copyright.txt'),
- # },
- {
- "category_name": "Character",
- "category": "character",
- "path": os.path.join(project_path, "tags-character.txt"),
- },
- ]
-
- all_tags_path = os.path.join(project_path, "tags.txt")
-
- if not is_overwrite and os.path.exists(all_tags_path):
-        raise Exception(f"Tags file already exists: {all_tags_path}")
-
- dd.io.try_create_directory(os.path.dirname(all_tags_path))
- dd.io.serialize_as_json(log, os.path.join(project_path, "tags_log.json"))
-
- categories_for_web = []
- categories_for_web_path = os.path.join(project_path, "categories.json")
- tag_start_index = 0
-
- total_tags_count = 0
-
- with open(all_tags_path, "w") as all_tags_stream:
- for category_definition in category_definitions:
- category = category_definition["category"]
- category_tags_path = category_definition["path"]
-
- print(f"{category} tags are downloading ...")
- tags = download_category_tags(category, minimum_post_count, limit)
-
- tags = dd.extra.natural_sorted(tags)
- tag_count = len(tags)
-            if tag_count == 0:
-                print(f"No {category} tags were found.")
-                continue
-            else:
-                print(f"{tag_count} tags were downloaded.")
-
- with open(category_tags_path, "w") as category_tags_stream:
- for tag in tags:
- category_tags_stream.write(f"{tag}\n")
- all_tags_stream.write(f"{tag}\n")
-
- categories_for_web.append(
- {
- "name": category_definition["category_name"],
- "start_index": tag_start_index,
- }
- )
-
- tag_start_index += len(tags)
- total_tags_count += tag_count
-
- for tag in system_tags:
- all_tags_stream.write(f"{tag}\n")
-
- categories_for_web.append({"name": "System", "start_index": total_tags_count})
-
- dd.io.serialize_as_json(categories_for_web, categories_for_web_path)
-
-    print(f"In total, {total_tags_count} tags were downloaded.")
-
- print("All processes are complete.")
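A hedged sketch of how `download_tags` above is typically invoked and what it writes; the project path and numbers are hypothetical, and the `dd.io` / `dd.extra` helpers it calls come from the rest of the deepdanbooru package.

```python
# Hypothetical direct invocation (the deepdanbooru CLI normally calls this for you).
download_tags(
    project_path="my-danbooru-project",
    limit=10000,               # keep at most 10k tags per category
    minimum_post_count=500,    # skip rare tags
    is_overwrite=False,        # raises if my-danbooru-project/tags.txt already exists
)

# Per the code above, the project directory then contains:
#   tags.txt            - general + character tags followed by the rating:* system tags
#   tags-general.txt    - general tags only
#   tags-character.txt  - character tags only
#   categories.json     - e.g. [{"name": "General", "start_index": 0}, ...]
#   tags_log.json       - the date, limit, and minimum_post_count used for the run
```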
diff --git a/spaces/willdzierson/nlp_to_dates/README.md b/spaces/willdzierson/nlp_to_dates/README.md
deleted file mode 100644
index 777ff98457824e51037dec5bfaf953583b671144..0000000000000000000000000000000000000000
--- a/spaces/willdzierson/nlp_to_dates/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Nlp To Dates
-emoji: 📊
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/symbols.py b/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/symbols.py
deleted file mode 100644
index 053a7105f7ce95aa51614f6995399fa2172b3eb2..0000000000000000000000000000000000000000
--- a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/symbols.py
+++ /dev/null
@@ -1,76 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-
-
-'''# japanese_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-'''
-
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-'''# zh_ja_mixture_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '
-'''
-
-'''# sanskrit_cleaners
-_pad = '_'
-_punctuation = '।'
-_letters = 'ँंःअआइईउऊऋएऐओऔकखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसहऽािीुूृॄेैोौ्ॠॢ '
-'''
-
-'''# cjks_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'NQabdefghijklmnopstuvwxyzʃʧʥʦɯɹəɥçɸɾβŋɦː⁼ʰ`^#*=→↓↑ '
-'''
-
-'''# thai_cleaners
-_pad = '_'
-_punctuation = '.!? '
-_letters = 'กขฃคฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลวศษสหฬอฮฯะัาำิีึืุูเแโใไๅๆ็่้๊๋์'
-'''
-
-'''# cjke_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ '
-'''
-
-'''# shanghainese_cleaners
-_pad = '_'
-_punctuation = ',.!?…'
-_letters = 'abdfghiklmnopstuvyzøŋȵɑɔɕəɤɦɪɿʑʔʰ̩̃ᴀᴇ15678 '
-'''
-
-'''# chinese_dialect_cleaners
-_pad = '_'
-_punctuation = ',.!?~…─'
-_letters = '#Nabdefghijklmnoprstuvwxyzæçøŋœȵɐɑɒɓɔɕɗɘəɚɛɜɣɤɦɪɭɯɵɷɸɻɾɿʂʅʊʋʌʏʑʔʦʮʰʷˀː˥˦˧˨˩̥̩̃̚ᴀᴇ↑↓∅ⱼ '
-'''
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
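For context, a minimal sketch of how a symbol table like this is usually consumed by the text frontend (the lookup tables mirror what VITS-style projects build in `text/__init__.py`; the input string below is a made-up, already-cleaned example).

```python
# Sketch only: map an already-cleaned string to symbol ids, skipping unknown characters.
_symbol_to_id = {s: i for i, s in enumerate(symbols)}
_id_to_symbol = {i: s for i, s in enumerate(symbols)}

def text_to_sequence_sketch(cleaned_text):
    return [_symbol_to_id[ch] for ch in cleaned_text if ch in _symbol_to_id]

print(text_to_sequence_sketch("ohayou!"))   # hypothetical japanese_cleaners output
```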
diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/resnet_ibn_b.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/resnet_ibn_b.py
deleted file mode 100644
index 9881cc7d64e97a74bab35e6145197d6d740689ad..0000000000000000000000000000000000000000
--- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/resnet_ibn_b.py
+++ /dev/null
@@ -1,274 +0,0 @@
-"""
-Credit to https://github.com/XingangPan/IBN-Net.
-"""
-from __future__ import division, absolute_import
-import math
-import torch.nn as nn
-import torch.utils.model_zoo as model_zoo
-
-__all__ = ['resnet50_ibn_b']
-
-model_urls = {
- 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
- 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
- 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
-}
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(
- in_planes,
- out_planes,
- kernel_size=3,
- stride=stride,
- padding=1,
- bias=False
- )
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(inplanes, planes, stride)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1, downsample=None, IN=False):
- super(Bottleneck, self).__init__()
- self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
- self.bn1 = nn.BatchNorm2d(planes)
- self.conv2 = nn.Conv2d(
- planes,
- planes,
- kernel_size=3,
- stride=stride,
- padding=1,
- bias=False
- )
- self.bn2 = nn.BatchNorm2d(planes)
- self.conv3 = nn.Conv2d(
- planes, planes * self.expansion, kernel_size=1, bias=False
- )
- self.bn3 = nn.BatchNorm2d(planes * self.expansion)
- self.IN = None
- if IN:
- self.IN = nn.InstanceNorm2d(planes * 4, affine=True)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- if self.IN is not None:
- out = self.IN(out)
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
- """Residual network + IBN layer.
-
- Reference:
- - He et al. Deep Residual Learning for Image Recognition. CVPR 2016.
- - Pan et al. Two at Once: Enhancing Learning and Generalization
- Capacities via IBN-Net. ECCV 2018.
- """
-
- def __init__(
- self,
- block,
- layers,
- num_classes=1000,
- loss='softmax',
- fc_dims=None,
- dropout_p=None,
- **kwargs
- ):
- scale = 64
- self.inplanes = scale
- super(ResNet, self).__init__()
- self.loss = loss
- self.feature_dim = scale * 8 * block.expansion
-
- self.conv1 = nn.Conv2d(
- 3, scale, kernel_size=7, stride=2, padding=3, bias=False
- )
- self.bn1 = nn.InstanceNorm2d(scale, affine=True)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = self._make_layer(
- block, scale, layers[0], stride=1, IN=True
- )
- self.layer2 = self._make_layer(
- block, scale * 2, layers[1], stride=2, IN=True
- )
- self.layer3 = self._make_layer(block, scale * 4, layers[2], stride=2)
- self.layer4 = self._make_layer(block, scale * 8, layers[3], stride=2)
- self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
- self.fc = self._construct_fc_layer(
- fc_dims, scale * 8 * block.expansion, dropout_p
- )
- self.classifier = nn.Linear(self.feature_dim, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(2. / n))
- elif isinstance(m, nn.BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
- elif isinstance(m, nn.InstanceNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1, IN=False):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(
- self.inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False
- ),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks - 1):
- layers.append(block(self.inplanes, planes))
- layers.append(block(self.inplanes, planes, IN=IN))
-
- return nn.Sequential(*layers)
-
- def _construct_fc_layer(self, fc_dims, input_dim, dropout_p=None):
- """Constructs fully connected layer
-
- Args:
- fc_dims (list or tuple): dimensions of fc layers, if None, no fc layers are constructed
- input_dim (int): input dimension
- dropout_p (float): dropout probability, if None, dropout is unused
- """
- if fc_dims is None:
- self.feature_dim = input_dim
- return None
-
- assert isinstance(
- fc_dims, (list, tuple)
- ), 'fc_dims must be either list or tuple, but got {}'.format(
- type(fc_dims)
- )
-
- layers = []
- for dim in fc_dims:
- layers.append(nn.Linear(input_dim, dim))
- layers.append(nn.BatchNorm1d(dim))
- layers.append(nn.ReLU(inplace=True))
- if dropout_p is not None:
- layers.append(nn.Dropout(p=dropout_p))
- input_dim = dim
-
- self.feature_dim = fc_dims[-1]
-
- return nn.Sequential(*layers)
-
- def featuremaps(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.maxpool(x)
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
- return x
-
- def forward(self, x):
- f = self.featuremaps(x)
- v = self.avgpool(f)
- v = v.view(v.size(0), -1)
- if self.fc is not None:
- v = self.fc(v)
- if not self.training:
- return v
- y = self.classifier(v)
- if self.loss == 'softmax':
- return y
- elif self.loss == 'triplet':
- return y, v
- else:
- raise KeyError("Unsupported loss: {}".format(self.loss))
-
-
-def init_pretrained_weights(model, model_url):
- """Initializes model with pretrained weights.
-
- Layers that don't match with pretrained layers in name or size are kept unchanged.
- """
- pretrain_dict = model_zoo.load_url(model_url)
- model_dict = model.state_dict()
- pretrain_dict = {
- k: v
- for k, v in pretrain_dict.items()
- if k in model_dict and model_dict[k].size() == v.size()
- }
- model_dict.update(pretrain_dict)
- model.load_state_dict(model_dict)
-
-
-def resnet50_ibn_b(num_classes, loss='softmax', pretrained=False, **kwargs):
- model = ResNet(
- Bottleneck, [3, 4, 6, 3], num_classes=num_classes, loss=loss, **kwargs
- )
- if pretrained:
- init_pretrained_weights(model, model_urls['resnet50'])
- return model
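A short usage sketch of the builder above; the class count and input resolution are arbitrary re-id-style choices rather than values mandated by the code, and `pretrained=True` would additionally pull the torchvision ResNet-50 weights listed in `model_urls`.

```python
import torch

model = resnet50_ibn_b(num_classes=751, loss='softmax', pretrained=False)

x = torch.randn(8, 3, 256, 128)    # hypothetical batch of person crops

model.train()
logits = model(x)                  # [8, 751] classification logits (loss='softmax')

model.eval()
with torch.no_grad():
    feats = model(x)               # [8, 2048] pooled features, since fc_dims is None
```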
diff --git a/spaces/xiaoguolizi/anime-ai-detect/README.md b/spaces/xiaoguolizi/anime-ai-detect/README.md
deleted file mode 100644
index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000
--- a/spaces/xiaoguolizi/anime-ai-detect/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Anime Ai Detect
-emoji: 🤖
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
-duplicated_from: saltacc/anime-ai-detect
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/xp3857/aa-pr-2/css.css b/spaces/xp3857/aa-pr-2/css.css
deleted file mode 100644
index 633c4d2ce8e6bd77293e8085cebc2f291a69f424..0000000000000000000000000000000000000000
--- a/spaces/xp3857/aa-pr-2/css.css
+++ /dev/null
@@ -1,112 +0,0 @@
-.app.svelte-p7tiy3.svelte-p7tiy3{
- background:None;
-}
-.unpadded_box.large.svelte-1vhybi6{
- background:#6fbcffa8;
- min-height:100%;
-}
-span.svelte-1l2rj76{
- color:white !important;
-}
-div.svelte-1fwqiwq .block{
- background:#4d8df1;
-}
-.lg.svelte-1h4gtph{
- background:#4d8df1;
- color:white;
- height:100px;
-}
-#restart{
- position: relative;
- font-family: "Poppins",sans-serif;
- text-align: center;
- border-radius: 8px;
- background: #0063f787;
- border-style: solid;
- border-width: 1px;
- border-color: #ffffff;
- width: 100%;
- height: 50%;
- max-height: 200px;
- padding: 0px 10px;
- transform: translate(-50%,0%);
- left: 50%;
-}
-#head{
- color:white;
- margin-top:15px;
- margin-bottom:5px;
-}
-#cont{
- color: white;
- margin-top: 5px;
- margin-bottom: 15px;
- font-size: 1.1rem;
-}
-
-.lds-ellipsis {
- display: inline-block;
- position: relative;
- width: 80px;
- height: 80px;
-
-}
-.lds-ellipsis div {
- position: absolute;
- z-index:199999;
-
- top: 33px;
- width: 13px;
- height: 13px;
- border-radius: 50%;
- background: blue;
- animation-timing-function: cubic-bezier(0, 1, 1, 0);
-}
-.lds-ellipsis div:nth-child(1) {
- left: 8px;
- animation: lds-ellipsis1 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(2) {
- left: 8px;
- animation: lds-ellipsis2 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(3) {
- left: 32px;
- animation: lds-ellipsis2 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(4) {
- left: 56px;
- animation: lds-ellipsis3 0.6s infinite;
-}
-@keyframes lds-ellipsis1 {
- 0% {
- transform: scale(0);
- }
- 100% {
- transform: scale(1);
- }
-}
-@keyframes lds-ellipsis3 {
-  0% {
-    transform: scale(1);
-  }
-  100% {
-    transform: scale(0);
-  }
-}
-@keyframes lds-ellipsis2 {
- 0% {
- transform: translate(0, 0);
- }
- 100% {
- transform: translate(24px, 0);
- }
-}
\ No newline at end of file
diff --git a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/ops/__init__.py b/spaces/ygangang/CodeFormer/CodeFormer/basicsr/ops/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/yiningmao/metaphor-detection-baseline/scripts/run.sh b/spaces/yiningmao/metaphor-detection-baseline/scripts/run.sh
deleted file mode 100644
index fa86d54ca0189622e325bcddd5a9c3522bbb17da..0000000000000000000000000000000000000000
--- a/spaces/yiningmao/metaphor-detection-baseline/scripts/run.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-#!/bin/bash
-python main.py --data_dir data/VUA20 --task_name vua --model_type MELBERT --train_batch_size 32 --learning_rate 3e-5 --warmup_epoch 2
\ No newline at end of file
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blip_2/configuration_blip_2.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blip_2/configuration_blip_2.py
deleted file mode 100644
index 1b375e147f780b20866a46ea35542b7794148217..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blip_2/configuration_blip_2.py
+++ /dev/null
@@ -1,355 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" BLIP-2 model configuration"""
-
-import os
-from typing import Union
-
-from ...configuration_utils import PretrainedConfig
-from ...models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
-from ...utils import logging
-from ..auto import CONFIG_MAPPING
-
-
-logger = logging.get_logger(__name__)
-
-BLIP_2_PRETRAINED_CONFIG_ARCHIVE_MAP = {
- "salesforce/blip2-opt-2.7b": "https://huggingface.co/salesforce/blip2-opt-2.7b/resolve/main/config.json",
-}
-
-
-class Blip2VisionConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`Blip2VisionModel`]. It is used to instantiate a
- BLIP-2 vision encoder according to the specified arguments, defining the model architecture. Instantiating a
-    configuration with the defaults will yield a similar configuration to that of the BLIP-2
- [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
- Args:
- hidden_size (`int`, *optional*, defaults to 1408):
- Dimensionality of the encoder layers and the pooler layer.
- intermediate_size (`int`, *optional*, defaults to 6144):
- Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- num_hidden_layers (`int`, *optional*, defaults to 39):
- Number of hidden layers in the Transformer encoder.
- num_attention_heads (`int`, *optional*, defaults to 16):
- Number of attention heads for each attention layer in the Transformer encoder.
- image_size (`int`, *optional*, defaults to 224):
- The size (resolution) of each image.
- patch_size (`int`, *optional*, defaults to 14):
- The size (resolution) of each patch.
- hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
-            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
-            `"relu"`, `"selu"` and `"gelu_new"` are supported.
-        layer_norm_eps (`float`, *optional*, defaults to 1e-6):
-            The epsilon used by the layer normalization layers.
- attention_dropout (`float`, *optional*, defaults to 0.0):
- The dropout ratio for the attention probabilities.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- qkv_bias (`bool`, *optional*, defaults to `True`):
- Whether to add a bias to the queries and values in the self-attention layers.
-
- Example:
-
- ```python
- >>> from transformers import Blip2VisionConfig, Blip2VisionModel
-
- >>> # Initializing a Blip2VisionConfig with Salesforce/blip2-opt-2.7b style configuration
- >>> configuration = Blip2VisionConfig()
-
- >>> # Initializing a Blip2VisionModel (with random weights) from the Salesforce/blip2-opt-2.7b style configuration
- >>> model = Blip2VisionModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
-
- model_type = "blip_2_vision_model"
-
- def __init__(
- self,
- hidden_size=1408,
- intermediate_size=6144,
- num_hidden_layers=39,
- num_attention_heads=16,
- image_size=224,
- patch_size=14,
- hidden_act="gelu",
- layer_norm_eps=1e-6,
- attention_dropout=0.0,
- initializer_range=1e-10,
- qkv_bias=True,
- **kwargs,
- ):
- super().__init__(**kwargs)
-
- self.hidden_size = hidden_size
- self.intermediate_size = intermediate_size
- self.num_hidden_layers = num_hidden_layers
- self.num_attention_heads = num_attention_heads
- self.patch_size = patch_size
- self.image_size = image_size
- self.initializer_range = initializer_range
- self.attention_dropout = attention_dropout
- self.layer_norm_eps = layer_norm_eps
- self.hidden_act = hidden_act
- self.qkv_bias = qkv_bias
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
- cls._set_token_in_kwargs(kwargs)
-
- config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
-
- # get the vision config dict if we are loading from Blip2Config
- if config_dict.get("model_type") == "blip-2":
- config_dict = config_dict["vision_config"]
-
- if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
- logger.warning(
- f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
- f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
- )
-
- return cls.from_dict(config_dict, **kwargs)
-
-
-class Blip2QFormerConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`Blip2QFormerModel`]. It is used to instantiate a
- BLIP-2 Querying Transformer (Q-Former) model according to the specified arguments, defining the model architecture.
- Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-2
- [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) architecture. Configuration objects
- inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from
- [`PretrainedConfig`] for more information.
-
- Note that [`Blip2QFormerModel`] is very similar to [`BertLMHeadModel`] with interleaved cross-attention.
-
- Args:
- vocab_size (`int`, *optional*, defaults to 30522):
- Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by
-            the `input_ids` passed when calling the model.
- hidden_size (`int`, *optional*, defaults to 768):
- Dimensionality of the encoder layers and the pooler layer.
- num_hidden_layers (`int`, *optional*, defaults to 12):
- Number of hidden layers in the Transformer encoder.
- num_attention_heads (`int`, *optional*, defaults to 12):
- Number of attention heads for each attention layer in the Transformer encoder.
- intermediate_size (`int`, *optional*, defaults to 3072):
- Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
- hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
- The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
- `"relu"`, `"silu"` and `"gelu_new"` are supported.
- hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
- The dropout ratio for the attention probabilities.
- max_position_embeddings (`int`, *optional*, defaults to 512):
- The maximum sequence length that this model might ever be used with. Typically set this to something large
- just in case (e.g., 512 or 1024 or 2048).
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layer_norm_eps (`float`, *optional*, defaults to 1e-12):
- The epsilon used by the layer normalization layers.
- position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
- Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
- positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
- [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
- For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
- with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
- cross_attention_frequency (`int`, *optional*, defaults to 2):
- The frequency of adding cross-attention to the Transformer layers.
- encoder_hidden_size (`int`, *optional*, defaults to 1408):
- The hidden size of the hidden states for cross-attention.
-
- Examples:
-
- ```python
- >>> from transformers import Blip2QFormerConfig, Blip2QFormerModel
-
- >>> # Initializing a BLIP-2 Salesforce/blip2-opt-2.7b style configuration
- >>> configuration = Blip2QFormerConfig()
-
- >>> # Initializing a model (with random weights) from the Salesforce/blip2-opt-2.7b style configuration
- >>> model = Blip2QFormerModel(configuration)
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
- model_type = "blip_2_qformer"
-
- def __init__(
- self,
- vocab_size=30522,
- hidden_size=768,
- num_hidden_layers=12,
- num_attention_heads=12,
- intermediate_size=3072,
- hidden_act="gelu",
- hidden_dropout_prob=0.1,
- attention_probs_dropout_prob=0.1,
- max_position_embeddings=512,
- initializer_range=0.02,
- layer_norm_eps=1e-12,
- pad_token_id=0,
- position_embedding_type="absolute",
- cross_attention_frequency=2,
- encoder_hidden_size=1408,
- **kwargs,
- ):
- super().__init__(pad_token_id=pad_token_id, **kwargs)
-
- self.vocab_size = vocab_size
- self.hidden_size = hidden_size
- self.num_hidden_layers = num_hidden_layers
- self.num_attention_heads = num_attention_heads
- self.hidden_act = hidden_act
- self.intermediate_size = intermediate_size
- self.hidden_dropout_prob = hidden_dropout_prob
- self.attention_probs_dropout_prob = attention_probs_dropout_prob
- self.max_position_embeddings = max_position_embeddings
- self.initializer_range = initializer_range
- self.layer_norm_eps = layer_norm_eps
- self.position_embedding_type = position_embedding_type
- self.cross_attention_frequency = cross_attention_frequency
- self.encoder_hidden_size = encoder_hidden_size
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
- cls._set_token_in_kwargs(kwargs)
-
- config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
-
- # get the qformer config dict if we are loading from Blip2Config
- if config_dict.get("model_type") == "blip-2":
- config_dict = config_dict["qformer_config"]
-
- if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
- logger.warning(
- f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
- f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
- )
-
- return cls.from_dict(config_dict, **kwargs)
-
-
-class Blip2Config(PretrainedConfig):
- r"""
- [`Blip2Config`] is the configuration class to store the configuration of a [`Blip2ForConditionalGeneration`]. It is
- used to instantiate a BLIP-2 model according to the specified arguments, defining the vision model, Q-Former model
- and language model configs. Instantiating a configuration with the defaults will yield a similar configuration to
- that of the BLIP-2 [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
- Args:
- vision_config (`dict`, *optional*):
- Dictionary of configuration options used to initialize [`Blip2VisionConfig`].
- qformer_config (`dict`, *optional*):
- Dictionary of configuration options used to initialize [`Blip2QFormerConfig`].
- text_config (`dict`, *optional*):
- Dictionary of configuration options used to initialize any [`PretrainedConfig`].
- num_query_tokens (`int`, *optional*, defaults to 32):
- The number of query tokens passed through the Transformer.
-
- kwargs (*optional*):
- Dictionary of keyword arguments.
-
- Example:
-
- ```python
- >>> from transformers import (
- ... Blip2VisionConfig,
- ... Blip2QFormerConfig,
- ... OPTConfig,
- ... Blip2Config,
- ... Blip2ForConditionalGeneration,
- ... )
-
- >>> # Initializing a Blip2Config with Salesforce/blip2-opt-2.7b style configuration
- >>> configuration = Blip2Config()
-
- >>> # Initializing a Blip2ForConditionalGeneration (with random weights) from the Salesforce/blip2-opt-2.7b style configuration
- >>> model = Blip2ForConditionalGeneration(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
-
- >>> # We can also initialize a Blip2Config from a Blip2VisionConfig, Blip2QFormerConfig and any PretrainedConfig
-
- >>> # Initializing BLIP-2 vision, BLIP-2 Q-Former and language model configurations
- >>> vision_config = Blip2VisionConfig()
- >>> qformer_config = Blip2QFormerConfig()
- >>> text_config = OPTConfig()
-
-    >>> config = Blip2Config.from_vision_qformer_text_configs(vision_config, qformer_config, text_config)
- ```"""
-
- model_type = "blip-2"
-
- def __init__(self, vision_config=None, qformer_config=None, text_config=None, num_query_tokens=32, **kwargs):
- super().__init__(**kwargs)
-
- if vision_config is None:
- vision_config = {}
- logger.info("vision_config is None. initializing the Blip2VisionConfig with default values.")
-
- if qformer_config is None:
- qformer_config = {}
- logger.info("qformer_config is None. Initializing the Blip2QFormerConfig with default values.")
-
- if text_config is None:
- text_config = {}
- logger.info("text_config is None. Initializing the text config with default values (`OPTConfig`).")
-
- self.vision_config = Blip2VisionConfig(**vision_config)
- self.qformer_config = Blip2QFormerConfig(**qformer_config)
- text_model_type = text_config["model_type"] if "model_type" in text_config else "opt"
- self.text_config = CONFIG_MAPPING[text_model_type](**text_config)
-
- self.tie_word_embeddings = self.text_config.tie_word_embeddings
- self.is_encoder_decoder = self.text_config.is_encoder_decoder
-
- self.num_query_tokens = num_query_tokens
- self.qformer_config.encoder_hidden_size = self.vision_config.hidden_size
- self.use_decoder_only_language_model = self.text_config.model_type in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
- self.initializer_factor = 1.0
- self.initializer_range = 0.02
-
- @classmethod
- def from_vision_qformer_text_configs(
- cls,
- vision_config: Blip2VisionConfig,
- qformer_config: Blip2QFormerConfig,
- text_config: PretrainedConfig,
- **kwargs,
- ):
- r"""
- Instantiate a [`Blip2Config`] (or a derived class) from a BLIP-2 vision model, Q-Former and language model
- configurations.
-
- Returns:
- [`Blip2Config`]: An instance of a configuration object
- """
-
- return cls(
- vision_config=vision_config.to_dict(),
- qformer_config=qformer_config.to_dict(),
- text_config=text_config.to_dict(),
- **kwargs,
- )
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deprecated/mctct/configuration_mctct.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deprecated/mctct/configuration_mctct.py
deleted file mode 100644
index e91104112b686bf9ce76febbfae8a0a2ac6da5f6..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deprecated/mctct/configuration_mctct.py
+++ /dev/null
@@ -1,185 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""M-CTC-T model configuration"""
-
-from ....configuration_utils import PretrainedConfig
-from ....utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-MCTCT_PRETRAINED_CONFIG_ARCHIVE_MAP = {
- "speechbrain/m-ctc-t-large": "https://huggingface.co/speechbrain/m-ctc-t-large/resolve/main/config.json",
- # See all M-CTC-T models at https://huggingface.co/models?filter=mctct
-}
-
-
-class MCTCTConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`MCTCTModel`]. It is used to instantiate an
- M-CTC-T model according to the specified arguments, defining the model architecture. Instantiating a configuration
- with the defaults will yield a similar configuration to that of the M-CTC-T
- [speechbrain/m-ctc-t-large](https://huggingface.co/speechbrain/m-ctc-t-large) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
-
- Args:
- vocab_size (`int`, *optional*, defaults to 8065):
- Vocabulary size of the M-CTC-T model. Defines the number of different tokens that can be represented by the
-            `input_ids` passed when calling [`MCTCTModel`].
- hidden_size (`int`, *optional*, defaults to 1536):
- Dimension of the encoder layers and the pooler layer.
- num_hidden_layers (`int`, *optional*, defaults to 36):
- Number of hidden layers in the Transformer encoder.
- intermediate_size (`int`, *optional*, defaults to 6144):
- Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- num_attention_heads (`int`, *optional*, defaults to 4):
- Number of attention heads for each attention layer in the Transformer encoder.
- attention_head_dim (`int`, *optional*, defaults to 384):
- Dimensions of each attention head for each attention layer in the Transformer encoder.
- max_position_embeddings (`int`, *optional*, defaults to 920):
- The maximum sequence length that this model might ever be used with (after log-mel spectrogram extraction).
- layer_norm_eps (`float`, *optional*, defaults to 1e-05):
- The epsilon used by the layer normalization layers.
- layerdrop (`float`, *optional*, defaults to 0.3):
- The probability of dropping an encoder layer during training. The default 0.3 value is used in the original
- implementation.
- hidden_act (`str` or `function`, *optional*, defaults to `"relu"`):
- The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
- `"relu"`, `"selu"` and `"gelu_new"` are supported.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- hidden_dropout_prob (`float`, *optional*, defaults to 0.3):
-            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_probs_dropout_prob (`float`, *optional*, defaults to 0.3):
- The dropout ratio for the attention probabilities.
- pad_token_id (`int`, *optional*, defaults to 1):
- The tokenizer index of the pad token.
- bos_token_id (`int`, *optional*, defaults to 0):
- The tokenizer index of the bos token.
- eos_token_id (`int`, *optional*, defaults to 2):
- The tokenizer index of the eos token.
- conv_glu_dim (`int`, *optional*, defaults to 1):
-            The dimension of the output of the `Conv1dSubsampler` layer on which GLU is applied. Though the original
- Flashlight code uses the value of 2, here it's adapted to 1 due to transposition differences.
-        conv_dropout (`float`, *optional*, defaults to 0.3):
- The probability of randomly dropping the `Conv1dSubsampler` layer during training.
- num_conv_layers (`int`, *optional*, defaults to 1):
- Number of convolution layers before applying transformer encoder layers.
- conv_kernel (`Sequence[int]`, *optional*, defaults to `(7,)`):
- The kernel size of the 1D convolution applied before transformer layers. `len(conv_kernel)` must be equal
- to `num_conv_layers`.
- conv_stride (`Sequence[int]`, *optional*, defaults to `(3,)`):
- The stride length of the 1D convolution applied before transformer layers. `len(conv_stride)` must be equal
- to `num_conv_layers`.
- input_feat_per_channel (`int`, *optional*, defaults to 80):
- Feature dimensions of the channels of the input to the Conv1D layer.
- input_channels (`int`, *optional*, defaults to 1):
- Number of input channels of the input to the Conv1D layer.
- conv_channels (`List[int]`, *optional*):
- Channel sizes of intermediate Conv1D layers.
- ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
- Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
- instance of [`MCTCTForCTC`].
- ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
- Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
- occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
- of [`MCTCTForCTC`].
-
- Example:
-
- ```python
- >>> from transformers import MCTCTConfig, MCTCTModel
-
- >>> # Initializing a M-CTC-T mctct-large style configuration
- >>> configuration = MCTCTConfig()
-
- >>> # Initializing a model (with random weights) from the mctct-large style configuration
- >>> model = MCTCTModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
- model_type = "mctct"
-
- def __init__(
- self,
- vocab_size=8065,
- hidden_size=1536,
- num_hidden_layers=36,
- intermediate_size=6144,
- num_attention_heads=4,
- attention_head_dim=384,
- max_position_embeddings=920,
- layer_norm_eps=1e-5,
- layerdrop=0.3,
- hidden_act="relu",
- initializer_range=0.02,
- hidden_dropout_prob=0.3,
- attention_probs_dropout_prob=0.3,
- pad_token_id=1,
- bos_token_id=0,
- eos_token_id=2,
- conv_glu_dim=1,
- conv_dropout=0.3,
- num_conv_layers=1,
- conv_kernel=(7,),
- conv_stride=(3,),
- input_feat_per_channel=80,
- input_channels=1,
- conv_channels=None,
- ctc_loss_reduction="sum",
- ctc_zero_infinity=False,
- **kwargs,
- ):
- super().__init__(**kwargs, pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id)
- self.vocab_size = vocab_size
- self.hidden_size = hidden_size
- self.num_hidden_layers = num_hidden_layers
- self.intermediate_size = intermediate_size
- self.num_attention_heads = num_attention_heads
- self.attention_head_dim = attention_head_dim
- self.max_position_embeddings = max_position_embeddings
- self.layer_norm_eps = layer_norm_eps
- self.layerdrop = layerdrop
- self.hidden_act = hidden_act
- self.initializer_range = initializer_range
- self.hidden_dropout_prob = hidden_dropout_prob
- self.attention_probs_dropout_prob = attention_probs_dropout_prob
- self.pad_token_id = pad_token_id
- self.bos_token_id = bos_token_id
- self.eos_token_id = eos_token_id
- self.conv_glu_dim = conv_glu_dim
- self.conv_dropout = conv_dropout
- self.num_conv_layers = num_conv_layers
- self.input_feat_per_channel = input_feat_per_channel
- self.input_channels = input_channels
- self.conv_channels = conv_channels
- self.ctc_loss_reduction = ctc_loss_reduction
- self.ctc_zero_infinity = ctc_zero_infinity
-
- # prevents config testing fail with exporting to json
- self.conv_kernel = list(conv_kernel)
- self.conv_stride = list(conv_stride)
-
- if len(self.conv_kernel) != self.num_conv_layers:
- raise ValueError(
- "Configuration for convolutional module is incorrect. "
- "It is required that `len(config.conv_kernel)` == `config.num_conv_layers` "
- f"but is `len(config.conv_kernel) = {len(self.conv_kernel)}`, "
- f"`config.num_conv_layers = {self.num_conv_layers}`."
- )
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/jukebox/convert_jukebox.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/jukebox/convert_jukebox.py
deleted file mode 100644
index b56a25c57c70d113bfa12003fa92a86e272f8e86..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/jukebox/convert_jukebox.py
+++ /dev/null
@@ -1,279 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Convert Jukebox checkpoints"""
-
-import argparse
-import json
-import os
-from pathlib import Path
-
-import requests
-import torch
-
-from transformers import JukeboxConfig, JukeboxModel
-from transformers.utils import logging
-
-
-logging.set_verbosity_info()
-logger = logging.get_logger(__name__)
-
-
-PREFIX = "https://openaipublic.azureedge.net/jukebox/models/"
-MODEL_MAPPING = {
- "jukebox-1b-lyrics": [
- "5b/vqvae.pth.tar",
- "5b/prior_level_0.pth.tar",
- "5b/prior_level_1.pth.tar",
- "1b_lyrics/prior_level_2.pth.tar",
- ],
- "jukebox-5b-lyrics": [
- "5b/vqvae.pth.tar",
- "5b/prior_level_0.pth.tar",
- "5b/prior_level_1.pth.tar",
- "5b_lyrics/prior_level_2.pth.tar",
- ],
-}
-
-
-def replace_key(key):
- if key.endswith(".model.1.bias") and len(key.split(".")) > 10:
- key = key.replace(".model.1.bias", ".conv1d_1.bias")
- elif key.endswith(".model.1.weight") and len(key.split(".")) > 10:
- key = key.replace(".model.1.weight", ".conv1d_1.weight")
- elif key.endswith(".model.3.bias") and len(key.split(".")) > 10:
- key = key.replace(".model.3.bias", ".conv1d_2.bias")
- elif key.endswith(".model.3.weight") and len(key.split(".")) > 10:
- key = key.replace(".model.3.weight", ".conv1d_2.weight")
-
- if "conditioner_blocks.0." in key:
- key = key.replace("conditioner_blocks.0", "conditioner_blocks")
-
- if "prime_prior" in key:
- key = key.replace("prime_prior", "encoder")
-
- if ".emb." in key and "total" not in key and "absolute" not in key and "relative" not in key:
- key = key.replace(".emb.", ".")
-
- if key.endswith("k"): # replace vqvae.X.k with vqvae.X.codebook
- return key.replace(".k", ".codebook")
- if "y_emb." in key:
- return key.replace("y_emb.", "metadata_embedding.")
-
- if "x_emb.emb." in key:
- key = key.replace("0.x_emb.emb", "embed_tokens")
-
- if "prime_state_ln" in key:
- return key.replace("prime_state_ln", "encoder.final_layer_norm")
- if ".ln" in key:
- return key.replace(".ln", ".layer_norm")
- if "_ln" in key:
- return key.replace("_ln", "_layer_norm")
-
- if "prime_state_proj" in key:
- return key.replace("prime_state_proj", "encoder.proj_in")
- if "prime_x_out" in key:
- return key.replace("prime_x_out", "encoder.lm_head")
- if "prior.x_out" in key:
- return key.replace("x_out", "fc_proj_out")
- if "x_emb" in key:
- return key.replace("x_emb", "embed_tokens")
-
- return key
-
-
-def fix_jukebox_keys(state_dict, model_state_dict, key_prefix, mapping):
- new_dict = {}
- import re
-
- re_encoder_block_conv_in = re.compile(r"encoders.(\d*).level_blocks.(\d*).model.(\d*).(\d).(bias|weight)")
- re_encoder_block_resnet = re.compile(
- r"encoders.(\d*).level_blocks.(\d*).model.(\d*).(\d).model.(\d*).model.(\d*).(bias|weight)"
- )
- re_encoder_block_proj_out = re.compile(r"encoders.(\d*).level_blocks.(\d*).model.(\d*).(bias|weight)")
-
- re_decoder_block_conv_out = re.compile(r"decoders.(\d*).level_blocks.(\d*).model.(\d*).(\d).(bias|weight)")
- re_decoder_block_resnet = re.compile(
- r"decoders.(\d*).level_blocks.(\d*).model.(\d*).(\d).model.(\d*).model.(\d*).(bias|weight)"
- )
- re_decoder_block_proj_in = re.compile(r"decoders.(\d*).level_blocks.(\d*).model.(\d*).(bias|weight)")
-
- re_prior_cond_conv_out = re.compile(r"conditioner_blocks.(\d*).cond.model.(\d*).(\d).(bias|weight)")
- re_prior_cond_resnet = re.compile(
- r"conditioner_blocks.(\d*).cond.model.(\d*).(\d).model.(\d*).model.(\d*).(bias|weight)"
- )
- re_prior_cond_proj_in = re.compile(r"conditioner_blocks.(\d*).cond.model.(\d*).(bias|weight)")
-
- for original_key, value in state_dict.items():
- # rename vqvae.encoder keys
- if re_encoder_block_conv_in.fullmatch(original_key):
- regex_match = re_encoder_block_conv_in.match(original_key)
- groups = regex_match.groups()
- block_index = int(groups[2]) * 2 + int(groups[3])
- re_new_key = f"encoders.{groups[0]}.level_blocks.{groups[1]}.downsample_block.{block_index}.{groups[-1]}"
- key = re_encoder_block_conv_in.sub(re_new_key, original_key)
-
- elif re_encoder_block_resnet.fullmatch(original_key):
- regex_match = re_encoder_block_resnet.match(original_key)
- groups = regex_match.groups()
- block_index = int(groups[2]) * 2 + int(groups[3])
- conv_index = {"1": 1, "3": 2}[groups[-2]]
- prefix = f"encoders.{groups[0]}.level_blocks.{groups[1]}.downsample_block.{block_index}."
- resnet_block = f"resnet_block.{groups[-3]}.conv1d_{conv_index}.{groups[-1]}"
- re_new_key = prefix + resnet_block
- key = re_encoder_block_resnet.sub(re_new_key, original_key)
-
- elif re_encoder_block_proj_out.fullmatch(original_key):
- regex_match = re_encoder_block_proj_out.match(original_key)
- groups = regex_match.groups()
- re_new_key = f"encoders.{groups[0]}.level_blocks.{groups[1]}.proj_out.{groups[-1]}"
- key = re_encoder_block_proj_out.sub(re_new_key, original_key)
-
- # rename vqvae.decoder keys
- elif re_decoder_block_conv_out.fullmatch(original_key):
- regex_match = re_decoder_block_conv_out.match(original_key)
- groups = regex_match.groups()
- block_index = int(groups[2]) * 2 + int(groups[3]) - 2
- re_new_key = f"decoders.{groups[0]}.level_blocks.{groups[1]}.upsample_block.{block_index}.{groups[-1]}"
- key = re_decoder_block_conv_out.sub(re_new_key, original_key)
-
- elif re_decoder_block_resnet.fullmatch(original_key):
- regex_match = re_decoder_block_resnet.match(original_key)
- groups = regex_match.groups()
- block_index = int(groups[2]) * 2 + int(groups[3]) - 2
- conv_index = {"1": 1, "3": 2}[groups[-2]]
- prefix = f"decoders.{groups[0]}.level_blocks.{groups[1]}.upsample_block.{block_index}."
- resnet_block = f"resnet_block.{groups[-3]}.conv1d_{conv_index}.{groups[-1]}"
- re_new_key = prefix + resnet_block
- key = re_decoder_block_resnet.sub(re_new_key, original_key)
-
- elif re_decoder_block_proj_in.fullmatch(original_key):
- regex_match = re_decoder_block_proj_in.match(original_key)
- groups = regex_match.groups()
- re_new_key = f"decoders.{groups[0]}.level_blocks.{groups[1]}.proj_in.{groups[-1]}"
- key = re_decoder_block_proj_in.sub(re_new_key, original_key)
-
- # rename prior cond.model to upsampler.upsample_block and resnet
- elif re_prior_cond_conv_out.fullmatch(original_key):
- regex_match = re_prior_cond_conv_out.match(original_key)
- groups = regex_match.groups()
- block_index = int(groups[1]) * 2 + int(groups[2]) - 2
- re_new_key = f"conditioner_blocks.upsampler.upsample_block.{block_index}.{groups[-1]}"
- key = re_prior_cond_conv_out.sub(re_new_key, original_key)
-
- elif re_prior_cond_resnet.fullmatch(original_key):
- regex_match = re_prior_cond_resnet.match(original_key)
- groups = regex_match.groups()
- block_index = int(groups[1]) * 2 + int(groups[2]) - 2
- conv_index = {"1": 1, "3": 2}[groups[-2]]
- prefix = f"conditioner_blocks.upsampler.upsample_block.{block_index}."
- resnet_block = f"resnet_block.{groups[-3]}.conv1d_{conv_index}.{groups[-1]}"
- re_new_key = prefix + resnet_block
- key = re_prior_cond_resnet.sub(re_new_key, original_key)
-
- elif re_prior_cond_proj_in.fullmatch(original_key):
- regex_match = re_prior_cond_proj_in.match(original_key)
- groups = regex_match.groups()
- re_new_key = f"conditioner_blocks.upsampler.proj_in.{groups[-1]}"
- key = re_prior_cond_proj_in.sub(re_new_key, original_key)
-
- # keep original key
- else:
- key = original_key
-
- key = replace_key(key)
-
- if f"{key_prefix}.{key}" not in model_state_dict or key is None:
- print(f"failed converting {original_key} to {key}, does not match")
-
-        # handle mismatched shape
- elif value.shape != model_state_dict[f"{key_prefix}.{key}"].shape:
- val = model_state_dict[f"{key_prefix}.{key}"]
-            print(f"{original_key} -> {key}: \nshape {val.shape} and {value.shape} do not match")
- key = original_key
-
- mapping[key] = original_key
- new_dict[key] = value
-
- return new_dict
-
-
-@torch.no_grad()
-def convert_openai_checkpoint(model_name=None, pytorch_dump_folder_path=None):
- """
- Copy/paste/tweak model's weights to our Jukebox structure.
- """
- for file in MODEL_MAPPING[model_name]:
- if not os.path.isfile(f"{pytorch_dump_folder_path}/{file.split('/')[-1]}"):
- r = requests.get(f"{PREFIX}{file}", allow_redirects=True)
- os.makedirs(f"{pytorch_dump_folder_path}/", exist_ok=True)
- open(f"{pytorch_dump_folder_path}/{file.split('/')[-1]}", "wb").write(r.content)
-
- model_to_convert = MODEL_MAPPING[model_name.split("/")[-1]]
-
- config = JukeboxConfig.from_pretrained(model_name)
- model = JukeboxModel(config)
-
- weight_dict = []
- mapping = {}
- for i, dict_name in enumerate(model_to_convert):
- old_dic = torch.load(f"{pytorch_dump_folder_path}/{dict_name.split('/')[-1]}")["model"]
-
- new_dic = {}
- for k in old_dic.keys():
- if k.endswith(".b"):
- new_dic[k.replace("b", "bias")] = old_dic[k]
- elif k.endswith(".w"):
- new_dic[k.replace("w", "weight")] = old_dic[k]
- elif "level_2" not in dict_name and "cond.model." in k:
- new_dic[k.replace(".blocks.", ".model.")] = old_dic[k]
- else:
- new_dic[k] = old_dic[k]
-
- key_prefix = "vqvae" if i == 0 else f"priors.{3 - i}"
- new_dic = fix_jukebox_keys(new_dic, model.state_dict(), key_prefix, mapping)
- weight_dict.append(new_dic)
-
- vqvae_state_dict = weight_dict.pop(0)
- model.vqvae.load_state_dict(vqvae_state_dict)
- for i in range(len(weight_dict)):
- model.priors[i].load_state_dict(weight_dict[2 - i])
-
- Path(pytorch_dump_folder_path).mkdir(exist_ok=True)
- with open(f"{pytorch_dump_folder_path}/mapping.json", "w") as txtfile:
- json.dump(mapping, txtfile)
-
- print(f"Saving model {model_name} to {pytorch_dump_folder_path}")
- model.save_pretrained(pytorch_dump_folder_path)
-
- return weight_dict
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- # Required parameters
- parser.add_argument(
- "--model_name",
- default="jukebox-5b-lyrics",
- type=str,
- help="Name of the model you'd like to convert.",
- )
- parser.add_argument(
- "--pytorch_dump_folder_path",
- default="jukebox-5b-lyrics-converted",
- type=str,
- help="Path to the output PyTorch model directory.",
- )
- args = parser.parse_args()
- convert_openai_checkpoint(args.model_name, args.pytorch_dump_folder_path)
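To make the key renaming above concrete, a couple of hedged examples of what `replace_key` does to raw checkpoint keys; the keys are illustrative, shaped like the patterns the function expects, not taken from an actual checkpoint.

```python
# Illustrative only (hypothetical keys):
print(replace_key("prior.x_out.weight"))           # -> "prior.fc_proj_out.weight"
print(replace_key("bottleneck.level_blocks.0.k"))  # -> "bottleneck.level_blocks.0.codebook"
print(replace_key("prime_state_ln.weight"))        # -> "encoder.final_layer_norm.weight"
```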
diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/root_cfg.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/root_cfg.py
deleted file mode 100644
index 33d1d4bd2d9ddf31d55c655c49d13a8b7ac7b376..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/root_cfg.py
+++ /dev/null
@@ -1,14 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from itertools import count
-
-from detectron2.config import LazyCall as L
-
-from .dir1.dir1_a import dir1a_dict, dir1a_str
-
-dir1a_dict.a = "modified"
-
-# modification above won't affect future imports
-from .dir1.dir1_b import dir1b_dict, dir1b_str
-
-
-lazyobj = L(count)(x=dir1a_str, y=dir1b_str)
diff --git a/spaces/zhanghaohui/szu-gpt-academic/docs/README_RS.md b/spaces/zhanghaohui/szu-gpt-academic/docs/README_RS.md
deleted file mode 100644
index 5ba5fcccc30db520d38e21950e2f7cfc03d324c5..0000000000000000000000000000000000000000
--- a/spaces/zhanghaohui/szu-gpt-academic/docs/README_RS.md
+++ /dev/null
@@ -1,278 +0,0 @@
-> **Note**
->
-> This file is automatically generated by the project's markdown translation module and may not be 100% accurate.
->
-# GPT Academic Optimization (GPT Academic)
-
-**If you like this project, please give it a star. If you have come up with more useful shortcut prompts or function plugins, feel free to open an issue or a pull request.**
-To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
-
-> **Note**
->
-> 1. Please note that only the function plugins (buttons) marked in **red** support reading files, and some plugins are located in the **drop-down menu** of the plugin area. In addition, we welcome and handle pull requests for any new plugins with the highest priority!
->
-> 2. The functionality of each file in this project is described in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As the project iterates, you can regenerate this self-analysis report at any time by clicking the corresponding function plugin and calling GPT. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation instructions](#installation).
->
-> 3. This project is compatible with and encourages the use of Chinese large language models such as chatglm, RWKV, PanGu, etc. Multiple api-keys can coexist and can be specified in the configuration file, e.g. `API_KEY="openai-key1,openai-key2,api2d-key3"`. To temporarily change the `API_KEY`, enter a temporary `API_KEY` in the input area and press Enter for it to take effect.
-
-> **Note**
->
-> When installing dependencies, strictly use the versions **specified in requirements.txt**.
->
-> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
-
-Feature | Description
---- | ---
-One-click polishing | Supports one-click polishing and one-click grammar checking of academic papers
-One-click Chinese-English translation | One-click Chinese-English translation
-One-click code explanation | Display, explain, generate and comment code
-[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys
-Modular design | Supports powerful custom [function plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions); plugins support [hot reloading](https://github.com/binary-husky/chatgpt_academic/wiki/Function-Plug-in-Guide)
-[Self-analysis of the program](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click review](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academicProject-Self-analysis-Report) of the source code of this project
-[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
-Read papers, [translate](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function plugin] One-click reading of the full text of an academic paper and generation of a summary
-Full [LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) translation and polishing | [Function plugin] One-click translation or polishing of a LaTeX paper
-Automatic comment generation | [Function plugin] One-click automatic generation of function comments
-Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in these 5 languages?
-Chat analysis report generation | [Function plugin] Automatically generates a summary report after a run
-Full-text translation of [PDF papers](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title and abstract of a [PDF paper](https://www.bilibili.com/video/BV1KT411x7Wn) and translates the whole document (multi-threaded)
-[Arxiv Helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter the URL of an arxiv paper to translate the abstract and download the PDF with one click
-[Google Scholar Integration Helper](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let GPT [write a related-work overview](https://www.bilibili.com/video/BV1GP411U7Az/) for you
-Internet information aggregation + GPT | [Function plugin] One-click [ask GPT to fetch information from the Internet](https://www.bilibili.com/video/BV1om4y127ck) and then answer questions, so the information is never outdated
-Formula / image / table display | Can display formulas in both [TeX and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) at the same time; supports formula and code highlighting
-Multi-threaded function plugin support | Supports multi-threaded calls to chatgpt, one-click processing of [large volumes of text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs
-Dark gradio theme for the app | Add ```/?__theme=dark``` to the end of the URL in the browser to switch to the dark theme
-[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) supported, [API2D](https://api2d.com/) supported | Served simultaneously by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS)
-More new LLM models connected, [huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) deployment supported | Newbing interface (new Bing) connected, [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Pangu α](https://openi.org.cn/pangu/) supported
-More new features (image generation, etc.) | See the end of this file…
-
-- All buttons are dynamically generated by reading functional.py, and custom functions can be freely added, freeing up your clipboard
-
-
-
-
-- Revision/Correction
-
-
-
-
-- If the output contains formulas, they will be displayed in both tex and rendered form for easy copying and reading
-
-
-
-
-- Don't feel like looking at project code? Show the entire project directly in chatgpt
-
-
-
-
-- Mixing multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-
-
-
----
-# Installation
-## Installation - Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-2. Configure API_KEY
-
-In `config.py`, configure the API KEY and other settings; see [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program runs, it first checks whether a private configuration file named `config_private.py` exists and, if so, uses its settings to override the identically named options in `config.py`. If you understand this configuration-reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the configuration from `config.py` into `config_private.py`. `config_private.py` is not tracked by git, which keeps your private information more secure. The project also supports configuring most options through environment variables; the environment-variable format follows the `docker-compose` file. Read priority: `environment variables` > `config_private.py` > `config.py`.)
-
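-A minimal `config_private.py` might look like the sketch below; it only uses `API_KEY` and `WEB_PORT`, both of which are mentioned elsewhere in this README, and the values are placeholders rather than working settings:
-
-```python
-# config_private.py -- hypothetical minimal override, not tracked by git.
-# Any option defined here shadows the option of the same name in config.py;
-# environment variables with the same names take priority over both files.
-API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxx"   # placeholder key; several keys may be comma-separated
-WEB_PORT = 50923                          # example port for the web UI (same option referenced in the Docker section)
-```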
-
-3. Install dependencies
-```sh
-# (Option I: If familiar with Python; Python 3.9 or newer, the newer the better.) Note: use the official pip source or the aliyun pip source; to switch sources temporarily: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: If unfamiliar with Python) Use Anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # create an Anaconda environment
-conda activate gptac_venv # activate Anaconda environment
-python -m pip install -r requirements.txt # This step is the same as the pip installation
-```
-
-
-
-[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as backend, you need to install more dependencies (prerequisites: familiar with Python + have used Pytorch + computer configuration is strong):
-```sh
-# [Optional step I] Support Tsinghua ChatGLM. Note: if you encounter the "Call ChatGLM fail cannot load ChatGLM parameters normally" error, refer to the following: 1. The default installation above is the torch+cpu version; to use CUDA you must uninstall torch and reinstall torch+cuda. 2. If the model cannot be loaded because the local machine is not powerful enough, you can lower the model precision in request_llm/bridge_chatglm.py by changing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# [Optional step II] Support Fudan MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note that when executing this line of code, you must be in the project root path
-
-# [Optional step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently, all supported models are as follows (the jittorllms series currently only supports the docker solution):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. Run
-```sh
-python main.py
-```
-
-5. Test the function plugins
-```
-- Test the function plugin template (it asks GPT what happened in history on this day); you can use this function as a template to implement more complex functions
- Click "[Function plugin Template Demo] On this day in history"
-```
-
-## Installation - Method 2: Using Docker
-
-1. ChatGPT only (recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/chatgpt_academic.git # download the project
-cd chatgpt_academic # enter the path
-nano config.py # edit config.py with any text editor to configure "Proxy", "API_KEY", and "WEB_PORT" (eg 50923)
-docker build -t gpt-academic . # install
-
-# (Last step-Option 1) In a Linux environment, using `--net=host` is more convenient and faster
-docker run --rm -it --net=host gpt-academic
-# (Last step-Option 2) In macOS/windows environment, only -p option can be used to expose the port on the container (eg 50923) to the port on the host
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
-
-``` sh
-# Edit docker-compose.yml, delete solutions 1 and 3, and keep solution 2. Modify the configuration of solution 2 in docker-compose.yml, refer to the comments in it
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker)
-``` sh
-# Edit docker-compose.yml, delete solutions 1 and 2, and keep solution 3. Modify the configuration of solution 3 in docker-compose.yml, refer to the comments in it
-docker-compose up
-```
-
-
-## Installation - Method 3: Other Deployment Methods
-
-1. How to use reverse proxy URL/Microsoft Azure API
-Configure API_URL_REDIRECT according to the instructions in `config.py` (a hedged example sketch follows at the end of this list).
-
-2. Remote Cloud Server Deployment (Requires Knowledge and Experience of Cloud Servers)
-Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL2 (Windows Subsystem for Linux)
-Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to run under a secondary URL path (such as `http://localhost/subpath`)
-Please visit [FastAPI Operation Instructions](docs/WithFastapi.md)
-
-5. Using docker-compose to run
-Please read docker-compose.yml and follow the prompts to operate.
-
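-As referenced in item 1 above, here is a hedged sketch of what an `API_URL_REDIRECT` entry might look like; the exact dictionary key and value format below are assumptions, so treat the comments in `config.py` as authoritative:
-
-```python
-# Hypothetical redirect entry for config.py / config_private.py:
-# requests that would normally go to the official endpoint are sent to your own reverse proxy instead.
-API_URL_REDIRECT = {
-    "https://api.openai.com/v1/chat/completions": "https://your-reverse-proxy.example.com/v1/chat/completions",
-}
-```
-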
----
-# Advanced Usage
-## Customize new convenient buttons / custom function plugins
-
-1. Customize new convenient buttons (academic shortcuts)
-Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, both prefixes and suffixes can be hot-modified without having to restart the program.)
-For example:
-```
-"Super English to Chinese": {
- # Prefix, will be added before your input. For example, describe your requirements, such as translation, code interpretation, polishing, etc.
- "Prefix": "Please translate the following content into Chinese, and then explain each proper noun that appears in the text with a markdown table:\n\n",
-
- # Suffix, will be added after your input. For example, with the prefix, you can enclose your input content in quotes.
- "Suffix": "",
-},
-```
-
-
-
-
-2. Custom function plugin
-
-Write powerful function plugins to perform any task you can and can't imagine.
-The difficulty of debugging and writing plugins in this project is very low. As long as you have a certain knowledge of python, you can implement your own plugin function by imitating the template we provide.
-Please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) for details.
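-
-As a rough illustration only (not the project's actual template): a function plugin is essentially a Python generator that receives the user input and chat state and yields updated state back to the UI. The function name, argument list and yield format below are assumptions, so copy the real signature from the templates in `crazy_functions/` instead:
-
-```python
-# Hypothetical plugin skeleton; argument names and the yielded tuple are assumptions.
-# Imitate the official template in crazy_functions/ for the real interface.
-def my_demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    chatbot.append((txt, "Plugin received your input, thinking..."))  # echo into the chat window
-    yield chatbot, history, "Normal"                                  # push intermediate state to the UI
-    answer = f"You asked: {txt}"                                      # replace with a real LLM call
-    chatbot[-1] = (txt, answer)
-    history.extend([txt, answer])
-    yield chatbot, history, "Normal"                                  # push the final result
-```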
-
----
-# Latest Update
-## Recent new features
-
-1. Conversation saving. Call "Save current conversation" in the function plugin area to save the current conversation as a readable and restorable HTML file. In addition, call "Load conversation history archive" in the function plugin area (drop-down menu) to restore a previous session. Tip: clicking "Load conversation history archive" without specifying a file lets you browse the cache of historical HTML archives; click "Delete all local conversation history records" to delete all HTML archive caches.
-
-2. Report generation. Most plugins generate a work report after they finish running.
-
-3. Modular function design with simple interfaces that support powerful functionality.
-
-4. This is an open-source project that can "translate itself".
-
-5. Translating other open-source projects is not a problem.
-
-6. Small [live2d](https://github.com/fghrsh/live2d_demo) decoration features (disabled by default, requires modifying `config.py`).
-
-7. Support for the MOSS large language model.
-
-8. OpenAI image generation.
-
-9. OpenAI audio parsing and summarization.
-
-10. Full-text proofreading and correction of LaTeX documents.
-
-## Versions:
-- Version 3.5 (Todo): call the project's function plugins using natural language (high priority)
-- Version 3.4 (Todo): improve multi-threading support for locally deployed large chat models
-- Version 3.3: added the Internet information aggregation feature
-- Version 3.2: function plugins support more parameter interfaces (conversation saving, analysis of code in any language, and querying arbitrary LLM combinations at the same time)
-- Version 3.1: support for querying multiple GPT models at the same time! Support for api2d and load balancing across multiple api keys
-- Version 3.0: support for chatglm and other small LLMs
-- Version 2.6: redesigned the plugin structure, improved interactivity, added more plugins
-- Version 2.5: self-updating; solves the problem of overly long text and token overflow when summarizing large projects
-- Version 2.4: (1) added PDF full-text translation; (2) added input-area position switching; (3) added a vertical layout option; (4) optimized multi-threaded function plugins
-- Version 2.3: improved multi-threaded interactivity
-- Version 2.2: function plugins support hot reloading
-- Version 2.1: collapsible layout
-- Version 2.0: introduced modular function plugins
-- Version 1.0: basic functionality
-
-gpt_academic developer QQ group-2: 610599535
-
-- Known issues
-  - Some browser translation plugins interfere with the frontend of this software
-  - A gradio version that is too new or too old can cause many exceptions
-
-## References and learning materials
-
-```
-The code references the designs of many other excellent projects, including:
-
-# Project 1: Tsinghua ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: Chuanhu ChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/spaces/zlc99/M4Singer/modules/parallel_wavegan/utils/__init__.py b/spaces/zlc99/M4Singer/modules/parallel_wavegan/utils/__init__.py
deleted file mode 100644
index e8fa95a020706b5412c3959fbf6e5980019c0d5f..0000000000000000000000000000000000000000
--- a/spaces/zlc99/M4Singer/modules/parallel_wavegan/utils/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .utils import * # NOQA
diff --git a/spaces/zlc99/M4Singer/tasks/tts/pe.py b/spaces/zlc99/M4Singer/tasks/tts/pe.py
deleted file mode 100644
index 3880c80d0820c36e044c00bd38a07fd3cce73323..0000000000000000000000000000000000000000
--- a/spaces/zlc99/M4Singer/tasks/tts/pe.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import matplotlib
-matplotlib.use('Agg')
-
-import torch
-import numpy as np
-import os
-
-from tasks.base_task import BaseDataset
-from tasks.tts.fs2 import FastSpeech2Task
-from modules.fastspeech.pe import PitchExtractor
-import utils
-from utils.indexed_datasets import IndexedDataset
-from utils.hparams import hparams
-from utils.plot import f0_to_figure
-from utils.pitch_utils import norm_interp_f0, denorm_f0
-
-
-class PeDataset(BaseDataset):
- def __init__(self, prefix, shuffle=False):
- super().__init__(shuffle)
- self.data_dir = hparams['binary_data_dir']
- self.prefix = prefix
- self.hparams = hparams
- self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
- self.indexed_ds = None
-
- # pitch stats
- f0_stats_fn = f'{self.data_dir}/train_f0s_mean_std.npy'
- if os.path.exists(f0_stats_fn):
- hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = np.load(f0_stats_fn)
- hparams['f0_mean'] = float(hparams['f0_mean'])
- hparams['f0_std'] = float(hparams['f0_std'])
- else:
- hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = None, None
-
- if prefix == 'test':
- if hparams['num_test_samples'] > 0:
- self.avail_idxs = list(range(hparams['num_test_samples'])) + hparams['test_ids']
- self.sizes = [self.sizes[i] for i in self.avail_idxs]
-
- def _get_item(self, index):
- if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
- index = self.avail_idxs[index]
- if self.indexed_ds is None:
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- return self.indexed_ds[index]
-
- def __getitem__(self, index):
- hparams = self.hparams
- item = self._get_item(index)
- max_frames = hparams['max_frames']
- spec = torch.Tensor(item['mel'])[:max_frames]
- # mel2ph = torch.LongTensor(item['mel2ph'])[:max_frames] if 'mel2ph' in item else None
- f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
- pitch = torch.LongTensor(item.get("pitch"))[:max_frames]
- # print(item.keys(), item['mel'].shape, spec.shape)
- sample = {
- "id": index,
- "item_name": item['item_name'],
- "text": item['txt'],
- "mel": spec,
- "pitch": pitch,
- "f0": f0,
- "uv": uv,
- # "mel2ph": mel2ph,
- # "mel_nonpadding": spec.abs().sum(-1) > 0,
- }
- return sample
-
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- id = torch.LongTensor([s['id'] for s in samples])
- item_names = [s['item_name'] for s in samples]
- text = [s['text'] for s in samples]
- f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
- pitch = utils.collate_1d([s['pitch'] for s in samples])
- uv = utils.collate_1d([s['uv'] for s in samples])
- mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
- mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
- # mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
- # if samples[0]['mel2ph'] is not None else None
- # mel_nonpaddings = utils.collate_1d([s['mel_nonpadding'].float() for s in samples], 0.0)
-
- batch = {
- 'id': id,
- 'item_name': item_names,
- 'nsamples': len(samples),
- 'text': text,
- 'mels': mels,
- 'mel_lengths': mel_lengths,
- 'pitch': pitch,
- # 'mel2ph': mel2ph,
- # 'mel_nonpaddings': mel_nonpaddings,
- 'f0': f0,
- 'uv': uv,
- }
- return batch
-
-
-class PitchExtractionTask(FastSpeech2Task):
- def __init__(self):
- super().__init__()
- self.dataset_cls = PeDataset
-
- def build_tts_model(self):
- self.model = PitchExtractor(conv_layers=hparams['pitch_extractor_conv_layers'])
-
- # def build_scheduler(self, optimizer):
- # return torch.optim.lr_scheduler.StepLR(optimizer, hparams['decay_steps'], gamma=0.5)
- def _training_step(self, sample, batch_idx, _):
- loss_output = self.run_model(self.model, sample)
- total_loss = sum([v for v in loss_output.values() if isinstance(v, torch.Tensor) and v.requires_grad])
- loss_output['batch_size'] = sample['mels'].size()[0]
- return total_loss, loss_output
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- outputs['losses'] = {}
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=True)
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- self.plot_pitch(batch_idx, model_out, sample)
- return outputs
-
- def run_model(self, model, sample, return_output=False, infer=False):
- f0 = sample['f0']
- uv = sample['uv']
- output = model(sample['mels'])
- losses = {}
- self.add_pitch_loss(output, sample, losses)
- if not return_output:
- return losses
- else:
- return losses, output
-
- def plot_pitch(self, batch_idx, model_out, sample):
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- self.logger.experiment.add_figure(
- f'f0_{batch_idx}',
- f0_to_figure(gt_f0[0], None, model_out['f0_denorm_pred'][0]),
- self.global_step)
-
- def add_pitch_loss(self, output, sample, losses):
- # mel2ph = sample['mel2ph'] # [B, T_s]
- mel = sample['mels']
- f0 = sample['f0']
- uv = sample['uv']
- # nonpadding = (mel2ph != 0).float() if hparams['pitch_type'] == 'frame' \
- # else (sample['txt_tokens'] != 0).float()
- nonpadding = (mel.abs().sum(-1) > 0).float() # sample['mel_nonpaddings']
- # print(nonpadding[0][-8:], nonpadding.shape)
- self.add_f0_loss(output['pitch_pred'], f0, uv, losses, nonpadding=nonpadding)
\ No newline at end of file
diff --git a/spaces/zomehwh/vits-models/text/symbols.py b/spaces/zomehwh/vits-models/text/symbols.py
deleted file mode 100644
index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000
--- a/spaces/zomehwh/vits-models/text/symbols.py
+++ /dev/null
@@ -1,39 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-'''# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-'''
-
-'''# japanese_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-'''
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-# zh_ja_mixture_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '
-
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
\ No newline at end of file