-Your phone network may be too new or too old for Fastgsm s3g 1.0.0.42. To remove Fastgsm s3g 1.0.0.42 from your computer completely and safely, you need to do the following:
-
-- Go to the Control Panel on your computer and click on the Programs and Features option.
-- Find Fastgsm s3g 1.0.0.42 from the list of installed programs and click on it.
-- Click on the Uninstall button at the top or right-click on it and select Uninstall from the menu.
-- A confirmation window will appear asking you if you want to uninstall Fastgsm s3g 1.0.0.42. Click on the Yes button to proceed.
-- The uninstallation wizard will start and guide you through the uninstallation process step by step.
-- Click on the Next button to continue with the uninstallation.
-- Select the option to remove all settings and data associated with Fastgsm s3g 1.0.0.42, and click on the Next button again.
-- Click on the Uninstall button to start the uninstallation process.
-- Wait for a few minutes until the uninstallation is complete, and click on the Finish button to exit the wizard.
-
-You have now uninstalled Fastgsm s3g 1.0.0.42 from your computer completely and safely.
-You may need to restart your computer to complete the uninstallation process.
-If you encounter any problems during the uninstallation process, such as error messages or leftover files or folders, you can use a free online tool like this one to scan your computer and clean it of any remaining traces of Fastgsm s3g 1.0.0.42.
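-If you prefer to check for leftovers yourself, here is a minimal Python sketch (a generic illustration, not the online tool linked above) that lists Windows uninstall-registry entries whose display name mentions Fastgsm. It assumes a stock Python 3 install on Windows; 32-bit entries may also live under WOW6432Node:
-
-import winreg
-
-UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
-
-with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
-    subkey_count = winreg.QueryInfoKey(root)[0]  # number of uninstall entries
-    for i in range(subkey_count):
-        name = winreg.EnumKey(root, i)
-        with winreg.OpenKey(root, name) as entry:
-            try:
-                display, _ = winreg.QueryValueEx(entry, "DisplayName")
-            except FileNotFoundError:
-                continue  # entry has no display name; skip it
-            if "fastgsm" in display.lower():
-                print(name, "->", display)  # leftover entry to inspect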
- Conclusion
-In this article, we have shown you what Fastgsm s3g 1.0.0.42 is, why you need it, how to download and install it, how to use it to unlock your Samsung phone, how to troubleshoot common issues with it, and how to update or uninstall it.
-We hope that this article has been helpful and informative for you, and that you have learned something new and useful about Fastgsm s3g 1.0.0.42.
-If you want to try Fastgsm s3g 1.0.0.42 for yourself, you can download it for free from here, and unlock your Samsung phone in minutes.
-If you have any questions, feedback, or suggestions about Fastgsm s3g 1.0.0.42, you can contact their customer service here, or visit their website here.
-You can also share your experience or opinion about Fastgsm s3g 1.0.0.42 with other users or readers by leaving a comment below this article, or by posting on social media platforms like Facebook, Twitter, or Instagram.
-We would love to hear from you and learn from your insights and perspectives.
-Please note that unlocking your phone may void your warranty or violate your carrier's terms of service, and that you are responsible for your own actions.
-We are not affiliated with or endorsed by Fastgsm s3g 1.0.0.42, and we do not guarantee the accuracy or reliability of the information or software provided in this article.
-This article is for educational and informational purposes only, and you should use Fastgsm s3g 1.0.0.42 at your own risk and discretion.
- FAQs
-Here are some frequently asked questions and answers about Fastgsm s3g 1.0.0.42:
- Q: Is Fastgsm s3g 1.0.0.42 free?
-A: Yes, Fastgsm s3g 1.0.0.42 is free to download and use. However, you may need to pay a small fee to get the unlock code for your phone model and network, depending on the availability and demand of the code.
- Q: Is Fastgsm s3g 1.0.0.42 safe?
-A: Yes, Fastgsm s3g 1.0.0.42 is safe to use, as long as you download it from a reliable source and verify its file integrity and security before installing it. You should also scan your computer and phone for any viruses or malware before and after using the software.
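-To make the "verify its file integrity" step concrete, here is a minimal Python sketch that computes a SHA-256 checksum of the downloaded installer so you can compare it against a checksum published by a source you trust. The filename is a hypothetical placeholder:
-
-import hashlib
-
-def sha256_of(path, chunk_size=1 << 20):
-    digest = hashlib.sha256()
-    with open(path, "rb") as f:
-        # read in chunks so large installers do not need to fit in memory
-        for chunk in iter(lambda: f.read(chunk_size), b""):
-            digest.update(chunk)
-    return digest.hexdigest()
-
-print(sha256_of("fastgsm_s3g_1.0.0.42_setup.exe"))  # hypothetical filename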
- Q: Is Fastgsm s3g 1.0.0.42 legal?
-A: Yes, Fastgsm s3g 1.0.0.42 is legal to use, as long as you own the phone that you want to unlock and you do not intend to use it for any illegal or fraudulent purposes. However, unlocking your phone may void your warranty or violate your carrier's terms of service, so you should check with them before using the software.
- Q: How long does it take to unlock a Samsung phone with Fastgsm s3g 1.0.0.42?
-A: It usually takes only a few minutes to unlock a Samsung phone with Fastgsm s3g 1.0.0.42, depending on the speed of your internet connection and the availability of the unlock code. However, some phone models or networks may take longer than others, so you should be patient and wait for the software to complete the process.
- Q: What if Fastgsm s3g 1.0.0.42 does not work for me?
-A: If Fastgsm s3g 1.0.0.42 does not work for you, you can try the following options:
-
-- Contact Fastgsm s3g 1.0.0.42 customer service and provide them with your phone model, IMEI number, network name, and error message. They may be able to help you fix the issue or offer you a refund.
-- Use a different software or service to unlock your phone, preferably one that supports your phone model, firmware version, and network.
-- Visit a local phone repair shop or service center and ask them to unlock your phone for you.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FinalMesh Professional 2.4.2.331 Crack UPD Downloadl.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FinalMesh Professional 2.4.2.331 Crack UPD Downloadl.md
deleted file mode 100644
index 6492660a6358116f8a5b47aa8912575f843d5d76..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FinalMesh Professional 2.4.2.331 Crack UPD Downloadl.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-How to Download FinalMesh Professional 2.4.2.331 Crack for Free
-FinalMesh Professional is a powerful and versatile 3D viewer and editor that allows you to create, edit, and convert 3D models and scenes. With FinalMesh Professional, you can easily import and export various 3D formats, such as OBJ, STL, PLY, 3DS, FBX, GLTF, and more. You can also apply materials, textures, lights, and shadows to your 3D models and render them with high-quality results.
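-FinalMesh itself is driven from its GUI, but to illustrate the kind of format conversion described above, here is a hedged Python sketch using the open-source trimesh library (an assumption on our part; it is not part of FinalMesh, and you would install it with pip install trimesh). The filenames are placeholders:
-
-import trimesh
-
-mesh = trimesh.load("model.obj", force="mesh")  # import: OBJ in
-mesh.export("model.stl")                        # export: STL out
-print(mesh.vertices.shape, mesh.faces.shape)    # quick sanity check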
-However, FinalMesh Professional is not free software. You need to purchase a license to use it without any limitations or watermarks. If you are looking for a way to download FinalMesh Professional 2.4.2.331 crack for free, you may be tempted by some websites that claim to offer it. But beware: these websites may be unsafe and may contain viruses, malware, or spyware that can harm your computer or steal your personal information.
-FinalMesh Professional 2.4.2.331 Crack Downloadl
-Download File ––– https://byltly.com/2uKw7S
-Therefore, we do not recommend downloading FinalMesh Professional 2.4.2.331 crack from any unauthorized sources. Instead, we suggest you try the official trial version of FinalMesh Professional from its website. The trial version allows you to use all the features of FinalMesh Professional for 30 days without any restrictions. You can also contact the support team if you have any questions or issues with the software.
-If you like FinalMesh Professional and want to continue using it after the trial period expires, you can buy a license from its website or from authorized resellers. The license price depends on the number of users and the duration of the subscription. You can choose between a monthly, yearly, or perpetual license. By purchasing a license, you will also get free updates, technical support, and access to online tutorials and documentation.
-FinalMesh Professional is a great tool for anyone who works with 3D models and scenes. It offers a lot of features and functions that can help you create stunning 3D visuals and presentations. However, downloading FinalMesh Professional 2.4.2.331 crack from untrusted sources is not a good idea. It may expose you to security risks and legal issues. Therefore, we advise you to use the official trial version of FinalMesh Professional or buy a license from its website or authorized resellers.
-
-FinalMesh Professional has many benefits that make it stand out from other 3D viewers and editors. Some of these benefits are:
-
-
-- It supports a wide range of 3D formats, including popular ones like OBJ, STL, PLY, 3DS, FBX, GLTF, and more. You can easily import and export your 3D models and scenes without losing any quality or data.
-- It has a fast and modern user interface that is easy to use and customize. You can access all the features and functions from the ribbon menu, toolbar, or context menu. You can also change the theme, layout, and language of the interface according to your preferences.
-- It has a powerful geometry engine that allows you to create, edit, and transform your 3D models and scenes with various tools and modifiers. You can apply boolean operations, subdivision surfaces, extrusions, mirroring, arrays, and more to your 3D objects. You can also use procedural primitives like splines, cubes, spheres, texts, and more to create complex shapes.
-- It has a built-in raytracer that can render your 3D models and scenes with realistic materials, textures, lights, and shadows. You can adjust the render settings and quality to suit your needs. You can also export your 3D models and scenes as images or vector illustrations in various formats.
-- It has a unique feature of publishing your 3D models and scenes as PDF documents or WebGL applications. You can convert your 3D data into regular PDF files that can be viewed with any PDF reader. You can also create HTML applications with 3D WebGL content that can be viewed with any web browser. You can customize the appearance and behavior of your PDF documents or WebGL applications with various options.
-
-FinalMesh Professional is a comprehensive solution for anyone who needs to work with 3D models and scenes. Whether you are a designer, engineer, artist, or hobbyist, you will find FinalMesh Professional useful and convenient for your 3D projects.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta 3 Weather Cheat Pc EXCLUSIVE.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta 3 Weather Cheat Pc EXCLUSIVE.md
deleted file mode 100644
index 929331329b272ef0ef7b911745e618bcd7946e79..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta 3 Weather Cheat Pc EXCLUSIVE.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-GTA 3 Weather Cheat PC: How to Change the Weather in Grand Theft Auto III
-Grand Theft Auto III (GTA 3) is a classic open-world action-adventure game that was released in 2001 for PC, PlayStation 2, and Xbox. It is set in a fictional city called Liberty City, which is loosely based on New York City.
-One of the features that makes GTA 3 fun and immersive is the dynamic weather system, which changes according to the time of day and the location. You can experience sunny, cloudy, rainy, or foggy weather conditions as you explore the city and complete missions.
-gta 3 weather cheat pc
-DOWNLOAD ✏ https://byltly.com/2uKzqx
-However, sometimes you may want to change the weather to suit your mood or preference. For example, you may want to enjoy a sunny day at the beach, or create a stormy atmosphere for a dramatic chase scene. Or maybe you just want to see how the game looks in different weather settings.
-Fortunately, GTA 3 has a cheat code that allows you to change the weather at will. In this article, we will show you how to use the GTA 3 weather cheat PC and what are the effects of each weather option.
-How to Use the GTA 3 Weather Cheat PC
-To use the GTA 3 weather cheat PC, you need to enter a specific code during gameplay. You can do this by typing the code on your keyboard or by using the on-screen keyboard if you are playing with a controller.
-The code for the GTA 3 weather cheat PC is ILIKESCOTLAND. You need to type this code exactly as it is written, without any spaces or punctuation marks. You will hear a sound effect if you enter the code correctly.
-Each time you enter the code, the weather will change to a different option. There are four weather options in total: sunny, cloudy, rainy, and foggy. You can cycle through these options by entering the code repeatedly until you get the desired weather.
-
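-As an aside, here is a generic Python illustration of the input-matching pattern such cheats rely on: the game keeps a rolling buffer of recent keystrokes and, on each full match, advances a cyclic weather state. This is not GTA 3's actual code, just a sketch of the idea:
-
-WEATHER_STATES = ["sunny", "cloudy", "rainy", "foggy"]
-CHEAT = "ILIKESCOTLAND"
-
-typed = ""          # rolling buffer of the most recent keystrokes
-weather_index = 0
-
-def on_key(ch):
-    global typed, weather_index
-    typed = (typed + ch.upper())[-len(CHEAT):]  # keep only the last N keys
-    if typed == CHEAT:
-        weather_index = (weather_index + 1) % len(WEATHER_STATES)
-        print("weather ->", WEATHER_STATES[weather_index])
-
-for ch in "ilikescotland" * 2:  # simulate typing the cheat twice
-    on_key(ch)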
-Note that the GTA 3 weather cheat PC does not affect the time of day or the season. It only changes the current weather condition. Also, note that using any cheat code in GTA 3 will disable your ability to save your game or earn achievements. Therefore, use cheats at your own risk and only for fun.
-The Effects of Each Weather Option
-Each weather option in GTA 3 has its own visual and gameplay effects. Here are some of the effects of each option:
-
-- Sunny: This is the default and most common weather option in GTA 3. It makes the sky clear and bright, and gives the city a vibrant and lively look. It also improves your visibility and driving conditions.
-- Cloudy: This option makes the sky overcast and dull, and gives the city a gloomy and depressing look. It also reduces your visibility and makes driving more challenging.
-- Rainy: This option makes it rain heavily in Liberty City, creating puddles and splashes on the ground and on vehicles. It also makes the sky dark and stormy, and gives the city a wet and miserable look. It also greatly reduces your visibility and makes driving very difficult.
-- Foggy: This option makes it foggy in Liberty City, creating a thick layer of mist that covers everything in sight. It also makes the sky gray and hazy, and gives the city a mysterious and eerie look. It also severely reduces your visibility and makes driving almost impossible.
-
-Conclusion
-GTA 3 is a fun and exciting game that lets you experience different weather conditions in Liberty City. However, if you want to change the weather to suit your preference or mood, you can use the GTA 3 weather cheat PC to do so. Just remember to type ILIKESCOTLAND during gameplay and cycle through the four weather options: sunny, cloudy, rainy, and foggy.
-We hope this article was helpful and informative for you. Have fun playing GTA 3 with different weather settings!
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Aplikasi untuk Buat Undangan Pernikahan yang Bisa Dibagikan ke Media Sosial di Wedding Invitation Card Maker.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Aplikasi untuk Buat Undangan Pernikahan yang Bisa Dibagikan ke Media Sosial di Wedding Invitation Card Maker.md
deleted file mode 100644
index 68f173d571fe998a23a50b24fc0d7a65076ff9f3..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Aplikasi untuk Buat Undangan Pernikahan yang Bisa Dibagikan ke Media Sosial di Wedding Invitation Card Maker.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-Digital Invitation Apps | In this modern era, there is no need to use up stacks of paper printing invitations. An app has arrived that you will really need when you are busy preparing a wedding or other celebration: an app for making digital invitation cards for every occasion. It saves a great deal of time and money. You no longer have to struggle over which invitation card suits your party theme, and a stylish card will certainly make your event more attractive. Download the apps below.
-Brilio.net - For every couple, a wedding is a sacred moment. If you already plan to get married, careful preparation is needed, and it usually does not come cheap: the venue, the decorations, the photographer, the makeup, and the wedding invitations themselves.
-download an app to create wedding invitations
-Download ↔ https://imgfil.com/2uy1qV
-If you own a smartphone, this does not have to be a big problem or a burden. You can cut wedding costs by designing your own wedding invitation exactly the way you want it. There is no need to worry about extra expenses, because your smartphone makes it easy.
-Armed with nothing more than an app you can download from the Google Play Store, you can design the invitation yourself. Making your own invitation is also convenient because it can be sent to relatives and friends far away in a matter of seconds.
-If you plan to make your own invitation, try downloading these apps so it looks even better. Here are the invitation apps you can use, compiled by brilio.net from various sources, Monday (3/2).
-This design app, already used by many people, offers a wide range of templates, from wedding invitations to birthday parties, graduations, and more. You can use the templates for free by opening the app and typing "invitation wedding" in the search field.
-This design app has been downloaded more than 500,000 times on the Google Play Store. You can easily change the name, date, venue, and other details of your wedding. You can even add a frame, though do not overdo it or the design will look tacky.
-This app was made by Photoshop Mobile Apps and has been downloaded more than 500,000 times on the Google Play Store. It can produce an invitation in just a few minutes. You can also upload the finished design to Instagram at a ratio you set beforehand.
-
-Invitations are now often made in video format. No need to worry: with a video invitation maker app it is easy to create invitations as videos. This video app is also made by Photoshop Mobile Apps.
-You are free to pick any template you like and change the date, the names, and the wedding day. If you want an even better result, you can change the ratio and the background with all sorts of variations.
-This app has a feature that turns your finished edit into a wallpaper on your smartphone. You can also add a few quotes to make the invitation look more polished and appealing.
-You can use this wedding invitation app to design to your taste with frames, backgrounds, and stickers. You can also edit a photo of you and your partner to use as the invitation background.
-There are actually quite a few invitation-making applications for PC. Some are paid and plenty can be downloaded freely. Unfortunately, quite a few of them are not exactly user friendly, and the good ones are usually paid.
-Demand for invitation cards, especially for weddings, is very high. Every weekend there are weddings in various regions, which shows in how fully booked wedding reception venues tend to be. This demand has become a tempting line of business for small businesses and individuals; graphic design services are the go-to for consumers seeking this service, and they usually offer printing as well even without owning their own presses.
-There are now many good PC applications for making invitations that you can use, especially if you want to try designing your own wedding invitation. Here are some highly recommended free wedding invitation design tools.
-1. Card and Pocket. This app is quite popular because it is easy to use yet produces attractive results. It offers many features, such as a wide range of colors and elegant, pretty templates. When finished, you can download the design, print it at home, or have it printed at a print shop.
-To use this invitation maker on a laptop, visit its website.
-2. Greeting Island. This one is also easy to use, with templates you simply pick from. It can be used to design wedding invitations, business cards, birthday invitations, and even calendars. This online app offers a fair number of features, and the results look professional.
-3. Download and Print. Another digital invitation app for PC is Download and Print, which also offers plenty of attractive features, with a large number of appealing design templates. Some features are free and some are paid. To use this service, however, you must register and log in on its website.
-4. Elli. One more free wedding invitation design tool you can use is Elli. As the name suggests, the features it provides are simple and easy to use. Its design templates come in a catalog you can browse to your liking. If needed, you can also download the invitation maker app to install on your computer.
-5. Printable Invitation Kits. Another recommended PC invitation maker is Printable Invitation Kits. This online app cannot be downloaded because it can only be used on its website. After registering and logging in, you can take advantage of its many attractive features: a catalog of appealing templates, fonts, colors, and more.
-6. CorelDraw. A wedding invitation design tool you can also use is CorelDraw. This application is extremely popular and counts among the best-known design programs. The features it provides are very complete and easy to use, even for beginners. When finished, your design can be downloaded right away or printed with a printer.
-Those are some free wedding invitation design tools you can put to use. With these PC invitation apps you can create more freely and save on design fees.
-Using Android and iPhone naturally calls for the help of a few apps, one of which is an invitation maker. Invitation maker apps on Android and iPhone can easily be used to create announcements for events such as weddings, parties, thanksgivings, and much more. The arrival of these apps certainly makes it easy for anyone to invite their friends and relatives to an event.
-Invitation maker & Card design is an invitation-making app that can be used on Android and iPhone devices. With this app, you can easily create various kinds of invitations such as party invitation cards, wedding invitations, birthday invitations, and more. It also offers a premium version, which of course has more complete features. To start using Invitation maker & Card design, you can download it from the Play Store or the App Store.
-Undangan Web Digital can also be used to create invitations quickly and easily. This app is available on Android devices only. It can easily be used to create web-based wedding invitations. Besides feeling modern, using it can also save the funds you need. You can arrange the invitation to your liking using handy features such as image cropping and rotation, adding and removing elements, a selection of songs, and much more.
-Another app you can use to make invitations on an Android device easily and for free. It can easily be used to create wedding invitations, with main features such as a wide choice of background music, a prewedding photo and video gallery, editing and arranging your event to your liking, and much more. To start using Goinvite, you can download this app from the Play Store.
-If you want to try yet another app, give Invitation Card Maker & Design a shot. This app is basically only available on Android devices. With Invitation Card Maker & Design, you can create all kinds of invitations such as anniversary invitations, birthday invitation card templates, Halloween, and much more.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Tv Tunner Gadmei Usb Utv330 .rar [NEW].md b/spaces/1gistliPinn/ChatGPT4/Examples/Driver Tv Tunner Gadmei Usb Utv330 .rar [NEW].md
deleted file mode 100644
index b45101c9e8984ef8e7609c821076d3ebae440691..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Tv Tunner Gadmei Usb Utv330 .rar [NEW].md
+++ /dev/null
@@ -1,48 +0,0 @@
-Driver Tv Tunner Gadmei Usb Utv330 .rar
-DOWNLOAD ✦ https://imgfil.com/2uxWSD
-
-Q:
-
-How can I debug a single page with Angular2 and Webpack?
-
-I have a large Angular2 application using Webpack.
-
-When we build the app to ES5, we need to make sure every HTML page that has its own components has a .d.ts file to generate the type definition files.
-
-How can we go about unit testing our .d.ts files?
-
-I'm thinking of creating a new Angular2 application with an empty file structure and unit testing this component. But this creates a separate build system for testing, instead of having a single build system that builds and transpiles the whole thing.
-
-Is there a way to "lint" Angular2 code and have it break when there are problems? Like how JSHint or JSLint works?
-
-A:
-
-You could use the HtmlWebpackPlugin, like this:
-
-// html-webpack-plugin must be required before it can be instantiated
-const HtmlWebpackPlugin = require('html-webpack-plugin');
-
-new HtmlWebpackPlugin({
-  template: 'index.html',   // source template to process
-  filename: 'index.html',   // output file written to the build folder
-  inject: 'body'            // inject the bundle <script> tags into <body>
-})
-
-This will inject the webpack-generated HTML into the index.html that is served by Angular2.
-
-In your typescript file, you can then use the export keyword to export things like components:
-
-export class FirstComponent {}
-
-This will make sure that the class is exported from the .ts file, and if you use the TypeScript module system, it can be imported like this:
-
-import { FirstComponent } from './FirstComponent';
-
-If you use another file, it is better to just require the file instead of the import statement, for better code maintenance.
-
-Read more about HtmlWebpackPlugin here.
-
-The heart is a pump that is primarily responsible for pumping blood throughout the body. The human heart has four chambers, the left and right atrium and the left and right ventricles, that are sequentially squeezed by the atrial and ventricular muscle tissue of the heart. As the chambers of the heart contract and expand, the inner walls of the chambers move in and out of the heart wall, a process which is known as myocardial contraction. As the chambers contract, the pressure within the chambers rises.
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Highway Racing APK How to Get Unlimited Money and Dominate the Road.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Highway Racing APK How to Get Unlimited Money and Dominate the Road.md
deleted file mode 100644
index 484ed8676fbd78959c9feb69fa0b57c98ef400a2..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Highway Racing APK How to Get Unlimited Money and Dominate the Road.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-CarX Highway Racing APK Unlimited Money: A Review
-Do you love racing games? Do you want to experience the thrill of driving on a traffic-packed highway with realistic physics? Do you want to have unlimited money to buy and upgrade your dream cars? If you answered yes to any of these questions, then you should definitely check out CarX Highway Racing, one of the best racing games for Android devices.
-carx highway racing apk unlimited money
-Download File ✏ https://urlin.us/2uT1Fs
-CarX Highway Racing is a game that combines lifelike physics, traffic-packed highways, and a variety of cars and modes to create an immersive and exciting racing experience. You can choose from over 100 cars, each with its own characteristics and customization options, and race against other players or AI opponents in different modes, such as campaign, time attack, police chase, and online multiplayer. You can also enjoy stunning graphics, realistic sounds, and dynamic weather effects that make every race unique.
-In this article, we will review CarX Highway Racing in detail and show you how to download and install CarX Highway Racing APK Unlimited Money, a modded version that gives you unlimited money, unlocked cars, and no ads. We will also give you some tips and tricks on how to master CarX Highway Racing and win every race. So, without further ado, let's get started!
- Features
-CarX Highway Racing is not your typical racing game. It has some unique features that make it stand out from other racing games. Here are some of them:
- Lifelike physics
-CarX Highway Racing uses the same physics engine as CarX Drift Racing 2, which is one of the best drifting games for Android devices. This means that CarX Highway Racing has a realistic and challenging driving experience that requires skill and precision. You have to control your car's speed, acceleration, braking, steering, and drifting, as well as take into account the road conditions, the weather, and the traffic. You can also feel the difference between different cars, as each one has its own weight, power, handling, and traction.
- Traffic-packed highways
-CarX Highway Racing simulates real-life traffic conditions, which adds more thrill and excitement to the races. You have to avoid crashing into other cars or objects, as well as follow the traffic rules and signs. You also have to deal with different types of traffic, such as trucks, buses, motorcycles, and police cars. Some traffic may help you or hinder you, depending on the situation. For example, you can use trucks to block your opponents or hide from the police, but you can also get stuck behind them or get hit by them.
- Variety of cars and modes
-CarX Highway Racing offers over 100 cars to choose from, each with its own characteristics and customization options. You can find cars from different categories, such as sports cars, muscle cars, supercars, and classic cars. You can also upgrade and customize your car's performance and appearance, such as engine, transmission, suspension, brakes, tires, paint, vinyls, stickers, and more.
-CarX Highway Racing also offers different modes to suit different preferences. You can play the campaign mode, where you have to complete various missions and challenges in different locations and scenarios. You can also play the time attack mode, where you have to race against the clock and beat your own records. You can also play the police chase mode, where you have to escape from the police or chase down criminals. And finally, you can play the online multiplayer mode, where you can race against other players from around the world and compete for rankings and rewards.
-How to download and install CarX Highway Racing APK Unlimited Money
-If you want to enjoy CarX Highway Racing with unlimited money, unlocked cars, and no ads, you can download and install CarX Highway Racing APK Unlimited Money, a modded version of the game that gives you these benefits. Here is how to do it (a scripted alternative is sketched after the list):
-
-- Go to the download page of CarX Highway Racing APK Unlimited Money by clicking here.
-- Download the APK file and the OBB file to your device.
-- Enable the installation of apps from unknown sources in your device's settings.
-- Install the APK file by tapping on it.
-- Extract the OBB file to the Android/OBB folder in your device's internal storage.
-- Launch the game and enjoy!
-
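-For anyone sideloading from a computer instead, here is a hedged Python sketch of steps 4 and 5 done over adb. It assumes adb is installed and USB debugging is enabled; the file names and the OBB package directory are placeholders guessed for illustration, not confirmed for this game:
-
-import subprocess
-
-APK = "carx_highway_racing_mod.apk"                 # hypothetical filename
-OBB = "main.1.com.example.carx.obb"                 # hypothetical filename
-OBB_DIR = "/sdcard/Android/obb/com.example.carx/"   # hypothetical package dir
-
-subprocess.run(["adb", "install", APK], check=True)           # step 4
-subprocess.run(["adb", "shell", "mkdir", "-p", OBB_DIR], check=True)
-subprocess.run(["adb", "push", OBB, OBB_DIR], check=True)     # step 5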
- Note: This is a modded version of the game that may not be compatible with the official version or the latest updates. Use it at your own risk and discretion. We are not responsible for any damages or issues that may arise from using this modded version.
- Tips and tricks
-CarX Highway Racing is a fun and challenging game that requires skill and strategy to win. Here are some tips and tricks that can help you improve your skills and performance in CarX Highway Racing:
- Choose the right car for each race
-Different cars have different strengths and weaknesses, and you should select the best one for each race based on the terrain, weather, traffic, and opponents. For example, if you are racing on a snowy road, you should choose a car with good traction and stability, such as a 4x4 or an SUV. If you are racing on a sunny highway, you should choose a car with high speed and acceleration, such as a sports car or a supercar. You can also check the stats and ratings of each car before selecting it, such as power, handling, fuel consumption, durability, and popularity.
- Use the nitro wisely
-Nitro can boost your speed and help you overtake other cars, but it can also drain your fuel and make you lose control if used too much or at the wrong time. You should use nitro sparingly and strategically, such as when you need to catch up with an opponent, escape from the police, or pass through a narrow gap. You should also avoid using nitro when you are turning, braking, or drifting, as it can make you skid or crash. You can refill your nitro by driving fast or performing stunts, such as drifting, jumping, or near-missing.
- Avoid collisions and penalties
-Collisions can damage your car and slow you down, and penalties can reduce your score and time if you break the rules or hit other cars or objects. You should avoid collisions and penalties by driving carefully and skillfully, as well as following the traffic rules and signs. You should also avoid hitting other cars or objects, such as trucks, buses, motorcycles, police cars, barriers, cones, signs, trees, etc. You can also repair your car by driving through repair stations or using repair kits.
- Upgrade and customize your car
-Upgrading and customizing your car can improve its performance and appearance, and you can use the unlimited money from the modded version to do so. You can upgrade your car's engine, transmission, suspension, brakes, tires, etc., to increase its power, handling, fuel consumption, durability, etc. You can also customize your car's paint, vinyls, stickers, etc., to change its color, style, and design. You can also use the modded version to unlock all the cars and customize them as you wish.
- Conclusion
-CarX Highway Racing is a game that offers a realistic and thrilling racing experience on traffic-packed highways. You can choose from over 100 cars, each with its own characteristics and customization options, and race against other players or AI opponents in different modes, such as campaign, time attack, police chase, and online multiplayer. You can also enjoy stunning graphics, realistic sounds, and dynamic weather effects that make every race unique.
-If you want to have unlimited money, unlocked cars, and no ads, you can download and install CarX Highway Racing APK Unlimited Money, a modded version of the game that gives you these benefits. You can also use some tips and tricks to improve your skills and performance in CarX Highway Racing, such as choosing the right car for each race, using the nitro wisely, avoiding collisions and penalties, and upgrading and customizing your car.
-So, what are you waiting for? Download CarX Highway Racing APK Unlimited Money now and enjoy the ultimate racing experience on your Android device. You won't regret it!
-Click here to download CarX Highway Racing APK Unlimited Money.
- FAQs
-Here are some frequently asked questions about CarX Highway Racing APK Unlimited Money:
- Q: Is CarX Highway Racing APK Unlimited Money safe to use?
-A: CarX Highway Racing APK Unlimited Money is a modded version of the game that has been modified by third-party developers. It is not affiliated with or endorsed by the official developers of CarX Highway Racing. Therefore, it may not be safe to use and may contain viruses or malware that can harm your device or compromise your privacy. Use it at your own risk and discretion.
- Q: How do I update CarX Highway Racing APK Unlimited Money?
-A: CarX Highway Racing APK Unlimited Money may not be compatible with the latest updates or versions of CarX Highway Racing. Therefore, you may not be able to update it through the Google Play Store or the official website. You may have to wait for the modded version to be updated by its developers or look for another source to download it from.
- Q: Can I play CarX Highway Racing APK Unlimited Money online?
-A: CarX Highway Racing APK Unlimited Money may not work properly or at all when you try to play it online. You may face issues such as connection errors, lagging, crashing, or banning. Therefore, it is recommended that you play CarX Highway Racing APK Unlimited Money offline or in airplane mode.
- Q: Can I use CarX Highway Racing APK Unlimited Money with other mods or cheats?
-A: CarX Highway Racing APK Unlimited Money may not be compatible with other mods or cheats that you may have installed on your device or in your game. They may cause conflicts or errors that can affect your gameplay or damage your device. Therefore, it is advised that you use CarX Highway Racing APK Unlimited Money alone or with caution.
- Q: Can I share CarX Highway Racing APK Unlimited Money with others?
-A: CarX Highway Racing APK Unlimited Money is a modded version of the game that is not authorized or approved by the official developers of CarX Highway Racing. Therefore, it may violate their terms of service or intellectual property rights. Sharing it with others may result in legal actions or penalties against you or them. Therefore, it is suggested that you keep CarX Highway Racing APK Unlimited Money for yourself or share it with discretion.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Alienvault The Ultimate Solution for Threat Intelligence and Detection.md b/spaces/1phancelerku/anime-remove-background/Alienvault The Ultimate Solution for Threat Intelligence and Detection.md
deleted file mode 100644
index 2ed2bff41816dabc798fee11c1a899e05aa9d2c5..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Alienvault The Ultimate Solution for Threat Intelligence and Detection.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-What is AlienVault and Why You Need It
-If you are looking for a powerful and reliable solution to protect your network from cyber threats, you might have heard of AlienVault. But what is AlienVault exactly, and why do you need it?
-alienvault
-DOWNLOAD ❤ https://jinyurl.com/2uNSPS
-AlienVault is a leading provider of cybersecurity solutions that help organizations of all sizes detect, prevent, and respond to cyber attacks. AlienVault offers a unique combination of open threat intelligence, security information and event management (SIEM), and cybersecurity services that enable you to monitor, analyze, and respond to threats in real time.
-In this article, we will explain what AlienVault is, how it works, what benefits and features it offers, what customers say about it, and how you can get started with it. By the end of this article, you will have a clear understanding of why AlienVault is the best choice for your cybersecurity needs.
-AlienVault: The World's First Open Threat Intelligence Community
-One of the key components of AlienVault is its Open Threat Exchange (OTX), which is the world's first truly open threat intelligence community. OTX enables private companies, independent security researchers, and government agencies to openly collaborate and share the latest information about emerging threats, attack methods, and malicious actors, promoting greater security across the entire community.
-How AlienVault Works
-AlienVault works by leveraging the power of OTX and its own security products to provide you with comprehensive and up-to-date threat intelligence that helps you detect and respond to threats faster and more effectively. Here are some of the main features of how AlienVault works:
-Open Threat Exchange (OTX)
-
-- OTX is a free platform that allows anyone in the security community to contribute, discuss, research, validate, and share threat data.
-- OTX collects over 20 million threat indicators daily from over 200,000 global participants who investigate emerging threats in the wild.
-- OTX automatically extracts indicators of compromise (IOCs) from blogs, threat reports, emails, PCAPs, and more.
-- OTX allows you to join and create specialized groups, including private groups, to share threat intelligence with specific audiences.
-- OTX allows you to submit files and URLs for free malware analysis within Alien Labs OTX sandbox.
-- OTX allows you to quickly identify if your endpoints have been compromised in major cyber attacks using OTX Endpoint Security.
-- OTX allows you to synchronize OTX threat intelligence with other security products via DirectConnect API, SDK, and STIX/TAXII (a small sketch of this follows the list).
-
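-As a concrete taste of the DirectConnect API mentioned in the last bullet, here is a minimal Python sketch that pulls the threat pulses your account is subscribed to. The API key is a placeholder, and the endpoint and header reflect the OTX API documentation at the time of writing, so verify them against the current docs:
-
-import requests
-
-OTX_API_KEY = "YOUR-OTX-API-KEY"  # placeholder: copy it from your OTX settings page
-
-resp = requests.get(
-    "https://otx.alienvault.com/api/v1/pulses/subscribed",
-    headers={"X-OTX-API-KEY": OTX_API_KEY},
-    timeout=30,
-)
-resp.raise_for_status()
-for pulse in resp.json().get("results", []):
-    # each pulse bundles related indicators of compromise (IOCs)
-    print(pulse["name"], "-", len(pulse.get("indicators", [])), "indicators")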
-OTX Endpoint Security
-
-- OTX Endpoint Security is a free service that natively uses the community-powered threat intelligence of OTX to scan your endpoints for known IOCs.
-- OTX Endpoint Security uses the same agent-based approach as expensive endpoint security tools and DIY open source agents without the expense, complexity, or guesswork.
-- OTX Endpoint Security is available to any registered OTX user. To get started, you just need to download and install the OTX agent on the Windows or Linux devices you want to monitor.
-- OTX Endpoint Security allows you to launch a query on any endpoint from OTX by selecting a pre-defined query that looks for IOCs in one or more categories, such as processes, registry keys, files, or network connections.
-- OTX Endpoint Security allows you to view the results of the query in OTX and see if any of the endpoints have been compromised by known threats.
-- OTX Endpoint Security allows you to take action on the compromised endpoints by isolating them from the network, killing malicious processes, deleting malicious files, or blocking malicious network connections.
-
-AlienVault: The Best Solution for Security Information and Event Management (SIEM)
-Another key component of AlienVault is its SIEM solution, which is designed to help you collect, correlate, analyze, and act on security data from various sources across your network. AlienVault offers two versions of its SIEM solution: AlienVault OSSIM and AlienVault USM.
-AlienVault OSSIM
-
-- AlienVault OSSIM is the world's most widely used open source SIEM solution, with over 500,000 downloads and 195,000 active users.
-- AlienVault OSSIM provides you with the basic security capabilities you need to monitor your network, such as asset discovery, vulnerability assessment, intrusion detection, behavioral monitoring, and event correlation.
-- AlienVault OSSIM is free to download and use for any purpose. However, it does not include any support or maintenance services from AlienVault.
-- AlienVault OSSIM is ideal for security enthusiasts, researchers, students, and small organizations who want to learn about SIEM and get started with basic security monitoring.
-
-AlienVault USM
-
-- AlienVault USM is the commercial version of AlienVault OSSIM, which provides you with the advanced security capabilities you need to protect your network from sophisticated threats.
-- AlienVault USM includes all the features of AlienVault OSSIM, plus additional features such as threat intelligence updates from OTX and Alien Labs, log management and retention, compliance reporting and management, orchestration and automation, cloud monitoring and integration, and more.
-- AlienVault USM comes with full support and maintenance services from AlienVault, including 24/7 technical support, product updates and upgrades, training and certification, and professional services.
-- AlienVault USM is ideal for medium to large organizations who need a comprehensive and scalable SIEM solution that can handle complex and dynamic environments.
-
-AlienVault: The Trusted Partner for Cybersecurity Services
-Besides its threat intelligence and SIEM solutions, AlienVault also offers a range of cybersecurity services that can help you enhance your security posture and achieve your security goals. These services include:
-AlienVault Professional Services
-
-- AlienVault Professional Services are designed to help you get the most out of your AlienVault products and solutions. These services include installation and configuration, migration and upgrade, customization and integration, health check and optimization, incident response and forensics, and more.
-- AlienVault Professional Services are delivered by certified AlienVault experts who have extensive experience and knowledge in cybersecurity best practices and industry standards.
-- AlienVault Professional Services are available on-demand or as part of a subscription plan. You can choose from different service levels depending on your needs and budget.
-
-AlienVault Managed Security Services
-
-- AlienVault Managed Security Services are designed to help you outsource your security operations to AlienVault's team of security analysts who will monitor, manage, and respond to threats on your behalf. These services include managed detection and response (MDR), managed compliance (MC), managed vulnerability scanning (MVS), managed log review (MLR), managed threat hunting (MTH), and more.
-- AlienVault Managed Security Services are powered by AlienVault USM's advanced technology and OTX's rich threat intelligence. You will get access to a dedicated portal where you can view your security status, alerts, reports, recommendations, and actions.
-- AlienVault Managed Security Services are available as a monthly or annual subscription plan. You can choose from different service tiers depending on your needs and budget.
-
- AlienVault: The Benefits and Features You Can Expect
- Now that you know what AlienVault is and how it works, let's take a look at some of the benefits and features you can expect from using AlienVault for your cybersecurity needs. Here are some of the main ones:
- Comprehensive and Up-to-Date Threat Intelligence
- One of the biggest advantages of AlienVault is that it provides you with comprehensive and up-to-date threat intelligence that helps you stay ahead of the evolving threat landscape. AlienVault's threat intelligence is derived from multiple sources, including OTX, Alien Labs, third-party feeds, and your own data. AlienVault's threat intelligence is constantly updated and enriched with contextual information, such as threat actors, tactics, techniques, and procedures (TTPs), indicators of compromise (IOCs), and recommended actions. AlienVault's threat intelligence enables you to quickly identify and prioritize the most relevant and critical threats to your network and respond accordingly.
- Easy and Flexible Deployment and Integration
- Another benefit of AlienVault is that it is easy and flexible to deploy and integrate with your existing infrastructure and security tools. AlienVault supports various deployment options, including on-premises, cloud, hybrid, or virtual appliances. AlienVault also supports various integration options, including native integrations with popular cloud platforms, such as AWS, Azure, Google Cloud, and Office 365, as well as integrations with other security products, such as firewalls, antivirus, endpoint protection, and more. AlienVault's deployment and integration capabilities allow you to extend your visibility and coverage across your entire network and leverage your existing investments in security.
- Affordable and Scalable Pricing and Licensing
- A third benefit of AlienVault is that it offers affordable and scalable pricing and licensing models that suit your needs and budget. AlienVault's pricing and licensing models are based on the number of assets you want to monitor, not on the volume of data you generate or consume. This means that you only pay for what you need and use, without worrying about data limits or overages. AlienVault's pricing and licensing models also allow you to scale up or down as your network grows or changes, without compromising your security or performance.
- AlienVault: The Customer Reviews and Testimonials You Should Know
- So far, we have discussed what AlienVault is, how it works, and what benefits and features it offers. But don't just take our word for it. Here are some of the customer reviews and testimonials you should know about AlienVault:
- What Customers Love About AlienVault
- Here are some of the positive feedbacks that customers have given about AlienVault:
-
-- "AlienVault has been a game-changer for us. It has given us the visibility and insight we need to protect our network from threats. It has also saved us a lot of time and money by simplifying our security operations." - IT Manager at a Manufacturing Company
-- "AlienVault is a great solution for small to medium businesses who need a comprehensive SIEM solution that is easy to use and affordable. It has everything you need in one platform: threat intelligence, asset discovery, vulnerability assessment, intrusion detection, behavioral monitoring, event correlation, log management, compliance reporting, orchestration and automation, cloud monitoring, and more." - Security Analyst at a Financial Services Company
-- "AlienVault is the best thing that ever happened to us. It has helped us improve our security posture and compliance status significantly. It has also enabled us to collaborate with other security professionals in the OTX community and learn from their experiences." - CISO at a Healthcare Company
-
- What Customers Wish AlienVault Could Improve
- Here are some of the negative feedbacks that customers have given about AlienVault:
-
-- "AlienVault could improve its user interface and dashboard. It can be confusing and overwhelming at times. It could also provide more customization options for reports and alerts." - IT Director at an Education Institution
-- "AlienVault could improve its support for newer technologies and platforms. It can be slow to update its integrations with some of the latest cloud services and security tools." - Security Engineer at a Technology Company
-- "AlienVault could improve its documentation and training resources. It can be hard to find the information you need or get the answers you want. It could also offer more online courses and certifications for users." - Security Consultant at a Professional Services Company
-
- AlienVault: The Conclusion and Call to Action
- In conclusion, AlienVault is a powerful and reliable solution that can help you protect your network from cyber threats. AlienVault offers a unique combination of open threat intelligence, security information and event management (SIEM), and cybersecurity services that enable you to monitor, analyze, and respond to threats in real time. AlienVault is easy and flexible to deploy and integrate, affordable and scalable to use, and comprehensive and up-to-date in its threat intelligence. AlienVault has received positive reviews and testimonials from thousands of customers who have improved their security posture and compliance status with AlienVault.
- If you are interested in trying out AlienVault for yourself, you can request a free trial or a live demo from their website. You can also download AlienVault OSSIM or join OTX for free. Alternatively, you can contact AlienVault's sales team or find a partner near you to get more information and assistance.
- Don't wait any longer. Start your journey with AlienVault today and see how it can help you protect your network from cyber threats.
- AlienVault: The FAQs
- Here are some of the frequently asked questions (FAQs) about AlienVault:
- What is the difference between AlienVault OSSIM and AlienVault USM?
- AlienVault OSSIM is the open source version of AlienVault USM, which provides basic security capabilities for network monitoring. AlienVault USM is the commercial version of AlienVault OSSIM, which provides advanced security capabilities for threat detection and response.
- How much does AlienVault cost?
- AlienVault's pricing depends on the number of assets you want to monitor and the service level you choose. You can request a quote from their website or contact their sales team for more details.
- How can I get started with AlienVault?
- You can get started with AlienVault by requesting a free trial or a live demo from their website. You can also download AlienVault OSSIM or join OTX for free. Alternatively, you can contact AlienVault's sales team or find a partner near you to get more information and assistance.
- What are the system requirements for AlienVault?
- AlienVault's system requirements vary depending on the deployment option and the product version you choose. You can find the detailed system requirements on their website or contact their support team for more guidance.
- Where can I find more resources and support for AlienVault?
- You can find more resources and support for AlienVault on their website, where you can access their documentation, knowledge base, forums, blog, webinars, videos, podcasts, and more. You can also contact their support team via phone, email, chat, or ticket system.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download TikTok Videos Without Watermark in HD Resolution - Best TikTok Saver.md b/spaces/1phancelerku/anime-remove-background/Download TikTok Videos Without Watermark in HD Resolution - Best TikTok Saver.md
deleted file mode 100644
index c9ae42f587a71ed7f9d9c091523e187e7602f991..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download TikTok Videos Without Watermark in HD Resolution - Best TikTok Saver.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-How to Download TikTok Videos Without Watermark in HD
-TikTok is one of the most popular social media platforms that allows users to create and share short videos with millions of people around the world. However, if you want to download your favorite TikTok videos to your device or share them on other platforms, you might encounter some problems. For example, you might notice that the downloaded videos have a watermark or logo that covers part of the screen. Or you might find that the video quality is low or blurry. Or you might want to edit your video to make it more appealing and engaging.
-In this article, we will show you how to download TikTok videos without watermark in HD quality using some free online tools. We will also give you some tips on how to improve and edit your TikTok videos for more engagement. By following these steps, you will be able to enjoy your TikTok videos without any limitations or restrictions.
- Why Download TikTok Videos Without Watermark?
-There are many reasons why you might want to download TikTok videos without watermark. Here are some of them:
-
-- Better quality: The watermark or logo that appears on the downloaded TikTok videos can reduce the quality and clarity of the video. It can also distract or annoy the viewers who want to focus on the content. By downloading TikTok videos without watermark, you can get a better viewing experience.
-- No logo: The watermark or logo that appears on the downloaded TikTok videos can also infringe on the intellectual property rights of the original creators. It can also make it harder for you to claim ownership or credit for your own work. By downloading TikTok videos without watermark, you can respect the rights of the creators and protect your own reputation.
-- More creative freedom: The watermark or logo that appears on the downloaded TikTok videos can also limit your creative freedom. It can prevent you from editing or modifying your video as you wish. It can also make it difficult for you to share your video on other platforms or channels. By downloading TikTok videos without watermark, you can have more control over your video and use it for any purpose.
-
- How to Download TikTok Videos Without Watermark on Mobile Phone
-If you want If you want to download TikTok videos without watermark on your mobile phone, you can use a web-based tool called ssstik.io. This tool allows you to download TikTok videos in HD quality without any watermark or logo. You can also choose to download only the audio or the video of the TikTok video. Here are the steps to use ssstik.io on your mobile phone: - Open the TikTok app on your mobile phone and find the video that you want to download. - Tap on the share icon and select "Copy Link" to copy the URL of the video. - Open a web browser on your mobile phone and go to ssstik.io. - Paste the URL of the video in the input box and tap on "Download". - Wait for a few seconds until the tool processes the video and generates the download links. - Tap on "Download MP4" to download the video without watermark, or tap on "Download MP3" to download only the audio of the video. - Save the file to your device and enjoy your TikTok video without watermark. Here are some screenshots of how to use ssstik.io on your mobile phone:
-
-
- Here is a table that compares the features of ssstik.io with other TikTok video downloaders:
-
-| Feature | ssstik.io | Other TikTok Video Downloaders |
-| --- | --- | --- |
-| Download TikTok videos without watermark | Yes | No |
-| Download TikTok videos in HD quality | Yes | No |
-| Download only audio or video of TikTok videos | Yes | No |
-| Support multiple platforms (Android, iOS, Windows, Mac, Linux) | Yes | No |
-| Free and easy to use | Yes | No |
-
-As you can see, ssstik.io is one of the best tools to download TikTok videos without watermark on your mobile phone. It is fast, simple, and reliable. You can use it anytime and anywhere to enjoy your TikTok videos without any limitations or restrictions.
- How to Download TikTok Videos Without Watermark on PC
-If you want to download TikTok videos without watermark on your PC, you can use another web-based tool called SnapTik.App. This tool also allows you to download TikTok videos in HD quality without any watermark or logo. You can also choose to download only the audio or the video of the TikTok video. Here are the steps to use SnapTik.App on your PC:
-- Open a web browser on your PC, go to TikTok.com, and find the video that you want to download.
-- Copy the URL of the video from the address bar of your browser.
-- Open another tab on your browser and go to SnapTik.App.
-- Paste the URL of the video in the input box and click on "Download".
-- Wait for a few seconds until the tool processes the video and generates the download links.
-- Click on "Download MP4" to download the video without watermark, or click on "Download MP3" to download only the audio of the video.
-- Save the file to your PC and enjoy your TikTok video without watermark.
-
-Here are some screenshots of how to use SnapTik.App on your PC:
-
-
- Here is a table that compares the features of SnapTik.App with other TikTok video downloaders:
-
-| Feature | SnapTik.App | Other TikTok Video Downloaders |
-| --- | --- | --- |
-| Download TikTok videos without watermark | Yes | No |
-| Download TikTok videos in HD quality | Yes | No |
-| Download only audio or video of TikTok videos | Yes | No |
-| Support multiple platforms (Android, iOS, Windows, Mac, Linux) | Yes | No |
-| Free and easy to use | Yes | No |
-
-As you can see, SnapTik.App is another great tool to download TikTok videos without watermark on your PC. It is fast, simple, and reliable. You can use it anytime and anywhere to enjoy your TikTok videos without any limitations or restrictions.
- How to Improve the Quality of TikTok Videos
-Now that you know how to download TikTok videos without watermark, you might want to improve the quality of your videos. The quality of a TikTok video depends on several factors: resolution, file size, file format, and length. Here is what each factor means and how to optimize it:
-
-- Resolution: The resolution of a video is the number of pixels that make up the image. The higher the resolution, the clearer and sharper the video. However, higher resolution also means larger file size and more bandwidth consumption. The optimal resolution for TikTok videos is 1080p (1920 x 1080 pixels), which is also known as HD or high definition. To achieve this resolution, you need to use a device that supports HD recording, such as a smartphone or a camera. You can also adjust the resolution settings on your device or on the TikTok app before recording or uploading your video.
-- File size: The file size of a video is the amount of space that it occupies on your device or on the internet. The larger the file size, the more storage and data usage it requires. However, larger file size also means higher quality and less compression. The optimal file size for TikTok videos is between 10 MB and 50 MB. To achieve this file size, you need to balance the resolution, length, and format of your video. You can also use a video compressor tool to reduce the file size of your video without losing much quality.
-- File format: The file format of a video is the type of file that it is saved as. The file format determines how the video is encoded, decoded, and played. Different file formats have different advantages and disadvantages in terms of quality, compatibility, and performance. The optimal file format for TikTok videos is MP4 (MPEG-4 Part 14), which is a widely used and supported format that offers high quality and low file size. To achieve this file format, you need to use a device or an app that supports MP4 recording or conversion. You can also use a video converter tool to change the file format of your video to MP4.
-- Length: The length of a video is the duration or time that it lasts. The longer the video, the more content and information it can convey. However, longer video also means larger file size and more attention span required. The optimal length for TikTok videos is between 15 seconds and 60 seconds. To achieve this length, you need to plan your content and script before recording or editing your video. You can also use a video trimmer tool to cut or shorten your video to the desired length.
-
- By following these tips, you can improve the quality of your TikTok videos and make them more appealing and enjoyable for yourself and your audience.
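-To make these settings concrete, here is a minimal Python sketch that shells out to ffmpeg to produce a 1080p, 60-second H.264/AAC MP4 (this assumes ffmpeg is installed and on your PATH; the file names are placeholders):
-
-```python
-import subprocess
-
-def transcode_for_tiktok(src: str, dst: str = "out.mp4") -> None:
-    """Re-encode a clip to a 1080p MP4 capped at 60 seconds."""
-    subprocess.run([
-        "ffmpeg", "-y",
-        "-i", src,                  # input file
-        "-t", "60",                 # cap the length at 60 seconds
-        "-vf", "scale=-2:1080",     # 1080p height, width kept even
-        "-c:v", "libx264", "-crf", "28",  # H.264 with moderate compression
-        "-c:a", "aac",              # AAC audio
-        "-movflags", "+faststart",  # web-friendly MP4 layout
-        dst,
-    ], check=True)
-
-transcode_for_tiktok("input.mov")  # placeholder input file
-```
-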
- Here are some screenshots and examples of high-quality and low-quality TikTok videos:
-
- How to Edit TikTok Videos for More Engagement
-Besides improving the quality of your TikTok videos, you might also want to edit them for more engagement. Editing your TikTok videos can help you attract more views, likes, comments, and followers by making your videos more interesting, creative, and unique. Here are some suggestions and tools for editing your TikTok videos:
-
-- Add text: Adding text to your TikTok videos can help you convey your message, highlight your keywords, or add captions or subtitles. You can use the built-in text editor on the TikTok app to add text to your videos. You can also use other apps or tools such as InShot, Vont, or Kapwing to add text to your videos with more options and effects.
-- Add animation: Adding animation to your TikTok videos can help you create motion graphics, transitions, or stickers that make your videos more dynamic and fun. You can use the built-in animation features on the TikTok app to add animation to your videos. You can also use other apps or tools such as Alight Motion, Funimate, or Canva to add animation to your videos with more options and effects.
-- Add music: Adding music to your TikTok videos can help you create a mood, a theme, or a rhythm that matches your content. You can use the built-in music library on the TikTok app to add music to your videos. You can also use other apps or tools such as Lomotif, BeatSync, or Splice to add music to your videos with more options and effects.
-- Add voiceover: Adding voiceover to your TikTok videos can help you narrate, explain, or comment on your content. You can use the built-in voiceover feature on the TikTok app to add voiceover to your videos. You can also use other apps or tools such as Voice Recorder, Audacity, or Filmora to add voiceover to your videos with more options and effects.
-- Add stickers: Adding stickers to your TikTok videos can help you decorate, personalize, or express yourself on your content. You can use the built-in sticker library on the TikTok app to add stickers to your videos. You can also use other apps or tools such as PicsArt, Giphy, or Sticker Maker to add stickers to your videos with more options and effects.
-- Add transitions: Adding transitions to your TikTok videos can help you create smooth and seamless changes between different scenes or clips. You can use the built-in transition effects on the TikTok app to add transitions to your videos. You can also use other apps or tools such as VivaVideo, KineMaster, or PowerDirector to add transitions to your videos with more options and effects.
-
- By following these suggestions, you can edit your TikTok videos for more engagement and make them more interesting, creative, and unique for yourself and your audience.
- Here are some screenshots and examples of edited and unedited TikTok videos:
-
- Conclusion
-In conclusion, downloading TikTok videos without watermark is easy and convenient with some free online tools such as ssstik.io and SnapTik.App. These tools allow you to download TikTok videos in HD quality without any watermark or logo. You can also choose to download only the audio or the video of the TikTok video. Moreover, improving and editing your TikTok videos can help you enhance the quality and engagement of your videos. You can use some tips and tricks such as adjusting the resolution, file size, file format, and length of your videos. You can also use some suggestions and tools such as adding text, animation, music, voiceover, stickers, and transitions to your videos.
-By following these steps, you will be able to enjoy your TikTok videos without any limitations or restrictions. You will also be able to create more appealing and engaging TikTok videos for yourself and your audience. So what are you waiting for? Start downloading, improving, and editing your TikTok videos without watermark today!
- FAQs
-Here are some frequently asked questions about downloading, improving, and editing TikTok videos without watermark:
-Q: Is it legal to download TikTok videos without watermark?
-A: It depends on the source and purpose of the video. If the video is public and does not contain any copyrighted material or personal information, you can download it for personal use or fair use. However, if the video is private or contains any protected content or data, you need to obtain the permission of the owner or the creator before downloading it. You also need to respect the terms and conditions of TikTok and the tools that you use to download the videos. You should not download, distribute, or monetize any TikTok videos without watermark without proper authorization or consent.
-Q: How can I download TikTok videos without watermark in bulk?
-A: If you want to download multiple TikTok videos without watermark at once, you can use some tools that support batch downloading. For example, you can use 4K Video Downloader or Allavsoft to download TikTok videos without watermark in bulk. These tools allow you to paste multiple URLs of TikTok videos and download them in HD quality without any watermark or logo. You can also choose to download only the audio or the video of the TikTok videos.
-Q: How can I download TikTok videos without watermark with sound?
-A: If you want to download TikTok videos without watermark with sound, you need to make sure that the video has sound in the first place. Some TikTok videos are muted or have no sound by default. You can check the sound icon on the bottom right corner of the video to see if it has sound or not. If the video has sound, you can use any of the tools mentioned above to download it without watermark with sound. If the video has no sound, you can either add your own sound using a video editor tool or find another video that has sound.
-Q: How can I download TikTok videos without watermark on iPhone?
-A: If you want to download TikTok videos without watermark on iPhone, you can use the same method as downloading them on Android. You can use ssstik.io to download TikTok videos without watermark on iPhone. However, you need to install an app called Documents by Readdle on your iPhone first. This app allows you to save and manage files on your iPhone. After installing the app, you can follow these steps:
-- Open the TikTok app on your iPhone and find the video that you want to download.
-- Tap on the share icon and select "Copy Link" to copy the URL of the video.
-- Open Documents by Readdle on your iPhone and tap on the browser icon on the bottom right corner.
-- Go to ssstik.io, paste the URL of the video in the input box, and tap on "Download".
-- Wait for a few seconds until the tool processes the video and generates the download links.
-- Tap on "Download MP4" to download the video without watermark, or tap on "Download MP3" to download only the audio of the video.
-- Tap on "Done" and go to the Downloads folder in Documents by Readdle.
-- Tap and hold on the file that you downloaded and select "Share".
-- Select "Save Video" or "Save to Files" to save the file to your iPhone and enjoy your TikTok video without watermark.
-
-Here are some screenshots of how to use Documents by Readdle to download TikTok videos without watermark on iPhone:
-
-
-
-
- As you can see, Documents by Readdle is a useful app that can help you download TikTok videos without watermark on iPhone. It is free, easy, and reliable. You can use it anytime and anywhere to enjoy your TikTok videos without any limitations or restrictions.
-Q: How can I download TikTok videos without watermark on Mac?
-A: If you want to download TikTok videos without watermark on Mac, you can use the same method as downloading them on PC. You can use SnapTik.App to download TikTok videos without watermark on Mac. However, you need to install a browser extension called Video Downloader Plus on your Mac first. This extension allows you to download videos from any website with one click. After installing the extension, you can follow these steps:
-- Open a web browser on your Mac, go to TikTok.com, and find the video that you want to download.
-- Copy the URL of the video from the address bar of your browser.
-- Open another tab on your browser and go to SnapTik.App.
-- Paste the URL of the video in the input box and click on "Download".
-- Wait for a few seconds until the tool processes the video and generates the download links.
-- Click on "Download MP4" to download the video without watermark, or click on "Download MP3" to download only the audio of the video.
-- Click on the Video Downloader Plus icon on the top right corner of your browser and select the file that you downloaded.
-- Save the file to your Mac and enjoy your TikTok video without watermark.
-
-Here are some screenshots of how to use Video Downloader Plus to download TikTok videos without watermark on Mac:
-
-
-
-
- As you can see, Video Downloader Plus is a handy extension that can help you download TikTok videos without watermark on Mac. It is free, easy, and reliable. You can use it anytime and anywhere to enjoy your TikTok videos without any limitations or restrictions.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download WhatsApp Business APK Terbaru Aplikasi Gratis untuk Bisnis Kecil.md b/spaces/1phancelerku/anime-remove-background/Download WhatsApp Business APK Terbaru Aplikasi Gratis untuk Bisnis Kecil.md
deleted file mode 100644
index b85667d88930895f89240a1baef332bad337b4ea..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download WhatsApp Business APK Terbaru Aplikasi Gratis untuk Bisnis Kecil.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
-How to Download WhatsApp Business APK Terbaru
-If you are looking for a way to communicate with your customers more efficiently and grow your business, you might want to try WhatsApp Business. WhatsApp Business is a free app that allows you to create a business presence on WhatsApp, send and receive messages, share media, and manage your customer interactions. In this article, we will show you how to download WhatsApp Business APK Terbaru, which means the latest version of the app in Indonesian. We will also explain what WhatsApp Business is, what APK Terbaru is, how to set up and use the app, and some tips and tricks to make the most out of it.
- What is WhatsApp Business?
-WhatsApp Business is an app that was launched by Meta (formerly Facebook) in 2018. It is designed for small and medium-sized businesses that want to use WhatsApp as a platform to connect with their customers. WhatsApp Business has some features that are not available in WhatsApp Messenger, such as:
-
-- BUSINESS PROFILE: You can create a profile for your business that includes your website, location, contact information, hours of operation, catalog, and more.
-- BUSINESS MESSAGING TOOLS: You can use automated messages to greet your customers, inform them when you are away, or send them quick replies. You can also use labels to organize your chats and contacts.
-- LANDLINE/FIXED NUMBER SUPPORT: You can use WhatsApp Business with a landline or fixed phone number and receive verification codes via phone calls.
-- RUN BOTH WHATSAPP MESSENGER AND WHATSAPP BUSINESS: You can have both apps installed on the same phone, but each app must have its own unique phone number.
-- WHATSAPP WEB: You can access your WhatsApp Business account from your computer's browser and respond to your customers more efficiently.
-
-The benefits of using WhatsApp Business for your business are:
-
-- EASY TO USE: You can use the same interface and features that you are familiar with from WhatsApp Messenger.
-- COST-EFFECTIVE: You can send and receive messages, calls, photos, videos, documents, and more for free*, as long as you have an internet connection.
-- SECURE: You can enjoy end-to-end encryption for all your communications, which means that only you and your customers can read or listen to them.
-- POPULAR: You can reach out to more than 2 billion users around the world who use WhatsApp every month.
-
-*Data charges may apply. Contact your provider for details.
- What is APK Terbaru?
-APK Terbaru is an Indonesian term that means "the latest APK". APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device.
-Downloading APK Terbaru means that you can get the most updated version of the app, which may have new features, bug fixes, or performance improvements. However, downloading APK files from unknown sources can also pose some risks and challenges, such as:
-
-- MALWARE: You may download a file that contains malicious software that can harm your device or steal your data.
-- COMPATIBILITY: You may download a file that is not compatible with your device or operating system, which can cause errors or crashes.
-- LEGALITY: You may download a file that violates the terms and conditions of the app developer or the app store, which can result in legal consequences or account suspension.
-
-Therefore, before you download any APK file from unknown sources, you should take some precautions, such as:
-
-- CHECK THE SOURCE: You should only download APK files from reputable and trusted websites that have positive reviews and ratings from other users.
-- CHECK THE FILE: You should scan the APK file with an antivirus or malware detector before you install it on your device. You can also verify its checksum, as shown in the sketch after this list.
-- CHECK THE PERMISSIONS: You should review the permissions that the APK file requests and only grant them if they are necessary and reasonable for the app's functionality.
-
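-As a concrete illustration of the "check the file" step, here is a minimal Python sketch that verifies a downloaded APK against a SHA-256 checksum published by the developer (the file name and expected digest below are placeholders):
-
-```python
-import hashlib
-
-def sha256_of(path: str) -> str:
-    """Compute the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
-    digest = hashlib.sha256()
-    with open(path, "rb") as f:
-        for chunk in iter(lambda: f.read(1 << 20), b""):
-            digest.update(chunk)
-    return digest.hexdigest()
-
-expected = "0123abcd..."  # placeholder: the checksum listed by the publisher
-actual = sha256_of("WhatsAppBusiness.apk")  # placeholder file name
-print("OK to install" if actual == expected else "Checksum mismatch - do not install")
-```
-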
- How to Download WhatsApp Business APK Terbaru from Google Play Store
-The easiest and safest way to download WhatsApp Business APK Terbaru is from Google Play Store, which is the official app store for Android devices. To do so, you need to follow these steps:
-
-- OPEN GOOGLE PLAY STORE: On your Android device, tap on the Google Play Store icon to launch the app.
-- SEARCH FOR WHATSAPP BUSINESS: In the search bar at the top of the screen, type "WhatsApp Business" and tap on the magnifying glass icon to start the search.
-- FIND AND TAP ON THE APP: From the list of results, find and tap on the app that has the name "WhatsApp Business" and the logo that has a green chat bubble with a white letter B inside it.
-- TAP ON INSTALL: On the app page, tap on the green button that says "Install" to start downloading and installing the app on your device.
-- WAIT FOR THE PROCESS TO COMPLETE: Depending on your internet speed and device storage, it may take a few minutes for the app to download and install. You can see the progress bar on the screen.
-- TAP ON OPEN: Once the app is installed, you can tap on the green button that says "Open" to launch the app and start using it.
-
-Here is a screenshot of what the app page looks like on Google Play Store:
-
- How to Download WhatsApp Business APK Terbaru from Other Sources
-If you cannot access Google Play Store or you want to try other sources for downloading WhatsApp Business APK Terbaru, you can also use some alternative websites that offer APK files for free. Some of these websites are:
-
-To download WhatsApp Business APK Terbaru from these websites, you need to follow these steps:
-
-- OPEN THE WEBSITE: On your Android device's browser, go to the website of your choice from the list above.
-- FIND AND TAP ON THE APP: On the website's homepage, find and tap on the app that has the name "WhatsApp Business" and the logo that has a green chat bubble with a white letter B inside it. You can also use the search function if you cannot find it easily.
-- TAP ON DOWNLOAD: On the app page, tap on the download button and wait for the APK file to be saved to your device.
-
- Conclusion
-WhatsApp Business is a great app for small and medium-sized businesses that want to communicate with their customers more effectively and grow their business. It has many features that can help you create a professional and personalized business presence on WhatsApp, send and receive messages, share media, and manage your customer interactions. To download WhatsApp Business APK Terbaru, you can use Google Play Store or other alternative sources, but you need to be careful and cautious when downloading APK files from unknown sources. You also need to set up and use the app properly for your business. We hope this article has helped you learn how to download WhatsApp Business APK Terbaru and how to use it for your business. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
- FAQs
-Here are some frequently asked questions about WhatsApp Business APK Terbaru:
-
-- Q: Is WhatsApp Business free?
-A: Yes, WhatsApp Business is free to download and use, as long as you have an internet connection. However, data charges may apply depending on your provider.
-- Q: Can I use WhatsApp Business and WhatsApp Messenger on the same phone?
-A: Yes, you can use both apps on the same phone, but each app must have its own unique phone number.
-- Q: How can I update WhatsApp Business APK Terbaru?
-A: You can update WhatsApp Business APK Terbaru by downloading and installing the latest version of the file from Google Play Store or other sources. You can also check for updates within the app by going to the menu icon at the top right corner of the screen and tapping on "Settings" > "Help" > "App info".
-- Q: How can I back up and restore my WhatsApp Business data?
-A: You can back up and restore your WhatsApp Business data by using Google Drive or a local backup. Go to the menu icon at the top right corner of the screen and tap on "Settings" > "Chats" > "Chat backup" to choose your backup options. You can also restore your data when you reinstall the app or switch to a new device.
-- Q: How can I contact WhatsApp Business support?
-A: You can contact WhatsApp Business support by going to the menu icon at the top right corner of the screen and tapping on "Settings" > "Help" > "Contact us". You can also visit their official website or follow them on social media for more information and updates.
-
-
-
\ No newline at end of file
diff --git a/spaces/2023Liu2023/bingo/src/components/chat-notification.tsx b/spaces/2023Liu2023/bingo/src/components/chat-notification.tsx
deleted file mode 100644
index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/components/chat-notification.tsx
+++ /dev/null
@@ -1,77 +0,0 @@
-import { useEffect } from 'react'
-import Image from 'next/image'
-
-import IconWarning from '@/assets/images/warning.svg'
-import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types'
-import { ExternalLink } from './external-link'
-import { useBing } from '@/lib/hooks/use-bing'
-
-export interface ChatNotificationProps extends Pick<ReturnType<typeof useBing>, 'bot'> {
- message?: ChatMessageModel
-}
-
-function getAction(error: ChatError, reset: () => void) {
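-  // NOTE: the original JSX markup (wrapper class names and link targets) was
-  // lost during extraction; the plain <div> wrappers below are a minimal
-  // reconstruction, and the user-facing strings are translated from Chinese.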
-  if (error.code === ErrorCode.THROTTLE_LIMIT) {
-    reset()
-    return (
-      <div>
-        You have reached the daily message limit; please switch to another account or try again in a day.
-      </div>
-    )
-  }
-  if (error.code === ErrorCode.BING_FORBIDDEN) {
-    return (
-      <div>
-        Your account has been blacklisted; please try switching to another account and requesting an unban.
-      </div>
-    )
-  }
-  if (error.code === ErrorCode.CONVERSATION_LIMIT) {
-    return (
-      <div>
-        The current topic has ended; please restart to open a new conversation.
-      </div>
-    )
-  }
-  if (error.code === ErrorCode.BING_CAPTCHA) {
-    return (
-      <div>
-        Click to complete the human verification (CAPTCHA).
-      </div>
-    )
-  }
-  if (error.code === ErrorCode.BING_UNAUTHORIZED) {
-    reset()
-    return (
-      <div>
-        No identity information was found, or it has expired; please set it up again.
-      </div>
-    )
-  }
-  return error.message
-}
-
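-// Renders the current chat error (if any) beneath the conversation and
-// scrolls it into view whenever the message changes.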
-export function ChatNotification({ message, bot }: ChatNotificationProps) {
-  useEffect(() => {
-    window.scrollBy(0, 2000)
-  }, [message])
-
-  if (!message?.error) return null
-
-  return (
-    <div>
-      <Image alt="warning" src={IconWarning} />
-      {getAction(message.error, () => bot.resetConversation())}
-    </div>
-  )
-}
diff --git a/spaces/7hao/bingo/src/lib/storage.ts b/spaces/7hao/bingo/src/lib/storage.ts
deleted file mode 100644
index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/lib/storage.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import { getMany, set, del, clear } from 'idb-keyval';
-
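-// Thin async wrapper around idb-keyval (IndexedDB), exposing a
-// chrome.storage-like get/set/remove/clear interface.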
-export const Storage = {
- async get(key: string | string[] | null): Promise<Record<string, any> | null> {
- if (key === null) return null;
- if (typeof key === 'string') {
- key = [key]
- }
- const returnData: Record<string, any> = {}
- const values = await getMany(key)
- key.forEach((k, idx) => {
- returnData[k] = values[idx]
- })
- return returnData;
- },
- async set(object: any) {
- for (let key of Object.keys(object)) {
- await set(key, object[key])
- }
- },
- async remove(key: string) {
- return del(key);
- },
- async clear() {
- return clear();
- }
-}
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/diffusion.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/diffusion.py
deleted file mode 100644
index c30976ab258feff830c2fa1a2d70876cb1d76eda..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/diffusion.py
+++ /dev/null
@@ -1,334 +0,0 @@
-import math
-import random
-from functools import partial
-from inspect import isfunction
-from pathlib import Path
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-from tqdm import tqdm
-from einops import rearrange
-
-from modules.fastspeech.fs2 import FastSpeech2
-from modules.diffsinger_midi.fs2 import FastSpeech2MIDI
-from utils.hparams import hparams
-
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def cycle(dl):
- while True:
- for data in dl:
- yield data
-
-
-def num_to_groups(num, divisor):
- groups = num // divisor
- remainder = num % divisor
- arr = [divisor] * groups
- if remainder > 0:
- arr.append(remainder)
- return arr
-
-
-class Residual(nn.Module):
- def __init__(self, fn):
- super().__init__()
- self.fn = fn
-
- def forward(self, x, *args, **kwargs):
- return self.fn(x, *args, **kwargs) + x
-
-
-class SinusoidalPosEmb(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.dim = dim
-
- def forward(self, x):
- device = x.device
- half_dim = self.dim // 2
- emb = math.log(10000) / (half_dim - 1)
- emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
- emb = x[:, None] * emb[None, :]
- emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
- return emb
-
-
-class Mish(nn.Module):
- def forward(self, x):
- return x * torch.tanh(F.softplus(x))
-
-
-class Upsample(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.conv = nn.ConvTranspose2d(dim, dim, 4, 2, 1)
-
- def forward(self, x):
- return self.conv(x)
-
-
-class Downsample(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.conv = nn.Conv2d(dim, dim, 3, 2, 1)
-
- def forward(self, x):
- return self.conv(x)
-
-
-class Rezero(nn.Module):
- def __init__(self, fn):
- super().__init__()
- self.fn = fn
- self.g = nn.Parameter(torch.zeros(1))
-
- def forward(self, x):
- return self.fn(x) * self.g
-
-
-# building block modules
-
-class Block(nn.Module):
- def __init__(self, dim, dim_out, groups=8):
- super().__init__()
- self.block = nn.Sequential(
- nn.Conv2d(dim, dim_out, 3, padding=1),
- nn.GroupNorm(groups, dim_out),
- Mish()
- )
-
- def forward(self, x):
- return self.block(x)
-
-
-class ResnetBlock(nn.Module):
- def __init__(self, dim, dim_out, *, time_emb_dim, groups=8):
- super().__init__()
- self.mlp = nn.Sequential(
- Mish(),
- nn.Linear(time_emb_dim, dim_out)
- )
-
- self.block1 = Block(dim, dim_out)
- self.block2 = Block(dim_out, dim_out)
- self.res_conv = nn.Conv2d(dim, dim_out, 1) if dim != dim_out else nn.Identity()
-
- def forward(self, x, time_emb):
- h = self.block1(x)
- h += self.mlp(time_emb)[:, :, None, None]
- h = self.block2(h)
- return h + self.res_conv(x)
-
-
-class LinearAttention(nn.Module):
- def __init__(self, dim, heads=4, dim_head=32):
- super().__init__()
- self.heads = heads
- hidden_dim = dim_head * heads
- self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False)
- self.to_out = nn.Conv2d(hidden_dim, dim, 1)
-
- def forward(self, x):
- b, c, h, w = x.shape
- qkv = self.to_qkv(x)
- q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads=self.heads, qkv=3)
- k = k.softmax(dim=-1)
- context = torch.einsum('bhdn,bhen->bhde', k, v)
- out = torch.einsum('bhde,bhdn->bhen', context, q)
- out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w)
- return self.to_out(out)
-
-
-# gaussian diffusion trainer class
-
-def extract(a, t, x_shape):
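- # Gather the per-timestep coefficients a[t] and reshape them to (b, 1, 1, ...)
- # so they broadcast over a batch with shape x_shape.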
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
-
-
-def cosine_beta_schedule(timesteps, s=0.008):
- """
- cosine schedule
- as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
- """
- steps = timesteps + 1
- x = np.linspace(0, steps, steps)
- alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
- alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
- betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
- return np.clip(betas, a_min=0, a_max=0.999)
-
-
-class GaussianDiffusion(nn.Module):
- def __init__(self, phone_encoder, out_dims, denoise_fn,
- timesteps=1000, loss_type='l1', betas=None, spec_min=None, spec_max=None):
- super().__init__()
- self.denoise_fn = denoise_fn
- if hparams.get('use_midi') is not None and hparams['use_midi']:
- self.fs2 = FastSpeech2MIDI(phone_encoder, out_dims)
- else:
- self.fs2 = FastSpeech2(phone_encoder, out_dims)
- self.fs2.decoder = None
- self.mel_bins = out_dims
-
- if exists(betas):
- betas = betas.detach().cpu().numpy() if isinstance(betas, torch.Tensor) else betas
- else:
- betas = cosine_beta_schedule(timesteps)
-
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.loss_type = loss_type
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
- self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']])
- self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']])
-
- def q_mean_variance(self, x_start, t):
- mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
- variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
- return (
- extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, cond, clip_denoised: bool):
- noise_pred = self.denoise_fn(x, t, cond=cond)
- x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
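- # One reverse-diffusion step p(x_{t-1} | x_t): compute the model posterior
- # mean and log-variance, then add Gaussian noise (suppressed at t == 0 below).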
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- def q_sample(self, x_start, t, noise=None):
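- # Forward process q(x_t | x_0): x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise.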
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (
- extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
- )
-
- def p_losses(self, x_start, t, cond, noise=None, nonpadding=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
-
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- x_recon = self.denoise_fn(x_noisy, t, cond)
-
- if self.loss_type == 'l1':
- if nonpadding is not None:
- loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean()
- else:
- # print('are you sure w/o nonpadding?')
- loss = (noise - x_recon).abs().mean()
-
- elif self.loss_type == 'l2':
- loss = F.mse_loss(noise, x_recon)
- else:
- raise NotImplementedError()
-
- return loss
-
- def forward(self, txt_tokens, mel2ph=None, spk_embed=None,
- ref_mels=None, f0=None, uv=None, energy=None, infer=False):
- b, *_, device = *txt_tokens.shape, txt_tokens.device
- ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy,
- skip_decoder=True, infer=infer)
- cond = ret['decoder_inp'].transpose(1, 2)
- if not infer:
- t = torch.randint(0, self.num_timesteps, (b,), device=device).long()
- x = ref_mels
- x = self.norm_spec(x)
- x = x.transpose(1, 2)[:, None, :, :] # [B, 1, M, T]
- nonpadding = (mel2ph != 0).float()
- ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding)
- else:
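- # Inference: start from pure Gaussian noise and run the full reverse chain.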
- t = self.num_timesteps
- shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
- x = torch.randn(shape, device=device)
- for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- x = x[:, 0].transpose(1, 2)
- ret['mel_out'] = self.denorm_spec(x)
-
- return ret
-
- def norm_spec(self, x):
- return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1
-
- def denorm_spec(self, x):
- return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
-
- def cwt2f0_norm(self, cwt_spec, mean, std, mel2ph):
- return self.fs2.cwt2f0_norm(cwt_spec, mean, std, mel2ph)
-
- def out2mel(self, x):
- return x
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/methods/Methods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/methods/Methods.js
deleted file mode 100644
index afa71232b7624b9c204f6206cbffbd5e17737091..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/methods/Methods.js
+++ /dev/null
@@ -1,18 +0,0 @@
-import ConfigurationMethods from './listpanel/ConfigurationMethods.js';
-import OpenListPanel from './listpanel/OpenListPanel.js';
-import CloseListPanel from './listpanel/CloseListPanel.js';
-import ToggleListPanel from './listpanel/ToggleListPanel.js';
-
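-// Aggregate the list-panel open/close/toggle handlers together with the
-// configuration methods into a single object of mixin methods.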
-var Methods = {
- openListPanel: OpenListPanel,
- closeListPanel: CloseListPanel,
- toggleListPanel: ToggleListPanel,
-}
-
-Object.assign(
- Methods,
- ConfigurationMethods,
-);
-
-export default Methods;
-
diff --git a/spaces/AlekseyKorshuk/gai-project/modules/about.py b/spaces/AlekseyKorshuk/gai-project/modules/about.py
deleted file mode 100644
index 97c70753cb5b7d5367cc757e03fc67ff93bcb398..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/gai-project/modules/about.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import gradio as gr
-
-
-def render_about():
- gr.Markdown(
- "# About\n"
- "In today's fast-paced world, many individuals feel increasingly isolated and crave meaningful connections. "
- "This project aims not just to produce a conversational model, but to address this societal issue by creating "
- "diverse conversational companions. Instead of building just one ideal model for all scenarios, the objective "
- "is to create a range of models suited to various conversation topics and environments. By mixing different "
- "models, we aspire to achieve a dynamic and engaging experience similar to the TikTok feed. Our core aim is "
- "to create a reusable pipeline for generating such datasets and ensuring they remain Safe For Work. Through "
- "this, we hope to offer users not just a chatbot, but a digital companion tailored to their emotional and "
- "conversational needs.\n\n"
- "[]"
- "(https://github.com/AlekseyKorshuk/gai-project)"
- )
diff --git a/spaces/Aloento/9Nine-VITS/transforms.py b/spaces/Aloento/9Nine-VITS/transforms.py
deleted file mode 100644
index 4d870fcc56a323bfbba45c3d3bf1294336058110..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-VITS/transforms.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import numpy as np
-import torch
-from torch.nn import functional as F
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
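- # Monotonic rational-quadratic spline transform (Durkan et al. 2019, "Neural
- # Spline Flows"); with tails='linear' the spline is applied inside
- # [-tail_bound, tail_bound] and is the identity outside that interval.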
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
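- # Return the index of the bin containing each input: count how many bin left
- # edges lie at or below the input, minus one (eps guards the top edge).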
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
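- # Invert the spline analytically by solving the quadratic a*theta^2 + b*theta + c = 0
- # for theta, the normalized position within the bin (numerically stable root form).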
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/loaders.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/loaders.md
deleted file mode 100644
index 98aaea0060883f143ea23ab99e43b8ce25009ea9..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/loaders.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
-# Loaders
-
-Adapters (textual inversion, LoRA, hypernetworks) allow you to modify a diffusion model to generate images in a specific style without training or finetuning the entire model. The adapter weights are typically only a tiny fraction of the pretrained model's, which makes them very portable. 🤗 Diffusers provides an easy-to-use `LoaderMixin` API to load adapter weights.
-
-
-<Tip warning={true}>
-
-🧪 The `LoaderMixins` are highly experimental and prone to future changes. To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`.
-
-</Tip>
-
-## UNet2DConditionLoadersMixin
-
-[[autodoc]] loaders.UNet2DConditionLoadersMixin
-
-## TextualInversionLoaderMixin
-
-[[autodoc]] loaders.TextualInversionLoaderMixin
-
-## LoraLoaderMixin
-
-[[autodoc]] loaders.LoraLoaderMixin
-
-## FromSingleFileMixin
-
-[[autodoc]] loaders.FromSingleFileMixin
-
-## FromOriginalControlnetMixin
-
-[[autodoc]] loaders.FromOriginalControlnetMixin
-
-## FromOriginalVAEMixin
-
-[[autodoc]] loaders.FromOriginalVAEMixin
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_dance_diffusion_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_dance_diffusion_to_diffusers.py
deleted file mode 100644
index d53d1f792e89be30e26cd701c178083e94699f00..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_dance_diffusion_to_diffusers.py
+++ /dev/null
@@ -1,339 +0,0 @@
-#!/usr/bin/env python3
-import argparse
-import math
-import os
-from copy import deepcopy
-
-import torch
-from audio_diffusion.models import DiffusionAttnUnet1D
-from diffusion import sampling
-from torch import nn
-
-from diffusers import DanceDiffusionPipeline, IPNDMScheduler, UNet1DModel
-
-
-MODELS_MAP = {
- "gwf-440k": {
- "url": "https://model-server.zqevans2.workers.dev/gwf-440k.ckpt",
- "sample_rate": 48000,
- "sample_size": 65536,
- },
- "jmann-small-190k": {
- "url": "https://model-server.zqevans2.workers.dev/jmann-small-190k.ckpt",
- "sample_rate": 48000,
- "sample_size": 65536,
- },
- "jmann-large-580k": {
- "url": "https://model-server.zqevans2.workers.dev/jmann-large-580k.ckpt",
- "sample_rate": 48000,
- "sample_size": 131072,
- },
- "maestro-uncond-150k": {
- "url": "https://model-server.zqevans2.workers.dev/maestro-uncond-150k.ckpt",
- "sample_rate": 16000,
- "sample_size": 65536,
- },
- "unlocked-uncond-250k": {
- "url": "https://model-server.zqevans2.workers.dev/unlocked-uncond-250k.ckpt",
- "sample_rate": 16000,
- "sample_size": 65536,
- },
- "honk-140k": {
- "url": "https://model-server.zqevans2.workers.dev/honk-140k.ckpt",
- "sample_rate": 16000,
- "sample_size": 65536,
- },
-}
-
-
-def alpha_sigma_to_t(alpha, sigma):
- """Returns a timestep, given the scaling factors for the clean image and for
- the noise."""
- return torch.atan2(sigma, alpha) / math.pi * 2
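-
-# Sanity check for alpha_sigma_to_t (follows directly from the atan2 above): a fully
-# clean signal (alpha=1, sigma=0) maps to t=0, and pure noise (alpha=0, sigma=1) maps to t=1.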
-
-
-def get_crash_schedule(t):
- sigma = torch.sin(t * math.pi / 2) ** 2
- alpha = (1 - sigma**2) ** 0.5
- return alpha_sigma_to_t(alpha, sigma)
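-
-# get_crash_schedule remaps a uniform t grid through sigma = sin(pi * t / 2) ** 2 and
-# converts the resulting (alpha, sigma) pair back to a timestep; main() below applies
-# it to the linspace of sampling steps before calling iplms_sample.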
-
-
-class Object(object):
- pass
-
-
-class DiffusionUncond(nn.Module):
- def __init__(self, global_args):
- super().__init__()
-
- self.diffusion = DiffusionAttnUnet1D(global_args, n_attn_layers=4)
- self.diffusion_ema = deepcopy(self.diffusion)
- self.rng = torch.quasirandom.SobolEngine(1, scramble=True)
-
-
-def download(model_name):
- url = MODELS_MAP[model_name]["url"]
-    # wget saves the download under its URL basename, i.e. f"{model_name}.ckpt", in the current directory
-    os.system(f"wget {url}")
-
- return f"./{model_name}.ckpt"
-
-
-DOWN_NUM_TO_LAYER = {
- "1": "resnets.0",
- "2": "attentions.0",
- "3": "resnets.1",
- "4": "attentions.1",
- "5": "resnets.2",
- "6": "attentions.2",
-}
-UP_NUM_TO_LAYER = {
- "8": "resnets.0",
- "9": "attentions.0",
- "10": "resnets.1",
- "11": "attentions.1",
- "12": "resnets.2",
- "13": "attentions.2",
-}
-MID_NUM_TO_LAYER = {
- "1": "resnets.0",
- "2": "attentions.0",
- "3": "resnets.1",
- "4": "attentions.1",
- "5": "resnets.2",
- "6": "attentions.2",
- "8": "resnets.3",
- "9": "attentions.3",
- "10": "resnets.4",
- "11": "attentions.4",
- "12": "resnets.5",
- "13": "attentions.5",
-}
-DEPTH_0_TO_LAYER = {
- "0": "resnets.0",
- "1": "resnets.1",
- "2": "resnets.2",
- "4": "resnets.0",
- "5": "resnets.1",
- "6": "resnets.2",
-}
-
-RES_CONV_MAP = {
- "skip": "conv_skip",
- "main.0": "conv_1",
- "main.1": "group_norm_1",
- "main.3": "conv_2",
- "main.4": "group_norm_2",
-}
-
-ATTN_MAP = {
- "norm": "group_norm",
- "qkv_proj": ["query", "key", "value"],
- "out_proj": ["proj_attn"],
-}
-
-
-def convert_resconv_naming(name):
- if name.startswith("skip"):
- return name.replace("skip", RES_CONV_MAP["skip"])
-
- # name has to be of format main.{digit}
- if not name.startswith("main."):
- raise ValueError(f"ResConvBlock error with {name}")
-
- return name.replace(name[:6], RES_CONV_MAP[name[:6]])
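-
-# Examples (read off RES_CONV_MAP above):
-#   convert_resconv_naming("skip.weight")   -> "conv_skip.weight"
-#   convert_resconv_naming("main.0.weight") -> "conv_1.weight"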
-
-
-def convert_attn_naming(name):
- for key, value in ATTN_MAP.items():
- if name.startswith(key) and not isinstance(value, list):
- return name.replace(key, value)
- elif name.startswith(key):
- return [name.replace(key, v) for v in value]
- raise ValueError(f"Attn error with {name}")
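-
-# Examples (read off ATTN_MAP above):
-#   convert_attn_naming("norm.weight")     -> "group_norm.weight"
-#   convert_attn_naming("qkv_proj.weight") -> ["query.weight", "key.weight", "value.weight"]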
-
-
-def rename(input_string, max_depth=13):
- string = input_string
-
- if string.split(".")[0] == "timestep_embed":
- return string.replace("timestep_embed", "time_proj")
-
- depth = 0
- if string.startswith("net.3."):
- depth += 1
- string = string[6:]
- elif string.startswith("net."):
- string = string[4:]
-
- while string.startswith("main.7."):
- depth += 1
- string = string[7:]
-
- if string.startswith("main."):
- string = string[5:]
-
- # mid block
- if string[:2].isdigit():
- layer_num = string[:2]
- string_left = string[2:]
- else:
- layer_num = string[0]
- string_left = string[1:]
-
- if depth == max_depth:
- new_layer = MID_NUM_TO_LAYER[layer_num]
- prefix = "mid_block"
- elif depth > 0 and int(layer_num) < 7:
- new_layer = DOWN_NUM_TO_LAYER[layer_num]
- prefix = f"down_blocks.{depth}"
- elif depth > 0 and int(layer_num) > 7:
- new_layer = UP_NUM_TO_LAYER[layer_num]
- prefix = f"up_blocks.{max_depth - depth - 1}"
- elif depth == 0:
- new_layer = DEPTH_0_TO_LAYER[layer_num]
- prefix = f"up_blocks.{max_depth - 1}" if int(layer_num) > 3 else "down_blocks.0"
-
- if not string_left.startswith("."):
- raise ValueError(f"Naming error with {input_string} and string_left: {string_left}.")
-
- string_left = string_left[1:]
-
- if "resnets" in new_layer:
- string_left = convert_resconv_naming(string_left)
- elif "attentions" in new_layer:
- new_string_left = convert_attn_naming(string_left)
- string_left = new_string_left
-
- if not isinstance(string_left, list):
- new_string = prefix + "." + new_layer + "." + string_left
- else:
- new_string = [prefix + "." + new_layer + "." + s for s in string_left]
- return new_string
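-
-# Example (first branch above): rename("timestep_embed.weight") -> "time_proj.weight".
-# Deeper names are routed to "down_blocks.*", "mid_block", or "up_blocks.*" prefixes
-# depending on the parsed depth and layer number.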
-
-
-def rename_orig_weights(state_dict):
- new_state_dict = {}
- for k, v in state_dict.items():
- if k.endswith("kernel"):
-            # up- and downsample layers don't have trainable weights
- continue
-
- new_k = rename(k)
-
- # check if we need to transform from Conv => Linear for attention
- if isinstance(new_k, list):
- new_state_dict = transform_conv_attns(new_state_dict, new_k, v)
- else:
- new_state_dict[new_k] = v
-
- return new_state_dict
-
-
-def transform_conv_attns(new_state_dict, new_k, v):
- if len(new_k) == 1:
- if len(v.shape) == 3:
- # weight
- new_state_dict[new_k[0]] = v[:, :, 0]
- else:
- # bias
- new_state_dict[new_k[0]] = v
- else:
- # qkv matrices
-            tripled_shape = v.shape[0]
-            single_shape = tripled_shape // 3
- for i in range(3):
- if len(v.shape) == 3:
- new_state_dict[new_k[i]] = v[i * single_shape : (i + 1) * single_shape, :, 0]
- else:
- new_state_dict[new_k[i]] = v[i * single_shape : (i + 1) * single_shape]
- return new_state_dict
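-
-# Shape intuition (hypothetical C): a fused qkv Conv1d kernel of shape [3*C, C, 1] is
-# sliced along dim 0 into three [C, C] Linear weights; the size-1 kernel dimension is
-# dropped by selecting index 0 in the slice above. Biases of shape [3*C] are split the
-# same way into three [C] vectors.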
-
-
-def main(args):
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
- model_name = args.model_path.split("/")[-1].split(".")[0]
- if not os.path.isfile(args.model_path):
- assert (
- model_name == args.model_path
- ), f"Make sure to provide one of the official model names {MODELS_MAP.keys()}"
- args.model_path = download(model_name)
-
- sample_rate = MODELS_MAP[model_name]["sample_rate"]
- sample_size = MODELS_MAP[model_name]["sample_size"]
-
- config = Object()
- config.sample_size = sample_size
- config.sample_rate = sample_rate
- config.latent_dim = 0
-
- diffusers_model = UNet1DModel(sample_size=sample_size, sample_rate=sample_rate)
- diffusers_state_dict = diffusers_model.state_dict()
-
- orig_model = DiffusionUncond(config)
- orig_model.load_state_dict(torch.load(args.model_path, map_location=device)["state_dict"])
- orig_model = orig_model.diffusion_ema.eval()
- orig_model_state_dict = orig_model.state_dict()
- renamed_state_dict = rename_orig_weights(orig_model_state_dict)
-
- renamed_minus_diffusers = set(renamed_state_dict.keys()) - set(diffusers_state_dict.keys())
- diffusers_minus_renamed = set(diffusers_state_dict.keys()) - set(renamed_state_dict.keys())
-
- assert len(renamed_minus_diffusers) == 0, f"Problem with {renamed_minus_diffusers}"
- assert all(k.endswith("kernel") for k in list(diffusers_minus_renamed)), f"Problem with {diffusers_minus_renamed}"
-
- for key, value in renamed_state_dict.items():
- assert (
- diffusers_state_dict[key].squeeze().shape == value.squeeze().shape
- ), f"Shape for {key} doesn't match. Diffusers: {diffusers_state_dict[key].shape} vs. {value.shape}"
- if key == "time_proj.weight":
- value = value.squeeze()
-
- diffusers_state_dict[key] = value
-
- diffusers_model.load_state_dict(diffusers_state_dict)
-
- steps = 100
- seed = 33
-
- diffusers_scheduler = IPNDMScheduler(num_train_timesteps=steps)
-
- generator = torch.manual_seed(seed)
- noise = torch.randn([1, 2, config.sample_size], generator=generator).to(device)
-
- t = torch.linspace(1, 0, steps + 1, device=device)[:-1]
- step_list = get_crash_schedule(t)
-
- pipe = DanceDiffusionPipeline(unet=diffusers_model, scheduler=diffusers_scheduler)
-
- generator = torch.manual_seed(33)
- audio = pipe(num_inference_steps=steps, generator=generator).audios
-
- generated = sampling.iplms_sample(orig_model, noise, step_list, {})
- generated = generated.clamp(-1, 1)
-
- diff_sum = (generated - audio).abs().sum()
- diff_max = (generated - audio).abs().max()
-
- if args.save:
- pipe.save_pretrained(args.checkpoint_path)
-
- print("Diff sum", diff_sum)
- print("Diff max", diff_max)
-
- assert diff_max < 1e-3, f"Diff max: {diff_max} is too much :-/"
-
- print(f"Conversion for {model_name} successful!")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
-
- parser.add_argument("--model_path", default=None, type=str, required=True, help="Path to the model to convert.")
- parser.add_argument(
- "--save", default=True, type=bool, required=False, help="Whether to save the converted model or not."
- )
- parser.add_argument("--checkpoint_path", default=None, type=str, required=True, help="Path to the output model.")
- args = parser.parse_args()
-
- main(args)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py
deleted file mode 100644
index 29082beb9128828231aa45a6a1a26b28ad1e3aa4..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py
+++ /dev/null
@@ -1,713 +0,0 @@
-import inspect
-import warnings
-from itertools import repeat
-from typing import Callable, List, Optional, Union
-
-import torch
-from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
-
-from ...image_processor import VaeImageProcessor
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import logging, randn_tensor
-from ..pipeline_utils import DiffusionPipeline
-from . import SemanticStableDiffusionPipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-class SemanticStableDiffusionPipeline(DiffusionPipeline):
- r"""
- Pipeline for text-to-image generation using Stable Diffusion with latent editing.
-
- This model inherits from [`DiffusionPipeline`] and builds on the [`StableDiffusionPipeline`]. Check the superclass
- documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular
- device, etc.).
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder ([`~transformers.CLIPTextModel`]):
- Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- tokenizer ([`~transformers.CLIPTokenizer`]):
- A `CLIPTokenizer` to tokenize text.
- unet ([`UNet2DConditionModel`]):
- A `UNet2DConditionModel` to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`Q16SafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
- about a model's potential harms.
- feature_extractor ([`~transformers.CLIPImageProcessor`]):
- A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
- """
-
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPImageProcessor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
-
- if safety_checker is None and requires_safety_checker:
-            logger.warning(
-                f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
-                " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
-                " results in services or applications open to the public. Both the diffusers team and Hugging Face"
-                " strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling"
-                " it only for use cases that involve analyzing network behavior or auditing its results. For more"
-                " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- if safety_checker is not None and feature_extractor is None:
-            raise ValueError(
-                f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
-                " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
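-        # Each VAE down block after the first halves the spatial resolution, so e.g.
-        # four `block_out_channels` entries give a scale factor of 2**3 = 8.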
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
- def run_safety_checker(self, image, device, dtype):
- if self.safety_checker is None:
- has_nsfw_concept = None
- else:
- if torch.is_tensor(image):
- feature_extractor_input = self.image_processor.postprocess(image, output_type="pil")
- else:
- feature_extractor_input = self.image_processor.numpy_to_pil(image)
- safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device)
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
- )
- return image, has_nsfw_concept
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- warnings.warn(
- "The decode_latents method is deprecated and will be removed in a future version. Please"
- " use VaeImageProcessor instead",
- FutureWarning,
- )
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents, return_dict=False)[0]
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
-        # eta (η) is only used with the DDIMScheduler; it will be ignored for other schedulers.
-        # eta corresponds to η in the DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs
- def check_inputs(
- self,
- prompt,
- height,
- width,
- callback_steps,
- negative_prompt=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: int = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- editing_prompt: Optional[Union[str, List[str]]] = None,
- editing_prompt_embeddings: Optional[torch.Tensor] = None,
- reverse_editing_direction: Optional[Union[bool, List[bool]]] = False,
- edit_guidance_scale: Optional[Union[float, List[float]]] = 5,
- edit_warmup_steps: Optional[Union[int, List[int]]] = 10,
- edit_cooldown_steps: Optional[Union[int, List[int]]] = None,
- edit_threshold: Optional[Union[float, List[float]]] = 0.9,
- edit_momentum_scale: Optional[float] = 0.1,
- edit_mom_beta: Optional[float] = 0.4,
- edit_weights: Optional[List[float]] = None,
- sem_guidance: Optional[List[torch.Tensor]] = None,
- ):
- r"""
- The call function to the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide image generation.
- height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- A higher guidance scale value encourages the model to generate images closely linked to the text
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, *optional*):
-                The prompt or prompts to guide what to not include in image generation. Ignored when not using
-                guidance (`guidance_scale <= 1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
-                Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor is generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
-                A function that is called every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
- editing_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting
- `editing_prompt = None`. Guidance direction of prompt should be specified via
- `reverse_editing_direction`.
- editing_prompt_embeddings (`torch.Tensor`, *optional*):
- Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be
- specified via `reverse_editing_direction`.
- reverse_editing_direction (`bool` or `List[bool]`, *optional*, defaults to `False`):
- Whether the corresponding prompt in `editing_prompt` should be increased or decreased.
- edit_guidance_scale (`float` or `List[float]`, *optional*, defaults to 5):
- Guidance scale for semantic guidance. If provided as a list, values should correspond to
- `editing_prompt`.
-            edit_warmup_steps (`int` or `List[int]`, *optional*, defaults to 10):
- Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is
- calculated for those steps and applied once all warmup periods are over.
-            edit_cooldown_steps (`int` or `List[int]`, *optional*, defaults to `None`):
-                Number of diffusion steps (for each prompt) after which semantic guidance is no longer applied.
- edit_threshold (`float` or `List[float]`, *optional*, defaults to 0.9):
- Threshold of semantic guidance.
- edit_momentum_scale (`float`, *optional*, defaults to 0.1):
- Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0,
- momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than
-                `edit_warmup_steps`). Momentum is only added to latent guidance once all warmup periods are finished.
- edit_mom_beta (`float`, *optional*, defaults to 0.4):
- Defines how semantic guidance momentum builds up. `edit_mom_beta` indicates how much of the previous
- momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than
- `edit_warmup_steps`).
- edit_weights (`List[float]`, *optional*, defaults to `None`):
- Indicates how much each individual concept should influence the overall guidance. If no weights are
- provided all concepts are applied equally.
- sem_guidance (`List[torch.Tensor]`, *optional*):
- List of pre-generated guidance vectors to be applied at generation. Length of the list has to
- correspond to `num_inference_steps`.
-
- Examples:
-
- ```py
- >>> import torch
- >>> from diffusers import SemanticStableDiffusionPipeline
-
- >>> pipe = SemanticStableDiffusionPipeline.from_pretrained(
- ... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
- ... )
- >>> pipe = pipe.to("cuda")
-
- >>> out = pipe(
- ... prompt="a photo of the face of a woman",
- ... num_images_per_prompt=1,
- ... guidance_scale=7,
- ... editing_prompt=[
- ... "smiling, smile", # Concepts to apply
- ... "glasses, wearing glasses",
- ... "curls, wavy hair, curly hair",
- ... "beard, full beard, mustache",
- ... ],
- ... reverse_editing_direction=[
- ... False,
- ... False,
- ... False,
- ... False,
- ... ], # Direction of guidance i.e. increase all concepts
- ... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept
- ... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept
- ... edit_threshold=[
- ... 0.99,
- ... 0.975,
- ... 0.925,
- ... 0.96,
- ... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions
- ... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance
- ... edit_mom_beta=0.6, # Momentum beta
-        ...     edit_weights=[1, 1, 1, 1], # Weights of the individual concepts against each other (one per concept)
- ... )
- >>> image = out.images[0]
- ```
-
- Returns:
- [`~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput`] or `tuple`:
- If `return_dict` is `True`,
- [`~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput`] is returned, otherwise a
- `tuple` is returned where the first element is a list with the generated images and the second element
- is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work"
- (nsfw) content.
- """
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(prompt, height, width, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
-
- if editing_prompt:
- enable_edit_guidance = True
- if isinstance(editing_prompt, str):
- editing_prompt = [editing_prompt]
- enabled_editing_prompts = len(editing_prompt)
- elif editing_prompt_embeddings is not None:
- enable_edit_guidance = True
- enabled_editing_prompts = editing_prompt_embeddings.shape[0]
- else:
- enabled_editing_prompts = 0
- enable_edit_guidance = False
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
-
- if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
- removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
- text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
- text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
-
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- bs_embed, seq_len, _ = text_embeddings.shape
- text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
- text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- if enable_edit_guidance:
- # get safety text embeddings
- if editing_prompt_embeddings is None:
- edit_concepts_input = self.tokenizer(
- [x for item in editing_prompt for x in repeat(item, batch_size)],
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- )
-
- edit_concepts_input_ids = edit_concepts_input.input_ids
-
- if edit_concepts_input_ids.shape[-1] > self.tokenizer.model_max_length:
- removed_text = self.tokenizer.batch_decode(
- edit_concepts_input_ids[:, self.tokenizer.model_max_length :]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
- edit_concepts_input_ids = edit_concepts_input_ids[:, : self.tokenizer.model_max_length]
- edit_concepts = self.text_encoder(edit_concepts_input_ids.to(self.device))[0]
- else:
- edit_concepts = editing_prompt_embeddings.to(self.device).repeat(batch_size, 1, 1)
-
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- bs_embed_edit, seq_len_edit, _ = edit_concepts.shape
- edit_concepts = edit_concepts.repeat(1, num_images_per_prompt, 1)
- edit_concepts = edit_concepts.view(bs_embed_edit * num_images_per_prompt, seq_len_edit, -1)
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
- # get unconditional embeddings for classifier free guidance
-
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""]
- elif type(prompt) is not type(negative_prompt):
-                raise TypeError(
-                    f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
-                    f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = uncond_embeddings.shape[1]
- uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
- uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- if enable_edit_guidance:
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings, edit_concepts])
- else:
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
- # get the initial random noise unless the user supplied it
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=self.device)
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.config.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- text_embeddings.dtype,
- self.device,
- generator,
- latents,
- )
-
- # 6. Prepare extra step kwargs.
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # Initialize edit_momentum to None
- edit_momentum = None
-
- self.uncond_estimates = None
- self.text_estimates = None
- self.edit_estimates = None
- self.sem_guidance = None
-
- for i, t in enumerate(self.progress_bar(timesteps)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = (
- torch.cat([latents] * (2 + enabled_editing_prompts)) if do_classifier_free_guidance else latents
- )
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_out = noise_pred.chunk(2 + enabled_editing_prompts) # [b,4, 64, 64]
- noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1]
- noise_pred_edit_concepts = noise_pred_out[2:]
-
- # default text guidance
- noise_guidance = guidance_scale * (noise_pred_text - noise_pred_uncond)
- # noise_guidance = (noise_pred_text - noise_pred_edit_concepts[0])
-
- if self.uncond_estimates is None:
- self.uncond_estimates = torch.zeros((num_inference_steps + 1, *noise_pred_uncond.shape))
- self.uncond_estimates[i] = noise_pred_uncond.detach().cpu()
-
- if self.text_estimates is None:
- self.text_estimates = torch.zeros((num_inference_steps + 1, *noise_pred_text.shape))
- self.text_estimates[i] = noise_pred_text.detach().cpu()
-
- if self.edit_estimates is None and enable_edit_guidance:
- self.edit_estimates = torch.zeros(
- (num_inference_steps + 1, len(noise_pred_edit_concepts), *noise_pred_edit_concepts[0].shape)
- )
-
- if self.sem_guidance is None:
- self.sem_guidance = torch.zeros((num_inference_steps + 1, *noise_pred_text.shape))
-
- if edit_momentum is None:
- edit_momentum = torch.zeros_like(noise_guidance)
-
- if enable_edit_guidance:
- concept_weights = torch.zeros(
- (len(noise_pred_edit_concepts), noise_guidance.shape[0]),
- device=self.device,
- dtype=noise_guidance.dtype,
- )
- noise_guidance_edit = torch.zeros(
- (len(noise_pred_edit_concepts), *noise_guidance.shape),
- device=self.device,
- dtype=noise_guidance.dtype,
- )
- # noise_guidance_edit = torch.zeros_like(noise_guidance)
- warmup_inds = []
- for c, noise_pred_edit_concept in enumerate(noise_pred_edit_concepts):
- self.edit_estimates[i, c] = noise_pred_edit_concept
- if isinstance(edit_guidance_scale, list):
- edit_guidance_scale_c = edit_guidance_scale[c]
- else:
- edit_guidance_scale_c = edit_guidance_scale
-
- if isinstance(edit_threshold, list):
- edit_threshold_c = edit_threshold[c]
- else:
- edit_threshold_c = edit_threshold
- if isinstance(reverse_editing_direction, list):
- reverse_editing_direction_c = reverse_editing_direction[c]
- else:
- reverse_editing_direction_c = reverse_editing_direction
- if edit_weights:
- edit_weight_c = edit_weights[c]
- else:
- edit_weight_c = 1.0
- if isinstance(edit_warmup_steps, list):
- edit_warmup_steps_c = edit_warmup_steps[c]
- else:
- edit_warmup_steps_c = edit_warmup_steps
-
- if isinstance(edit_cooldown_steps, list):
- edit_cooldown_steps_c = edit_cooldown_steps[c]
- elif edit_cooldown_steps is None:
- edit_cooldown_steps_c = i + 1
- else:
- edit_cooldown_steps_c = edit_cooldown_steps
- if i >= edit_warmup_steps_c:
- warmup_inds.append(c)
- if i >= edit_cooldown_steps_c:
- noise_guidance_edit[c, :, :, :, :] = torch.zeros_like(noise_pred_edit_concept)
- continue
-
- noise_guidance_edit_tmp = noise_pred_edit_concept - noise_pred_uncond
- # tmp_weights = (noise_pred_text - noise_pred_edit_concept).sum(dim=(1, 2, 3))
- tmp_weights = (noise_guidance - noise_pred_edit_concept).sum(dim=(1, 2, 3))
-
- tmp_weights = torch.full_like(tmp_weights, edit_weight_c) # * (1 / enabled_editing_prompts)
- if reverse_editing_direction_c:
- noise_guidance_edit_tmp = noise_guidance_edit_tmp * -1
- concept_weights[c, :] = tmp_weights
-
- noise_guidance_edit_tmp = noise_guidance_edit_tmp * edit_guidance_scale_c
-
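-                    # Dynamic thresholding: zero out every latent dimension whose |guidance|
-                    # falls below the per-sample `edit_threshold_c` quantile, so e.g. a
-                    # threshold of 0.99 keeps only the strongest ~1% of dimensions.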
- # torch.quantile function expects float32
- if noise_guidance_edit_tmp.dtype == torch.float32:
- tmp = torch.quantile(
- torch.abs(noise_guidance_edit_tmp).flatten(start_dim=2),
- edit_threshold_c,
- dim=2,
- keepdim=False,
- )
- else:
- tmp = torch.quantile(
- torch.abs(noise_guidance_edit_tmp).flatten(start_dim=2).to(torch.float32),
- edit_threshold_c,
- dim=2,
- keepdim=False,
- ).to(noise_guidance_edit_tmp.dtype)
-
- noise_guidance_edit_tmp = torch.where(
- torch.abs(noise_guidance_edit_tmp) >= tmp[:, :, None, None],
- noise_guidance_edit_tmp,
- torch.zeros_like(noise_guidance_edit_tmp),
- )
- noise_guidance_edit[c, :, :, :, :] = noise_guidance_edit_tmp
-
- # noise_guidance_edit = noise_guidance_edit + noise_guidance_edit_tmp
-
- warmup_inds = torch.tensor(warmup_inds).to(self.device)
- if len(noise_pred_edit_concepts) > warmup_inds.shape[0] > 0:
- concept_weights = concept_weights.to("cpu") # Offload to cpu
- noise_guidance_edit = noise_guidance_edit.to("cpu")
-
- concept_weights_tmp = torch.index_select(concept_weights.to(self.device), 0, warmup_inds)
- concept_weights_tmp = torch.where(
- concept_weights_tmp < 0, torch.zeros_like(concept_weights_tmp), concept_weights_tmp
- )
- concept_weights_tmp = concept_weights_tmp / concept_weights_tmp.sum(dim=0)
- # concept_weights_tmp = torch.nan_to_num(concept_weights_tmp)
-
- noise_guidance_edit_tmp = torch.index_select(
- noise_guidance_edit.to(self.device), 0, warmup_inds
- )
- noise_guidance_edit_tmp = torch.einsum(
- "cb,cbijk->bijk", concept_weights_tmp, noise_guidance_edit_tmp
- )
- noise_guidance = noise_guidance + noise_guidance_edit_tmp
-
- self.sem_guidance[i] = noise_guidance_edit_tmp.detach().cpu()
-
- del noise_guidance_edit_tmp
- del concept_weights_tmp
- concept_weights = concept_weights.to(self.device)
- noise_guidance_edit = noise_guidance_edit.to(self.device)
-
- concept_weights = torch.where(
- concept_weights < 0, torch.zeros_like(concept_weights), concept_weights
- )
-
- concept_weights = torch.nan_to_num(concept_weights)
-
- noise_guidance_edit = torch.einsum("cb,cbijk->bijk", concept_weights, noise_guidance_edit)
-
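-                    # Momentum (see `edit_momentum_scale` and `edit_mom_beta` above): add the
-                    # accumulated momentum to the current edit guidance, then update the running
-                    # average as m <- beta * m + (1 - beta) * g.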
- noise_guidance_edit = noise_guidance_edit + edit_momentum_scale * edit_momentum
-
- edit_momentum = edit_mom_beta * edit_momentum + (1 - edit_mom_beta) * noise_guidance_edit
-
- if warmup_inds.shape[0] == len(noise_pred_edit_concepts):
- noise_guidance = noise_guidance + noise_guidance_edit
- self.sem_guidance[i] = noise_guidance_edit.detach().cpu()
-
- if sem_guidance is not None:
- edit_guidance = sem_guidance[i].to(self.device)
- noise_guidance = noise_guidance + edit_guidance
-
- noise_pred = noise_pred_uncond + noise_guidance
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 8. Post-processing
- if not output_type == "latent":
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
- image, has_nsfw_concept = self.run_safety_checker(image, self.device, text_embeddings.dtype)
- else:
- image = latents
- has_nsfw_concept = None
-
- if has_nsfw_concept is None:
- do_denormalize = [True] * image.shape[0]
- else:
- do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
-
- image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return SemanticStableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py
deleted file mode 100644
index a26abfa50096f833e6cbdbccff931b843615b850..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py
+++ /dev/null
@@ -1,594 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import unittest
-
-import numpy as np
-import torch
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
- StableDiffusionPipeline,
- UNet2DConditionModel,
- logging,
-)
-from diffusers.utils import load_numpy, nightly, slow, torch_device
-from diffusers.utils.testing_utils import CaptureLogger, enable_full_determinism, require_torch_gpu
-
-from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
-from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
-
-
-enable_full_determinism()
-
-
-class StableDiffusion2PipelineFastTests(
- PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase
-):
- pipeline_class = StableDiffusionPipeline
- params = TEXT_TO_IMAGE_PARAMS
- batch_params = TEXT_TO_IMAGE_BATCH_PARAMS
- image_params = TEXT_TO_IMAGE_IMAGE_PARAMS
- image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- # SD2-specific config below
- attention_head_dim=(2, 4),
- use_linear_projection=True,
- )
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- )
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- sample_size=128,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- # SD2-specific config below
- hidden_act="gelu",
- projection_dim=512,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- components = {
- "unet": unet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
- inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "generator": generator,
- "num_inference_steps": 2,
- "guidance_scale": 6.0,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_ddim(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.5753, 0.6113, 0.5005, 0.5036, 0.5464, 0.4725, 0.4982, 0.4865, 0.4861])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_pndm(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = PNDMScheduler(skip_prk_steps=True)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.5121, 0.5714, 0.4827, 0.5057, 0.5646, 0.4766, 0.5189, 0.4895, 0.4990])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_k_lms(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.4865, 0.5439, 0.4840, 0.4995, 0.5543, 0.4846, 0.5199, 0.4942, 0.5061])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_k_euler_ancestral(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = EulerAncestralDiscreteScheduler.from_config(components["scheduler"].config)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.4864, 0.5440, 0.4842, 0.4994, 0.5543, 0.4846, 0.5196, 0.4942, 0.5063])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_k_euler(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = EulerDiscreteScheduler.from_config(components["scheduler"].config)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.4865, 0.5439, 0.4840, 0.4995, 0.5543, 0.4846, 0.5199, 0.4942, 0.5061])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_unflawed(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = DDIMScheduler.from_config(
- components["scheduler"].config, timestep_spacing="trailing"
- )
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- inputs["guidance_rescale"] = 0.7
- inputs["num_inference_steps"] = 10
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.4736, 0.5405, 0.4705, 0.4955, 0.5675, 0.4812, 0.5310, 0.4967, 0.5064])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_long_prompt(self):
- components = self.get_dummy_components()
- components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- do_classifier_free_guidance = True
- negative_prompt = None
- num_images_per_prompt = 1
- logger = logging.get_logger("diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion")
-
- prompt = 25 * "@"
- with CaptureLogger(logger) as cap_logger_3:
- text_embeddings_3 = sd_pipe._encode_prompt(
- prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- prompt = 100 * "@"
- with CaptureLogger(logger) as cap_logger:
- text_embeddings = sd_pipe._encode_prompt(
- prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- negative_prompt = "Hello"
- with CaptureLogger(logger) as cap_logger_2:
- text_embeddings_2 = sd_pipe._encode_prompt(
- prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- assert text_embeddings_3.shape == text_embeddings_2.shape == text_embeddings.shape
- assert text_embeddings.shape[1] == 77
-
- assert cap_logger.out == cap_logger_2.out
- # 100 - 77 + 1 (BOS token) + 1 (EOS token) = 25
- assert cap_logger.out.count("@") == 25
- assert cap_logger_3.out == ""
-
- def test_attention_slicing_forward_pass(self):
- super().test_attention_slicing_forward_pass(expected_max_diff=3e-3)
-
- def test_inference_batch_single_identical(self):
- super().test_inference_batch_single_identical(expected_max_diff=3e-3)
-
-
-@slow
-@require_torch_gpu
-class StableDiffusion2PipelineSlowTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0):
- generator = torch.Generator(device=generator_device).manual_seed(seed)
- latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64))
- latents = torch.from_numpy(latents).to(device=device, dtype=dtype)
- inputs = {
- "prompt": "a photograph of an astronaut riding a horse",
- "latents": latents,
- "generator": generator,
- "num_inference_steps": 3,
- "guidance_scale": 7.5,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_default_ddim(self):
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base")
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.49493, 0.47896, 0.40798, 0.54214, 0.53212, 0.48202, 0.47656, 0.46329, 0.48506])
- assert np.abs(image_slice - expected_slice).max() < 7e-3
-
- def test_stable_diffusion_pndm(self):
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base")
- pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.49493, 0.47896, 0.40798, 0.54214, 0.53212, 0.48202, 0.47656, 0.46329, 0.48506])
- assert np.abs(image_slice - expected_slice).max() < 7e-3
-
- def test_stable_diffusion_k_lms(self):
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base")
- pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.10440, 0.13115, 0.11100, 0.10141, 0.11440, 0.07215, 0.11332, 0.09693, 0.10006])
- assert np.abs(image_slice - expected_slice).max() < 3e-3
-
- def test_stable_diffusion_attention_slicing(self):
- torch.cuda.reset_peak_memory_stats()
- pipe = StableDiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
- )
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- # enable attention slicing
- pipe.enable_attention_slicing()
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- image_sliced = pipe(**inputs).images
-
- mem_bytes = torch.cuda.max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
- # make sure that less than 3.3 GB is allocated
- assert mem_bytes < 3.3 * 10**9
-
- # disable slicing
- pipe.disable_attention_slicing()
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- image = pipe(**inputs).images
-
- # make sure that more than 3.3 GB is allocated
- mem_bytes = torch.cuda.max_memory_allocated()
- assert mem_bytes > 3.3 * 10**9
- assert np.abs(image_sliced - image).max() < 1e-3
-
- def test_stable_diffusion_text2img_intermediate_state(self):
- number_of_steps = 0
-
- def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None:
- callback_fn.has_been_called = True
- nonlocal number_of_steps
- number_of_steps += 1
- if step == 1:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 64, 64)
- latents_slice = latents[0, -3:, -3:, -1]
- expected_slice = np.array(
- [-0.3862, -0.4507, -1.1729, 0.0686, -1.1045, 0.7124, -1.8301, 0.1903, 1.2773]
- )
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
- elif step == 2:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 64, 64)
- latents_slice = latents[0, -3:, -3:, -1]
- expected_slice = np.array(
- [0.2720, -0.1863, -0.7383, -0.5029, -0.7534, 0.3970, -0.7646, 0.4468, 1.2686]
- )
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
-
- callback_fn.has_been_called = False
-
- pipe = StableDiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
- )
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- pipe(**inputs, callback=callback_fn, callback_steps=1)
- assert callback_fn.has_been_called
- assert number_of_steps == inputs["num_inference_steps"]
-
- def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- pipe = StableDiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
- )
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing(1)
- pipe.enable_sequential_cpu_offload()
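-        # Sequential CPU offload streams individual submodules to the GPU on
-        # demand, giving the lowest peak memory at the cost of speed; contrast
-        # with enable_model_cpu_offload in the next test, which moves whole
-        # pipeline components at a time.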
-
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- _ = pipe(**inputs)
-
- mem_bytes = torch.cuda.max_memory_allocated()
- # make sure that less than 2.8 GB is allocated
- assert mem_bytes < 2.8 * 10**9
-
- def test_stable_diffusion_pipeline_with_model_offloading(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
-
- # Normal inference
-
- pipe = StableDiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-base",
- torch_dtype=torch.float16,
- )
- pipe.unet.set_default_attn_processor()
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- outputs = pipe(**inputs)
- mem_bytes = torch.cuda.max_memory_allocated()
-
- # With model offloading
-
- # Reload but don't move to cuda
- pipe = StableDiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-base",
- torch_dtype=torch.float16,
- )
- pipe.unet.set_default_attn_processor()
-
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- pipe.enable_model_cpu_offload()
- pipe.set_progress_bar_config(disable=None)
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- outputs_offloaded = pipe(**inputs)
- mem_bytes_offloaded = torch.cuda.max_memory_allocated()
-
- assert np.abs(outputs.images - outputs_offloaded.images).max() < 1e-3
- assert mem_bytes_offloaded < mem_bytes
- assert mem_bytes_offloaded < 3 * 10**9
- for module in pipe.text_encoder, pipe.unet, pipe.vae:
- assert module.device == torch.device("cpu")
-
- # With attention slicing
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- pipe.enable_attention_slicing()
- _ = pipe(**inputs)
- mem_bytes_slicing = torch.cuda.max_memory_allocated()
- assert mem_bytes_slicing < mem_bytes_offloaded
-
-
-@nightly
-@require_torch_gpu
-class StableDiffusion2PipelineNightlyTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0):
- generator = torch.Generator(device=generator_device).manual_seed(seed)
- latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64))
- latents = torch.from_numpy(latents).to(device=device, dtype=dtype)
- inputs = {
- "prompt": "a photograph of an astronaut riding a horse",
- "latents": latents,
- "generator": generator,
- "num_inference_steps": 50,
- "guidance_scale": 7.5,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_2_0_default_ddim(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base").to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_2_text2img/stable_diffusion_2_0_base_ddim.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_2_1_default_pndm(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_pndm.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_ddim(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device)
- sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_ddim.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_lms(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device)
- sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_lms.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_euler(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device)
- sd_pipe.scheduler = EulerDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_euler.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_dpm(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(torch_device)
- sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- inputs["num_inference_steps"] = 25
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_2_text2img/stable_diffusion_2_1_base_dpm_multi.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco.py
deleted file mode 100644
index 762c72be00b94445897adb8b49420628fec9c33b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco.py
+++ /dev/null
@@ -1,37 +0,0 @@
-_base_ = './faster_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(
- norm_cfg=dict(requires_grad=False), norm_eval=True, style='caffe'))
-# use caffe img_norm
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
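-# Caffe-style pretrained backbones expect BGR input with per-channel mean
-# subtraction only, hence std=[1.0, 1.0, 1.0] and to_rgb=False.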
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_1x_coco.py
deleted file mode 100644
index 1c124328286c659d800d2c44a2c4e4fee15f26e5..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_1x_coco.py
+++ /dev/null
@@ -1,58 +0,0 @@
-_base_ = './mask_rcnn_r101_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnext101_32x8d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=8,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False),
- style='pytorch'))
-
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675],
- std=[57.375, 57.120, 58.395],
- to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='LoadAnnotations',
- with_bbox=True,
- with_mask=True,
- poly2mask=False),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
- (1333, 768), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
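-# With multiscale_mode='value', each training image picks one scale uniformly
-# from the list above (short side 640-800, long side capped at 1333) rather
-# than sampling from a continuous range.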
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py
deleted file mode 100644
index d665dfff83855e6db3866c681559ccdef09f9999..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule, Linear, constant_init, xavier_init
-from mmcv.runner import auto_fp16
-
-from mmdet.models.builder import HEADS
-from .fcn_mask_head import FCNMaskHead
-
-
-@HEADS.register_module()
-class CoarseMaskHead(FCNMaskHead):
- """Coarse mask head used in PointRend.
-
-    Compared with the standard ``FCNMaskHead``, ``CoarseMaskHead`` downsamples
-    the input feature map instead of upsampling it.
-
- Args:
- num_convs (int): Number of conv layers in the head. Default: 0.
- num_fcs (int): Number of fc layers in the head. Default: 2.
- fc_out_channels (int): Number of output channels of fc layer.
- Default: 1024.
- downsample_factor (int): The factor that feature map is downsampled by.
- Default: 2.
- """
-
- def __init__(self,
- num_convs=0,
- num_fcs=2,
- fc_out_channels=1024,
- downsample_factor=2,
-                 *args,
-                 **kwargs):
-        super(CoarseMaskHead, self).__init__(
-            *args, num_convs=num_convs, upsample_cfg=dict(type=None), **kwargs)
- self.num_fcs = num_fcs
- assert self.num_fcs > 0
- self.fc_out_channels = fc_out_channels
- self.downsample_factor = downsample_factor
- assert self.downsample_factor >= 1
-        # remove conv_logits
- delattr(self, 'conv_logits')
-
- if downsample_factor > 1:
- downsample_in_channels = (
- self.conv_out_channels
- if self.num_convs > 0 else self.in_channels)
- self.downsample_conv = ConvModule(
- downsample_in_channels,
- self.conv_out_channels,
- kernel_size=downsample_factor,
- stride=downsample_factor,
- padding=0,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
- else:
- self.downsample_conv = None
-
- self.output_size = (self.roi_feat_size[0] // downsample_factor,
- self.roi_feat_size[1] // downsample_factor)
- self.output_area = self.output_size[0] * self.output_size[1]
-
- last_layer_dim = self.conv_out_channels * self.output_area
-
- self.fcs = nn.ModuleList()
- for i in range(num_fcs):
- fc_in_channels = (
- last_layer_dim if i == 0 else self.fc_out_channels)
- self.fcs.append(Linear(fc_in_channels, self.fc_out_channels))
- last_layer_dim = self.fc_out_channels
- output_channels = self.num_classes * self.output_area
- self.fc_logits = Linear(last_layer_dim, output_channels)
-
- def init_weights(self):
- for m in self.fcs.modules():
- if isinstance(m, nn.Linear):
- xavier_init(m)
- constant_init(self.fc_logits, 0.001)
-
- @auto_fp16()
- def forward(self, x):
- for conv in self.convs:
- x = conv(x)
-
- if self.downsample_conv is not None:
- x = self.downsample_conv(x)
-
- x = x.flatten(1)
- for fc in self.fcs:
- x = self.relu(fc(x))
- mask_pred = self.fc_logits(x).view(
- x.size(0), self.num_classes, *self.output_size)
- return mask_pred
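-
-# Shape sketch (hypothetical values): with roi_feat_size=(14, 14),
-# downsample_factor=2, in_channels=256 and conv_out_channels=256, an input of
-# (N, 256, 14, 14) is downsampled to (N, 256, 7, 7), flattened to
-# (N, 256 * 49), passed through the fc stack, and fc_logits is reshaped back
-# to (N, num_classes, 7, 7).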
diff --git a/spaces/Andy1621/uniformer_image_detection/tools/dataset_converters/pascal_voc.py b/spaces/Andy1621/uniformer_image_detection/tools/dataset_converters/pascal_voc.py
deleted file mode 100644
index f109307c3d9bf461fa7e8d29fe2333413534f0d4..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/tools/dataset_converters/pascal_voc.py
+++ /dev/null
@@ -1,236 +0,0 @@
-import argparse
-import os.path as osp
-import xml.etree.ElementTree as ET
-
-import mmcv
-import numpy as np
-
-from mmdet.core import voc_classes
-
-label_ids = {name: i for i, name in enumerate(voc_classes())}
-
-
-def parse_xml(args):
- xml_path, img_path = args
- tree = ET.parse(xml_path)
- root = tree.getroot()
- size = root.find('size')
- w = int(size.find('width').text)
- h = int(size.find('height').text)
- bboxes = []
- labels = []
- bboxes_ignore = []
- labels_ignore = []
- for obj in root.findall('object'):
- name = obj.find('name').text
- label = label_ids[name]
- difficult = int(obj.find('difficult').text)
- bnd_box = obj.find('bndbox')
- bbox = [
- int(bnd_box.find('xmin').text),
- int(bnd_box.find('ymin').text),
- int(bnd_box.find('xmax').text),
- int(bnd_box.find('ymax').text)
- ]
- if difficult:
- bboxes_ignore.append(bbox)
- labels_ignore.append(label)
- else:
- bboxes.append(bbox)
- labels.append(label)
- if not bboxes:
- bboxes = np.zeros((0, 4))
- labels = np.zeros((0, ))
- else:
- bboxes = np.array(bboxes, ndmin=2) - 1
- labels = np.array(labels)
- if not bboxes_ignore:
- bboxes_ignore = np.zeros((0, 4))
- labels_ignore = np.zeros((0, ))
- else:
- bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1
- labels_ignore = np.array(labels_ignore)
- annotation = {
- 'filename': img_path,
- 'width': w,
- 'height': h,
- 'ann': {
- 'bboxes': bboxes.astype(np.float32),
- 'labels': labels.astype(np.int64),
- 'bboxes_ignore': bboxes_ignore.astype(np.float32),
- 'labels_ignore': labels_ignore.astype(np.int64)
- }
- }
- return annotation
-
-
-def cvt_annotations(devkit_path, years, split, out_file):
- if not isinstance(years, list):
- years = [years]
- annotations = []
- for year in years:
- filelist = osp.join(devkit_path,
- f'VOC{year}/ImageSets/Main/{split}.txt')
- if not osp.isfile(filelist):
- print(f'filelist does not exist: {filelist}, '
- f'skip voc{year} {split}')
- return
- img_names = mmcv.list_from_file(filelist)
- xml_paths = [
- osp.join(devkit_path, f'VOC{year}/Annotations/{img_name}.xml')
- for img_name in img_names
- ]
- img_paths = [
- f'VOC{year}/JPEGImages/{img_name}.jpg' for img_name in img_names
- ]
- part_annotations = mmcv.track_progress(parse_xml,
- list(zip(xml_paths, img_paths)))
- annotations.extend(part_annotations)
- if out_file.endswith('json'):
- annotations = cvt_to_coco_json(annotations)
- mmcv.dump(annotations, out_file)
- return annotations
-
-
-def cvt_to_coco_json(annotations):
- image_id = 0
- annotation_id = 0
- coco = dict()
- coco['images'] = []
- coco['type'] = 'instance'
- coco['categories'] = []
- coco['annotations'] = []
- image_set = set()
-
- def addAnnItem(annotation_id, image_id, category_id, bbox, difficult_flag):
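-        # Worked example: a VOC box [xmin, ymin, xmax, ymax] = [10, 20, 50, 80]
-        # becomes xywh = [10, 20, 40, 60] with area 40 * 60 = 2400; the box is
-        # also stored as a 4-point rectangle polygon in `segmentation`.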
- annotation_item = dict()
- annotation_item['segmentation'] = []
-
- seg = []
- # bbox[] is x1,y1,x2,y2
- # left_top
- seg.append(int(bbox[0]))
- seg.append(int(bbox[1]))
- # left_bottom
- seg.append(int(bbox[0]))
- seg.append(int(bbox[3]))
- # right_bottom
- seg.append(int(bbox[2]))
- seg.append(int(bbox[3]))
- # right_top
- seg.append(int(bbox[2]))
- seg.append(int(bbox[1]))
-
- annotation_item['segmentation'].append(seg)
-
- xywh = np.array(
- [bbox[0], bbox[1], bbox[2] - bbox[0], bbox[3] - bbox[1]])
- annotation_item['area'] = int(xywh[2] * xywh[3])
- if difficult_flag == 1:
- annotation_item['ignore'] = 0
- annotation_item['iscrowd'] = 1
- else:
- annotation_item['ignore'] = 0
- annotation_item['iscrowd'] = 0
- annotation_item['image_id'] = int(image_id)
- annotation_item['bbox'] = xywh.astype(int).tolist()
- annotation_item['category_id'] = int(category_id)
- annotation_item['id'] = int(annotation_id)
- coco['annotations'].append(annotation_item)
- return annotation_id + 1
-
- for category_id, name in enumerate(voc_classes()):
- category_item = dict()
- category_item['supercategory'] = str('none')
- category_item['id'] = int(category_id)
- category_item['name'] = str(name)
- coco['categories'].append(category_item)
-
- for ann_dict in annotations:
- file_name = ann_dict['filename']
- ann = ann_dict['ann']
- assert file_name not in image_set
- image_item = dict()
- image_item['id'] = int(image_id)
- image_item['file_name'] = str(file_name)
- image_item['height'] = int(ann_dict['height'])
- image_item['width'] = int(ann_dict['width'])
- coco['images'].append(image_item)
- image_set.add(file_name)
-
- bboxes = ann['bboxes'][:, :4]
- labels = ann['labels']
- for bbox_id in range(len(bboxes)):
- bbox = bboxes[bbox_id]
- label = labels[bbox_id]
- annotation_id = addAnnItem(
- annotation_id, image_id, label, bbox, difficult_flag=0)
-
- bboxes_ignore = ann['bboxes_ignore'][:, :4]
- labels_ignore = ann['labels_ignore']
- for bbox_id in range(len(bboxes_ignore)):
- bbox = bboxes_ignore[bbox_id]
- label = labels_ignore[bbox_id]
- annotation_id = addAnnItem(
- annotation_id, image_id, label, bbox, difficult_flag=1)
-
- image_id += 1
-
- return coco
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Convert PASCAL VOC annotations to mmdetection format')
- parser.add_argument('devkit_path', help='pascal voc devkit path')
- parser.add_argument('-o', '--out-dir', help='output path')
- parser.add_argument(
- '--out-format',
- default='pkl',
- choices=('pkl', 'coco'),
- help='output format, "coco" indicates coco annotation format')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
- devkit_path = args.devkit_path
- out_dir = args.out_dir if args.out_dir else devkit_path
- mmcv.mkdir_or_exist(out_dir)
-
- years = []
- if osp.isdir(osp.join(devkit_path, 'VOC2007')):
- years.append('2007')
- if osp.isdir(osp.join(devkit_path, 'VOC2012')):
- years.append('2012')
- if '2007' in years and '2012' in years:
- years.append(['2007', '2012'])
- if not years:
-        raise IOError(f'The devkit path {devkit_path} contains neither '
-                      'a "VOC2007" nor a "VOC2012" subfolder')
- out_fmt = f'.{args.out_format}'
- if args.out_format == 'coco':
- out_fmt = '.json'
- for year in years:
- if year == '2007':
- prefix = 'voc07'
- elif year == '2012':
- prefix = 'voc12'
- elif year == ['2007', '2012']:
- prefix = 'voc0712'
- for split in ['train', 'val', 'trainval']:
- dataset_name = prefix + '_' + split
- print(f'processing {dataset_name} ...')
- cvt_annotations(devkit_path, year, split,
- osp.join(out_dir, dataset_name + out_fmt))
- if not isinstance(year, list):
- dataset_name = prefix + '_test'
- print(f'processing {dataset_name} ...')
- cvt_annotations(devkit_path, year, 'test',
- osp.join(out_dir, dataset_name + out_fmt))
- print('Done!')
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x512_160k_ade20k.py
deleted file mode 100644
index f7821c559d2f92d23b28e07e040a54cfc425eefc..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/apcnet_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 7a1e66cf1c239eac3c6a4876a35d82e7b6ccec2e..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './nonlocal_r50-d8_512x1024_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes.py
deleted file mode 100644
index 2c73b3839c8c1bc859eb3b8864256a00cfd022fe..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/ocrnet_hr18.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
diff --git a/spaces/AnimeStudio/anime-models/README.md b/spaces/AnimeStudio/anime-models/README.md
deleted file mode 100644
index 5bd79f1f137204e77aaebfb8b3fc111fb0e7236f..0000000000000000000000000000000000000000
--- a/spaces/AnimeStudio/anime-models/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Maximum Multiplier
-emoji: ๐๐
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
-duplicated_from: blueorigin6/stablediffusion-models
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/lr_updater.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/lr_updater.py
deleted file mode 100644
index 6365908ddf6070086de2ffc0afada46ed2f32256..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/lr_updater.py
+++ /dev/null
@@ -1,670 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-from math import cos, pi
-
-import annotator.uniformer.mmcv as mmcv
-from .hook import HOOKS, Hook
-
-
-class LrUpdaterHook(Hook):
- """LR Scheduler in MMCV.
-
- Args:
- by_epoch (bool): LR changes epoch by epoch
-        warmup (str): Type of warmup used. It can be None (use no warmup),
-            'constant', 'linear' or 'exp'.
- warmup_iters (int): The number of iterations or epochs that warmup
- lasts
- warmup_ratio (float): LR used at the beginning of warmup equals to
- warmup_ratio * initial_lr
-        warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters
-            means the number of epochs that warmup lasts, otherwise means the
-            number of iterations that warmup lasts.
- """
-
- def __init__(self,
- by_epoch=True,
- warmup=None,
- warmup_iters=0,
- warmup_ratio=0.1,
- warmup_by_epoch=False):
- # validate the "warmup" argument
- if warmup is not None:
- if warmup not in ['constant', 'linear', 'exp']:
-                raise ValueError(
-                    f'"{warmup}" is not a supported type for warming up, valid'
-                    ' types are "constant", "linear" and "exp"')
- if warmup is not None:
- assert warmup_iters > 0, \
- '"warmup_iters" must be a positive integer'
- assert 0 < warmup_ratio <= 1.0, \
- '"warmup_ratio" must be in range (0,1]'
-
- self.by_epoch = by_epoch
- self.warmup = warmup
- self.warmup_iters = warmup_iters
- self.warmup_ratio = warmup_ratio
- self.warmup_by_epoch = warmup_by_epoch
-
- if self.warmup_by_epoch:
- self.warmup_epochs = self.warmup_iters
- self.warmup_iters = None
- else:
- self.warmup_epochs = None
-
- self.base_lr = [] # initial lr for all param groups
- self.regular_lr = [] # expected lr if no warming up is performed
-
- def _set_lr(self, runner, lr_groups):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- for param_group, lr in zip(optim.param_groups, lr_groups[k]):
- param_group['lr'] = lr
- else:
- for param_group, lr in zip(runner.optimizer.param_groups,
- lr_groups):
- param_group['lr'] = lr
-
- def get_lr(self, runner, base_lr):
- raise NotImplementedError
-
- def get_regular_lr(self, runner):
- if isinstance(runner.optimizer, dict):
- lr_groups = {}
- for k in runner.optimizer.keys():
- _lr_group = [
- self.get_lr(runner, _base_lr)
- for _base_lr in self.base_lr[k]
- ]
- lr_groups.update({k: _lr_group})
-
- return lr_groups
- else:
- return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr]
-
- def get_warmup_lr(self, cur_iters):
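-        # Worked check (linear warmup): with warmup_iters=500 and
-        # warmup_ratio=0.1, cur_iters=0 gives k = (1 - 0) * (1 - 0.1) = 0.9,
-        # so the warmup LR is 0.1 * regular_lr; at cur_iters=500, k = 0 and
-        # the warmup LR equals regular_lr.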
-
- def _get_warmup_lr(cur_iters, regular_lr):
- if self.warmup == 'constant':
- warmup_lr = [_lr * self.warmup_ratio for _lr in regular_lr]
- elif self.warmup == 'linear':
- k = (1 - cur_iters / self.warmup_iters) * (1 -
- self.warmup_ratio)
- warmup_lr = [_lr * (1 - k) for _lr in regular_lr]
- elif self.warmup == 'exp':
- k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters)
- warmup_lr = [_lr * k for _lr in regular_lr]
- return warmup_lr
-
- if isinstance(self.regular_lr, dict):
- lr_groups = {}
- for key, regular_lr in self.regular_lr.items():
- lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr)
- return lr_groups
- else:
- return _get_warmup_lr(cur_iters, self.regular_lr)
-
- def before_run(self, runner):
- # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved,
- # it will be set according to the optimizer params
- if isinstance(runner.optimizer, dict):
- self.base_lr = {}
- for k, optim in runner.optimizer.items():
- for group in optim.param_groups:
- group.setdefault('initial_lr', group['lr'])
- _base_lr = [
- group['initial_lr'] for group in optim.param_groups
- ]
- self.base_lr.update({k: _base_lr})
- else:
- for group in runner.optimizer.param_groups:
- group.setdefault('initial_lr', group['lr'])
- self.base_lr = [
- group['initial_lr'] for group in runner.optimizer.param_groups
- ]
-
- def before_train_epoch(self, runner):
- if self.warmup_iters is None:
- epoch_len = len(runner.data_loader)
- self.warmup_iters = self.warmup_epochs * epoch_len
-
- if not self.by_epoch:
- return
-
- self.regular_lr = self.get_regular_lr(runner)
- self._set_lr(runner, self.regular_lr)
-
- def before_train_iter(self, runner):
- cur_iter = runner.iter
- if not self.by_epoch:
- self.regular_lr = self.get_regular_lr(runner)
- if self.warmup is None or cur_iter >= self.warmup_iters:
- self._set_lr(runner, self.regular_lr)
- else:
- warmup_lr = self.get_warmup_lr(cur_iter)
- self._set_lr(runner, warmup_lr)
- elif self.by_epoch:
- if self.warmup is None or cur_iter > self.warmup_iters:
- return
- elif cur_iter == self.warmup_iters:
- self._set_lr(runner, self.regular_lr)
- else:
- warmup_lr = self.get_warmup_lr(cur_iter)
- self._set_lr(runner, warmup_lr)
-
-
-@HOOKS.register_module()
-class FixedLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, **kwargs):
- super(FixedLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- return base_lr
-
-
-@HOOKS.register_module()
-class StepLrUpdaterHook(LrUpdaterHook):
- """Step LR scheduler with min_lr clipping.
-
- Args:
- step (int | list[int]): Step to decay the LR. If an int value is given,
- regard it as the decay interval. If a list is given, decay LR at
- these steps.
- gamma (float, optional): Decay LR ratio. Default: 0.1.
- min_lr (float, optional): Minimum LR value to keep. If LR after decay
- is lower than `min_lr`, it will be clipped to this value. If None
- is given, we don't perform lr clipping. Default: None.
- """
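-
-    # Hypothetical config sketch (mmcv's `lr_config` convention):
-    #   lr_config = dict(policy='step', step=[8, 11], gamma=0.1, min_lr=1e-6)
-    # With base_lr=0.02 this gives 0.02 for epochs [0, 8), 0.002 for [8, 11),
-    # and 2e-4 from epoch 11 on, clipped at min_lr if decay ever goes lower.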
-
- def __init__(self, step, gamma=0.1, min_lr=None, **kwargs):
- if isinstance(step, list):
- assert mmcv.is_list_of(step, int)
- assert all([s > 0 for s in step])
- elif isinstance(step, int):
- assert step > 0
- else:
- raise TypeError('"step" must be a list or integer')
- self.step = step
- self.gamma = gamma
- self.min_lr = min_lr
- super(StepLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
-
- # calculate exponential term
- if isinstance(self.step, int):
- exp = progress // self.step
- else:
- exp = len(self.step)
- for i, s in enumerate(self.step):
- if progress < s:
- exp = i
- break
-
- lr = base_lr * (self.gamma**exp)
- if self.min_lr is not None:
- # clip to a minimum value
- lr = max(lr, self.min_lr)
- return lr
-
-
-@HOOKS.register_module()
-class ExpLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, gamma, **kwargs):
- self.gamma = gamma
- super(ExpLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
- return base_lr * self.gamma**progress
-
-
-@HOOKS.register_module()
-class PolyLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, power=1., min_lr=0., **kwargs):
- self.power = power
- self.min_lr = min_lr
- super(PolyLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
- coeff = (1 - progress / max_progress)**self.power
- return (base_lr - self.min_lr) * coeff + self.min_lr
-
-
-@HOOKS.register_module()
-class InvLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, gamma, power=1., **kwargs):
- self.gamma = gamma
- self.power = power
- super(InvLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
- return base_lr * (1 + self.gamma * progress)**(-self.power)
-
-
-@HOOKS.register_module()
-class CosineAnnealingLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, min_lr=None, min_lr_ratio=None, **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- super(CosineAnnealingLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
- return annealing_cos(base_lr, target_lr, progress / max_progress)
-
-
-@HOOKS.register_module()
-class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook):
- """Flat + Cosine lr schedule.
-
- Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501
-
- Args:
-        start_percent (float): When to start annealing the learning rate,
-            as a fraction of the total training steps.
-            The value should be in range [0, 1).
- Default: 0.75
- min_lr (float, optional): The minimum lr. Default: None.
- min_lr_ratio (float, optional): The ratio of minimum lr to the base lr.
- Either `min_lr` or `min_lr_ratio` should be specified.
- Default: None.
- """
-
- def __init__(self,
- start_percent=0.75,
- min_lr=None,
- min_lr_ratio=None,
- **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- if start_percent < 0 or start_percent > 1 or not isinstance(
- start_percent, float):
- raise ValueError(
-                'expected a float between 0 and 1 for start_percent, but '
- f'got {start_percent}')
- self.start_percent = start_percent
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- super(FlatCosineAnnealingLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- start = round(runner.max_epochs * self.start_percent)
- progress = runner.epoch - start
- max_progress = runner.max_epochs - start
- else:
- start = round(runner.max_iters * self.start_percent)
- progress = runner.iter - start
- max_progress = runner.max_iters - start
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
-
- if progress < 0:
- return base_lr
- else:
- return annealing_cos(base_lr, target_lr, progress / max_progress)
-
-
-@HOOKS.register_module()
-class CosineRestartLrUpdaterHook(LrUpdaterHook):
- """Cosine annealing with restarts learning rate scheme.
-
- Args:
-        periods (list[int]): Periods for each cosine annealing cycle.
- restart_weights (list[float], optional): Restart weights at each
- restart iteration. Default: [1].
- min_lr (float, optional): The minimum lr. Default: None.
- min_lr_ratio (float, optional): The ratio of minimum lr to the base lr.
- Either `min_lr` or `min_lr_ratio` should be specified.
- Default: None.
- """
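-
-    # Illustrative numbers: with periods=[10, 10] and restart_weights=[1, 0.5],
-    # cumulative_periods becomes [10, 20]; at progress=15 the scheduler is in
-    # the second cycle (idx=1, nearest_restart=10, alpha=0.5) and anneals
-    # toward the target LR with restart weight 0.5.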
-
- def __init__(self,
- periods,
- restart_weights=[1],
- min_lr=None,
- min_lr_ratio=None,
- **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- self.periods = periods
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- self.restart_weights = restart_weights
- assert (len(self.periods) == len(self.restart_weights)
- ), 'periods and restart_weights should have the same length.'
- super(CosineRestartLrUpdaterHook, self).__init__(**kwargs)
-
- self.cumulative_periods = [
- sum(self.periods[0:i + 1]) for i in range(0, len(self.periods))
- ]
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- else:
- progress = runner.iter
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
-
- idx = get_position_from_periods(progress, self.cumulative_periods)
- current_weight = self.restart_weights[idx]
- nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1]
- current_periods = self.periods[idx]
-
- alpha = min((progress - nearest_restart) / current_periods, 1)
- return annealing_cos(base_lr, target_lr, alpha, current_weight)
-
-
-def get_position_from_periods(iteration, cumulative_periods):
- """Get the position from a period list.
-
- It will return the index of the right-closest number in the period list.
- For example, the cumulative_periods = [100, 200, 300, 400],
- if iteration == 50, return 0;
- if iteration == 210, return 2;
- if iteration == 300, return 3.
-
- Args:
- iteration (int): Current iteration.
- cumulative_periods (list[int]): Cumulative period list.
-
- Returns:
- int: The position of the right-closest number in the period list.
- """
- for i, period in enumerate(cumulative_periods):
- if iteration < period:
- return i
- raise ValueError(f'Current iteration {iteration} exceeds '
- f'cumulative_periods {cumulative_periods}')
-
-
-@HOOKS.register_module()
-class CyclicLrUpdaterHook(LrUpdaterHook):
- """Cyclic LR Scheduler.
-
- Implement the cyclical learning rate policy (CLR) described in
- https://arxiv.org/pdf/1506.01186.pdf
-
- Different from the original paper, we use cosine annealing rather than
- triangular policy inside a cycle. This improves the performance in the
- 3D detection area.
-
- Args:
- by_epoch (bool): Whether to update LR by epoch.
- target_ratio (tuple[float]): Relative ratio of the highest LR and the
- lowest LR to the initial LR.
- cyclic_times (int): Number of cycles during training
- step_ratio_up (float): The ratio of the increasing process of LR in
- the total cycle.
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing. Default: 'cos'.
- """
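-
-    # Illustrative numbers: with runner.max_iters=1000, cyclic_times=1,
-    # step_ratio_up=0.4 and target_ratio=(10, 1e-4), the LR anneals from
-    # base_lr up to 10 * base_lr over the first 400 iters of the cycle, then
-    # back down to 1e-4 * base_lr over the remaining 600.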
-
- def __init__(self,
- by_epoch=False,
- target_ratio=(10, 1e-4),
- cyclic_times=1,
- step_ratio_up=0.4,
- anneal_strategy='cos',
- **kwargs):
- if isinstance(target_ratio, float):
- target_ratio = (target_ratio, target_ratio / 1e5)
- elif isinstance(target_ratio, tuple):
- target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \
- if len(target_ratio) == 1 else target_ratio
- else:
- raise ValueError('target_ratio should be either float '
- f'or tuple, got {type(target_ratio)}')
-
- assert len(target_ratio) == 2, \
- '"target_ratio" must be list or tuple of two floats'
- assert 0 <= step_ratio_up < 1.0, \
- '"step_ratio_up" must be in range [0,1)'
-
- self.target_ratio = target_ratio
- self.cyclic_times = cyclic_times
- self.step_ratio_up = step_ratio_up
- self.lr_phases = [] # init lr_phases
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
-
- assert not by_epoch, \
- 'currently only support "by_epoch" = False'
- super(CyclicLrUpdaterHook, self).__init__(by_epoch, **kwargs)
-
- def before_run(self, runner):
- super(CyclicLrUpdaterHook, self).before_run(runner)
- # initiate lr_phases
- # total lr_phases are separated as up and down
- max_iter_per_phase = runner.max_iters // self.cyclic_times
- iter_up_phase = int(self.step_ratio_up * max_iter_per_phase)
- self.lr_phases.append(
- [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]])
- self.lr_phases.append([
- iter_up_phase, max_iter_per_phase, max_iter_per_phase,
- self.target_ratio[0], self.target_ratio[1]
- ])
-
- def get_lr(self, runner, base_lr):
- curr_iter = runner.iter
- for (start_iter, end_iter, max_iter_per_phase, start_ratio,
- end_ratio) in self.lr_phases:
- curr_iter %= max_iter_per_phase
- if start_iter <= curr_iter < end_iter:
- progress = curr_iter - start_iter
- return self.anneal_func(base_lr * start_ratio,
- base_lr * end_ratio,
- progress / (end_iter - start_iter))
-
-
-@HOOKS.register_module()
-class OneCycleLrUpdaterHook(LrUpdaterHook):
- """One Cycle LR Scheduler.
-
- The 1cycle learning rate policy changes the learning rate after every
- batch. The one cycle learning rate policy is described in
- https://arxiv.org/pdf/1708.07120.pdf
-
- Args:
- max_lr (float or list): Upper learning rate boundaries in the cycle
- for each parameter group.
- total_steps (int, optional): The total number of steps in the cycle.
-            Note that if a value is not provided here, it will be the max_iters
-            of the runner. Default: None.
- pct_start (float): The percentage of the cycle (in number of steps)
- spent increasing the learning rate.
- Default: 0.3
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing.
- Default: 'cos'
- div_factor (float): Determines the initial learning rate via
- initial_lr = max_lr/div_factor
- Default: 25
- final_div_factor (float): Determines the minimum learning rate via
- min_lr = initial_lr/final_div_factor
- Default: 1e4
- three_phase (bool): If three_phase is True, use a third phase of the
- schedule to annihilate the learning rate according to
- final_div_factor instead of modifying the second phase (the first
- two phases will be symmetrical about the step indicated by
- pct_start).
- Default: False
- """
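-
-    # Illustrative numbers (two-phase): with max_lr=0.01, div_factor=25,
-    # final_div_factor=1e4, pct_start=0.3 and total_steps=1000, the LR starts
-    # at 0.01 / 25 = 4e-4, peaks at 0.01 around iter 299, then anneals down
-    # to 4e-4 / 1e4 = 4e-8 by the final iter.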
-
- def __init__(self,
- max_lr,
- total_steps=None,
- pct_start=0.3,
- anneal_strategy='cos',
- div_factor=25,
- final_div_factor=1e4,
- three_phase=False,
- **kwargs):
- # validate by_epoch, currently only support by_epoch = False
- if 'by_epoch' not in kwargs:
- kwargs['by_epoch'] = False
- else:
- assert not kwargs['by_epoch'], \
- 'currently only support "by_epoch" = False'
-        if not isinstance(max_lr, (numbers.Number, list, dict)):
-            raise ValueError('the type of max_lr must be number, list or '
-                             f'dict, but got {type(max_lr)}')
- self._max_lr = max_lr
- if total_steps is not None:
-            if not isinstance(total_steps, int):
-                raise ValueError('the type of total_steps must be int, but '
-                                 f'got {type(total_steps)}')
- self.total_steps = total_steps
- # validate pct_start
-        if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float):
-            raise ValueError('expected a float between 0 and 1 for pct_start, '
-                             f'but got {pct_start}')
- self.pct_start = pct_start
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
- self.div_factor = div_factor
- self.final_div_factor = final_div_factor
- self.three_phase = three_phase
- self.lr_phases = [] # init lr_phases
- super(OneCycleLrUpdaterHook, self).__init__(**kwargs)
-
- def before_run(self, runner):
- if hasattr(self, 'total_steps'):
- total_steps = self.total_steps
- else:
- total_steps = runner.max_iters
- if total_steps < runner.max_iters:
- raise ValueError(
- 'The total steps must be greater than or equal to max '
- f'iterations {runner.max_iters} of runner, but total steps '
- f'is {total_steps}.')
-
- if isinstance(runner.optimizer, dict):
- self.base_lr = {}
- for k, optim in runner.optimizer.items():
- _max_lr = format_param(k, optim, self._max_lr)
- self.base_lr[k] = [lr / self.div_factor for lr in _max_lr]
- for group, lr in zip(optim.param_groups, self.base_lr[k]):
- group.setdefault('initial_lr', lr)
- else:
- k = type(runner.optimizer).__name__
- _max_lr = format_param(k, runner.optimizer, self._max_lr)
- self.base_lr = [lr / self.div_factor for lr in _max_lr]
- for group, lr in zip(runner.optimizer.param_groups, self.base_lr):
- group.setdefault('initial_lr', lr)
-
- if self.three_phase:
- self.lr_phases.append(
- [float(self.pct_start * total_steps) - 1, 1, self.div_factor])
- self.lr_phases.append([
- float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1
- ])
- self.lr_phases.append(
- [total_steps - 1, 1, 1 / self.final_div_factor])
- else:
- self.lr_phases.append(
- [float(self.pct_start * total_steps) - 1, 1, self.div_factor])
- self.lr_phases.append(
- [total_steps - 1, self.div_factor, 1 / self.final_div_factor])
-
- def get_lr(self, runner, base_lr):
- curr_iter = runner.iter
- start_iter = 0
- for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases):
- if curr_iter <= end_iter:
- pct = (curr_iter - start_iter) / (end_iter - start_iter)
- lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr,
- pct)
- break
- start_iter = end_iter
- return lr
-
-
-def annealing_cos(start, end, factor, weight=1):
- """Calculate annealing cos learning rate.
-
- Cosine anneal from `weight * start + (1 - weight) * end` to `end` as
- percentage goes from 0.0 to 1.0.
-
- Args:
- start (float): The starting learning rate of the cosine annealing.
-        end (float): The ending learning rate of the cosine annealing.
- factor (float): The coefficient of `pi` when calculating the current
- percentage. Range from 0.0 to 1.0.
- weight (float, optional): The combination factor of `start` and `end`
- when calculating the actual starting learning rate. Default to 1.
- """
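-    # Worked check with weight=1: factor=0 gives cos_out=2 and returns
-    # `start`; factor=1 gives cos_out=0 and returns `end`; factor=0.5 gives
-    # cos_out=1 and returns the midpoint (start + end) / 2.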
- cos_out = cos(pi * factor) + 1
- return end + 0.5 * weight * (start - end) * cos_out
-
-
-def annealing_linear(start, end, factor):
- """Calculate annealing linear learning rate.
-
- Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0.
-
- Args:
-        start (float): The starting learning rate of the linear annealing.
-        end (float): The ending learning rate of the linear annealing.
-        factor (float): The annealing percentage, i.e. how far the anneal has
-            progressed. Range from 0.0 to 1.0.
- """
- return start + (end - start) * factor
-
-
-def format_param(name, optim, param):
- if isinstance(param, numbers.Number):
- return [param] * len(optim.param_groups)
- elif isinstance(param, (list, tuple)): # multi param groups
- if len(param) != len(optim.param_groups):
- raise ValueError(f'expected {len(optim.param_groups)} '
- f'values for {name}, got {len(param)}')
- return param
- else: # multi optimizers
- if name not in param:
- raise KeyError(f'{name} is not found in {param.keys()}')
- return param[name]
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/log_buffer.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/log_buffer.py
deleted file mode 100644
index d949e2941c5400088c7cd8a1dc893d8b233ae785..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/log_buffer.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from collections import OrderedDict
-
-import numpy as np
-
-
-class LogBuffer:
-
- def __init__(self):
- self.val_history = OrderedDict()
- self.n_history = OrderedDict()
- self.output = OrderedDict()
- self.ready = False
-
- def clear(self):
- self.val_history.clear()
- self.n_history.clear()
- self.clear_output()
-
- def clear_output(self):
- self.output.clear()
- self.ready = False
-
- def update(self, vars, count=1):
- assert isinstance(vars, dict)
- for key, var in vars.items():
- if key not in self.val_history:
- self.val_history[key] = []
- self.n_history[key] = []
- self.val_history[key].append(var)
- self.n_history[key].append(count)
-
- def average(self, n=0):
- """Average latest n values or all values."""
- assert n >= 0
- for key in self.val_history:
- values = np.array(self.val_history[key][-n:])
- nums = np.array(self.n_history[key][-n:])
- avg = np.sum(values * nums) / np.sum(nums)
- self.output[key] = avg
- self.ready = True
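-
-
-# Minimal usage sketch (illustrative values): `update` records per-iteration
-# values with their sample counts, and `average` computes the count-weighted
-# mean over the latest n entries (or all of them for n=0):
-#   buf = LogBuffer()
-#   buf.update({'loss': 0.5}, count=2)
-#   buf.update({'loss': 1.0}, count=2)
-#   buf.average()  # buf.output['loss'] == 0.75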
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/__init__.py
deleted file mode 100644
index ebeaef4a28ef655e43578552a8aef6b77f13a636..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from .ade import ADE20KDataset
-from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset
-from .chase_db1 import ChaseDB1Dataset
-from .cityscapes import CityscapesDataset
-from .custom import CustomDataset
-from .dataset_wrappers import ConcatDataset, RepeatDataset
-from .drive import DRIVEDataset
-from .hrf import HRFDataset
-from .pascal_context import PascalContextDataset, PascalContextDataset59
-from .stare import STAREDataset
-from .voc import PascalVOCDataset
-
-__all__ = [
- 'CustomDataset', 'build_dataloader', 'ConcatDataset', 'RepeatDataset',
- 'DATASETS', 'build_dataset', 'PIPELINES', 'CityscapesDataset',
- 'PascalVOCDataset', 'ADE20KDataset', 'PascalContextDataset',
- 'PascalContextDataset59', 'ChaseDB1Dataset', 'DRIVEDataset', 'HRFDataset',
- 'STAREDataset'
-]
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/install/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/install/__init__.py
deleted file mode 100644
index 24d6a5dd31fe33b03f90ed0f9ee465253686900c..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/install/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-"""For modules related to installing packages.
-"""
diff --git a/spaces/Audio-AGI/AudioSep/pipeline.py b/spaces/Audio-AGI/AudioSep/pipeline.py
deleted file mode 100644
index 25c58d8761a7e1c0d0eb53560c3ce10887ffe53d..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/AudioSep/pipeline.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import yaml
-from typing import Dict, List
-import torch
-import torch.nn as nn
-import numpy as np
-import librosa
-from scipy.io.wavfile import write
-from utils import ignore_warnings; ignore_warnings()
-from utils import parse_yaml, load_ss_model
-from models.clap_encoder import CLAP_Encoder
-
-
-def build_audiosep(config_yaml, checkpoint_path, device):
- configs = parse_yaml(config_yaml)
-
- query_encoder = CLAP_Encoder().eval()
- model = load_ss_model(
- configs=configs,
- checkpoint_path=checkpoint_path,
- query_encoder=query_encoder
- ).eval().to(device)
-
- print(f'Load AudioSep model from [{checkpoint_path}]')
- return model
-
-
-def inference(model, audio_file, text, output_file, device='cuda'):
- print(f'Separate audio from [{audio_file}] with textual query [{text}]')
- mixture, fs = librosa.load(audio_file, sr=32000, mono=True)
- with torch.no_grad():
- text = [text]
-
- conditions = model.query_encoder.get_query_embed(
- modality='text',
- text=text,
- device=device
- )
-
- input_dict = {
- "mixture": torch.Tensor(mixture)[None, None, :].to(device),
- "condition": conditions,
- }
-
- sep_segment = model.ss_model.chunk_inference(input_dict)
-
- write(output_file, 32000, np.round(sep_segment * 32767).astype(np.int16))
- print(f'Write separated audio to [{output_file}]')
-
-
-if __name__ == '__main__':
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- model = build_audiosep(
- config_yaml='config/audiosep_base.yaml',
- checkpoint_path='checkpoint/step=3920000.ckpt',
- device=device)
-
- audio_file = '/mnt/bn/data-xubo/project/AudioShop/YT_audios/Y3VHpLxtd498.wav'
- text = 'pigeons are cooing in the background'
-    output_file = 'separated_audio.wav'
-
- inference(model, audio_file, text, output_file, device)
diff --git a/spaces/BMukhtar/facemaskDetector/README.md b/spaces/BMukhtar/facemaskDetector/README.md
deleted file mode 100644
index cba085eb4c57d375fdae5ac67d6d1e86f1a50ba1..0000000000000000000000000000000000000000
--- a/spaces/BMukhtar/facemaskDetector/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: FacemaskDetector
-emoji: ๐ฆ
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Bart92/RVC_HF/infer/modules/ipex/__init__.py.py b/spaces/Bart92/RVC_HF/infer/modules/ipex/__init__.py.py
deleted file mode 100644
index 9f53b2d3f7025b2d71369dababa4e6f2a4affc48..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer/modules/ipex/__init__.py.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import os
-import sys
-import contextlib
-import torch
-import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
-from .hijacks import ipex_hijacks
-from .attention import attention_init
-
-# pylint: disable=protected-access, missing-function-docstring, line-too-long
-
-def ipex_init(): # pylint: disable=too-many-statements
- try:
- #Replace cuda with xpu:
- torch.cuda.current_device = torch.xpu.current_device
- torch.cuda.current_stream = torch.xpu.current_stream
- torch.cuda.device = torch.xpu.device
- torch.cuda.device_count = torch.xpu.device_count
- torch.cuda.device_of = torch.xpu.device_of
- torch.cuda.getDeviceIdListForCard = torch.xpu.getDeviceIdListForCard
- torch.cuda.get_device_name = torch.xpu.get_device_name
- torch.cuda.get_device_properties = torch.xpu.get_device_properties
- torch.cuda.init = torch.xpu.init
- torch.cuda.is_available = torch.xpu.is_available
- torch.cuda.is_initialized = torch.xpu.is_initialized
- torch.cuda.is_current_stream_capturing = lambda: False
- torch.cuda.set_device = torch.xpu.set_device
- torch.cuda.stream = torch.xpu.stream
- torch.cuda.synchronize = torch.xpu.synchronize
- torch.cuda.Event = torch.xpu.Event
- torch.cuda.Stream = torch.xpu.Stream
- torch.cuda.FloatTensor = torch.xpu.FloatTensor
- torch.Tensor.cuda = torch.Tensor.xpu
- torch.Tensor.is_cuda = torch.Tensor.is_xpu
- torch.cuda._initialization_lock = torch.xpu.lazy_init._initialization_lock
- torch.cuda._initialized = torch.xpu.lazy_init._initialized
- torch.cuda._lazy_seed_tracker = torch.xpu.lazy_init._lazy_seed_tracker
- torch.cuda._queued_calls = torch.xpu.lazy_init._queued_calls
- torch.cuda._tls = torch.xpu.lazy_init._tls
- torch.cuda.threading = torch.xpu.lazy_init.threading
- torch.cuda.traceback = torch.xpu.lazy_init.traceback
- torch.cuda.Optional = torch.xpu.Optional
- torch.cuda.__cached__ = torch.xpu.__cached__
- torch.cuda.__loader__ = torch.xpu.__loader__
- torch.cuda.ComplexFloatStorage = torch.xpu.ComplexFloatStorage
- torch.cuda.Tuple = torch.xpu.Tuple
- torch.cuda.streams = torch.xpu.streams
- torch.cuda._lazy_new = torch.xpu._lazy_new
- torch.cuda.FloatStorage = torch.xpu.FloatStorage
- torch.cuda.Any = torch.xpu.Any
- torch.cuda.__doc__ = torch.xpu.__doc__
- torch.cuda.default_generators = torch.xpu.default_generators
- torch.cuda.HalfTensor = torch.xpu.HalfTensor
- torch.cuda._get_device_index = torch.xpu._get_device_index
- torch.cuda.__path__ = torch.xpu.__path__
- torch.cuda.Device = torch.xpu.Device
- torch.cuda.IntTensor = torch.xpu.IntTensor
- torch.cuda.ByteStorage = torch.xpu.ByteStorage
- torch.cuda.set_stream = torch.xpu.set_stream
- torch.cuda.BoolStorage = torch.xpu.BoolStorage
- torch.cuda.os = torch.xpu.os
- torch.cuda.torch = torch.xpu.torch
- torch.cuda.BFloat16Storage = torch.xpu.BFloat16Storage
- torch.cuda.Union = torch.xpu.Union
- torch.cuda.DoubleTensor = torch.xpu.DoubleTensor
- torch.cuda.ShortTensor = torch.xpu.ShortTensor
- torch.cuda.LongTensor = torch.xpu.LongTensor
- torch.cuda.IntStorage = torch.xpu.IntStorage
- torch.cuda.LongStorage = torch.xpu.LongStorage
- torch.cuda.__annotations__ = torch.xpu.__annotations__
- torch.cuda.__package__ = torch.xpu.__package__
- torch.cuda.__builtins__ = torch.xpu.__builtins__
- torch.cuda.CharTensor = torch.xpu.CharTensor
- torch.cuda.List = torch.xpu.List
- torch.cuda._lazy_init = torch.xpu._lazy_init
- torch.cuda.BFloat16Tensor = torch.xpu.BFloat16Tensor
- torch.cuda.DoubleStorage = torch.xpu.DoubleStorage
- torch.cuda.ByteTensor = torch.xpu.ByteTensor
- torch.cuda.StreamContext = torch.xpu.StreamContext
- torch.cuda.ComplexDoubleStorage = torch.xpu.ComplexDoubleStorage
- torch.cuda.ShortStorage = torch.xpu.ShortStorage
- torch.cuda._lazy_call = torch.xpu._lazy_call
- torch.cuda.HalfStorage = torch.xpu.HalfStorage
- torch.cuda.random = torch.xpu.random
- torch.cuda._device = torch.xpu._device
- torch.cuda.classproperty = torch.xpu.classproperty
- torch.cuda.__name__ = torch.xpu.__name__
- torch.cuda._device_t = torch.xpu._device_t
- torch.cuda.warnings = torch.xpu.warnings
- torch.cuda.__spec__ = torch.xpu.__spec__
- torch.cuda.BoolTensor = torch.xpu.BoolTensor
- torch.cuda.CharStorage = torch.xpu.CharStorage
- torch.cuda.__file__ = torch.xpu.__file__
- torch.cuda._is_in_bad_fork = torch.xpu.lazy_init._is_in_bad_fork
- #torch.cuda.is_current_stream_capturing = torch.xpu.is_current_stream_capturing
-
- #Memory:
- torch.cuda.memory = torch.xpu.memory
- if 'linux' in sys.platform and "WSL2" in os.popen("uname -a").read():
- torch.xpu.empty_cache = lambda: None
- torch.cuda.empty_cache = torch.xpu.empty_cache
- torch.cuda.memory_stats = torch.xpu.memory_stats
- torch.cuda.memory_summary = torch.xpu.memory_summary
- torch.cuda.memory_snapshot = torch.xpu.memory_snapshot
- torch.cuda.memory_allocated = torch.xpu.memory_allocated
- torch.cuda.max_memory_allocated = torch.xpu.max_memory_allocated
- torch.cuda.memory_reserved = torch.xpu.memory_reserved
- torch.cuda.memory_cached = torch.xpu.memory_reserved
- torch.cuda.max_memory_reserved = torch.xpu.max_memory_reserved
- torch.cuda.max_memory_cached = torch.xpu.max_memory_reserved
- torch.cuda.reset_peak_memory_stats = torch.xpu.reset_peak_memory_stats
- torch.cuda.reset_max_memory_cached = torch.xpu.reset_peak_memory_stats
- torch.cuda.reset_max_memory_allocated = torch.xpu.reset_peak_memory_stats
- torch.cuda.memory_stats_as_nested_dict = torch.xpu.memory_stats_as_nested_dict
- torch.cuda.reset_accumulated_memory_stats = torch.xpu.reset_accumulated_memory_stats
-
- #RNG:
- torch.cuda.get_rng_state = torch.xpu.get_rng_state
- torch.cuda.get_rng_state_all = torch.xpu.get_rng_state_all
- torch.cuda.set_rng_state = torch.xpu.set_rng_state
- torch.cuda.set_rng_state_all = torch.xpu.set_rng_state_all
- torch.cuda.manual_seed = torch.xpu.manual_seed
- torch.cuda.manual_seed_all = torch.xpu.manual_seed_all
- torch.cuda.seed = torch.xpu.seed
- torch.cuda.seed_all = torch.xpu.seed_all
- torch.cuda.initial_seed = torch.xpu.initial_seed
-
- #AMP:
- torch.cuda.amp = torch.xpu.amp
- if not hasattr(torch.cuda.amp, "common"):
- torch.cuda.amp.common = contextlib.nullcontext()
- torch.cuda.amp.common.amp_definitely_not_available = lambda: False
- try:
- torch.cuda.amp.GradScaler = torch.xpu.amp.GradScaler
- except Exception: # pylint: disable=broad-exception-caught
- try:
- from .gradscaler import gradscaler_init # pylint: disable=import-outside-toplevel, import-error
- gradscaler_init()
- torch.cuda.amp.GradScaler = torch.xpu.amp.GradScaler
- except Exception: # pylint: disable=broad-exception-caught
- torch.cuda.amp.GradScaler = ipex.cpu.autocast._grad_scaler.GradScaler
-
- # C bindings:
- torch._C._cuda_getCurrentRawStream = ipex._C._getCurrentStream
- ipex._C._DeviceProperties.major = 2023
- ipex._C._DeviceProperties.minor = 2
-
- # Patch functions to work with IPEX:
- torch.cuda.mem_get_info = lambda device=None: [(torch.xpu.get_device_properties(device).total_memory - torch.xpu.memory_allocated(device)), torch.xpu.get_device_properties(device).total_memory]
- torch._utils._get_available_device_type = lambda: "xpu"
- torch.has_cuda = True
- torch.cuda.has_half = True
- torch.cuda.is_bf16_supported = lambda *args, **kwargs: True
- torch.cuda.is_fp16_supported = lambda *args, **kwargs: True
- torch.version.cuda = "11.7"
- torch.cuda.get_device_capability = lambda *args, **kwargs: [11,7]
- torch.cuda.get_device_properties.major = 11
- torch.cuda.get_device_properties.minor = 7
- torch.cuda.ipc_collect = lambda *args, **kwargs: None
- torch.cuda.utilization = lambda *args, **kwargs: 0
-
- ipex_hijacks()
- attention_init()
- except Exception as e:
- return False, e
- return True, None
\ No newline at end of file
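
A minimal usage sketch for the shim above, assuming the aliasing block lives in a function named `ipex_init()` returning `(ok, error)` (as the `return` statements above suggest) in a hypothetical module `ipex_shim`, and that `intel_extension_for_pytorch` plus an Intel XPU device are installed:

```python
# Hedged sketch; the module name "ipex_shim" and function name "ipex_init"
# are assumptions, not confirmed by the diff above.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 - registers the "xpu" backend

from ipex_shim import ipex_init  # hypothetical home of the code above

ok, err = ipex_init()
if not ok:
    raise RuntimeError(f"IPEX shim failed to initialize: {err}")

# torch.cuda attributes now resolve to their torch.xpu counterparts:
print(torch.cuda.get_device_capability())  # spoofed as [11, 7]
print(torch.cuda.mem_get_info(0))          # [free, total] computed via torch.xpu
torch.cuda.manual_seed(0)                  # routed to torch.xpu.manual_seed
torch.cuda.empty_cache()                   # no-op on WSL2, torch.xpu.empty_cache elsewhere
```

The blanket aliasing keeps CUDA-targeted code running unmodified on XPU; the `ipex_hijacks()` call at the end is presumably what rewrites `device="cuda"` arguments in tensor constructors to target the XPU device.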
diff --git a/spaces/Benson/text-generation/Examples/Apk Mod 8 Piscina De Bolas 5.11.2.md b/spaces/Benson/text-generation/Examples/Apk Mod 8 Piscina De Bolas 5.11.2.md
deleted file mode 100644
index d53a7bb9ea49c66e20b2ce69fffa3f31eb63b865..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apk Mod 8 Piscina De Bolas 5.11.2.md
+++ /dev/null
@@ -1,151 +0,0 @@
-
-Mod APK 8 Ball Pool 5.11.2: How to Download and Play the Best Pool Game on Android
- Do you like playing pool games on your smartphone? If so, you have probably heard of 8 Ball Pool, the most popular and addictive pool game on Android. But did you know there is a way to make this game even more fun and exciting? Yes, we are talking about mod apk 8 ball pool 5.11.2, the latest version of the modified app, which gives you unlimited coins, cash, cues, and more.
-mod apk 8 ball pool 5.11.2
- DOWNLOAD ⚡ https://bltlly.com/2v6MZ1
- In this article, we will tell you everything you need to know about mod apk 8 ball pool 5.11.2, including what it is, how to download and install it, and how to play it. We will also compare it with the original version of the game and highlight its pros and cons. So, without further ado, let's dive in!
- What is 8 Ball Pool?
- Before we talk about mod apk 8 ball pool 5.11.2, let's first look at what 8 Ball Pool is and why it is so popular with millions of players around the world.
- The features and gameplay of 8 Ball Pool
- 8 Ball Pool is a realistic, immersive pool game that lets you play online against your friends or other players from around the world. You can choose between different game modes, such as 1-on-1 matches, tournaments, 9-ball games, or practice mode. You can also customize your cue, table, avatar, chat phrases, and more.
-
- The gameplay of 8 Ball Pool is simple and intuitive. You just swipe your finger across the screen to aim the cue, adjust the power, and release to strike the ball. You can also use the spin feature to add curve or angle to your shots. The goal is to pocket all of your balls before your opponent does, following standard pool rules.
- The benefits and drawbacks of playing 8 Ball Pool
-
- Playing 8 Ball Pool can bring you several benefits, such as:
-- Improve your skills: Playing pool games can help you sharpen your concentration, accuracy, strategy, and grasp of physics.
-- Make new friends: Playing online with other players lets you socialize, chat, and make new friends from different countries and cultures.
-- Earn rewards: Playing matches and tournaments can earn you coins, cash, cues, trophies, and other rewards you can use to upgrade your game.
-
- However, playing 8 Ball Pool can also have some drawbacks, such as:
-
-- Spending too much money: It can be tempting to spend real money on coins, cash, cues, or other items that give you an edge over your opponents. However, this can be risky and wasteful, since you may not get the value you expect, or you may lose your account to hacking or a ban.
-- Getting addicted: 8 Ball Pool can be very addictive, since you may want to play more and more matches to earn more rewards, climb the rankings, or beat your rivals. However, this can hurt your health, productivity, and relationships if you neglect your other responsibilities and hobbies.
-- Facing unfair competition: It can be frustrating and unfair to face opponents who use cheats, hacks, mods, or bots to win games easily and unfairly. This can ruin your gaming experience and drain your motivation and confidence.
-
- So how can you enjoy 8 Ball Pool without these drawbacks? Well, one possible workaround is to use mod apk 8 ball pool 5.11.2.
- What is mod apk 8 ball pool 5.11.2?
- Mod apk 8 ball pool 5.11.2 is a modified version of the original 8 Ball Pool app that gives you access to unlimited resources and features that are not available in the official game.
- The differences between the original and the modified version of 8 Ball Pool
- The main differences between the original and the modified version of 8 Ball Pool are:
-
-
-| Original version | Modified version |
-| --- | --- |
-| Limited coins and cash | Unlimited coins and cash |
-| Basic cues and tables | Premium cues and tables |
-| Normal gameplay and difficulty | Easy gameplay and difficulty |
-| No additional features or options | Many additional features and options |
-| Safe and secure | Risky and unsafe |
-
-
- The advantages and disadvantages of using mod apk 8 ball pool 5.11.2
- Using mod apk 8 ball pool 5.11.2 can have some advantages and disadvantages, such as:
-
-- Advantages:
- - You can enjoy unlimited coins and cash, which you can spend on whatever you want in the game.
- - You can use premium cues and tables, which have better stats and designs than the basic ones.
- - You can play easier games and win more matches without much effort or skill.
- - You can access many extra features and options that enhance your gaming experience and fun.
-
-
-- Disadvantages:
- - You may face legal trouble or penalties for violating the terms and conditions of the original game.
- - You may lose your account or progress if the modded app is detected or banned by the game's developers or authorities.
- - You may expose your device or data to malware or viruses that can compromise your security or privacy.
- - You may lose the challenge and excitement of playing fair, competitive games against real players.
-
-
-
-
- How do you download and install mod apk 8 ball pool 5.11.2?
- If you have decided to try mod apk 8 ball pool 5.11.2, you need to follow a few steps to download and install it on your Android device. Here are the steps:
- The steps to download mod apk 8 ball pool 5.11.2 from a reliable source
-
-- First, you need to find a reliable source that offers mod apk 8 ball pool 5.11.2 as a free download. You can search Google or other search engines for keywords like "mod apk 8 ball pool 5.11.2 download" or "mod apk 8 ball pool 5.11.2 free download". However, you should be careful to avoid any suspicious or fake links that may carry malware or viruses.
-- Second, you need to choose a trustworthy, reputable source with positive reviews and ratings from other users. You can also check the comments and feedback of users who have already downloaded and used mod apk 8 ball pool 5.11.2 from that source, and verify the source's authenticity and safety with tools like VirusTotal or Malwarebytes (see the checksum sketch after this list).
-- Third, you need to click the download link or button provided by the source and wait for the mod apk file to download to your device. You may need to grant some permissions or enable some settings for the download to proceed smoothly.
-
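- If the source publishes a checksum for the file, you can verify the download before opening it. Here is a minimal sketch in Python; the file name and the expected hash are placeholders, not real values:
-
-```python
-# Minimal sketch: verify a downloaded file against a published SHA-256 checksum.
-# The file name and EXPECTED_SHA256 below are placeholders for illustration only.
-import hashlib
-
-EXPECTED_SHA256 = "0" * 64  # replace with the checksum published by the source
-
-def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
-    digest = hashlib.sha256()
-    with open(path, "rb") as f:
-        for chunk in iter(lambda: f.read(chunk_size), b""):
-            digest.update(chunk)
-    return digest.hexdigest()
-
-actual = sha256_of("downloaded-file.apk")
-if actual != EXPECTED_SHA256:
-    raise SystemExit(f"Checksum mismatch ({actual}); do not install this file.")
-print("Checksum OK")
-```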
- Precautions and tips to avoid malware and viruses when downloading mod apk 8 ball pool 5.11.2
- Downloading mod apk 8 ball pool 5.11.2 can be risky and dangerous, as it may expose your device or data to malware or viruses that can compromise your security or privacy. You should therefore take some precautions to avoid malware and viruses when downloading it, such as the following (a permission-inspection sketch follows this list):
-
-
-- Use an antivirus: An antivirus can scan for and detect any malware or viruses hidden in the mod apk file or on the source website. It can also help you remove or quarantine any malicious files or programs that could infect your device or data.
-- Keep a backup: A backup lets you save and restore your device or data in case of damage or loss caused by malware or viruses. You can use a cloud service, an external storage device, or a recovery tool to back up your device or data regularly.
-
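- Before installing any APK obtained outside the Play Store, it is also worth reviewing the permissions it requests. Here is a minimal sketch using the aapt utility from the Android build-tools (assumes aapt is installed and on your PATH; the file name is a placeholder):
-
-```python
-# Minimal sketch: list the permissions an APK requests before installing it.
-# Assumes the Android build-tools utility "aapt" is on PATH; the APK file
-# name is a placeholder for illustration only.
-import subprocess
-
-result = subprocess.run(
-    ["aapt", "dump", "permissions", "downloaded-file.apk"],
-    capture_output=True, text=True, check=True,
-)
-for line in result.stdout.splitlines():
-    if "permission:" in line:  # matches "permission:" and "uses-permission:" lines
-        print(line.strip())
-```
-
- A pool game that asks to read your contacts or SMS messages, for example, deserves a second look before you tap Install.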
- Instructions for installing and running mod apk 8 ball pool 5.11.2 on your Android device
-
-- First, you need to uninstall the original version of 8 Ball Pool from your device if you already have it installed. This is because the modded version may not work properly or may conflict with the original version.
-- Second, you need to enable the installation of apps from unknown sources on your device. This is because mod apk 8 ball pool 5.11.2 is not available on the official Google Play Store, so your device treats it as an unknown source. To enable this option, go to Settings > Security > Unknown sources and turn it on.
-- Third, you need to locate the mod apk file you downloaded to your device and tap it to start the installation process. You may need to follow some prompts or accept some terms and conditions to complete the installation.
-- Fourth, you need to launch the modded app from your app drawer or home screen and enjoy playing mod apk 8 ball pool 5.11.2 with unlimited resources and features.
-
- How do you play mod apk 8 ball pool 5.11.2?
-
- The basic rules and controls of mod apk 8 ball pool 5.11.2
- The basic rules and controls of mod apk 8 ball pool 5.11.2 are:
-
-- You can play online against your friends or other players from around the world in different game modes, such as 1-on-1 matches, tournaments, 9-ball games, or practice mode.
-- You can swipe your finger across the screen to aim your cue, adjust the power, and release to strike the ball. You can also use the spin feature to add curve or angle to your shots.
-- You must pocket all of your balls before your opponent does, following standard pool rules.
-- You can customize your cue, table, avatar, chat phrases, and more with the unlimited coins and cash you have in the modded app.
-- You can use premium cues and tables, which have better stats and designs than the basic ones. You can also unlock and use exclusive cues and tables that are not available in the original game.
-- You can play easier games and win more matches without much effort or skill. You can also use cheats, hacks, mods, or bots to win games easily and unfairly.
-- You can access many extra features and options that enhance your gaming experience and fun. For example, you can use the auto-win feature to win any game instantly, the long-line feature to extend your aiming line, the anti-ban feature to avoid detection or bans, and more.
-
- The modes and challenges of mod apk 8 ball pool 5.11.2
- The modes and challenges of mod apk 8 ball pool 5.11.2 are:
-
-- 1-on-1 matches: You play against another player in a single game of 8-ball. You can choose the stake, the table, and the rules; the winner takes all the coins and trophies.
-
-- 9-ball games: You play against another player in a single game of 9-ball. You must pocket the balls in numerical order, from 1 to 9; the first player to pocket the 9-ball wins the game.
-- Practice mode: You play alone in a game of 8-ball or 9-ball pool. You can practice your skills, try different cues and tables, and have fun without any pressure or competition.
-
- Tips and tricks to win more games and coins in mod apk 8 ball pool 5.11.2
- Although mod apk 8 ball pool 5.11.2 gives you unlimited resources and features that make your gaming experience more fun and exciting, you still need some tips and tricks to win more games and coins, such as:
-
-- Choose your cue wisely: Different cues have different stats, such as power, aim, spin, and time. You should choose a cue that suits your style and preference, and you can upgrade your cue with coins or cash to improve its stats.
-- Use the spin feature: Spin can add curve or angle to your shots, which can help you avoid obstacles, set up better positions, or pocket tricky balls. To use it, tap the white-ball icon in the top-right corner of the screen and drag it to adjust the direction and intensity of the spin.
-- Plan your shots: Before striking the ball, you should plan your shots and think about the consequences. Consider factors such as ball position, cue angle, power, spin, table layout, and the rules, and try to predict where the cue ball and the object balls will end up after your shot.
-
-
- Conclusion
- In conclusion, mod apk 8 ball pool 5.11.2 is a modified version of the original 8 Ball Pool app that gives you access to unlimited resources and features that are not available in the official game. It is a third-party app created by hackers or developers who modify the game's original code and add new functions and items.
- Mod apk 8 ball pool 5.11.2 can be a lot of fun, as it offers many advantages, such as unlimited coins and cash, premium cues and tables, easy gameplay and difficulty, and many extra features and options. However, it can also be risky and dangerous, as it may expose you to legal trouble or penalties, loss of your account or progress, malware or viruses, and unfair competition.
- You should therefore weigh the pros and cons of using mod apk 8 ball pool 5.11.2 before deciding to download and install it on your device. You should also follow the steps and tips above to download and install it safely, and learn some tips and tricks to play it effectively and enjoyably.
- We hope this article has helped you understand what mod apk 8 ball pool 5.11.2 is, how to download and install it, and how to play it. If you have any questions or comments, feel free to leave a comment below. Thanks for reading!
- Frequently asked questions
- Here are some frequently asked questions about mod apk 8 ball pool 5.11.2:
-
-- Is mod apk 8 ball pool 5.11.2 legal?
-
- No, mod apk 8 ball pool 5.11.2 is not legal, as it violates the terms and conditions of the original game. It is also considered piracy, since it uses the original game's content without permission or payment. Using mod apk 8 ball pool 5.11.2 may result in legal action or penalties from the game's developers or authorities.
-- Is mod apk 8 ball pool 5.11.2 safe?
-
- Not necessarily. It can expose your device or data to malware or viruses, so you should only download it from a trustworthy source, scan it before installing, and use it at your own risk.
-- Is mod apk 8 ball pool 5.11.2 compatible with all Android devices?
-
- No, mod apk 8 ball pool 5.11.2 may not be compatible with all Android devices, as it may require certain specifications or permissions that are not available on some devices. It may also cause errors or crashes on some devices due to compatibility issues, and using it may affect your device's performance or functionality.
-- Can I play online with other players using mod apk 8 ball pool 5.11.2?
-
- Yes, you can play online with other players using mod apk 8 ball pool 5.11.2, but you may run into some problems, such as:
-
-- You may not be able to join some games or rooms that are restricted to the original version of the game.
-- You may be matched with other players who are also using mod apk 8 ball pool 5.11.2, which can make games boring or unfair.
-- You may be reported or flagged by other players who are using the original version of the game, which can lead to detection or a ban.
-
-- Can I update mod apk 8 ball pool 5.11.2?
-
- No, you cannot update mod apk 8 ball pool 5.11.2, as it is not supported by the game's developers or authorities. If you try to update it, you may lose your modded features or resources, or run into errors or problems due to compatibility issues.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar 28 Semanas Despus.md b/spaces/Benson/text-generation/Examples/Descargar 28 Semanas Despus.md
deleted file mode 100644
index 633574c38aade096a577d10b5cfbacc5a2aea57f..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar 28 Semanas Despus.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-How to download 28 Weeks Later, the terrifying sequel to 28 Days Later
- If you are a fan of horror movies, you have probably heard of 28 Days Later, the acclaimed British film that depicts a zombie apocalypse caused by a deadly virus. But did you know there is a sequel to this film, called 28 Weeks Later, that is even more terrifying and thrilling?
-download 28 weeks later
- Download Zip ✦ https://bltlly.com/2v6LdK
- 28 Weeks Later is a post-apocalyptic horror film directed by Juan Carlos Fresnadillo, who co-wrote it with Rowan Joffé, Enrique López Lavigne, and Jesus Olmo. A standalone sequel to 28 Days Later, it stars Robert Carlyle, Rose Byrne, Jeremy Renner, Harold Perrineau, Catherine McCormack, Mackintosh Muggleton, Imogen Poots, and Idris Elba.
- The film is set six months after the events of the first film, when NATO forces have declared Britain safe from the rage virus and have begun repopulating London. However, things go horribly wrong when a carrier of the virus enters the city and triggers a new outbreak. The survivors must fight for their lives against the infected hordes and the military forces trying to contain them.
- In this article, we will tell you why you should watch 28 Weeks Later, where you can find it online, and how you can download it legally and safely. So, if you are ready for some pulse-pounding action and suspense, read on!
- Why you should watch 28 Weeks Later
- 28 Weeks Later is not just a mindless zombie movie. It is an intelligent, well-made film that explores themes such as survival, family, morality, and humanity in a dystopian setting. It also offers a realistic, gritty depiction of what could happen if a pandemic spun out of control.
-
-
- Moreover, 28 Weeks Later features some of the most intense and memorable scenes in horror film history. From the opening sequence, in which Don escapes a farmhouse attacked by the infected, to the helicopter chase in which Doyle mows down a crowd of zombies, to the final shot of the Eiffel Tower surrounded by infected running wild through Paris, you will be on the edge of your seat throughout the film.
- Finally, 28 Weeks Later has received positive reviews and ratings from critics and audiences alike. It holds a 71% approval rating on Rotten Tomatoes, based on 195 reviews, with an average rating of 6.6/10. It also has a 7/10 score on IMDb, based on 260,000 votes. The film was praised for its direction, acting, cinematography, and atmosphere.
- Where to find 28 Weeks Later online
- If you are wondering where you can watch 28 Weeks Later online, you have several options to choose from. Here are some of the best streaming services and platforms that offer the film:
-
-- Netflix: Netflix is one of the most popular and widely used streaming services in the world. It has a huge library of movies and shows, including 28 Weeks Later. You can watch the film on Netflix with a subscription plan starting from $8.99 per month. You can also download the film to your device and watch it offline.
-- Hulu: Hulu is another great streaming service that offers a variety of content, including 28 Weeks Later. You can watch the film on Hulu with a subscription plan starting from $5.99 per month. You can also add live TV channels and premium networks to your plan for an extra charge.
-
-- iTunes: iTunes is a platform that lets you buy or rent movies and shows from Apple. You can buy 28 Weeks Later on iTunes for $9.99 or rent it for $3.99. You can also download the film to your device and watch it offline.
-- Vudu: Vudu is a platform that lets you buy or rent movies and shows from Walmart. You can buy 28 Weeks Later on Vudu for $9.99 or rent it for $3.99. You can also download the film to your device and watch it offline.
-
- Here are some of the pros and cons of each streaming service:
-
-
-| Streaming service | Pros | Cons |
-| --- | --- | --- |
-| Netflix | Large selection of movies and shows; affordable subscription plans; offline viewing option; no ads | Content availability varies by region; subscription fee may rise over time; no live TV channels or premium networks |
-| Hulu | Large selection of movies and shows; affordable subscription plans; offline viewing option; live TV channels and premium networks available | Content availability varies by region; subscription fee may rise over time; ads interrupt your viewing unless you pay extra |
-| Amazon Prime Video | Large selection of movies and shows; offline viewing option; no ads; other Amazon Prime membership perks such as free shipping, music, books, etc. | Content availability varies by region; subscription fee may rise over time; no live TV channels or premium networks included in the membership |
-| iTunes | High-quality video and audio; offline viewing option; no ads; works with Apple devices and services | Content availability varies by region; no subscription option; only available on Apple devices and services; no live TV channels or premium networks |
-| Vudu | High-quality video and audio; offline viewing option; no ads; works with a range of devices and services | Content availability varies by region; no subscription option; only available in the United States; no live TV channels or premium networks |
-
-
-
- How to download 28 Weeks Later legally and safely
- Now that you know where you can watch 28 Weeks Later online, you may be wondering how you can download it legally and safely. Downloading movies online can be a tricky, risky business, as there are many illegal and unethical websites and apps peddling pirated content, malware, viruses, and scams. You should therefore always be careful and cautious when downloading movies online, and follow these tips:
-
-- Use a reputable website or app: You should always use a website or app that is licensed and authorized to offer 28 Weeks Later for download. Some of the best websites and apps for downloading it legally and safely are Netflix, Hulu, Amazon Prime Video, iTunes, and Vudu. These platforms have secure payment methods, encryption, and customer support to ensure your safety and satisfaction.
-- Avoid torrents and peer-to-peer sharing: You should never use torrent or peer-to-peer websites or apps to download 28 Weeks Later, as they are illegal and unethical. Torrenting involves downloading files from other users, and those files may be infected or corrupted and can damage your device or expose your personal information. It also violates the intellectual property rights of the creators and distributors of 28 Weeks Later, which can have legal consequences.
-
-- Choose the right quality and format: You should always choose the right quality and format when downloading 28 Weeks Later, as they affect your viewing experience and your device's storage. Quality refers to the resolution or clarity of the picture and sound, which can range from low to high. Format refers to the file type or extension, which varies depending on the device or platform you use. Some of the most common quality and format options are SD (standard definition), HD (high definition), 4K (ultra high definition), MP4 (MPEG-4), AVI (Audio Video Interleave), MKV (Matroska), and MOV (QuickTime).
-- Add subtitles if needed: Subtitles can improve your understanding and enjoyment of the film. They are text versions of a film's dialogue or narration, displayed on screen in different languages or styles. Netflix, Hulu, Amazon Prime Video, iTunes, and Vudu all offer subtitles for 28 Weeks Later, and you can also download subtitles from sources such as Subscene, OpenSubtitles, or YIFY Subtitles (see the renaming sketch after this list).
-
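- Most desktop players load an external subtitle automatically when it sits next to the video and shares its base name. Here is a minimal sketch in Python; both file names are placeholders:
-
-```python
-# Minimal sketch: rename a downloaded subtitle so players auto-load it.
-# Both file names below are placeholders for illustration only.
-from pathlib import Path
-
-video = Path("28.weeks.later.mp4")
-subtitle = Path("downloaded-subtitle.srt")
-
-# Give the subtitle the video's base name, e.g. "28.weeks.later.srt".
-subtitle.rename(video.with_suffix(".srt"))
-print(f"Subtitle renamed to {video.with_suffix('.srt')}")
-```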
- Conclusion
- In conclusion, 28 Weeks Later is a must-watch film for horror fans, a terrifying, thrilling sequel to 28 Days Later. It is an intelligent, well-made film that explores themes such as survival, family, morality, and humanity in a dystopian setting, and it features some of the most intense and memorable scenes in horror film history.
-
- So what are you waiting for? Download 28 Weeks Later today and enjoy this amazing film with your friends or family. And don't forget to share your thoughts and opinions on the film with others by leaving a comment below or posting on social media.
- Frequently asked questions
- Here are some of the most frequently asked questions about 28 Weeks Later:
-- Is 28 Weeks Later a remake or a sequel?: 28 Weeks Later is a sequel to 28 Days Later, not a remake. It is set six months after the events of the first film and follows a different group of characters and a new story.
-- Do I need to watch 28 Days Later before 28 Weeks Later?: No, you do not need to, as the films stand alone and have minimal connections. However, watching 28 Days Later first is recommended, as it gives you more context and background on the rage virus and the world of the films.
-- Is 28 Weeks Later based on a true story or a book?: No, 28 Weeks Later is not based on a true story or a book. It is an original screenplay written by Juan Carlos Fresnadillo, Rowan Joffé, Enrique López Lavigne, and Jesus Olmo.
-- Is 28 Weeks Later suitable for children or sensitive viewers?: No, it is rated R for strong violence, language, and nudity, and it contains scenes of graphic violence, gore, mutilation, death, and horror that some viewers may find disturbing or upsetting.
-
-