diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Buku Matematika Kelas 6.pdf.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Buku Matematika Kelas 6.pdf.md
deleted file mode 100644
index 393655806971ffb8128a09bf71c780c37506651e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Buku Matematika Kelas 6.pdf.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
-Buku Matematika Kelas 6.pdf: A Guide for Students and Teachers
- If you are a student or a teacher of grade 6 mathematics in Indonesia, you might have heard of Buku Matematika Kelas 6.pdf. This book follows the 2013 curriculum, revised in 2018 by the Ministry of Education and Culture, and covers a range of mathematical topics and skills, such as negative integers, mixed operations, circles, prisms, pyramids, cones, spheres, and data analysis. In this article, we will explain what Buku Matematika Kelas 6.pdf is, what it contains, what its benefits are, and how you can download it for free.
- The Content of Buku Matematika Kelas 6.pdf
- Buku Matematika Kelas 6.pdf consists of two parts: a book for students and a book for teachers. The book for students is called Buku Siswa and the book for teachers is called Buku Guru. Both books are available in PDF format and can be easily opened and read using various gadgets. They can also be displayed as slide presentations.
- The book for students has eight chapters that correspond to the eight competencies that are expected from grade 6 students. Each chapter has several subtopics that are explained in detail with examples, exercises, tasks, and activities. The book also features some interesting sections, such as observing, reasoning, questioning, knowing the figures, practice questions, group assignments, and others. These sections are designed to help students develop their scientific skills, higher-order thinking skills, problem-based learning skills, literacy skills, and connection skills.
- The book for teachers is a guide and a reference for teachers in teaching mathematics, especially in grade 6 elementary school or madrasah ibtidaiyah. The book provides some tips and suggestions on how to plan, implement, and evaluate the learning process using Buku Siswa. The book also explains the learning objectives, indicators, materials, methods, media, resources, assessment tools, and feedback for each subtopic.
- The Benefits of Buku Matematika Kelas 6.pdf
- Buku Matematika Kelas 6.pdf has many benefits for both students and teachers. Some of the benefits are:
-
-- It is aligned with the curriculum of 2013 that has been revised in 2018. It reflects the latest changes and updates in the mathematics education field.
-- It is comprehensive and thorough. It covers all the topics and skills that are required for grade 6 students. It also provides enough exercises and tasks to practice and reinforce the concepts.
-- It is engaging and interactive. It uses a variety of methods and media to present the information. It also encourages students to participate actively in the learning process by observing, questioning, reasoning, trying, and communicating.
-- It is relevant and contextual. It connects the mathematics concepts with real-life situations and problems. It also relates the mathematics concepts with other subjects and disciplines.
-- It is accessible and free. It can be downloaded easily from the internet without any cost. It can also be used with different devices and platforms.
-
- How to Download Buku Matematika Kelas 6.pdf
- If you are interested in using Buku Matematika Kelas 6.pdf for your learning or teaching purposes, you can download it from the following links:
-
- After downloading the files, you can open them using any PDF reader software or application. You can also print them if you prefer to have a hard copy.
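- If you prefer to work with the downloaded files programmatically — for example, to count pages or pull the text out of an exercise — a short script can help. Below is a minimal sketch using the third-party pypdf library; the file name is a placeholder for whichever PDF (Buku Siswa or Buku Guru) you actually saved.

```python
# Minimal sketch using the pypdf library (pip install pypdf).
# The file name below is a placeholder for the PDF you downloaded.
from pypdf import PdfReader

reader = PdfReader("Buku Matematika Kelas 6.pdf")

# Report how many pages the book has.
print(f"Total pages: {len(reader.pages)}")

# Extract and print the text of the first page.
print(reader.pages[0].extract_text())
```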
- Conclusion
- Buku Matematika Kelas 6.pdf is a valuable resource for students and teachers of grade 6 mathematics in Indonesia. It follows the curriculum of 2013 that has been revised in 2018 by the Ministry of Education and Culture. It covers various topics and skills in mathematics in a comprehensive, engaging, interactive, relevant, and contextual way. It also provides enough exercises and tasks to practice and reinforce the concepts. Moreover, it can be downloaded easily from the internet without any cost.
- If you want to improve your mathematics knowledge and skills or help your students do so, you should consider using Buku Matematika Kelas 6.pdf as your learning or teaching material. You will not regret it!
- FAQs
- Here are some common questions and answers about Buku Matematika Kelas 6.pdf:
-
-- What is Buku Matematika Kelas 6.pdf?
-Buku Matematika Kelas 6.pdf is a book that follows the 2013 curriculum, revised in 2018 by the Ministry of Education and Culture. It covers various topics and skills in mathematics for grade 6 students in Indonesia.
-- What are the two parts of Buku Matematika Kelas 6.pdf?
-Buku Matematika Kelas 6.pdf consists of two parts: a book for students (Buku Siswa) and a book for teachers (Buku Guru). Both books are available in PDF format.
-- What are some of the benefits of Buku Matematika Kelas 6.pdf?
-Buku Matematika Kelas 6.pdf has many benefits for both students and teachers. Some of them are: it is aligned with the latest curriculum; it is comprehensive and thorough; it is engaging and interactive; it is relevant and contextual; and it is accessible and free.
-- How can I download Buku Matematika Kelas 6.pdf?
-You can download Buku Matematika Kelas 6.pdf from several links on the internet. Some of them are provided in this article.
-- How can I use Buku Matematika Kelas 6.pdf?
-You can use Buku Matematika Kelas 6.pdf as your learning or teaching material for grade 6 mathematics. You can open it using any PDF reader software or application. You can also print it if you prefer to have a hard copy.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar Crack Memories On Tv 4.1.1 32l Crez des albums photo pour votre TV ou votre ordinateur.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar Crack Memories On Tv 4.1.1 32l Crez des albums photo pour votre TV ou votre ordinateur.md
deleted file mode 100644
index cf5f0fd9ee3483c0d775e77384e97321b0c1fb38..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar Crack Memories On Tv 4.1.1 32l Crez des albums photo pour votre TV ou votre ordinateur.md
+++ /dev/null
@@ -1,149 +0,0 @@
-
-Descargar Crack Memories On Tv 4.1.1 32l
-Do you want to turn your photos and videos into amazing slideshows that you can watch on your TV or share online? Do you want to do it easily and quickly, without spending a lot of money or time? If so, then you need Memories On TV, a powerful and user-friendly software that lets you create stunning slideshows with just a few clicks. And if you want to unlock all the features and benefits of this software, then you need Crack Memories On Tv 4.1.1 32l, a simple and effective tool that will activate your copy of Memories On TV for free.
-In this article, we will tell you everything you need to know about Memories On TV and Crack Memories On Tv 4.1.1 32l, including what they are, how they work, how to download and install them, and how to use them to create amazing slideshows that will impress your friends and family.
-What is Memories On TV?
-Memories On TV is a software program that allows you to create photo/video slideshows that you can watch on your TV or computer, or share online via YouTube, Facebook, or email. You can use it to make slideshows for weddings, birthdays, anniversaries, vacations, or any other occasion that you want to remember and celebrate.
-Features and benefits of Memories On TV
-Some of the features and benefits of Memories On TV are:
-
-- It supports various image formats, such as JPG, PNG, BMP, GIF, TIFF, etc.
-- It supports various video formats, such as AVI, MPEG, WMV, MOV, MP4, etc.
-- It allows you to add captions, titles, credits, and logos to your slideshows.
-- It allows you to adjust the brightness, contrast, color, and orientation of your photos.
-- It allows you to crop, rotate, zoom, pan, and flip your photos and videos.
-- It allows you to apply various effects and filters to your photos and videos.
-- It allows you to add background music and sound effects to your slideshows.
-- It allows you to synchronize the music and the images according to the beat or the duration.
-- It allows you to add transitions between the slides.
-- It allows you to preview your slideshows before burning them or exporting them.
-- It allows you to burn your slideshows to DVD or CD with a built-in menu system.
-- It allows you to export your slideshows as video files that can be played on any device or platform.
-- It has a simple and intuitive interface that makes it easy for anyone to use.
-
-How to download and install Memories On TV 4.1.1
-To download and install Memories On TV 4.1.1 on your computer, follow these steps:
-
-- Go to this website and click on the Download button.
-- Save the file MOTVSetup.exe on your computer (it is worth verifying the file before running it, as sketched after these steps).
-- Double-click on the file MOTVSetup.exe and follow the instructions on the screen.
-- Select the language of your choice and click on Next.
-- Read the license agreement and click on I Agree.
-- Select the destination folder where you want to install Memories On TV and click on Next.
-- Select the components that you want to install and click on Next.
-- Select the start menu folder where you want to create shortcuts for Memories On TV and click on Next.
-- Select whether you want to create a desktop icon for Memories On TV and click on Next.
-- Click on Install to start the installation process.
-- Wait for the installation process to finish and click on Finish.
-
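- Before running any installer you have downloaded, it is sensible to check that the file arrived intact and matches what the vendor published. Below is a minimal Python sketch that computes the SHA-256 hash of MOTVSetup.exe; the expected value shown is a placeholder, since this article does not provide a real checksum, so you would substitute one from a source you trust.

```python
# Minimal sketch: hash a downloaded installer so it can be compared with a
# vendor-published checksum. EXPECTED_SHA256 is a placeholder, not a real
# hash for MOTVSetup.exe.
import hashlib

EXPECTED_SHA256 = "replace-with-the-published-checksum"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large installers don't exhaust memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("MOTVSetup.exe")
print(f"SHA-256: {actual}")
print("OK to run" if actual == EXPECTED_SHA256 else "Mismatch - do not run this file")
```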
- What is Crack Memories On Tv 4.1.1 32l?
- Crack Memories On Tv 4.1.1 32l is a small program that can activate your copy of Memories On TV for free. It works by generating a valid serial number that can unlock all the features and benefits of Memories On TV without paying anything.
- Why do you need Crack Memories On Tv 4.1.1 32l?
- You need Crack Memories On Tv 4.1.1 32l if you want to enjoy all the advantages of Memories On TV without spending any money or time. With Crack Memories On Tv 4.1.1 32l, you can:
-
- - Create unlimited slideshows with no watermark or limitation.
- - Add as many photos and videos as you want without worrying about the size or quality.
- - Edit your photos and videos with advanced tools and effects without any restriction.
- - Burn your slideshows to DVD or CD with high-quality output without any error.
- - Export your slideshows as video files with any format or resolution without any loss.
- - Share your slideshows online with anyone without any hassle.
-
- How to download and use Crack Memories On Tv 4.1.1 32l?
- To download and use Crack Memories On Tv 4.1.1 32l on your computer, follow these steps:
-
- - Go to this website and click on the MORE button.
- - Select Download file.
- - Select a location where you want to save it on your computer.
- - The file name is CMTV41132L.zip. Extract it using WinRAR or any other program that can unzip files.
- - You will see two files: CMTV41132L.exe, which is the crack program, and CMTV41132L.txt, which contains instructions on how to use it.
- - To use Crack Memories On Tv 4.1.1 32l, first make sure that Memories On TV is not running on your computer. Then double-click on CMTV41132L.exe. A window will open asking for a serial number. Copy one of the serial numbers from the CMTV41132L.txt file, paste it into the window, and click OK. A message will appear saying that Memories On TV has been activated successfully. Click OK again and close the window.
- - You can now run Memories On TV normally. You will see that all features are unlocked, and you can create, edit, burn, export, and share unlimited slideshows with no problem. Enjoy!
-
- Tips and tricks for using Memories On TV and Crack Memories On Tv 4.1.1 32l
-Now that you have downloaded and installed Memories On TV and Crack Memories On Tv 4.1.1 32l, you can start creating amazing slideshows with your photos and videos. Here are some tips and tricks that will help you make the most out of these tools:
-How to create stunning slideshows with Memories On TV
-To create a slideshow with Memories On TV, follow these steps:
-
-- Launch Memories On TV and click on New Project.
-- Select the folder where your photos and videos are stored and click on OK.
-- Drag and drop your photos and videos to the timeline at the bottom of the screen.
-- Arrange them in the order that you want them to appear in your slideshow.
-- To add captions, titles, credits, or logos to your slides, click on the Text button on the toolbar and select the type of text that you want to add.
-- To adjust the brightness, contrast, color, or orientation of your photos, click on the Edit button on the toolbar and select the option that you want to apply.
-- To crop, rotate, zoom, pan, or flip your photos and videos, click on the Transform button on the toolbar and select the option that you want to apply.
-- To apply effects and filters to your photos and videos, click on the Effects button on the toolbar and select the effect or filter that you want to apply.
-- To add background music and sound effects to your slideshow, click on the Audio button on the toolbar and select the option that you want to add.
-- To synchronize the music and the images according to the beat or the duration, click on the Sync button on the toolbar and select the option that you want to use.
-- To add transitions between the slides, click on the Transitions button on the toolbar and select the transition that you want to use.
-- To preview your slideshow before burning it or exporting it, click on the Preview button on the toolbar and watch your slideshow on the screen.
-
- How to add effects, music, and transitions to your slideshows
- One of the best features of Memories On TV is that it allows you to add various effects, music, and transitions to your slideshows. Here are some tips on how to use them effectively:
-
- - To make your slideshows more dynamic and interesting, use different effects and filters for different photos and videos. For example, you can use a sepia effect for old photos, a black-and-white effect for vintage photos, a blur effect for dreamy photos, etc.
- - To make your slideshows more emotional and expressive, use appropriate background music and sound effects for different themes and moods. For example, you can use romantic music for a wedding slideshow, cheerful music for a birthday slideshow, sad music for a memorial slideshow, etc.
- - To make your slideshows more smooth and seamless, use transitions that match the style and pace of your slideshow. For example, you can use a fade transition for a slow-paced slideshow, a wipe transition for a fast-paced slideshow, a dissolve transition for a soft slideshow, etc.
-
- How to burn your slideshows to DVD or share them online
- After creating your slideshow with Memories On TV, you can burn it to DVD or CD with a built-in menu system, or export it as a video file that can be played on any device or platform. You can also share it online via YouTube, Facebook, or email. Here are some tips on how to do it:
-
- - To burn your slideshow to DVD or CD, click on the Burn button on the toolbar. Select whether you want to burn it as DVD-Video, VCD, SVCD, or MiniDVD; whether to use NTSC or PAL format; a 4:3 or 16:9 aspect ratio; stereo or surround sound; and whether to create a menu system for your disc. Click on Burn again, insert a blank DVD or CD into your drive, and wait for the burning process to finish.
- - To export your slideshow as a video file, click on the Export button on the toolbar. Select whether to export it as an AVI, MPEG, WMV, MOV, MP4, FLV, or MKV file; whether to use high- or low-quality output; and whether to use custom settings or presets. Click on Export again, choose a location to save it on your computer, and wait for the export to finish.
- - To share your slideshow online, click on the Publish Online button on the toolbar. Select whether to share it via YouTube, Facebook, or email, and enter the required information such as account details, title, description, and tags. Click on Publish Online again and wait for the upload to finish. You will get a link that you can copy and share with anyone.
-
- Conclusion
- In conclusion, Memories On TV is a great software program that allows you to create photo/video slideshows that you can watch on your TV or computer, or share online via YouTube, Facebook, or email. You can use it to make slideshows for weddings, birthdays, anniversaries, vacations, or any other occasion that you want to remember and celebrate. And with Crack Memories On Tv 4.1.1 32l, you can activate your copy of Memories On TV for free and unlock all the features and benefits of this software without paying anything.
- Summary of main points
- In this article, we have told you everything you need to know about Memories On TV and Crack Memories On Tv 4.1.1 32l, including what they are, how they work, how to download and install them, and how to use them to create amazing slideshows that will impress your friends and family. We have also given some tips and tricks that will help you make the most out of these tools.
- Call to action
- If you want to try Memories On TV and Crack Memories On Tv 4.1.1 32l yourself and see how easy and fun it is to create stunning slideshows with your photos and videos, then don't wait any longer: download them now from the links provided in this article, follow the instructions, and start creating slideshows today! You won't regret it!
- Frequently Asked Questions (FAQs)
- Here are some frequently asked questions (FAQs) about Memories On TV and Crack Memories On Tv 4.1.1 32l:
-
-- Is Memories On TV safe to download and install?
-Yes, Memories On TV is safe to download and install. It does not contain any viruses, malware, spyware, or adware. It does not harm your computer or compromise your privacy.
-- Is Crack Memories On Tv 4.1.1 32l safe to download and use?
-Yes, Crack Memories On Tv 4.1.1 32l is safe to download and use. It does not contain any viruses, malware, spyware, or adware. It does not harm your computer or compromise your privacy. It only activates your copy of Memories On TV for free.
-- Does Crack Memories On Tv 4.1.1 32l work with any version of Memories On TV?
-No, Crack Memories On Tv 4.1.1 32l only works with Memories On TV 4.1.1. If you have a different version of Memories On TV, you need to find a different crack program that matches your version.
-- Does Crack Memories On Tv 4.1.1 32l work with any operating system?
-Yes, Crack Memories On Tv 4.1.1 32l works with any operating system that supports Memories On TV, such as Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, etc.
-- Does Crack Memories On Tv 4.1.1 32l affect the quality or performance of Memories On TV?
-No, Crack Memories On Tv 4.1.1 32l does not affect the quality or performance of Memories On TV. It only unlocks all the features and benefits of Memories On TV without paying anything.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Memoriesontv 4 Crack Serial 11 with Keygen [Working Tested].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Memoriesontv 4 Crack Serial 11 with Keygen [Working Tested].md
deleted file mode 100644
index e48e317bd14b61ebad2f256d50f5c7334618aeab..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Memoriesontv 4 Crack Serial 11 with Keygen [Working Tested].md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-Memoriesontv 4 Crack Serial 11: What You Need to Know
-If you are looking for a way to create stunning slideshows from your photos and videos, you might have heard of Memoriesontv. This software allows you to add effects, transitions, music, captions, and more to your slideshows and burn them to DVD or share them online. However, Memoriesontv is not free software, and you need to pay for a license key to use it without limitations. That's why some people resort to using crack serials to unlock the full features of Memoriesontv without paying. In this article, we will explain what Memoriesontv 4 Crack Serial 11 is, how to get it, what the risks and drawbacks of using it are, and what some alternatives are.
- Introduction
-What is Memoriesontv?
-Memoriesontv is a software developed by CodeJam Pte Ltd that allows you to create professional-looking slideshows from your photos and videos. You can import your media files from your computer, camera, scanner, or other devices, and organize them into albums. You can then customize your slideshows by adding effects, transitions, music, captions, clipart, and more. You can also edit your photos and videos with basic tools such as crop, rotate, resize, color correction, red-eye removal, etc. Once you are done with your slideshow creation, you can preview it on your computer screen or TV. You can also burn it to DVD or CD with a built-in disc menu creator. Alternatively, you can export it as a video file or upload it to YouTube or Facebook directly from the software.
- What is a crack serial?
-A crack serial is a code that is used to bypass the registration or activation process of a software. It is usually generated by hackers or crackers who modify the original software code to remove or disable the protection mechanisms that prevent unauthorized use. A crack serial can be entered into the software interface or applied as a patch file that modifies the software executable file. By using a crack serial, you can access the full features of a software without paying for a license key.
- Why do people use crack serials for Memoriesontv?
-Some people use crack serials for Memoriesontv because they want to save money and avoid paying for a license key. The official price of Memoriesontv is $59.99 for the Pro edition and $39.99 for the Standard edition. Some people might find this price too expensive or unreasonable for a slideshow software. They might also think that they will only use the software once or twice and not need it anymore. Therefore, they look for ways to get the software for free or at a lower cost.
- How to get Memoriesontv 4 Crack Serial 11
-Download from official website
-The first option to get Memoriesontv 4 Crack Serial 11 is to download it from the official website of CodeJam Pte Ltd. The website offers a free trial version of Memoriesontv that you can download and install on your computer. The trial version has some limitations such as watermarking your slideshows, limiting the number of photos per album, and restricting some features such as DVD burning and video exporting. However, you can remove these limitations by entering a crack serial that you can find on various websites on the internet. Some examples of websites that provide crack serials for Memoriesontv are Smart Serials, KeyGenNinja, and SerialBay. These websites claim to offer valid and working crack serials for various versions of Memoriesontv, including version 4.
- Download from third-party websites
-The second option to get Memoriesontv 4 Crack Serial 11 is to download it from third-party websites that host cracked versions of the software. These websites offer direct downloads of Memoriesontv that have been modified or patched by hackers or crackers to bypass the registration or activation process. You do not need to enter any crack serial or apply any patch file when you install these cracked versions of Memoriesontv. Some examples of websites that offer cracked versions of Memoriesontv are Softpedia, Softonic, and FileHippo. These websites claim to offer safe and virus-free downloads of Memoriesontv that have been tested by their editors or users.
- Download from keygen or generator
-The third option to get Memoriesontv 4 Crack Serial 11 is to download it from a keygen or generator program that can create crack serials for various software products. A keygen or generator is a software tool that can generate random codes that match the algorithm or pattern of a specific software license key. You can download these programs from various websites on the internet and run them on your computer. Some examples of websites that offer keygen or generator programs for Memoriesontv are YouTube, Praxis Benefits, and SoundCloud. These websites claim to offer working and verified keygen or generator programs for Memoriesontv that can produce valid and unlimited crack serials.
- Risks and drawbacks of using Memoriesontv 4 Crack Serial 11
-Legal issues
-One of the main risks of using Memoriesontv 4 Crack Serial 11 is that it is illegal and unethical. By using a crack serial, you are violating the terms and conditions of CodeJam Pte Ltd and infringing their intellectual property rights. You are also depriving them of their rightful revenue and profit from their software product. This could result in legal consequences such as fines, lawsuits, or even criminal charges if you are caught using or distributing crack serials for Memoriesontv.
- Malware and viruses
-Another risk of using Memoriesontv 4 Crack Serial 11 is that it could expose your computer to malware and viruses. Since crack serials are generated by hackers or crackers who have malicious intentions, they could embed harmful code into the crack serial itself or into the software executable file that they modify or patch. This could compromise your computer's security and privacy by installing spyware, ransomware, trojans, worms, or keyloggers. A related drawback of the online alternatives discussed below is that they might require an account or subscription to use some features or remove watermarks.
- Conclusion
-Memoriesontv 4 Crack Serial 11 is a code that can unlock the full features of Memoriesontv, a software that allows you to create slideshows from your photos and videos. However, using a crack serial is illegal, unethical, risky, and not recommended. You could face legal issues, malware and viruses, performance and quality issues, and other problems by using a crack serial. Therefore, it is better to use alternatives to using Memoriesontv 4 Crack Serial 11, such as buying a licensed version of Memoriesontv, using a free or open-source slideshow software, or using an online slideshow maker. These alternatives are safer, legal, and more reliable than using a crack serial.
- FAQs
-What is the difference between Memoriesontv Pro and Standard editions?
-The Pro edition of Memoriesontv has more features than the Standard edition, such as video support, clipart library, disc menu creation, video export options, and more. The Pro edition also costs more than the Standard edition ($59.99 vs $39.99).
- How can I get technical support or customer service from CodeJam Pte Ltd?
-You can get technical support or customer service from CodeJam Pte Ltd by visiting their website and clicking on the Support tab. You can also email them at support@codejam.com or call them at +65 6220 8837.
- What are some examples of free or open-source slideshow software?
-Some examples of free or open-source slideshow software are LibreOffice Impress, OpenOffice Impress, Google Slides, and Zoho Show. These programs allow you to create and edit slideshows from your photos and videos with various effects, transitions, music, captions, and more.
- What are some examples of online slideshow makers?
-Some examples of online slideshow makers are Canva, Prezi, Powtoon, and Visme. These online tools allow you to choose from hundreds of templates, themes, and styles for your slideshows. You can also add effects, transitions, music, captions, and more to your slideshows with drag-and-drop features.
- What are some advantages and disadvantages of using Memoriesontv?
-Some advantages of using Memoriesontv are that it allows you to create professional-looking slideshows from your photos and videos with various effects, transitions, music, captions, and more. It also allows you to burn your slideshows to DVD or CD with a built-in disc menu creator or export them as video files or upload them to YouTube or Facebook directly from the software. Some disadvantages of using Memoriesontv are that it is not free software and requires a license key to use it without limitations. It also might not have as many templates or options as some other slideshow software or online tools.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Descargar Extreme Car Driving Simulator APK Un juego de conduccin realista y divertido.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Descargar Extreme Car Driving Simulator APK Un juego de conduccin realista y divertido.md
deleted file mode 100644
index b0d7256f1666ef9168265f6eedadf58d012c1fd7..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Descargar Extreme Car Driving Simulator APK Un juego de conduccin realista y divertido.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-Extreme Car Driving Simulator: A Realistic and Fun Driving Game for Android
-Do you love driving cars? Do you want to experience the thrill of driving in a realistic and open world environment? If yes, then you should try Extreme Car Driving Simulator, a popular and exciting driving game for Android devices. In this game, you can drive various cars with different features and performance, and enjoy the realistic physics and car damage effects. You can also explore a large and detailed city with traffic, ramps, obstacles, and more. Whether you want to drive fast, drift, or crash your car, Extreme Car Driving Simulator will give you the freedom and fun you are looking for.
-What is Extreme Car Driving Simulator?
-Extreme Car Driving Simulator is a driving game developed by AxesInMotion Racing, a studio that specializes in creating realistic and immersive car games. The game was released in 2015 and has since gained over 100 million downloads on Google Play Store. It is one of the best-rated driving games on the platform, with an average rating of 4.3 out of 5 stars.
-Features of Extreme Car Driving Simulator
-Extreme Car Driving Simulator has many features that make it stand out from other driving games. Here are some of them:
-Drive with traffic
-You can choose to drive with traffic or without traffic in the game. Driving with traffic will make the game more challenging and realistic, as you will have to avoid collisions and follow the traffic rules. You can also honk your horn, flash your lights, and use your indicators to communicate with other drivers.
-Full real HUD
-The game has a full real HUD that shows you important information such as your speed, gear, revs, and fuel level. You can also see the status of your ABS, TC, and ESP systems, which you can turn on or off depending on your preference.
-ABS, TC and ESP simulation
-The game simulates the anti-lock braking system (ABS), traction control (TC), and electronic stability program (ESP) of real cars. These systems help you control your car better and prevent skidding, spinning, or losing control. You can also turn them off if you want to test your driving skills without any assistance.
-Explore a detailed open world environment
-The game has a large and detailed open world environment that you can explore freely. You can drive around the city, which has various buildings, roads, bridges, tunnels, and landmarks. You can also find ramps, loops, obstacles, and other elements that you can use to perform stunts and tricks. The game also has different weather conditions and time of day effects that add to the realism and variety of the game.
-Realistic car damage
-The game has a realistic car damage system that shows you the effects of your driving actions. You can see your car get dented, scratched, or smashed depending on how hard you hit something. You can also see parts of your car fall off or fly away after a collision. The game also has a repair button that you can use to fix your car instantly if you don't want to see it damaged.
-Accurate physics
-The game has an accurate physics engine that makes the driving experience more realistic and fun. You can feel the weight, speed, and inertia of your car as you drive it. You can also see how your car reacts to different surfaces, slopes, curves, and jumps.
-Control your car with different options
-The game gives you different options to control your car. You can choose between tilt, buttons, or steering wheel modes. You can also adjust the sensitivity and position of the controls according to your liking. You can also change the camera view from inside or outside the car, or use the free camera mode to see your car from any angle.
-How to download and install Extreme Car Driving Simulator APK?
-If you want to play Extreme Car Driving Simulator on your Android device, you will need to download and install the APK file of the game. APK stands for Android Package Kit, and it is a file format that contains all the necessary files and data for an Android application. Here are the requirements and steps to download and install Extreme Car Driving Simulator APK:
-Requirements for Extreme Car Driving Simulator APK
-
-- You will need an Android device that runs on Android 4.1 or higher.
-- You will need at least 100 MB of free storage space on your device.
-- You will need a stable internet connection to download the APK file.
-- You will need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
-Steps to download and install Extreme Car Driving Simulator APK
-
-- Go to a trusted and reliable website that offers the APK file of Extreme Car Driving Simulator. You can use this link as an example.
-- Click on the download button and wait for the APK file to be downloaded on your device.
-- Once the download is complete, locate the APK file in your device's file manager and tap on it (it is worth sanity-checking the file first, as sketched after these steps).
-- Follow the instructions on the screen to install the game on your device.
-- After the installation is done, you can launch the game from your app drawer or home screen and enjoy driving.
-
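- Because APK files obtained outside the Play Store are a common malware vector, it can help to inspect the file on a computer before transferring it to your phone. Below is a minimal Python sketch: it hashes the file with SHA-256 (so you can compare against a value from a source you trust) and, since an APK is just a ZIP archive, confirms that it opens as one and contains the mandatory AndroidManifest.xml. The file name is an assumption — use whatever name your download actually has.

```python
# Minimal sketch: sanity-check a downloaded APK before installing it.
# An APK is a ZIP archive, so it must open as one and must contain
# AndroidManifest.xml. APK_PATH is an assumed file name.
import hashlib
import zipfile

APK_PATH = "extreme-car-driving-simulator.apk"

# 1. Compute the SHA-256 so it can be compared with a trusted published value.
digest = hashlib.sha256()
with open(APK_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
print(f"SHA-256: {digest.hexdigest()}")

# 2. Confirm the file is a valid ZIP archive with the mandatory manifest.
with zipfile.ZipFile(APK_PATH) as apk:
    names = apk.namelist()
    print("Valid ZIP with", len(names), "entries")
    print("Contains AndroidManifest.xml:", "AndroidManifest.xml" in names)
```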
-Why should you play Extreme Car Driving Simulator?
-Extreme Car Driving Simulator is a game that will appeal to anyone who loves driving cars and wants to have a realistic and fun experience. Here are some of the pros and cons of playing this game:
-Pros of Extreme Car Driving Simulator
-
-- The game has high-quality graphics and sound effects that create an immersive atmosphere.
-- The game has a variety of cars with different features and performance that you can unlock and customize.
-- The game has a large and detailed open world environment that you can explore freely and find many surprises.
-- The game has a realistic physics and car damage system that makes the driving experience more challenging and fun.
-- The game has different modes and options that you can choose from depending on your mood and preference.
-
-Cons of Extreme Car Driving Simulator
-
-- The game may have some bugs and glitches that affect the gameplay and performance.
-- The game may have some ads and in-app purchases that may interrupt or limit your enjoyment.
-- The game may require a lot of battery power and storage space on your device.
-
- Conclusion
- Extreme Car Driving Simulator is a driving game that will give you a realistic and fun driving experience on your Android device. You can drive various cars with different features and performance, and enjoy the realistic physics and car damage effects. You can also explore a large and detailed city with traffic, ramps, obstacles, and more. Whether you want to drive fast, drift, or crash your car, Extreme Car Driving Simulator will give you the freedom and fun you are looking for. You can download and install the APK file of the game from a trusted website and start driving today.
- If you have any questions or feedback about Extreme Car Driving Simulator, feel free to ask them in the comments section below. Here are some FAQs that may help you:
- Frequently Asked Questions
-
- - Is Extreme Car Driving Simulator free?
- Yes, Extreme Car Driving Simulator is free to download and play. However, it may have some ads and in-app purchases that may affect your gameplay or enjoyment.
- - Is Extreme Car Driving Simulator safe?
- Yes, Extreme Car Driving Simulator is safe to play as long as you download it from a trusted and reliable website. However, you should always be careful when downloading any app from unknown sources, as they may contain viruses or malware that may harm your device or data.
- - Is Extreme Car Driving Simulator offline?
- No, Extreme Car Driving Simulator requires an internet connection to run properly. You will need a stable internet connection to download the APK file, update the game, and access some of the features and content of the game.
- - How can I unlock more cars in Extreme Car Driving Simulator?
- You can unlock more cars in Extreme Car Driving Simulator by earning coins and gems in the game. You can earn coins and gems by completing missions, driving with traffic, performing stunts, and watching ads. You can also buy coins and gems with real money if you want to unlock cars faster.
- - How can I customize my car in Extreme Car Driving Simulator?
- You can customize your car in Extreme Car Driving Simulator by going to the garage menu and selecting the car you want to modify. You can change the color, wheels, spoilers, and stickers of your car. You can also upgrade the engine, brakes, suspension, and turbo of your car to improve its performance.
- - How can I contact the developer of Extreme Car Driving Simulator?
- You can contact the developer of Extreme Car Driving Simulator by sending an email to support@axesinmotion.com. You can also visit their website or follow them on Facebook, Twitter, or Instagram for more information and updates about the game.
-
- I hope you enjoyed reading this article and learned something new about Extreme Car Driving Simulator. If you did, please share it with your friends and family who might be interested in this game. Thank you for your time and attention.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Descarga Soul Knight Mod APK con gemas y monedas ilimitadas y todo desbloqueado.md b/spaces/1phancelerku/anime-remove-background/Descarga Soul Knight Mod APK con gemas y monedas ilimitadas y todo desbloqueado.md
deleted file mode 100644
index 54811ca10b0e3f546a1e203fafb35ed85d1ce296..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Descarga Soul Knight Mod APK con gemas y monedas ilimitadas y todo desbloqueado.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-Soul Knight Mod Apk Todo Desbloqueado: How to Download and Play the Most Fun Dungeon Game
- Do you like action, adventure, and exploration games? Are you a fan of roguelike games with pixel-art graphics and a huge variety of weapons and characters? Do you want to enjoy a gaming experience without limits or restrictions? Then you will love Soul Knight Mod Apk Todo Desbloqueado (Spanish for "everything unlocked"), a modified version of the original game that gives you access to all of the game's features and benefits right from the start.
- In this article, we will tell you everything you need to know about Soul Knight, the most fun and addictive dungeon game you can play on your Android or iOS device. We will explain what Soul Knight is, what Soul Knight Mod Apk Todo Desbloqueado is, how to download and install it, how to play it, and some tips and tricks to improve your performance. In addition, at the end of the article, we will answer some frequently asked questions about the game.
- What is Soul Knight?
- Soul Knight is an action role-playing game developed by ChillyRoom Inc. and released in February 2017 for Android and iOS. The game is inspired by Enter The Gungeon (a roguelike shooter produced by Dodge Roll and Devolver Digital).
- Features of the game
- Soul Knight has the following features:
-
-- More than 20 unique heroes with special abilities. You can choose between a rogue, an elf archer, a wizard, and many more.
-- More than 400 different weapons that you can use to take down the monsters that attack you. There are pistols, shotguns, rifles, swords, flamethrowers, and much more.
-- A randomly generated dungeon system that makes every run different and unique. You never know what awaits you in the next room.
-- A local co-op multiplayer mode that lets you play with up to 3 friends on the same screen. You can share weapons and items with your teammates and help each other out.
-- A retro, colorful art style reminiscent of classic 8-bit and 16-bit games. The game has a nostalgic, charming look.
-- An irreverent, funny sense of humor that comes through in the dialogue, the characters, and the situations. The game doesn't take itself too seriously and will make you laugh.
-
- Story of the game
- The story of Soul Knight is very simple and doesn't matter much to the gameplay. In the game's own words, "in a time of swords and guns, the magical stone that keeps the world in balance has been stolen by high-tech aliens. The world hangs by a thread. It all depends on you retrieving the magical stone...".
- So your goal is to enter the monster- and alien-infested dungeons, find the magical stone, and return it to its place. Along the way, you will have to face all kinds of enemies and bosses, collect weapons and items, and survive traps and obstacles.
- ¿Qué es Soul Knight Mod Apk Todo Desbloqueado?
- Soul Knight Mod Apk Todo Desbloqueado es una versión modificada del juego original que te permite disfrutar de todas las ventajas y beneficios del juego desde el principio. Con este mod apk, no tendrás que gastar dinero real ni esperar a desbloquear los contenidos del juego. Podrás acceder a todo lo siguiente:
- Ventajas de usar el mod apk
-
-- Todos los héroes desbloqueados. Podrás elegir entre más de 20 personajes diferentes con habilidades únicas y personalizarlos a tu gusto.
-- Todas las armas desbloqueadas. Podrás usar cualquiera de las más de 400 armas disponibles en el juego sin restricciones ni limitaciones.
-- Todos los objetos desbloqueados. Podrás equiparte con todo tipo de objetos que te ayudarán en tu aventura, como pociones, granadas, anillos, mascotas y más.
-- Todos los niveles desbloqueados. Podrás explorar todos los calabozos del juego sin tener que completarlos en orden ni cumplir con ciertos requisitos.
-- Todos los modos de juego desbloqueados. Podrás jugar en el modo normal, el modo difícil, el modo jefe o el modo infinito según tu preferencia.
-- Dinero ilimitado. Podrás comprar todo lo que quieras en las tiendas del juego sin preocuparte por el precio ni por tu saldo.
-- Energía ilimitada. Podrás usar tu habilidad especial tantas veces como quieras sin tener que esperar a que se recargue.
-- Sin anuncios. Podrás jugar sin interrupciones ni molestias por parte de la publicidad.
-
- How to download and install the mod apk
- To download and install the Soul Knight Todo Desbloqueado mod apk, just follow these steps:
-
-- Download the apk file from this link. The file is around 100 MB in size and is safe and reliable.
-- Open the apk file from your file manager or from your device's downloads folder.
-- Accept the permissions and start the installation. The process may take a few seconds or minutes depending on your connection speed and your device.
-- Once the game is installed, open it and enjoy Soul Knight Mod Apk Todo Desbloqueado.
- To explore the dungeons, just use the virtual joystick at the bottom left to move and the fire button at the bottom right to attack. You can also use the special-ability button at the top right to activate your hero's unique power. In addition, you can switch weapons by tapping the weapon icon at the top left, or pick up new weapons you find along the way.
- Each dungeon has several rooms you will have to get through before reaching the boss. Some rooms are empty, others hold enemies you will have to eliminate, and others contain objects or characters that can help or harm you. For example, you may find shops where you can buy weapons or items, statues that grant blessings or curses, chests that hold treasure or traps, or secondary characters that offer you quests or advice.
- Use the weapons and items
- One of the most fun and varied features of Soul Knight is the huge number of weapons and items you can use on your adventure. There are more than 400 different weapons you can find, buy, or fuse, each with its own shot type, damage, fire rate, range, and special effect. For example, some weapons fire normal bullets, others fire laser beams, others fire homing missiles, others fire fireballs, and so on.
- There are also many items you can equip or use to improve your performance or survival. For example, there are potions that heal you or restore your energy, grenades that explode and damage nearby enemies, rings that grant permanent or temporary bonuses, pets that accompany you and help you in combat, and much more.
- To use weapons and items, just tap the corresponding icon on the screen. You can carry up to two weapons at a time and switch between them whenever you want. You can also carry up to three different items and use them when you need them. In addition, you can fuse or upgrade your weapons at the forges and workshops you find in the dungeons.
- Fight the enemies and bosses
- The main challenge of Soul Knight is facing the many enemies and bosses that will attack you mercilessly in the dungeons. There are more than 200 different enemy types, each with its own appearance, behavior, and abilities. For example, there are skeletons that throw bones at you, zombies that bite you, spiders that shoot webs, robots that fire lasers, aliens that teleport, and many more.
- To fight the enemies, you will have to use your weapons, items, and special ability with intelligence and strategy. You will need to consider the type of weapon you are using, the type of enemy you are facing, and the environment you are in. You will also have to dodge enemy attacks by moving around the screen and taking advantage of obstacles and cover.
- At the end of each dungeon, you will face a final boss that is much stronger and tougher than the other enemies. Each boss has its own design, attack pattern, and weakness. For example, one boss is a giant plant that shoots thorns at you, another is an ice dragon that freezes you, another is a dark knight that chases you with its sword, and there are many more.
- To defeat the bosses, you will have to use your best weapons, your most useful items, and your most powerful special ability. Watch their movements and their tells so you can anticipate and avoid their attacks. Also look for their weak points and exploit them to deal extra damage.
- Tips and tricks for playing Soul Knight Mod Apk Todo Desbloqueado
- Soul Knight Mod Apk Todo Desbloqueado is a very fun and addictive game, but it can also be very challenging and frustrating if you don't know how to play it well. So here are some tips and tricks to help you improve your performance and enjoy the game even more:
- Make the most of your special ability
- Each hero has a special ability that can make the difference in combat. These abilities can be offensive, defensive, or supportive, and they have a cooldown that varies by hero. For example, the knight can wield two weapons at once for a few seconds, the assassin can turn invisible and throw knives, the alchemist can throw poison bombs and heal himself, and so on.
- To make the most of your special ability, you need to know when and how to use it. Don't waste it in unnecessary or easy situations; save it for the most difficult or decisive moments. For example, you can use it to escape a dangerous situation, wipe out a group of enemies, or take on a boss. Also keep in mind the kind of ability you have and how it complements your weapon and play style.
- Manage your energy and your health
- Another important aspect of Soul Knight is managing your energy and your health. Energy is the resource you need to use your weapons, while health is your life indicator. Both are shown at the top left of the screen.
- To manage your energy and health, keep the following in mind:
-
-- Don't waste energy by shooting aimlessly or using weapons that consume a lot of energy. Try to shoot only when you have a clear target, and use weapons suited to your energy level.
-- Restore your energy with the blue potions you find along the way or buy in the shops. You can also recharge your energy at the fountains and generators found in some dungeons.
-- Don't risk your health by taking unnecessary damage or exposing yourself to enemy fire. Try to dodge enemy attacks by moving around the screen and taking advantage of obstacles and cover.
-- Restore your health with the red potions you find along the way or buy in the shops. You can also heal yourself with certain special abilities or items, such as the ring of life or the vampire pet.
-
- Look for the statues and chests
- The dungeons of Soul Knight hide many secrets and surprises you can discover by exploring each room thoroughly. Among these secrets are the statues and chests, which can benefit or harm you depending on what you do with them.
- Statues are objects depicting different characters or creatures from the game. You can interact with them using a gold coin or a silver coin. If you use a gold coin, the statue grants you a blessing, a temporary or permanent positive effect. For example, it can increase your damage, speed, resistance, or luck. If you use a silver coin, the statue gives you a curse, a temporary or permanent negative effect. For example, it can reduce your damage, speed, resistance, or luck. So be careful about what you use and what you choose.
- Chests are objects containing treasure or traps. You can open them with a key or break them with your weapon. If you open a chest with a key, you will get a treasure, which can be a weapon, an item, or a coin. If you break a chest with your weapon, you may get either a treasure or a trap, which can be an enemy, an explosion, or a curse. So think carefully about whether the risk is worth it.
- Fuse and upgrade your weapons
- Another tip for playing Soul Knight Mod Apk Todo Desbloqueado is to fuse and upgrade your weapons to make them more powerful and effective. To fuse weapons, use the forges found in some dungeons. Forges let you combine two weapons of the same type into a new weapon with better stats and effects. For example, you can fuse two pistols into a dual pistol, or two shotguns into a double-barreled shotgun.
- To upgrade your weapons, use the workshops found in some dungeons. Workshops let you raise the level of your weapons using gold coins. Leveling up a weapon increases its damage, fire rate, range, and special effect. For example, you can upgrade a normal pistol so that it shoots faster, farther, and harder.
- Conclusion
- Soul Knight Mod Apk Todo Desbloqueado is a very fun and addictive action RPG that will give you hours of fun and entertainment. With this mod apk, you can enjoy all of the game's advantages and benefits without having to spend money or wait to unlock content. You will have access to all heroes, all weapons, all items, all levels, and all game modes from the very start.
- You will also be able to explore the randomly generated dungeons, use the most varied and original weapons and items, fight the most challenging and entertaining enemies and bosses, and play with your friends in local co-op multiplayer. All of this with a colorful retro art style and an irreverent sense of humor.
- If you like roguelike games with pixel graphics and a huge variety of weapons and characters, don't hesitate to download Soul Knight Mod Apk Todo Desbloqueado and start playing right away. We promise you won't regret it.
- Frequently asked questions
- Below, we answer some frequently asked questions about Soul Knight Mod Apk Todo Desbloqueado:
- Is it safe to download and install Soul Knight Mod Apk Todo Desbloqueado?
- Yes, it is safe to download and install Soul Knight Mod Apk Todo Desbloqueado from the link provided in this article. The apk file is safe and reliable and contains no viruses or malware. In addition, you don't need to root or jailbreak your device to install it.
- Is Soul Knight Mod Apk Todo Desbloqueado compatible with my device?
- Soul Knight Mod Apk Todo Desbloqueado is compatible with most Android and iOS devices running version 4.1 of the operating system or higher. However, some devices may not be compatible or may have performance or stability problems. In that case, we recommend trying the original game or contacting the developer to resolve the issue.
- Can I play online with Soul Knight Mod Apk Todo Desbloqueado?
- No, you cannot play Soul Knight Mod Apk Todo Desbloqueado online with players outside your local network. The game only has a local co-op multiplayer mode that lets you play with up to 3 friends on the same screen. To play with your friends, just connect to the same Wi-Fi network and tap the multiplayer icon in the main menu. Then you can pick your heroes and enter the dungeons together.
- Can I update Soul Knight Mod Apk Todo Desbloqueado?
- Yes, you can update Soul Knight Mod Apk Todo Desbloqueado whenever a new version is available. To update the game, just download the new apk file from the same link provided in this article and follow the same steps as for installing it. There is no need to uninstall the previous version; just overwrite it with the new one. That way, you can enjoy the game's latest features and improvements.
- What other games similar to Soul Knight Mod Apk Todo Desbloqueado do you recommend?
- If you like Soul Knight Mod Apk Todo Desbloqueado, you may also enjoy other similar games with a comparable play style or inspired by the same genre. Some of these games are:
-
-- Enter the Gungeon: the game that inspired Soul Knight, a roguelike shooter where you explore a dungeon full of guns and living bullets.
-- Archero: an action-adventure game where you control an archer who must advance through levels full of enemies and obstacles.
-- Dead Cells: a roguelike action-platformer where you control an immortal warrior trying to escape a cursed prison.
-- The Binding of Isaac: a roguelike action-adventure game where you control a boy fleeing his crazed mother and exploring a basement full of monsters and secrets.
-- Rogue Legacy: a roguelike action-platformer where you control a hero exploring a generational castle full of dangers and surprises.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download and Play Special Forces Group 2 with God Mod APK and Get Ready for Intense Shooter Action.md b/spaces/1phancelerku/anime-remove-background/Download and Play Special Forces Group 2 with God Mod APK and Get Ready for Intense Shooter Action.md
deleted file mode 100644
index c5e3e755fbb128760d4917d066fc4092430615e8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download and Play Special Forces Group 2 with God Mod APK and Get Ready for Intense Shooter Action.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-Special Forces Group 2: A Thrilling FPS Game for Android
-If you are a fan of first-person shooter (FPS) games, you might have heard of Special Forces Group 2, a popular game for Android devices. This game is developed by ForgeGames, a company that specializes in creating realistic and immersive shooting games. Special Forces Group 2 is one of their best creations, as it offers a variety of game modes, weapons, maps, and characters to choose from. You can play this game solo or with your friends online or offline, and enjoy the adrenaline rush of shooting your enemies in different scenarios.
-Features of Special Forces Group 2
-Special Forces Group 2 has many features that make it stand out from other FPS games. Here are some of them:
-special forces group 2 god mod apk download
DOWNLOAD ✶ https://jinyurl.com/2uNTYZ
-
-- Singleplayer and multiplayer modes: You can play this game alone with bots or with other players online or via WiFi router. You can also create your own room and invite your friends to join you.
-- 9 game modes to choose from: You can choose from different game modes, such as Classic, Resurrection, Capture the Flag, Zombie Mode, BombMode, Knives, Deathmatch, ArmsRace, and Sniper. Each mode has its own rules and objectives, so you will never get bored.
-- Weapon skins and character customization: You can customize your weapons and characters with different skins. There are 134 weapon skins and 8 characters per team to choose from. You can also change the color of your crosshair, blood, and chat.
-- 30+ maps to explore: You can play on different maps, such as desert, city, snow, forest, and more. Each map has its own layout, obstacles, and hiding spots. You can also use grenades, bulletproof vests, and other items to help you in your missions.
-
-How to Download and Install Special Forces Group 2 on Android
-If you want to download and install Special Forces Group 2 on your Android device, you can follow these simple steps:
-
-- Step 1: Go to the official website or a trusted source that provides the APK file and the OBB file for Special Forces Group 2. For example, you can go to APKdone, a website that offers free and safe downloads for Android games.
-- Step 2: Download the APK file and the OBB file for Special Forces Group 2. The APK file is about 40 MB in size, while the OBB file is about 300 MB in size. Make sure you have enough storage space on your device before downloading.
-- Step 3, you should be careful of the compatibility issues and the risks of using God Mod APK, as mentioned above.
-- Q4: Can I play online with God Mod APK?
-- A4: You can play online with God Mod APK, but you may face some problems, such as lag, disconnects, or bans. You may also encounter other players who are using God Mod APK or other cheats, which may ruin your gaming experience.
-- Q5: What are some alternatives to God Mod APK?
-- A5: Some alternatives to God Mod APK are normal APK, modded OBB, or hacked data. These are different ways of modifying the game files to get some advantages in the game. However, they also have their own drawbacks and risks, so you should use them with caution.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download emulator black ps2 android with these simple steps.md b/spaces/1phancelerku/anime-remove-background/Download emulator black ps2 android with these simple steps.md
deleted file mode 100644
index 990b7d39de39086f8d751f2ce624fa3d8651058b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download emulator black ps2 android with these simple steps.md
+++ /dev/null
@@ -1,224 +0,0 @@
-
-How to Download and Install Emulator Black PS2 Android
-If you are a fan of PlayStation 2 games and want to play them on your Android device, you might be interested in Emulator Black PS2 Android. This is a powerful and reliable PS2 emulator that can run most of the PS2 games smoothly and with high-quality graphics. In this article, we will show you what Emulator Black PS2 Android is, why you need it, how to download and install it, how to configure and optimize it, and how to solve some common issues. Let's get started!
- What is Emulator Black PS2 Android?
-Emulator Black PS2 Android is a new PS2 emulator that was launched in late 2021 by a developer named Tahlreth. It is based on the PCSX2 emulator, a popular and well-established emulator for PC. Tahlreth obtained permission from the PCSX2 developers to use their code and licensed it under the LGPL. Unlike some other shady PS2 emulators on Android, such as DamonPS2, Emulator Black PS2 Android does not steal code or charge money for its features.
-download emulator black ps2 android
Download Zip ⏩ https://jinyurl.com/2uNKk9
-Emulator Black PS2 Android has many features that make it one of the best PS2 emulators on Android. Some of these features are:
-
-- It supports both OpenGL and Vulkan graphics renderers, which can improve the graphics quality and performance of the games.
-- It allows you to adjust various settings, such as resolution, framerate, aspect ratio, anti-aliasing, texture filtering, audio latency, controller layout, and more.
-- It supports save states, which let you save and load your game progress at any point.
-- It supports widescreen patches and upscaling, which can enhance the appearance of the games on modern devices.
-- It supports touchscreen and Bluetooth controller input, which gives you more options to control the games.
-- It supports loading games from your device storage or external sources, such as Google Drive or Dropbox.
-
- Why do you need Emulator Black PS2 Android?
-Emulator Black PS2 Android is a great way to enjoy your favorite PS2 games on your Android device. There are many benefits of using this emulator, such as:
-
-- You can play hundreds of PS2 games that are not available on any other platform.
-- You can play PS2 games anytime and anywhere without needing a console or a TV.
-- You can play PS2 games with better graphics and sound than the original hardware.
-- You can play PS2 games with more convenience and customization than the original hardware.
-
-However, playing PS2 games on Android is not an easy task. There are many challenges that you might face when using a PS2 emulator on Android, such as:
-
-- You need a powerful device that can handle the high demands of PS2 emulation. The developer of Emulator Black PS2 Android recommends a Snapdragon 845-level processor or better, with four large CPU cores (Cortex-A75 or higher) and an Adreno GPU.
-- You need enough storage space to store the emulator app and the PS2 game files. The emulator app is about 30 MB, while the PS2 game files can range from 500 MB to 4 GB each.
-- You need to obtain the PS2 game files legally and ethically. You can either rip them from your own PS2 discs using a PC and a DVD drive, or download them from legitimate sources that have the permission of the game publishers. You should not download or share pirated or illegal PS2 game files.
-- You need to tweak and test the emulator settings for each game to find the best balance between quality and performance. Not all games will run perfectly on the emulator, and some might require specific settings or patches to work properly.
-
-Fortunately, Emulator Black PS2 Android is designed to overcome these challenges and provide you with the best PS2 emulation experience on Android. It has a user-friendly interface, a fast and stable performance, a high compatibility rate, and a helpful community. It also has regular updates and bug fixes that improve its functionality and features.
- How to download and install Emulator Black PS2 Android?
-Downloading and installing Emulator Black PS2 Android is very easy and straightforward. You have two options to get the emulator app on your device:
-
-- Download it from the official website. You can visit the website of Emulator Black PS2 Android at https://emulatorblackps2android.com and click on the download button. This will download the latest version of the emulator app as an APK file. You can then install it by tapping on the file and following the instructions.
-- Download it from the Google Play Store. You can also find Emulator Black PS2 Android on the Google Play Store by searching for its name. This will take you to the app page, where you can tap on the install button. This will download and install the emulator app automatically.
-
-After you have installed the emulator app, you need to grant it some permissions to access your device storage, camera, microphone, and location. These permissions are necessary for the emulator to function properly and load your PS2 games. You can grant these permissions by going to your device settings, finding Emulator Black PS2 Android in the list of apps, and toggling on the permissions.
- How to load PS2 games on Emulator Black PS2 Android?
-Once you have installed the emulator app and granted it the permissions, you can start loading your PS2 games on it. You have two options to load your PS2 games:
-
-- Load them from your device storage. If you have stored your PS2 game files on your device storage, such as your internal memory or SD card, you can load them directly from there. You just need to launch the emulator app, tap on the "Load Game" button, and browse to the folder where you have saved your PS2 game files. You can then select the game file that you want to play and tap on it.
-- Load them from external sources. If you have stored your PS2 game files on external sources, such as Google Drive or Dropbox, you can load them from there as well. You just need to launch the emulator app, tap on the "Load Game" button, and tap on the "Cloud" icon. This will open a menu where you can choose which cloud service you want to use. You can then sign in with your account and access your PS2 game files. You can then select the game file that you want to play and tap on it.
-
-Note that Emulator Black PS2 Android supports both ISO and CSO formats for PS2 game files. ISO is the standard format that preserves all the data of the original disc, while CSO is a compressed format that reduces the file size but may lose some quality or functionality. You can choose whichever format suits your preference and storage space.
- How to configure and optimize Emulator Black PS2 Android?
-Emulator Black PS2 Android has a lot of settings and options that you can adjust to configure and optimize the emulator according to your device and game. You can access these settings by tapping on the "Settings" button on the main menu of the emulator app. Here are some of the main settings and options that you can tweak:
-Graphics settings
-The graphics settings allow you to change the graphics renderer, resolution, framerate, aspect ratio, anti-aliasing, texture filtering, and other options that affect the visual quality and performance of the games. You can find these settings under the "Graphics" tab in the settings menu. Here are some of the graphics settings and what they do:
-
-| Setting | Description |
-| --- | --- |
-| Renderer | This lets you choose between OpenGL and Vulkan as the graphics renderer for the emulator. OpenGL is more compatible and stable, but Vulkan is more powerful and efficient. You can try both and see which one works better for your device and game. |
-| Resolution | This lets you choose the resolution of the game output. The higher the resolution, the sharper and clearer the game will look, but it will also consume more resources and battery. You can choose from several presets, such as native (the original resolution of the PS2), 2x native, 3x native, 4x native, or custom (where you can enter your own resolution). |
-| Framerate | This lets you choose the framerate of the game output. The higher the framerate, the smoother and more fluid the game will run, but it will also consume more resources and battery. You can choose from several presets, such as 30 FPS (the standard framerate of most PS2 games), 60 FPS (the ideal framerate for smooth gameplay), or custom (where you can enter your own framerate). |
-| Aspect ratio | This lets you choose the aspect ratio of the game output. The aspect ratio is the ratio between the width and height of the screen. You can choose from several presets, such as 4:3 (the original aspect ratio of most PS2 games), 16:9 (the widescreen aspect ratio of modern devices), or custom (where you can enter your own aspect ratio). |
-| Anti-aliasing | This lets you choose whether to enable or disable anti-aliasing for the game output. Anti-aliasing is a technique that smooths out the jagged edges of the graphics, making them look more realistic and less pixelated. However, it also consumes more resources and battery. You can choose from several levels of anti-aliasing, such as none, 2x, 4x, or 8x. |
-| Texture filtering | This lets you choose whether to enable or disable texture filtering for the game output. Texture filtering is a technique that improves the quality and sharpness of the textures, making them look more detailed and less blurry. However, it also consumes more resources and battery. You can choose from several levels of texture filtering, such as none, bilinear, trilinear, or anisotropic. |
-
-There are also some other options that you can toggle on or off in the graphics settings, such as:
-
-- Widescreen patch: This applies a patch to the game that makes it compatible with the widescreen aspect ratio, without stretching or cropping the image.
-- Upscaling: This enhances the resolution and quality of the game graphics, making them look more crisp and smooth.
-- Frame skipping: This skips some frames of the game output, making it run faster but less smoothly.
-- Vsync: This synchronizes the framerate of the game output with the refresh rate of your device screen, preventing screen tearing and stuttering.
-
-Sound settings
-The sound settings allow you to change the sound latency, volume, and quality of the games. You can find these settings under the "Sound" tab in the settings menu. Here are some of the sound settings and what they do:
-
-| Setting | Description |
-| --- | --- |
-| Latency | This lets you choose the latency of the sound output. The latency is the delay between the sound being generated by the emulator and being played by your device. The lower the latency, the more responsive and accurate the sound will be, but it will also consume more resources and battery. You can choose from several presets, such as low, medium, high, or custom (where you can enter your own latency). |
-| Volume | This lets you choose the volume of the sound output. You can adjust the volume by using a slider or entering a value between 0 and 100. |
-| Quality | This lets you choose the quality of the sound output. The quality is the fidelity and clarity of the sound, which depends on factors such as sampling rate, bit depth, and channels. The higher the quality, the better and richer the sound will be, but it will also consume more resources and battery. You can choose from several presets, such as low, medium, high, or custom (where you can enter your own quality). |
-
- Controls settings
-The controls settings allow you to change the input method, layout, sensitivity, and vibration of the games. You can find these settings under the "Controls" tab in the settings menu. Here are some of the controls settings and what they do:
-
-| Setting | Description |
-| --- | --- |
-| Input method | This lets you choose between touchscreen and Bluetooth controller as your input method for the games. If you choose touchscreen, you will see a virtual controller on your device screen that mimics the PS2 controller buttons and sticks. If you choose Bluetooth controller, you will need to pair your device with a compatible Bluetooth controller that has enough buttons and sticks to map to the PS2 controller. |
-| Layout | This lets you customize the layout of the virtual controller on your device screen. You can drag and drop each button and stick to any position on your screen. You can also resize and rotate them by using pinch and twist gestures. You can save your layout as a preset and load it for different games. |
-| Sensitivity | This lets you adjust the sensitivity of each stick on your virtual or Bluetooth controller. The sensitivity is how fast and responsive the stick is to your input. You can adjust the sensitivity by using a slider or entering a value between 0 and 100. |
-| Vibration | This lets you enable or disable vibration for your virtual or Bluetooth controller. Vibration is a feature that makes your controller rumble or shake when certain events happen in the game, such as shooting, hitting, or exploding. However, it also consumes more resources and battery. |
-
- Performance settings
-The performance settings allow you to change the CPU and GPU emulation modes, speed hacks, and power saving options of the games. You can find these settings under the "Performance" tab in the settings menu. Here are some of the performance settings and what they do:
-
-| Setting | Description |
-| --- | --- |
-| CPU emulation mode | This lets you choose between interpreter and recompiler as the CPU emulation mode for the games. The CPU emulation mode is how the emulator translates the PS2 CPU instructions to your device CPU instructions. Interpreter is more accurate and compatible, but slower and more resource-intensive. Recompiler is faster and more efficient, but less accurate and compatible. You can try both and see which one works better for your device and game. |
-| GPU emulation mode | This lets you choose between software and hardware as the GPU emulation mode for the games. The GPU emulation mode is how the emulator renders the PS2 graphics to your device screen. Software is more accurate and compatible, but slower and more resource-intensive. Hardware is faster and more efficient, but less accurate and compatible. You can try both and see which one works better for your device and game. |
-| Speed hacks | This lets you enable or disable some speed hacks for the games. Speed hacks are tricks that the emulator uses to boost the speed of the games, such as skipping some calculations, frames, or effects. However, they can also cause glitches, errors, or crashes in some games. You can choose from several presets, such as none, safe, balanced, or aggressive. |
-| Power saving | This lets you enable or disable some power saving options for the games. Power saving options are features that the emulator uses to reduce the battery consumption of your device, such as lowering the brightness, sound, or resolution of the games. However, they can also affect the quality and performance of the games. You can choose from several presets, such as none, low, medium, or high. |
-
- Conclusion
-Emulator Black PS2 Android is a fantastic PS2 emulator that can let you play your favorite PS2 games on your Android device with ease and enjoyment. It has many features and settings that you can customize and optimize to suit your preferences and needs. It also has a high compatibility rate and a supportive community that can help you with any issues or questions. If you are looking for a way to relive your PS2 memories or discover new PS2 gems on your Android device, you should definitely give Emulator Black PS2 Android a try!
- Frequently Asked Questions
-Here are some of the frequently asked questions about Emulator Black PS2 Android:
-
-- Is Emulator Black PS2 Android free?
-Yes, Emulator Black PS2 Android is completely free to download and use. It does not have any ads or in-app purchases. However, if you want to support the developer and the project, you can donate via PayPal or Patreon.
- - Is Emulator Black PS2 Android legal?
-Yes, Emulator Black PS2 Android is legal as long as you use it with your own legally obtained PS2 game files. You should not download or share any pirated or illegal PS2 game files.
- - Is Emulator Black PS2 Android safe?
-Yes, Emulator Black PS2 Android is safe as long as you download it from its official website or the Google Play Store. It does not contain any malware or viruses that can harm your device or data.
- - What are the minimum requirements for Emulator Black PS2 Android?
-The minimum requirements for Emulator Black PS2 Android are:
-
-- An Android device running Android 7.0 or higher.
-- A Snapdragon 845-level processor or better, with four large CPU cores (Cortex-A75 or higher) and an Adreno GPU.
-- At least 4 GB of RAM.
-- At least 10 GB of free storage space.
-- A stable internet connection.
-
- - Where can I get more information and help about Emulator Black PS2 Android?
-You can get more information and help about Emulator Black PS2 Android by visiting its official website at https://emulatorblackps2android.com, where you can find FAQs, tutorials, guides, forums, blogs, social media links, contact details, and more.
-
- I hope you enjoyed this article and learned something new about Emulator Black PS2 Android. If you have any feedback or suggestions, please let me know in the comments section below. Thank you for reading and happy gaming!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Smooth and Comprehensive Gameplay of Battle Royale 3D - Warrior63 with Mod APK.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Smooth and Comprehensive Gameplay of Battle Royale 3D - Warrior63 with Mod APK.md
deleted file mode 100644
index a0eb4b82b94301b4ac37a5940bb4391179cbac90..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy the Smooth and Comprehensive Gameplay of Battle Royale 3D - Warrior63 with Mod APK.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-Battle Royale 3D - Warrior63 Mod APK Download: A Guide for Android Users
- If you are a fan of survival shooting games, you might want to try Battle Royale 3D - Warrior63, a popular mobile game that challenges you to be the last man standing in fierce combat. In this article, we will tell you everything you need to know about this game, and how to download and install the mod apk version for free. Read on to find out more!
- What is Battle Royale 3D - Warrior63?
- Battle Royale 3D - Warrior63 is a game developed by LQ-GAME, a Chinese studio that specializes in creating action-packed games for Android devices. The game was released in 2020 and has since gained millions of downloads and positive reviews from players around the world.
-battle royale 3d warrior 63 mod apk download
Download ☆☆☆☆☆ https://jinyurl.com/2uNQig
- The game is inspired by the popular battle royale genre, where you compete with other players on a shrinking map and eliminate them until you are the only survivor. The game features a huge 4 km x 4 km battle map with varied terrain, such as land, sea, mountains, and buildings. You can also use vehicles and weapons to enhance your mobility and firepower.
- Features of the game
- Some of the features that make Battle Royale 3D - Warrior63 stand out from other similar games are:
-
-- A variety of weapons, such as pistols, rifles, submachine guns, sniper guns, grenades, and more.
-- A new weapon control system that makes shooting more smooth and stable.
-- A new custom key mapping that allows you to personalize your controls for a better gaming experience.
-- A new player level system that rewards you with coins and diamonds as you progress.
-- Three different game modes: Death Battle, Team Battle, and Training Challenge.
-- Optimized enemy direction tips and network stability.
-- A fix for a weapon picking bug.
-
- How to play the game
- The gameplay of Battle Royale 3D - Warrior63 is simple and straightforward. You start by choosing a game mode and entering a match with other players. You can either play solo or team up with your friends. Then, you parachute onto the map and search for weapons and resources. You have to be quick and careful, as the map will shrink over time and force you to move closer to your enemies. You also have to avoid the poison circle and enemy attacks, while looking for opportunities to defeat them. The last player or team alive wins the match.
- Why download the mod apk version?
- While Battle Royale 3D - Warrior63 is free to play, it also contains some in-app purchases that require real money. These include coins and diamonds that can be used to buy new weapons, skins, vehicles, and other items. If you want to enjoy the game without spending any money, you might want to download the mod apk version instead.
- Benefits of the mod apk
- The mod apk version of Battle Royale 3D - Warrior63 is a modified version that gives you some advantages over the original version. Some of the benefits of the mod apk are:
-
-- Unlimited coins and diamonds that can be used to buy anything you want in the game.
-- Optimized graphics and performance that make the game run faster and smoother on your device.
-- Weak enemy mode that makes your opponents easier to kill.
-
- How to download and install the mod apk
- To download and install the mod apk version of Battle Royale 3D - Warrior63, you need to follow these steps:
-
-- Go to a trusted website that provides the mod apk file for Battle Royale 3D - Warrior63. You can search for it on Google or use this link: .
-- Download the mod apk file to your device. Make sure you have enough storage space and a stable internet connection.
-- Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
-- Locate the mod apk file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for it to finish.
-- Launch the game and enjoy the mod features.
-
- Tips and tricks for playing Battle Royale 3D - Warrior63
- Now that you have downloaded and installed the mod apk version of Battle Royale 3D - Warrior63, you might want to know some tips and tricks that can help you improve your skills and win more matches. Here are some of them:
- Customize your controls
- One of the best features of Battle Royale 3D - Warrior63 is that it allows you to customize your controls according to your preference. You can access the custom key mapping by tapping on the gear icon on the top right corner of the screen and then selecting Control. You can adjust the size, position, and transparency of the buttons, as well as switch between different control modes. You can also save different control schemes for different game modes. Experiment with different settings until you find the one that suits you best.
- Use vehicles and weapons wisely
- Vehicles and weapons are essential tools for survival in Battle Royale 3D - Warrior63. You can find them scattered around the map, or buy them with coins and diamonds in the shop. Vehicles can help you move faster and escape from danger, but they also make you more visible and vulnerable to enemy fire. Weapons can help you eliminate your enemies, but they also have different characteristics such as range, accuracy, recoil, and ammo capacity. You should always choose the vehicle and weapon that match your play style and situation. For example, if you want to snipe from a distance, you should use a sniper rifle and a motorcycle. If you want to rush into close combat, you should use a submachine gun and a car.
- Avoid the poison circle and enemy attacks
- The poison circle is a deadly mechanic that forces you to move closer to your enemies as the match progresses. It appears as a blue circle on the map that shrinks over time. If you are outside of it, you will lose health gradually until you die. You should always pay attention to the poison circle and plan your movements accordingly. You should also avoid staying in one place for too long, as you might attract enemy attention and get ambushed. You should always be alert and aware of your surroundings, and use cover and camouflage to hide from enemy sight.
- Conclusion
- Battle Royale 3D - Warrior63 is a fun and exciting game that will test your survival skills and reflexes in a realistic 3D environment. You can download the mod apk version of the game for free and enjoy unlimited coins, diamonds, weak enemies, and more. You can also follow our tips and tricks to improve your gameplay and win more matches. Download Battle Royale 3D - Warrior63 mod apk now and join the ultimate battle royale!
- FAQs
- Here are some frequently asked questions about Battle Royale 3D - Warrior63 mod apk:
-
-- Is Battle Royale 3D - Warrior63 mod apk safe to download?
-Yes, Battle Royale 3D - Warrior63 mod apk is safe to download as long as you use a trusted website that provides the file. You should also scan the file with an antivirus program before installing it.
-- Do I need to root my device to use Battle Royale 3D - Warrior63 mod apk?
-No, you do not need to root your device to use Battle Royale 3D - Warrior63 mod apk. You just need to enable the installation of apps from unknown sources on your device.
-- Will I get banned for using Battle Royale 3D - Warrior63 mod apk?
-There is a low risk of getting banned for using Battle Royale 3D - Warrior63 mod apk, as the game does not have a strict anti-cheat system. However, you should still be careful and avoid using obvious cheats such as flying or teleporting.
-- Can I play Battle Royale 3D - Warrior63 mod apk with my friends?
-Yes, you can play Battle Royale 3D - Warrior63 mod apk with your friends. You can either join a team battle mode or create a private room and invite your friends to join. You can also chat with your teammates and coordinate your strategies.
-- How can I update Battle Royale 3D - Warrior63 mod apk?
-To update Battle Royale 3D - Warrior63 mod apk, you need to download the latest version of the mod apk file from the same website that you used before. Then, you need to uninstall the old version of the game and install the new one. You should also backup your game data before updating to avoid losing your progress.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/README.md b/spaces/1toTree/lora_test/README.md
deleted file mode 100644
index cb700f3ea029e9d1b882c5490b3421fe0f742605..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: LoRa ppdiffusers dreambooth
-emoji: 🎨🎞️
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/vtoonify/model/raft/core/utils/__init__.py b/spaces/232labs/VToonify/vtoonify/model/raft/core/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AI-ANK/PaLM-Kosmos-Vision/README.md b/spaces/AI-ANK/PaLM-Kosmos-Vision/README.md
deleted file mode 100644
index 8fab6847eb3e792cd39e8681a50920cba3599267..0000000000000000000000000000000000000000
--- a/spaces/AI-ANK/PaLM-Kosmos-Vision/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PaLM Kosmos Vision
-emoji: 🚀
-colorFrom: blue
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.28.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/base_vocoder.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/base_vocoder.py
deleted file mode 100644
index fe49a9e4f790ecdc5e76d60a23f96602b59fc48d..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/base_vocoder.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import importlib
-VOCODERS = {}
-
-
-def register_vocoder(cls):
-    # Register the class under both its exact name and its lowercase name,
-    # so lookups are effectively case-insensitive for simple class names.
-    VOCODERS[cls.__name__.lower()] = cls
-    VOCODERS[cls.__name__] = cls
-    return cls
-
-
-def get_vocoder_cls(hparams):
-    # Resolve a registered vocoder by name; otherwise treat the value as a
-    # dotted import path ("package.module.ClassName") and import it dynamically.
-    if hparams['vocoder'] in VOCODERS:
-        return VOCODERS[hparams['vocoder']]
-    else:
-        vocoder_cls = hparams['vocoder']
-        pkg = ".".join(vocoder_cls.split(".")[:-1])
-        cls_name = vocoder_cls.split(".")[-1]
-        vocoder_cls = getattr(importlib.import_module(pkg), cls_name)
-        return vocoder_cls
-
-
-class BaseVocoder:
-    def spec2wav(self, mel):
-        """Synthesize a waveform from a mel spectrogram.
-
-        :param mel: [T, 80]
-        :return: wav: [T']
-        """
-        raise NotImplementedError
-
-    @staticmethod
-    def wav2spec(wav_fn):
-        """Load a wav file and compute its mel spectrogram.
-
-        :param wav_fn: str
-        :return: wav, mel: [T, 80]
-        """
-        raise NotImplementedError
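-
-
-if __name__ == '__main__':
-    # Illustrative usage (hypothetical subclass, not part of the original module):
-    # a class registered with @register_vocoder can be resolved by name, in either
-    # its original or lowercase form, through get_vocoder_cls.
-    @register_vocoder
-    class DummyVocoder(BaseVocoder):
-        def spec2wav(self, mel):
-            raise NotImplementedError
-
-    assert get_vocoder_cls({'vocoder': 'DummyVocoder'}) is DummyVocoder
-    assert get_vocoder_cls({'vocoder': 'dummyvocoder'}) is DummyVocoder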
diff --git a/spaces/AILab-CVC/SEED-LLaMA/start.sh b/spaces/AILab-CVC/SEED-LLaMA/start.sh
deleted file mode 100644
index 4578d224c519561a97c0d6642f36d652da6f1b5c..0000000000000000000000000000000000000000
--- a/spaces/AILab-CVC/SEED-LLaMA/start.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-
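-# Start the Flask inference server in the background on port 7890, then launch
-# the Gradio front end on port 7860, which forwards generation requests to it.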
-nohup python3 gradio_demo/seed_llama_flask.py \
- --image_transform configs/transform/clip_transform.yaml \
- --tokenizer configs/tokenizer/seed_llama_tokenizer_hf.yaml \
- --model configs/llm/seed_llama_14b_8bit.yaml \
- --port 7890 \
- --llm_device cuda:0 \
- --tokenizer_device cuda:0 \
- --offload_encoder >./output.out &
-
-python3 gradio_demo/seed_llama_gradio.py --server_port 7860 --request_address http://127.0.0.1:7890/generate --model_type seed-llama-14b
\ No newline at end of file
diff --git a/spaces/Abrish-Aadi/Chest-Xray-anomaly-detection/README.md b/spaces/Abrish-Aadi/Chest-Xray-anomaly-detection/README.md
deleted file mode 100644
index 6cabc36152ee5a8a7042a9eda2140eafbf162cce..0000000000000000000000000000000000000000
--- a/spaces/Abrish-Aadi/Chest-Xray-anomaly-detection/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Chest Xray Anomaly Detection
-emoji: 🌖
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AchyuthGamer/MagicPrompt-Stable-Diffusion/README.md b/spaces/AchyuthGamer/MagicPrompt-Stable-Diffusion/README.md
deleted file mode 100644
index 4572e614f2314c27b0b9bfab0cc886f8db757c09..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/MagicPrompt-Stable-Diffusion/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: MagicPrompt Stable Diffusion
-emoji: 🍄
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: Gustavosta/MagicPrompt-Stable-Diffusion
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Adapter/T2I-Adapter/test_adapter.py b/spaces/Adapter/T2I-Adapter/test_adapter.py
deleted file mode 100644
index aa8f7ae0cd5817eac836b3ab66d51480aa7bede4..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/test_adapter.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import os
-
-import cv2
-import torch
-from basicsr.utils import tensor2img
-from pytorch_lightning import seed_everything
-from torch import autocast
-
-from ldm.inference_base import (diffusion_inference, get_adapters, get_base_argument_parser, get_sd_models)
-from ldm.modules.extra_condition import api
-from ldm.modules.extra_condition.api import (ExtraCondition, get_adapter_feature, get_cond_model)
-
-torch.set_grad_enabled(False)
-
-
-def main():
- supported_cond = [e.name for e in ExtraCondition]
- parser = get_base_argument_parser()
- parser.add_argument(
- '--which_cond',
- type=str,
- required=True,
- choices=supported_cond,
- help='which condition modality you want to test',
- )
- opt = parser.parse_args()
- which_cond = opt.which_cond
- if opt.outdir is None:
- opt.outdir = f'outputs/test-{which_cond}'
- os.makedirs(opt.outdir, exist_ok=True)
- if opt.resize_short_edge is None:
- print(f"you don't specify the resize_shot_edge, so the maximum resolution is set to {opt.max_resolution}")
- opt.device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
-
-    # support two test modes: single-image test, and batch test (through a txt
-    # file whose lines have the form "image_path; prompt")
-    if opt.prompt.endswith('.txt'):
- image_paths = []
- prompts = []
- with open(opt.prompt, 'r') as f:
- lines = f.readlines()
- for line in lines:
- line = line.strip()
- image_paths.append(line.split('; ')[0])
- prompts.append(line.split('; ')[1])
- else:
- image_paths = [opt.cond_path]
- prompts = [opt.prompt]
- print(image_paths)
-
- # prepare models
- sd_model, sampler = get_sd_models(opt)
- adapter = get_adapters(opt, getattr(ExtraCondition, which_cond))
- cond_model = None
- if opt.cond_inp_type == 'image':
- cond_model = get_cond_model(opt, getattr(ExtraCondition, which_cond))
-
- process_cond_module = getattr(api, f'get_cond_{which_cond}')
-
- # inference
- with torch.inference_mode(), \
- sd_model.ema_scope(), \
- autocast('cuda'):
- for test_idx, (cond_path, prompt) in enumerate(zip(image_paths, prompts)):
- seed_everything(opt.seed)
- for v_idx in range(opt.n_samples):
- # seed_everything(opt.seed+v_idx+test_idx)
- cond = process_cond_module(opt, cond_path, opt.cond_inp_type, cond_model)
-
- base_count = len(os.listdir(opt.outdir)) // 2
- cv2.imwrite(os.path.join(opt.outdir, f'{base_count:05}_{which_cond}.png'), tensor2img(cond))
-
- adapter_features, append_to_context = get_adapter_feature(cond, adapter)
- opt.prompt = prompt
- result = diffusion_inference(opt, sd_model, sampler, adapter_features, append_to_context)
- cv2.imwrite(os.path.join(opt.outdir, f'{base_count:05}_result.png'), tensor2img(result))
-
-
-if __name__ == '__main__':
- main()
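-
-# Illustrative batch file for --prompt (hypothetical paths): each line pairs a
-# condition image with a text prompt, separated by "; ", as parsed in main():
-#
-#   conditions/sketch_cat.png; a photo of a cat sitting on a sofa
-#   conditions/pose_dancer.png; a dancer on stage, dramatic lighting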
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/shockwavepipeline-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/shockwavepipeline-plugin.js
deleted file mode 100644
index 2a85870ebed3858d7b943bb25adbdc6f1337c9fb..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/shockwavepipeline-plugin.js
+++ /dev/null
@@ -1,14 +0,0 @@
-import ShockwavePostFxPipeline from './shockwavepipeline.js';
-import BasePostFxPipelinePlugin from './utils/renderer/postfxpipeline/BasePostFxPipelinePlugin.js';
-import SetValue from './utils/object/SetValue.js';
-
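-// Plugin wrapper that registers ShockwavePostFxPipeline with the plugin
-// manager under the post-pipeline key 'rexShockwavePostFx', and also exposes
-// the pipeline class on the global RexPlugins.Pipelines namespace below.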
-class ShockwavePipelinePlugin extends BasePostFxPipelinePlugin {
- constructor(pluginManager) {
- super(pluginManager);
- this.setPostPipelineClass(ShockwavePostFxPipeline, 'rexShockwavePostFx');
- }
-}
-
-SetValue(window, 'RexPlugins.Pipelines.ShockwavePostFx', ShockwavePostFxPipeline);
-
-export default ShockwavePipelinePlugin;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/SetTextureProperties.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/SetTextureProperties.js
deleted file mode 100644
index 6f3012e4087b95d485bf50a89e20d2330d476d29..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/SetTextureProperties.js
+++ /dev/null
@@ -1,22 +0,0 @@
-// Properties copied verbatim from the data object onto the game object.
-const PropertyList = ['tint', 'alpha', 'visible', 'flipX', 'flipY'];
-
-var SetTextureProperties = function (gameObject, data) {
-    for (var i = 0, cnt = PropertyList.length; i < cnt; i++) {
-        var key = PropertyList[i];
- var value = data[key];
- if (value !== undefined) {
- gameObject[key] = value;
- }
- }
-
- if (data.cropResize && !gameObject.resize) {
- gameObject.resize = function (width, height) {
- gameObject.setCrop(0, 0, width, height);
- return gameObject;
- }
- }
-
- return gameObject;
-}
-
-export default SetTextureProperties;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/methods/HasPage.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/methods/HasPage.js
deleted file mode 100644
index 6435d95219a42447fc434716fc49b65f4566cf15..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/methods/HasPage.js
+++ /dev/null
@@ -1,5 +0,0 @@
-var HasPage = function (key) {
- return this.sizerChildren.hasOwnProperty(key);
-}
-
-export default HasPage;
\ No newline at end of file
diff --git a/spaces/AlekseyKorshuk/accompaniment-generator/app.py b/spaces/AlekseyKorshuk/accompaniment-generator/app.py
deleted file mode 100644
index e540a93511be8a1c196aac324090f39bb7172cfa..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/accompaniment-generator/app.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import streamlit as st
-import numpy as np
-import pretty_midi
-from accompaniment_generator.generator.base import Generator
-import os
-import uuid
-import time
-from midi2audio import FluidSynth
-from scipy.io import wavfile
-
-ABOUT_TEXT = "🤗 Accompaniment Generator - generate an accompaniment part with chords using an evolutionary algorithm."
-CONTACT_TEXT = """
-_Built by Aleksey Korshuk with love_ ❤️
-[](https://github.com/AlekseyKorshuk)
-
-[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
-
-Star project repository:
-[](https://github.com/AlekseyKorshuk/accompaniment-generator)
-"""
-st.sidebar.markdown(
- """
-
-
-
-
-""",
- unsafe_allow_html=True,
-)
-
-st.sidebar.markdown(ABOUT_TEXT)
-st.sidebar.markdown(CONTACT_TEXT)
-
-
-def inference(audio, num_epoch, chord_duration):
- generator = Generator()
- if chord_duration == 0.0:
- chord_duration = None
- output_midi_data = generator(audio, num_epoch=int(num_epoch), chord_duration=chord_duration)[0]
- name = uuid.uuid4()
- output_midi_data.write(f'{name}.mid')
- fs = FluidSynth("font.sf2")
- fs.midi_to_audio(f'{name}.mid', f'{name}.wav')
- fs.midi_to_audio(audio, f'{name}-init.wav')
- # time.sleep(2)
- print([f'{name}-init.wav', f'{name}.wav'])
- return f'{name}-init.wav', f'{name}.wav'
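-
-
-# Call-contract sketch for inference() above (the MIDI path is hypothetical):
-#   before_wav, after_wav = inference("examples/song.mid", num_epoch=10, chord_duration=0.0)
-#   # -> ("<uuid>-init.wav", "<uuid>.wav"): the plain render and the accompanied render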
-
-
-st.title("Accompaniment Generator")
-
-st.markdown(
- "App to generate accompaniment for MIDI music file with Evolutionary algorithm. Check out [project repository](https://github.com/AlekseyKorshuk/accompaniment-generator).")
-
-article = "" \
- "Github Repo" \
- "
"
-
-from os import listdir
-from os.path import isfile, join
-
-onlyfiles = [f for f in listdir("./examples") if isfile(join("./examples", f))]
-
-model_name = st.selectbox(
-    'Select example MIDI file (used only if no file is uploaded below):',
- onlyfiles
-)
-
-uploaded_file = st.file_uploader(
- 'Upload MIDI file:'
-)
-
-num_epoch = st.number_input("Number of epochs:",
- min_value=1,
- max_value=1000,
- step=1,
- value=1,
- )
-
-chord_duration = st.number_input("Custom chord duration in seconds (leave zero for auto-calculation):",
- min_value=0.0,
- max_value=1000.0,
- step=0.0001,
- value=0.0,
- format="%.4f"
- )
-
-generate_image_button = st.button("Generate")
-
-if generate_image_button:
- input_file = f"./examples/{model_name}"
- if uploaded_file is not None:
- input_file = uploaded_file.name
- with open(input_file, 'wb') as f:
- f.write(uploaded_file.getvalue())
- # print(uploaded_file.getvalue())
-    with st.spinner(text="Generating, this may take some time..."):
- before, after = inference(input_file, num_epoch, chord_duration)
- st.markdown("Before:")
- st.audio(before)
- st.markdown("After:")
- st.audio(after)
- if uploaded_file is not None:
- os.remove(input_file)
diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/mel_processing.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/mel_processing.py
deleted file mode 100644
index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000
--- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/mel_processing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
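-
-
-# Note: dynamic_range_decompression_torch inverts dynamic_range_compression_torch
-# for inputs >= clip_val, since exp(log(clamp(x, min=clip_val) * C)) / C == x there.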
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
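-
-
-# Shape sketch for spectrogram_torch: y [B, T] -> spec [B, n_fft // 2 + 1, frames];
-# onesided=True keeps only the non-negative frequency bins.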
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
-        mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
-        mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/multilingual_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/multilingual_stable_diffusion.py
deleted file mode 100644
index ff6c7e68f783519dc64ede847a6fd2a26209da33..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/multilingual_stable_diffusion.py
+++ /dev/null
@@ -1,436 +0,0 @@
-import inspect
-from typing import Callable, List, Optional, Union
-
-import torch
-from transformers import (
- CLIPImageProcessor,
- CLIPTextModel,
- CLIPTokenizer,
- MBart50TokenizerFast,
- MBartForConditionalGeneration,
- pipeline,
-)
-
-from diffusers import DiffusionPipeline
-from diffusers.configuration_utils import FrozenDict
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from diffusers.utils import deprecate, logging
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def detect_language(pipe, prompt, batch_size):
- """helper function to detect language(s) of prompt"""
-
- if batch_size == 1:
- preds = pipe(prompt, top_k=1, truncation=True, max_length=128)
- return preds[0]["label"]
- else:
- detected_languages = []
- for p in prompt:
- preds = pipe(p, top_k=1, truncation=True, max_length=128)
- detected_languages.append(preds[0]["label"])
-
- return detected_languages
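-
-
-# Example (illustrative; the label format depends on the detection model used):
-#   detect_language(pipe, "Une pomme", batch_size=1)  # -> e.g. "fr"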
-
-
-def translate_prompt(prompt, translation_tokenizer, translation_model, device):
- """helper function to translate prompt to English"""
-
- encoded_prompt = translation_tokenizer(prompt, return_tensors="pt").to(device)
- generated_tokens = translation_model.generate(**encoded_prompt, max_new_tokens=1000)
- en_trans = translation_tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
-
- return en_trans[0]
-
-
-class MultilingualStableDiffusion(DiffusionPipeline):
- r"""
- Pipeline for text-to-image generation using Stable Diffusion in different languages.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- detection_pipeline ([`pipeline`]):
- Transformers pipeline to detect prompt's language.
- translation_model ([`MBartForConditionalGeneration`]):
- Model to translate prompt to English, if necessary. Please refer to the
- [model card](https://huggingface.co/docs/transformers/model_doc/mbart) for details.
- translation_tokenizer ([`MBart50TokenizerFast`]):
- Tokenizer of the translation model.
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
-            A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
-            Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPImageProcessor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
-
- def __init__(
- self,
- detection_pipeline: pipeline,
- translation_model: MBartForConditionalGeneration,
- translation_tokenizer: MBart50TokenizerFast,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPImageProcessor,
- ):
- super().__init__()
-
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
- " file"
- )
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["steps_offset"] = 1
- scheduler._internal_dict = FrozenDict(new_config)
-
- if safety_checker is None:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- self.register_modules(
- detection_pipeline=detection_pipeline,
- translation_model=translation_model,
- translation_tokenizer=translation_tokenizer,
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
-
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- r"""
- Enable sliced attention computation.
-
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention
- in several steps. This is useful to save some memory in exchange for a small speed decrease.
-
- Args:
- slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
- When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
- a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
- `attention_head_dim` must be a multiple of `slice_size`.
- """
- if slice_size == "auto":
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = self.unet.config.attention_head_dim // 2
- self.unet.set_attention_slice(slice_size)
-
- def disable_attention_slicing(self):
- r"""
- Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
- back to computing attention in one step.
- """
- # set slice_size = `None` to disable `attention slicing`
- self.enable_attention_slicing(None)
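-
-    # Usage sketch (for a hypothetical constructed instance `pipe` of this class):
-    #   pipe.enable_attention_slicing()   # "auto": attention computed in two steps
-    #   pipe.enable_attention_slicing(1)  # most memory-frugal, slowest
-    #   pipe.disable_attention_slicing()  # back to single-pass attention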
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- height: int = 512,
- width: int = 512,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[torch.Generator] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation. Can be in different languages.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- if isinstance(prompt, str):
- batch_size = 1
- elif isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- # detect language and translate if necessary
- prompt_language = detect_language(self.detection_pipeline, prompt, batch_size)
- if batch_size == 1 and prompt_language != "en":
- prompt = translate_prompt(prompt, self.translation_tokenizer, self.translation_model, self.device)
-
- if isinstance(prompt, list):
- for index in range(batch_size):
- if prompt_language[index] != "en":
- p = translate_prompt(
- prompt[index], self.translation_tokenizer, self.translation_model, self.device
- )
- prompt[index] = p
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
-
- if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
- removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
- text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
- text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
-
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- bs_embed, seq_len, _ = text_embeddings.shape
- text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
- text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- # detect language and translate it if necessary
- negative_prompt_language = detect_language(self.detection_pipeline, negative_prompt, batch_size)
- if negative_prompt_language != "en":
- negative_prompt = translate_prompt(
- negative_prompt, self.translation_tokenizer, self.translation_model, self.device
- )
- if isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- # detect language and translate it if necessary
- if isinstance(negative_prompt, list):
- negative_prompt_languages = detect_language(self.detection_pipeline, negative_prompt, batch_size)
- for index in range(batch_size):
- if negative_prompt_languages[index] != "en":
- p = translate_prompt(
- negative_prompt[index], self.translation_tokenizer, self.translation_model, self.device
- )
- negative_prompt[index] = p
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = uncond_embeddings.shape[1]
- uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
- uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
- # get the initial random noise unless the user supplied it
-
- # Unlike in other pipelines, latents need to be generated in the target device
- # for 1-to-1 results reproducibility with the CompVis implementation.
- # However this currently doesn't work in `mps`.
- latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
- latents_dtype = text_embeddings.dtype
- if latents is None:
- if self.device.type == "mps":
- # randn does not work reproducibly on mps
- latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
- self.device
- )
- else:
- latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
- else:
- if latents.shape != latents_shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
- latents = latents.to(self.device)
-
- # set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
-
- # Some schedulers like PNDM have timesteps as arrays
- # It's more optimized to move all timesteps to correct device beforehand
- timesteps_tensor = self.scheduler.timesteps.to(self.device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- for i, t in enumerate(self.progress_bar(timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
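-                # i.e. start from the unconditional prediction and move toward
-                # the text-conditional one; guidance_scale == 1 recovers the
-                # plain conditional prediction.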
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- latents = 1 / 0.18215 * latents
- image = self.vae.decode(latents).sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
-
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
- self.device
- )
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
- )
- else:
- has_nsfw_concept = None
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/fp16_safetensors.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/fp16_safetensors.py
deleted file mode 100644
index 19553c752dce116d01f9816f90ddd3275d8cc302..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/fp16_safetensors.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Usage example:
- diffusers-cli fp16_safetensors --ckpt_id=openai/shap-e --fp16 --use_safetensors
-"""
-
-import glob
-import json
-from argparse import ArgumentParser, Namespace
-from importlib import import_module
-
-import huggingface_hub
-import torch
-from huggingface_hub import hf_hub_download
-from packaging import version
-
-from ..utils import is_safetensors_available, logging
-from . import BaseDiffusersCLICommand
-
-
-def conversion_command_factory(args: Namespace):
- return FP16SafetensorsCommand(
- args.ckpt_id,
- args.fp16,
- args.use_safetensors,
- args.use_auth_token,
- )
-
-
-class FP16SafetensorsCommand(BaseDiffusersCLICommand):
- @staticmethod
- def register_subcommand(parser: ArgumentParser):
- conversion_parser = parser.add_parser("fp16_safetensors")
- conversion_parser.add_argument(
- "--ckpt_id",
- type=str,
- help="Repo id of the checkpoints on which to run the conversion. Example: 'openai/shap-e'.",
- )
- conversion_parser.add_argument(
- "--fp16", action="store_true", help="If serializing the variables in FP16 precision."
- )
- conversion_parser.add_argument(
- "--use_safetensors", action="store_true", help="If serializing in the safetensors format."
- )
- conversion_parser.add_argument(
- "--use_auth_token",
- action="store_true",
- help="When working with checkpoints having private visibility. When used `huggingface-cli login` needs to be run beforehand.",
- )
- conversion_parser.set_defaults(func=conversion_command_factory)
-
- def __init__(self, ckpt_id: str, fp16: bool, use_safetensors: bool, use_auth_token: bool):
- self.logger = logging.get_logger("diffusers-cli/fp16_safetensors")
- self.ckpt_id = ckpt_id
- self.local_ckpt_dir = f"/tmp/{ckpt_id}"
- self.fp16 = fp16
-
- if is_safetensors_available():
- self.use_safetensors = use_safetensors
- else:
- raise ImportError(
- "When `use_safetensors` is set to True, the `safetensors` library needs to be installed. Install it via `pip install safetensors`."
- )
-
- if not self.use_safetensors and not self.fp16:
- raise NotImplementedError(
- "When `use_safetensors` and `fp16` both are False, then this command is of no use."
- )
-
- self.use_auth_token = use_auth_token
-
- def run(self):
- if version.parse(huggingface_hub.__version__) < version.parse("0.9.0"):
- raise ImportError(
- "The huggingface_hub version must be >= 0.9.0 to use this command. Please update your huggingface_hub"
- " installation."
- )
- else:
- from huggingface_hub import create_commit
- from huggingface_hub._commit_api import CommitOperationAdd
-
- model_index = hf_hub_download(repo_id=self.ckpt_id, filename="model_index.json", token=self.use_auth_token)
- with open(model_index, "r") as f:
- pipeline_class_name = json.load(f)["_class_name"]
- pipeline_class = getattr(import_module("diffusers"), pipeline_class_name)
- self.logger.info(f"Pipeline class imported: {pipeline_class_name}.")
-
-        # Load the appropriate pipeline. We could have used `DiffusionPipeline`
-        # here, but we avoid it to steer clear of rough edge cases.
- pipeline = pipeline_class.from_pretrained(
- self.ckpt_id, torch_dtype=torch.float16 if self.fp16 else torch.float32, use_auth_token=self.use_auth_token
- )
- pipeline.save_pretrained(
- self.local_ckpt_dir,
- safe_serialization=True if self.use_safetensors else False,
- variant="fp16" if self.fp16 else None,
- )
- self.logger.info(f"Pipeline locally saved to {self.local_ckpt_dir}.")
-
- # Fetch all the paths.
- if self.fp16:
- modified_paths = glob.glob(f"{self.local_ckpt_dir}/*/*.fp16.*")
- elif self.use_safetensors:
- modified_paths = glob.glob(f"{self.local_ckpt_dir}/*/*.safetensors")
-
- # Prepare for the PR.
- commit_message = f"Serialize variables with FP16: {self.fp16} and safetensors: {self.use_safetensors}."
- operations = []
- for path in modified_paths:
- operations.append(CommitOperationAdd(path_in_repo="/".join(path.split("/")[4:]), path_or_fileobj=path))
-
- # Open the PR.
- commit_description = (
- "Variables converted by the [`diffusers`' `fp16_safetensors`"
- " CLI](https://github.com/huggingface/diffusers/blob/main/src/diffusers/commands/fp16_safetensors.py)."
- )
- hub_pr_url = create_commit(
- repo_id=self.ckpt_id,
- operations=operations,
- commit_message=commit_message,
- commit_description=commit_description,
- repo_type="model",
- create_pr=True,
- ).pr_url
- self.logger.info(f"PR created here: {hub_pr_url}.")
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/cascade_mask_rcnn_hrnetv2p_w40_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/cascade_mask_rcnn_hrnetv2p_w40_20e_coco.py
deleted file mode 100644
index 29b1469fa9f455a3235b323fa3b1e39d5c095f3d..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/cascade_mask_rcnn_hrnetv2p_w40_20e_coco.py
+++ /dev/null
@@ -1,11 +0,0 @@
-_base_ = './cascade_mask_rcnn_hrnetv2p_w32_20e_coco.py'
-# model settings
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w40',
- backbone=dict(
- type='HRNet',
- extra=dict(
- stage2=dict(num_channels=(40, 80)),
- stage3=dict(num_channels=(40, 80, 160)),
- stage4=dict(num_channels=(40, 80, 160, 320)))),
- neck=dict(type='HRFPN', in_channels=[40, 80, 160, 320], out_channels=256))
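-# Relative to the w32 base config, only the HRNet branch widths change; the
-# HRFPN in_channels list must mirror stage4's num_channels.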
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/mask_rcnn_hrnetv2p_w32_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/mask_rcnn_hrnetv2p_w32_2x_coco.py
deleted file mode 100644
index 63d5d139e7b56843f5dcc85bda48945d56cfc49e..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/mask_rcnn_hrnetv2p_w32_2x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './mask_rcnn_hrnetv2p_w32_1x_coco.py'
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
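-# "2x" schedule: 24 epochs with LR drops after epochs 16 and 22 (the 1x base
-# is typically 12 epochs with drops at epochs 8 and 11).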
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py
deleted file mode 100644
index d0016d1f1df4534ae27de95c4f7ec9976b3ab6d0..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './mask_rcnn_r101_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r101_caffe_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r101_caffe_fpn_1x_coco.py
deleted file mode 100644
index 21d227b044728a30890b93fc769743d2124956c1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r101_caffe_fpn_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './retinanet_r50_caffe_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnet101_caffe',
- backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py
deleted file mode 100644
index 22a3ce0b38f36efc96595fe1c3ef428fc1575eb0..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './fcn_hr18_512x512_160k_ade20k.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w18_small',
- backbone=dict(
- extra=dict(
- stage1=dict(num_blocks=(2, )),
- stage2=dict(num_blocks=(2, 2)),
- stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
- stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
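-# The w18_small variant keeps the base channel widths but shrinks depth: fewer
-# blocks per stage and fewer stage3/stage4 modules than the fcn_hr18 base.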
diff --git a/spaces/AnimalEquality/chatbot/nbs/styles.css b/spaces/AnimalEquality/chatbot/nbs/styles.css
deleted file mode 100644
index 66ccc49ee8f0e73901dac02dc4e9224b7d1b2c78..0000000000000000000000000000000000000000
--- a/spaces/AnimalEquality/chatbot/nbs/styles.css
+++ /dev/null
@@ -1,37 +0,0 @@
-.cell {
- margin-bottom: 1rem;
-}
-
-.cell > .sourceCode {
- margin-bottom: 0;
-}
-
-.cell-output > pre {
- margin-bottom: 0;
-}
-
-.cell-output > pre, .cell-output > .sourceCode > pre, .cell-output-stdout > pre {
- margin-left: 0.8rem;
- margin-top: 0;
- background: none;
- border-left: 2px solid lightsalmon;
- border-top-left-radius: 0;
- border-top-right-radius: 0;
-}
-
-.cell-output > .sourceCode {
-    border: none;
-    background: none;
-    margin-top: 0;
-}
-
-div.description {
- padding-left: 2px;
- padding-top: 5px;
- font-style: italic;
- font-size: 135%;
- opacity: 70%;
-}
diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/modules.py b/spaces/Artrajz/vits-simple-api/bert_vits2/modules.py
deleted file mode 100644
index 9206f95b0037251225eddc1d64b60f749155135c..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/bert_vits2/modules.py
+++ /dev/null
@@ -1,459 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from bert_vits2 import commons
-from bert_vits2.commons import init_weights, get_padding
-from bert_vits2.transforms import piecewise_rational_quadratic_transform
-from bert_vits2.attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depthwise-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert (kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2 * hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset:cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
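-            # WaveNet-style gating: fused_add_tanh_sigmoid_multiply splits
-            # (x_in + g_l) into two channel halves a and b and returns
-            # tanh(a) * sigmoid(b).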
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, :self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels:, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
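-    # forward() computes y = (m + exp(logs) * x) * x_mask, an invertible
-    # elementwise affine map; its log-determinant is the sum of logs over
-    # unmasked positions, returned alongside y when reverse=False.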
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout,
- gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
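-    # Affine coupling: x0 passes through unchanged and parameterizes an
-    # elementwise affine transform of x1 (mean_only drops the log-scale), so
-    # the Jacobian is triangular and logdet reduces to sum(logs).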
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2 * self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
-
-
-class TransformerCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels=0
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow=True,
- gin_channels=gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/cmdline.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/cmdline.py
deleted file mode 100644
index de73b06b4cfa3b68a25455148c7e086b32676e95..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/cmdline.py
+++ /dev/null
@@ -1,668 +0,0 @@
-"""
- pygments.cmdline
- ~~~~~~~~~~~~~~~~
-
- Command line interface.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import os
-import sys
-import shutil
-import argparse
-from textwrap import dedent
-
-from pip._vendor.pygments import __version__, highlight
-from pip._vendor.pygments.util import ClassNotFound, OptionError, docstring_headline, \
- guess_decode, guess_decode_from_terminal, terminal_encoding, \
- UnclosingTextIOWrapper
-from pip._vendor.pygments.lexers import get_all_lexers, get_lexer_by_name, guess_lexer, \
- load_lexer_from_file, get_lexer_for_filename, find_lexer_class_for_filename
-from pip._vendor.pygments.lexers.special import TextLexer
-from pip._vendor.pygments.formatters.latex import LatexEmbeddedLexer, LatexFormatter
-from pip._vendor.pygments.formatters import get_all_formatters, get_formatter_by_name, \
- load_formatter_from_file, get_formatter_for_filename, find_formatter_class
-from pip._vendor.pygments.formatters.terminal import TerminalFormatter
-from pip._vendor.pygments.formatters.terminal256 import Terminal256Formatter, TerminalTrueColorFormatter
-from pip._vendor.pygments.filters import get_all_filters, find_filter_class
-from pip._vendor.pygments.styles import get_all_styles, get_style_by_name
-
-
-def _parse_options(o_strs):
- opts = {}
- if not o_strs:
- return opts
- for o_str in o_strs:
- if not o_str.strip():
- continue
- o_args = o_str.split(',')
- for o_arg in o_args:
- o_arg = o_arg.strip()
- try:
- o_key, o_val = o_arg.split('=', 1)
- o_key = o_key.strip()
- o_val = o_val.strip()
- except ValueError:
- opts[o_arg] = True
- else:
- opts[o_key] = o_val
- return opts
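-
-
-# e.g. _parse_options(['style=monokai,linenos']) -> {'style': 'monokai', 'linenos': True}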
-
-
-def _parse_filters(f_strs):
- filters = []
- if not f_strs:
- return filters
- for f_str in f_strs:
- if ':' in f_str:
- fname, fopts = f_str.split(':', 1)
- filters.append((fname, _parse_options([fopts])))
- else:
- filters.append((f_str, {}))
- return filters
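-
-
-# e.g. _parse_filters(['highlight:names=foo', 'gobble'])
-#   -> [('highlight', {'names': 'foo'}), ('gobble', {})]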
-
-
-def _print_help(what, name):
- try:
- if what == 'lexer':
- cls = get_lexer_by_name(name)
- print("Help on the %s lexer:" % cls.name)
- print(dedent(cls.__doc__))
- elif what == 'formatter':
- cls = find_formatter_class(name)
- print("Help on the %s formatter:" % cls.name)
- print(dedent(cls.__doc__))
- elif what == 'filter':
- cls = find_filter_class(name)
- print("Help on the %s filter:" % name)
- print(dedent(cls.__doc__))
- return 0
- except (AttributeError, ValueError):
- print("%s not found!" % what, file=sys.stderr)
- return 1
-
-
-def _print_list(what):
- if what == 'lexer':
- print()
- print("Lexers:")
- print("~~~~~~~")
-
- info = []
- for fullname, names, exts, _ in get_all_lexers():
- tup = (', '.join(names)+':', fullname,
- exts and '(filenames ' + ', '.join(exts) + ')' or '')
- info.append(tup)
- info.sort()
- for i in info:
- print(('* %s\n %s %s') % i)
-
- elif what == 'formatter':
- print()
- print("Formatters:")
- print("~~~~~~~~~~~")
-
- info = []
- for cls in get_all_formatters():
- doc = docstring_headline(cls)
- tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and
- '(filenames ' + ', '.join(cls.filenames) + ')' or '')
- info.append(tup)
- info.sort()
- for i in info:
- print(('* %s\n %s %s') % i)
-
- elif what == 'filter':
- print()
- print("Filters:")
- print("~~~~~~~~")
-
- for name in get_all_filters():
- cls = find_filter_class(name)
- print("* " + name + ':')
- print(" %s" % docstring_headline(cls))
-
- elif what == 'style':
- print()
- print("Styles:")
- print("~~~~~~~")
-
- for name in get_all_styles():
- cls = get_style_by_name(name)
- print("* " + name + ':')
- print(" %s" % docstring_headline(cls))
-
-
-def _print_list_as_json(requested_items):
- import json
- result = {}
- if 'lexer' in requested_items:
- info = {}
- for fullname, names, filenames, mimetypes in get_all_lexers():
- info[fullname] = {
- 'aliases': names,
- 'filenames': filenames,
- 'mimetypes': mimetypes
- }
- result['lexers'] = info
-
- if 'formatter' in requested_items:
- info = {}
- for cls in get_all_formatters():
- doc = docstring_headline(cls)
- info[cls.name] = {
- 'aliases': cls.aliases,
- 'filenames': cls.filenames,
- 'doc': doc
- }
- result['formatters'] = info
-
- if 'filter' in requested_items:
- info = {}
- for name in get_all_filters():
- cls = find_filter_class(name)
- info[name] = {
- 'doc': docstring_headline(cls)
- }
- result['filters'] = info
-
- if 'style' in requested_items:
- info = {}
- for name in get_all_styles():
- cls = get_style_by_name(name)
- info[name] = {
- 'doc': docstring_headline(cls)
- }
- result['styles'] = info
-
- json.dump(result, sys.stdout)
-
-def main_inner(parser, argns):
- if argns.help:
- parser.print_help()
- return 0
-
- if argns.V:
- print('Pygments version %s, (c) 2006-2022 by Georg Brandl, Matthäus '
- 'Chajdas and contributors.' % __version__)
- return 0
-
- def is_only_option(opt):
- return not any(v for (k, v) in vars(argns).items() if k != opt)
-
- # handle ``pygmentize -L``
- if argns.L is not None:
- arg_set = set()
- for k, v in vars(argns).items():
- if v:
- arg_set.add(k)
-
- arg_set.discard('L')
- arg_set.discard('json')
-
- if arg_set:
- parser.print_help(sys.stderr)
- return 2
-
- # print version
- if not argns.json:
- main(['', '-V'])
- allowed_types = {'lexer', 'formatter', 'filter', 'style'}
- largs = [arg.rstrip('s') for arg in argns.L]
- if any(arg not in allowed_types for arg in largs):
- parser.print_help(sys.stderr)
- return 0
- if not largs:
- largs = allowed_types
- if not argns.json:
- for arg in largs:
- _print_list(arg)
- else:
- _print_list_as_json(largs)
- return 0
-
- # handle ``pygmentize -H``
- if argns.H:
- if not is_only_option('H'):
- parser.print_help(sys.stderr)
- return 2
- what, name = argns.H
- if what not in ('lexer', 'formatter', 'filter'):
- parser.print_help(sys.stderr)
- return 2
- return _print_help(what, name)
-
- # parse -O options
- parsed_opts = _parse_options(argns.O or [])
-
- # parse -P options
- for p_opt in argns.P or []:
- try:
- name, value = p_opt.split('=', 1)
- except ValueError:
- parsed_opts[p_opt] = True
- else:
- parsed_opts[name] = value
-
- # encodings
- inencoding = parsed_opts.get('inencoding', parsed_opts.get('encoding'))
- outencoding = parsed_opts.get('outencoding', parsed_opts.get('encoding'))
-
- # handle ``pygmentize -N``
- if argns.N:
- lexer = find_lexer_class_for_filename(argns.N)
- if lexer is None:
- lexer = TextLexer
-
- print(lexer.aliases[0])
- return 0
-
- # handle ``pygmentize -C``
- if argns.C:
- inp = sys.stdin.buffer.read()
- try:
- lexer = guess_lexer(inp, inencoding=inencoding)
- except ClassNotFound:
- lexer = TextLexer
-
- print(lexer.aliases[0])
- return 0
-
- # handle ``pygmentize -S``
- S_opt = argns.S
- a_opt = argns.a
- if S_opt is not None:
- f_opt = argns.f
- if not f_opt:
- parser.print_help(sys.stderr)
- return 2
- if argns.l or argns.INPUTFILE:
- parser.print_help(sys.stderr)
- return 2
-
- try:
- parsed_opts['style'] = S_opt
- fmter = get_formatter_by_name(f_opt, **parsed_opts)
- except ClassNotFound as err:
- print(err, file=sys.stderr)
- return 1
-
- print(fmter.get_style_defs(a_opt or ''))
- return 0
-
- # if no -S is given, -a is not allowed
- if argns.a is not None:
- parser.print_help(sys.stderr)
- return 2
-
- # parse -F options
- F_opts = _parse_filters(argns.F or [])
-
- # -x: allow custom (eXternal) lexers and formatters
- allow_custom_lexer_formatter = bool(argns.x)
-
- # select lexer
- lexer = None
-
- # given by name?
- lexername = argns.l
- if lexername:
- # custom lexer, located relative to user's cwd
- if allow_custom_lexer_formatter and '.py' in lexername:
- try:
- filename = None
- name = None
- if ':' in lexername:
- filename, name = lexername.rsplit(':', 1)
-
- if '.py' in name:
- # This can happen on Windows: If the lexername is
- # C:\lexer.py -- return to normal load path in that case
- name = None
-
- if filename and name:
- lexer = load_lexer_from_file(filename, name,
- **parsed_opts)
- else:
- lexer = load_lexer_from_file(lexername, **parsed_opts)
- except ClassNotFound as err:
- print('Error:', err, file=sys.stderr)
- return 1
- else:
- try:
- lexer = get_lexer_by_name(lexername, **parsed_opts)
- except (OptionError, ClassNotFound) as err:
- print('Error:', err, file=sys.stderr)
- return 1
-
- # read input code
- code = None
-
- if argns.INPUTFILE:
- if argns.s:
- print('Error: -s option not usable when input file specified',
- file=sys.stderr)
- return 2
-
- infn = argns.INPUTFILE
- try:
- with open(infn, 'rb') as infp:
- code = infp.read()
- except Exception as err:
- print('Error: cannot read infile:', err, file=sys.stderr)
- return 1
- if not inencoding:
- code, inencoding = guess_decode(code)
-
- # do we have to guess the lexer?
- if not lexer:
- try:
- lexer = get_lexer_for_filename(infn, code, **parsed_opts)
- except ClassNotFound as err:
- if argns.g:
- try:
- lexer = guess_lexer(code, **parsed_opts)
- except ClassNotFound:
- lexer = TextLexer(**parsed_opts)
- else:
- print('Error:', err, file=sys.stderr)
- return 1
- except OptionError as err:
- print('Error:', err, file=sys.stderr)
- return 1
-
- elif not argns.s: # treat stdin as full file (-s support is later)
- # read code from terminal, always in binary mode since we want to
- # decode ourselves and be tolerant with it
- code = sys.stdin.buffer.read() # use .buffer to get a binary stream
- if not inencoding:
- code, inencoding = guess_decode_from_terminal(code, sys.stdin)
- # else the lexer will do the decoding
- if not lexer:
- try:
- lexer = guess_lexer(code, **parsed_opts)
- except ClassNotFound:
- lexer = TextLexer(**parsed_opts)
-
- else: # -s option needs a lexer with -l
- if not lexer:
- print('Error: when using -s a lexer has to be selected with -l',
- file=sys.stderr)
- return 2
-
- # process filters
- for fname, fopts in F_opts:
- try:
- lexer.add_filter(fname, **fopts)
- except ClassNotFound as err:
- print('Error:', err, file=sys.stderr)
- return 1
-
- # select formatter
- outfn = argns.o
- fmter = argns.f
- if fmter:
- # custom formatter, located relative to user's cwd
- if allow_custom_lexer_formatter and '.py' in fmter:
- try:
- filename = None
- name = None
- if ':' in fmter:
- # Same logic as above for custom lexer
- filename, name = fmter.rsplit(':', 1)
-
- if '.py' in name:
- name = None
-
- if filename and name:
- fmter = load_formatter_from_file(filename, name,
- **parsed_opts)
- else:
- fmter = load_formatter_from_file(fmter, **parsed_opts)
- except ClassNotFound as err:
- print('Error:', err, file=sys.stderr)
- return 1
- else:
- try:
- fmter = get_formatter_by_name(fmter, **parsed_opts)
- except (OptionError, ClassNotFound) as err:
- print('Error:', err, file=sys.stderr)
- return 1
-
- if outfn:
- if not fmter:
- try:
- fmter = get_formatter_for_filename(outfn, **parsed_opts)
- except (OptionError, ClassNotFound) as err:
- print('Error:', err, file=sys.stderr)
- return 1
- try:
- outfile = open(outfn, 'wb')
- except Exception as err:
- print('Error: cannot open outfile:', err, file=sys.stderr)
- return 1
- else:
- if not fmter:
- if os.environ.get('COLORTERM','') in ('truecolor', '24bit'):
- fmter = TerminalTrueColorFormatter(**parsed_opts)
- elif '256' in os.environ.get('TERM', ''):
- fmter = Terminal256Formatter(**parsed_opts)
- else:
- fmter = TerminalFormatter(**parsed_opts)
- outfile = sys.stdout.buffer
-
- # determine output encoding if not explicitly selected
- if not outencoding:
- if outfn:
- # output file? use lexer encoding for now (can still be None)
- fmter.encoding = inencoding
- else:
- # else use terminal encoding
- fmter.encoding = terminal_encoding(sys.stdout)
-
- # provide coloring under Windows, if possible
- if not outfn and sys.platform in ('win32', 'cygwin') and \
- fmter.name in ('Terminal', 'Terminal256'): # pragma: no cover
- # unfortunately colorama doesn't support binary streams on Py3
- outfile = UnclosingTextIOWrapper(outfile, encoding=fmter.encoding)
- fmter.encoding = None
- try:
- import pip._vendor.colorama.initialise as colorama_initialise
- except ImportError:
- pass
- else:
- outfile = colorama_initialise.wrap_stream(
- outfile, convert=None, strip=None, autoreset=False, wrap=True)
-
- # When using the LaTeX formatter and the option `escapeinside` is
- # specified, we need a special lexer which collects escaped text
- # before running the chosen language lexer.
- escapeinside = parsed_opts.get('escapeinside', '')
- if len(escapeinside) == 2 and isinstance(fmter, LatexFormatter):
- left = escapeinside[0]
- right = escapeinside[1]
- lexer = LatexEmbeddedLexer(left, right, lexer)
-
- # ... and do it!
- if not argns.s:
- # process whole input as per normal...
- try:
- highlight(code, lexer, fmter, outfile)
- finally:
- if outfn:
- outfile.close()
- return 0
- else:
- # line by line processing of stdin (eg: for 'tail -f')...
- try:
- while 1:
- line = sys.stdin.buffer.readline()
- if not line:
- break
- if not inencoding:
- line = guess_decode_from_terminal(line, sys.stdin)[0]
- highlight(line, lexer, fmter, outfile)
- if hasattr(outfile, 'flush'):
- outfile.flush()
- return 0
- except KeyboardInterrupt: # pragma: no cover
- return 0
- finally:
- if outfn:
- outfile.close()
-
-
-class HelpFormatter(argparse.HelpFormatter):
- def __init__(self, prog, indent_increment=2, max_help_position=16, width=None):
- if width is None:
- try:
- width = shutil.get_terminal_size().columns - 2
- except Exception:
- pass
- argparse.HelpFormatter.__init__(self, prog, indent_increment,
- max_help_position, width)
-
-
-def main(args=sys.argv):
- """
- Main command line entry point.
- """
- desc = "Highlight an input file and write the result to an output file."
- parser = argparse.ArgumentParser(description=desc, add_help=False,
- formatter_class=HelpFormatter)
-
- operation = parser.add_argument_group('Main operation')
- lexersel = operation.add_mutually_exclusive_group()
- lexersel.add_argument(
- '-l', metavar='LEXER',
- help='Specify the lexer to use. (Query names with -L.) If not '
- 'given and -g is not present, the lexer is guessed from the filename.')
- lexersel.add_argument(
- '-g', action='store_true',
- help='Guess the lexer from the file contents, or pass through '
- 'as plain text if nothing can be guessed.')
- operation.add_argument(
- '-F', metavar='FILTER[:options]', action='append',
- help='Add a filter to the token stream. (Query names with -L.) '
- 'Filter options are given after a colon if necessary.')
- operation.add_argument(
- '-f', metavar='FORMATTER',
- help='Specify the formatter to use. (Query names with -L.) '
- 'If not given, the formatter is guessed from the output filename, '
- 'and defaults to the terminal formatter if the output is to the '
- 'terminal or an unknown file extension.')
- operation.add_argument(
- '-O', metavar='OPTION=value[,OPTION=value,...]', action='append',
- help='Give options to the lexer and formatter as a comma-separated '
- 'list of key-value pairs. '
- 'Example: `-O bg=light,python=cool`.')
- operation.add_argument(
- '-P', metavar='OPTION=value', action='append',
- help='Give a single option to the lexer and formatter - with this '
- 'you can pass options whose value contains commas and equal signs. '
- 'Example: `-P "heading=Pygments, the Python highlighter"`.')
- operation.add_argument(
- '-o', metavar='OUTPUTFILE',
- help='Where to write the output. Defaults to standard output.')
-
- operation.add_argument(
- 'INPUTFILE', nargs='?',
- help='Where to read the input. Defaults to standard input.')
-
- flags = parser.add_argument_group('Operation flags')
- flags.add_argument(
- '-v', action='store_true',
- help='Print a detailed traceback on unhandled exceptions, which '
- 'is useful for debugging and bug reports.')
- flags.add_argument(
- '-s', action='store_true',
- help='Process lines one at a time until EOF, rather than waiting to '
- 'process the entire file. This only works for stdin, only for lexers '
- 'with no line-spanning constructs, and is intended for streaming '
- 'input such as you get from `tail -f`. '
- 'Example usage: `tail -f sql.log | pygmentize -s -l sql`.')
- flags.add_argument(
- '-x', action='store_true',
- help='Allow custom lexers and formatters to be loaded from a .py file '
- 'relative to the current working directory. For example, '
- '`-l ./customlexer.py -x`. By default, this option expects a file '
- 'with a class named CustomLexer or CustomFormatter; you can also '
- 'specify your own class name with a colon (`-l ./lexer.py:MyLexer`). '
- 'Users should be very careful not to use this option with untrusted '
- 'files, because it will import and run them.')
- flags.add_argument('--json', help='Output as JSON. This can '
- 'only be used in conjunction with -L.',
- default=False,
- action='store_true')
-
- special_modes_group = parser.add_argument_group(
- 'Special modes - do not do any highlighting')
- special_modes = special_modes_group.add_mutually_exclusive_group()
- special_modes.add_argument(
- '-S', metavar='STYLE -f formatter',
- help='Print style definitions for STYLE for a formatter '
- 'given with -f. The argument given by -a is formatter '
- 'dependent.')
- special_modes.add_argument(
- '-L', nargs='*', metavar='WHAT',
- help='List lexers, formatters, styles or filters -- '
- 'give additional arguments for the thing(s) you want to list '
- '(e.g. "styles"), or omit them to list everything.')
- special_modes.add_argument(
- '-N', metavar='FILENAME',
- help='Guess and print out a lexer name based solely on the given '
- 'filename. Does not take input or highlight anything. If no specific '
- 'lexer can be determined, "text" is printed.')
- special_modes.add_argument(
- '-C', action='store_true',
- help='Like -N, but print out a lexer name based solely on '
- 'a given content from standard input.')
- special_modes.add_argument(
- '-H', action='store', nargs=2, metavar=('NAME', 'TYPE'),
- help='Print detailed help for the object <name> of type <type>, '
- 'where <type> is one of "lexer", "formatter" or "filter".')
- special_modes.add_argument(
- '-V', action='store_true',
- help='Print the package version.')
- special_modes.add_argument(
- '-h', '--help', action='store_true',
- help='Print this help.')
- special_modes_group.add_argument(
- '-a', metavar='ARG',
- help='Formatter-specific additional argument for the -S (print '
- 'style sheet) mode.')
-
- argns = parser.parse_args(args[1:])
-
- try:
- return main_inner(parser, argns)
- except BrokenPipeError:
- # someone closed our stdout, e.g. by quitting a pager.
- return 0
- except Exception:
- if argns.v:
- print(file=sys.stderr)
- print('*' * 65, file=sys.stderr)
- print('An unhandled exception occurred while highlighting.',
- file=sys.stderr)
- print('Please report the whole traceback to the issue tracker at',
- file=sys.stderr)
- print('<https://github.com/pygments/pygments/issues>.',
- file=sys.stderr)
- print('*' * 65, file=sys.stderr)
- print(file=sys.stderr)
- raise
- import traceback
- info = traceback.format_exception(*sys.exc_info())
- msg = info[-1].strip()
- if len(info) >= 3:
- # extract relevant file and position info
- msg += '\n (f%s)' % info[-2].split('\n')[0].strip()[1:]
- print(file=sys.stderr)
- print('*** Error while highlighting:', file=sys.stderr)
- print(msg, file=sys.stderr)
- print('*** If this is a bug you want to report, please rerun with -v.',
- file=sys.stderr)
- return 1
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/style.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/style.py
deleted file mode 100644
index 84abbc20599f034626779702abc2303901d83ee5..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/style.py
+++ /dev/null
@@ -1,197 +0,0 @@
-"""
- pygments.style
- ~~~~~~~~~~~~~~
-
- Basic style object.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pip._vendor.pygments.token import Token, STANDARD_TYPES
-
-# Default mapping of ansixxx to RGB colors.
-_ansimap = {
- # dark
- 'ansiblack': '000000',
- 'ansired': '7f0000',
- 'ansigreen': '007f00',
- 'ansiyellow': '7f7fe0',
- 'ansiblue': '00007f',
- 'ansimagenta': '7f007f',
- 'ansicyan': '007f7f',
- 'ansigray': 'e5e5e5',
- # normal
- 'ansibrightblack': '555555',
- 'ansibrightred': 'ff0000',
- 'ansibrightgreen': '00ff00',
- 'ansibrightyellow': 'ffff00',
- 'ansibrightblue': '0000ff',
- 'ansibrightmagenta': 'ff00ff',
- 'ansibrightcyan': '00ffff',
- 'ansiwhite': 'ffffff',
-}
-# mapping of deprecated #ansixxx colors to new color names
-_deprecated_ansicolors = {
- # dark
- '#ansiblack': 'ansiblack',
- '#ansidarkred': 'ansired',
- '#ansidarkgreen': 'ansigreen',
- '#ansibrown': 'ansiyellow',
- '#ansidarkblue': 'ansiblue',
- '#ansipurple': 'ansimagenta',
- '#ansiteal': 'ansicyan',
- '#ansilightgray': 'ansigray',
- # normal
- '#ansidarkgray': 'ansibrightblack',
- '#ansired': 'ansibrightred',
- '#ansigreen': 'ansibrightgreen',
- '#ansiyellow': 'ansibrightyellow',
- '#ansiblue': 'ansibrightblue',
- '#ansifuchsia': 'ansibrightmagenta',
- '#ansiturquoise': 'ansibrightcyan',
- '#ansiwhite': 'ansiwhite',
-}
-ansicolors = set(_ansimap)
-
-
-class StyleMeta(type):
-
- def __new__(mcs, name, bases, dct):
- obj = type.__new__(mcs, name, bases, dct)
- for token in STANDARD_TYPES:
- if token not in obj.styles:
- obj.styles[token] = ''
-
- def colorformat(text):
- if text in ansicolors:
- return text
- if text[0:1] == '#':
- col = text[1:]
- if len(col) == 6:
- return col
- elif len(col) == 3:
- return col[0] * 2 + col[1] * 2 + col[2] * 2
- elif text == '':
- return ''
- elif text.startswith('var') or text.startswith('calc'):
- return text
- assert False, "wrong color format %r" % text
-
- _styles = obj._styles = {}
-
- for ttype in obj.styles:
- for token in ttype.split():
- if token in _styles:
- continue
- ndef = _styles.get(token.parent, None)
- styledefs = obj.styles.get(token, '').split()
- if not ndef or token is None:
- ndef = ['', 0, 0, 0, '', '', 0, 0, 0]
- elif 'noinherit' in styledefs and token is not Token:
- ndef = _styles[Token][:]
- else:
- ndef = ndef[:]
- _styles[token] = ndef
- for styledef in obj.styles.get(token, '').split():
- if styledef == 'noinherit':
- pass
- elif styledef == 'bold':
- ndef[1] = 1
- elif styledef == 'nobold':
- ndef[1] = 0
- elif styledef == 'italic':
- ndef[2] = 1
- elif styledef == 'noitalic':
- ndef[2] = 0
- elif styledef == 'underline':
- ndef[3] = 1
- elif styledef == 'nounderline':
- ndef[3] = 0
- elif styledef[:3] == 'bg:':
- ndef[4] = colorformat(styledef[3:])
- elif styledef[:7] == 'border:':
- ndef[5] = colorformat(styledef[7:])
- elif styledef == 'roman':
- ndef[6] = 1
- elif styledef == 'sans':
- ndef[7] = 1
- elif styledef == 'mono':
- ndef[8] = 1
- else:
- ndef[0] = colorformat(styledef)
-
- return obj
-
- def style_for_token(cls, token):
- t = cls._styles[token]
- ansicolor = bgansicolor = None
- color = t[0]
- if color in _deprecated_ansicolors:
- color = _deprecated_ansicolors[color]
- if color in ansicolors:
- ansicolor = color
- color = _ansimap[color]
- bgcolor = t[4]
- if bgcolor in _deprecated_ansicolors:
- bgcolor = _deprecated_ansicolors[bgcolor]
- if bgcolor in ansicolors:
- bgansicolor = bgcolor
- bgcolor = _ansimap[bgcolor]
-
- return {
- 'color': color or None,
- 'bold': bool(t[1]),
- 'italic': bool(t[2]),
- 'underline': bool(t[3]),
- 'bgcolor': bgcolor or None,
- 'border': t[5] or None,
- 'roman': bool(t[6]) or None,
- 'sans': bool(t[7]) or None,
- 'mono': bool(t[8]) or None,
- 'ansicolor': ansicolor,
- 'bgansicolor': bgansicolor,
- }
-
- def list_styles(cls):
- return list(cls)
-
- def styles_token(cls, ttype):
- return ttype in cls._styles
-
- def __iter__(cls):
- for token in cls._styles:
- yield token, cls.style_for_token(token)
-
- def __len__(cls):
- return len(cls._styles)
-
-
-class Style(metaclass=StyleMeta):
-
- #: overall background color (``None`` means transparent)
- background_color = '#ffffff'
-
- #: highlight background color
- highlight_color = '#ffffcc'
-
- #: line number font color
- line_number_color = 'inherit'
-
- #: line number background color
- line_number_background_color = 'transparent'
-
- #: special line number font color
- line_number_special_color = '#000000'
-
- #: special line number background color
- line_number_special_background_color = '#ffffc0'
-
- #: Style definitions for individual token types.
- styles = {}
-
- # Attribute for lexers defined within Pygments. If set
- # to True, the style is not shown in the style gallery
- # on the website. This is intended for language-specific
- # styles.
- web_style_gallery_exclude = False
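The `styles` mapping consumed by `StyleMeta` above is a space-separated mini-language: color tokens (`#rgb`, `#rrggbb`, or `ansi...` names), flags (`bold`, `italic`, `underline`, `noinherit`, ...), and prefixed attributes (`bg:`, `border:`). A short sketch of a custom style against the non-vendored `pygments` package; `DemoStyle` and its colors are illustrative:

```python
from pygments.style import Style
from pygments.token import Comment, Keyword, Token


class DemoStyle(Style):
    background_color = '#f8f8f8'
    styles = {
        Token:   '',                            # root style; all tokens inherit from it
        Keyword: 'bold #00007f',                # bold flag + hex foreground color
        Comment: 'italic ansigray bg:#ffffcc',  # ANSI color name + background attribute
    }


# StyleMeta flattens each definition into one dict per token type:
print(DemoStyle.style_for_token(Keyword))
# {'color': '00007f', 'bold': True, 'italic': False, 'underline': False, ...}
```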
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py
deleted file mode 100644
index cbd6da9be4956ce8558304ed72ffbe88ccd22ba5..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from typing import Any
-
-
-def load_ipython_extension(ip: Any) -> None: # pragma: no cover
- # prevent circular import
- from pip._vendor.rich.pretty import install
- from pip._vendor.rich.traceback import install as tr_install
-
- install()
- tr_install()
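This hook is what `%load_ext rich` invokes in IPython for the standalone `rich` distribution (the `pip._vendor` copy is not meant to be imported directly). A sketch of the equivalent manual setup, assuming standalone `rich` is installed:

```python
from rich.pretty import install
from rich.traceback import install as tr_install

install()     # pretty-print evaluated expressions in the REPL
tr_install()  # render rich, syntax-highlighted tracebacks
```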
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_wrap.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_wrap.py
deleted file mode 100644
index c45f193f74ad7385c84f3b935663198415cfaa4b..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_wrap.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import re
-from typing import Iterable, List, Tuple
-
-from ._loop import loop_last
-from .cells import cell_len, chop_cells
-
-re_word = re.compile(r"\s*\S+\s*")
-
-
-def words(text: str) -> Iterable[Tuple[int, int, str]]:
- position = 0
- word_match = re_word.match(text, position)
- while word_match is not None:
- start, end = word_match.span()
- word = word_match.group(0)
- yield start, end, word
- word_match = re_word.match(text, end)
-
-
-def divide_line(text: str, width: int, fold: bool = True) -> List[int]:
- divides: List[int] = []
- append = divides.append
- line_position = 0
- _cell_len = cell_len
- for start, _end, word in words(text):
- word_length = _cell_len(word.rstrip())
- if line_position + word_length > width:
- if word_length > width:
- if fold:
- chopped_words = chop_cells(word, max_size=width, position=0)
- for last, line in loop_last(chopped_words):
- if start:
- append(start)
-
- if last:
- line_position = _cell_len(line)
- else:
- start += len(line)
- else:
- if start:
- append(start)
- line_position = _cell_len(word)
- elif line_position and start:
- append(start)
- line_position = _cell_len(word)
- else:
- line_position += _cell_len(word)
- return divides
-
-
-if __name__ == "__main__": # pragma: no cover
- from .console import Console
-
- console = Console(width=10)
- console.print("12345 abcdefghijklmnopqrstuvwyxzABCDEFGHIJKLMNOPQRSTUVWXYZ 12345")
- print(chop_cells("abcdefghijklmnopqrstuvwxyz", 10, position=2))
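Note that `divide_line` returns character offsets, not lines: each offset marks where a new line should begin, so slicing between consecutive offsets yields the wrapped text. A small driver, assuming the module is importable as `rich._wrap` (under pip it lives at `pip._vendor.rich._wrap`):

```python
from rich._wrap import divide_line

text = "12345 abcdefghij 12345"
offsets = divide_line(text, width=10)   # -> [6, 17] for this input

# slice between consecutive break offsets to recover the wrapped lines
starts = [0, *offsets]
ends = [*offsets, len(text)]
for begin, end in zip(starts, ends):
    print(text[begin:end].rstrip())
# 12345
# abcdefghij
# 12345
```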
diff --git a/spaces/Awesimo/jojogan/e4e/criteria/lpips/networks.py b/spaces/Awesimo/jojogan/e4e/criteria/lpips/networks.py
deleted file mode 100644
index 3a0d13ad2d560278f16586da68d3a5eadb26e746..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/criteria/lpips/networks.py
+++ /dev/null
@@ -1,96 +0,0 @@
-from typing import Sequence
-
-from itertools import chain
-
-import torch
-import torch.nn as nn
-from torchvision import models
-
-from criteria.lpips.utils import normalize_activation
-
-
-def get_network(net_type: str):
- if net_type == 'alex':
- return AlexNet()
- elif net_type == 'squeeze':
- return SqueezeNet()
- elif net_type == 'vgg':
- return VGG16()
- else:
- raise NotImplementedError('choose net_type from [alex, squeeze, vgg].')
-
-
-class LinLayers(nn.ModuleList):
- def __init__(self, n_channels_list: Sequence[int]):
- super(LinLayers, self).__init__([
- nn.Sequential(
- nn.Identity(),
- nn.Conv2d(nc, 1, 1, 1, 0, bias=False)
- ) for nc in n_channels_list
- ])
-
- for param in self.parameters():
- param.requires_grad = False
-
-
-class BaseNet(nn.Module):
- def __init__(self):
- super(BaseNet, self).__init__()
-
- # register buffer
- self.register_buffer(
- 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None])
- self.register_buffer(
- 'std', torch.Tensor([.458, .448, .450])[None, :, None, None])
-
- def set_requires_grad(self, state: bool):
- for param in chain(self.parameters(), self.buffers()):
- param.requires_grad = state
-
- def z_score(self, x: torch.Tensor):
- return (x - self.mean) / self.std
-
- def forward(self, x: torch.Tensor):
- x = self.z_score(x)
-
- output = []
- for i, (_, layer) in enumerate(self.layers._modules.items(), 1):
- x = layer(x)
- if i in self.target_layers:
- output.append(normalize_activation(x))
- if len(output) == len(self.target_layers):
- break
- return output
-
-
-class SqueezeNet(BaseNet):
- def __init__(self):
- super(SqueezeNet, self).__init__()
-
- self.layers = models.squeezenet1_1(True).features
- self.target_layers = [2, 5, 8, 10, 11, 12, 13]
- self.n_channels_list = [64, 128, 256, 384, 384, 512, 512]
-
- self.set_requires_grad(False)
-
-
-class AlexNet(BaseNet):
- def __init__(self):
- super(AlexNet, self).__init__()
-
- self.layers = models.alexnet(True).features
- self.target_layers = [2, 5, 8, 10, 12]
- self.n_channels_list = [64, 192, 384, 256, 256]
-
- self.set_requires_grad(False)
-
-
-class VGG16(BaseNet):
- def __init__(self):
- super(VGG16, self).__init__()
-
- self.layers = models.vgg16(True).features
- self.target_layers = [4, 9, 16, 23, 30]
- self.n_channels_list = [64, 128, 256, 512, 512]
-
- self.set_requires_grad(False)
\ No newline at end of file
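These frozen backbones are the feature extractors behind an LPIPS-style perceptual distance: both images pass through the same network, and the unit-normalized activations from `target_layers` are compared layer by layer. A usage sketch under the assumption that `torch`/`torchvision` are installed (the first call downloads pretrained weights):

```python
import torch

net = get_network('alex')   # AlexNet features; requires_grad disabled above
net.eval()

x = torch.rand(1, 3, 64, 64)   # two RGB images scaled to [0, 1]
y = torch.rand(1, 3, 64, 64)

with torch.no_grad():
    feats_x = net(x)   # list of normalized activations, one per target layer
    feats_y = net(y)

# per-layer squared differences: the raw ingredients of an LPIPS score,
# before the learned LinLayers weighting is applied
dists = [((fx - fy) ** 2).mean().item() for fx, fy in zip(feats_x, feats_y)]
print(dists)
```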
diff --git a/spaces/Ayemos/highlight_text_based_on_surprisals/README.md b/spaces/Ayemos/highlight_text_based_on_surprisals/README.md
deleted file mode 100644
index 46e0ece540f9204cee0877b6b628aa4cb4f1aee1..0000000000000000000000000000000000000000
--- a/spaces/Ayemos/highlight_text_based_on_surprisals/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Highlight text based on readability (=surprisal)
-emoji: 🐠
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Bart92/RVC_HF/go-tensorboard.bat b/spaces/Bart92/RVC_HF/go-tensorboard.bat
deleted file mode 100644
index cb81c17d3865513adec8eb0b832b7888cd1e4078..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/go-tensorboard.bat
+++ /dev/null
@@ -1,2 +0,0 @@
-python fixes/tensor-launch.py
-pause
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyproject_hooks/_impl.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyproject_hooks/_impl.py
deleted file mode 100644
index 37b0e6531f1544e1ba9b5895c48939fc97441ce7..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyproject_hooks/_impl.py
+++ /dev/null
@@ -1,330 +0,0 @@
-import json
-import os
-import sys
-import tempfile
-from contextlib import contextmanager
-from os.path import abspath
-from os.path import join as pjoin
-from subprocess import STDOUT, check_call, check_output
-
-from ._in_process import _in_proc_script_path
-
-
-def write_json(obj, path, **kwargs):
- with open(path, 'w', encoding='utf-8') as f:
- json.dump(obj, f, **kwargs)
-
-
-def read_json(path):
- with open(path, encoding='utf-8') as f:
- return json.load(f)
-
-
-class BackendUnavailable(Exception):
- """Will be raised if the backend cannot be imported in the hook process."""
- def __init__(self, traceback):
- self.traceback = traceback
-
-
-class BackendInvalid(Exception):
- """Will be raised if the backend is invalid."""
- def __init__(self, backend_name, backend_path, message):
- super().__init__(message)
- self.backend_name = backend_name
- self.backend_path = backend_path
-
-
-class HookMissing(Exception):
- """Will be raised on missing hooks (if a fallback can't be used)."""
- def __init__(self, hook_name):
- super().__init__(hook_name)
- self.hook_name = hook_name
-
-
-class UnsupportedOperation(Exception):
- """May be raised by build_sdist if the backend indicates that it can't."""
- def __init__(self, traceback):
- self.traceback = traceback
-
-
-def default_subprocess_runner(cmd, cwd=None, extra_environ=None):
- """The default method of calling the wrapper subprocess.
-
- This uses :func:`subprocess.check_call` under the hood.
- """
- env = os.environ.copy()
- if extra_environ:
- env.update(extra_environ)
-
- check_call(cmd, cwd=cwd, env=env)
-
-
-def quiet_subprocess_runner(cmd, cwd=None, extra_environ=None):
- """Call the subprocess while suppressing output.
-
- This uses :func:`subprocess.check_output` under the hood.
- """
- env = os.environ.copy()
- if extra_environ:
- env.update(extra_environ)
-
- check_output(cmd, cwd=cwd, env=env, stderr=STDOUT)
-
-
-def norm_and_check(source_tree, requested):
- """Normalise and check a backend path.
-
- Ensure that the requested backend path is specified as a relative path,
- and resolves to a location under the given source tree.
-
- Return an absolute version of the requested path.
- """
- if os.path.isabs(requested):
- raise ValueError("paths must be relative")
-
- abs_source = os.path.abspath(source_tree)
- abs_requested = os.path.normpath(os.path.join(abs_source, requested))
- # We have to use commonprefix for Python 2.7 compatibility. So we
- # normalise case to avoid problems because commonprefix is a character
- # based comparison :-(
- norm_source = os.path.normcase(abs_source)
- norm_requested = os.path.normcase(abs_requested)
- if os.path.commonprefix([norm_source, norm_requested]) != norm_source:
- raise ValueError("paths must be inside source tree")
-
- return abs_requested
-
-
-class BuildBackendHookCaller:
- """A wrapper to call the build backend hooks for a source directory.
- """
-
- def __init__(
- self,
- source_dir,
- build_backend,
- backend_path=None,
- runner=None,
- python_executable=None,
- ):
- """
- :param source_dir: The source directory to invoke the build backend for
- :param build_backend: The build backend spec
- :param backend_path: Additional path entries for the build backend spec
- :param runner: The :ref:`subprocess runner <Subprocess Runners>` to use
- :param python_executable:
- The Python executable used to invoke the build backend
- """
- if runner is None:
- runner = default_subprocess_runner
-
- self.source_dir = abspath(source_dir)
- self.build_backend = build_backend
- if backend_path:
- backend_path = [
- norm_and_check(self.source_dir, p) for p in backend_path
- ]
- self.backend_path = backend_path
- self._subprocess_runner = runner
- if not python_executable:
- python_executable = sys.executable
- self.python_executable = python_executable
-
- @contextmanager
- def subprocess_runner(self, runner):
- """A context manager for temporarily overriding the default
- :ref:`subprocess runner <Subprocess Runners>`.
-
- .. code-block:: python
-
- hook_caller = BuildBackendHookCaller(...)
- with hook_caller.subprocess_runner(quiet_subprocess_runner):
- ...
- """
- prev = self._subprocess_runner
- self._subprocess_runner = runner
- try:
- yield
- finally:
- self._subprocess_runner = prev
-
- def _supported_features(self):
- """Return the list of optional features supported by the backend."""
- return self._call_hook('_supported_features', {})
-
- def get_requires_for_build_wheel(self, config_settings=None):
- """Get additional dependencies required for building a wheel.
-
- :returns: A list of :pep:`dependency specifiers <508>`.
- :rtype: list[str]
-
- .. admonition:: Fallback
-
- If the build backend does not define a hook with this name, an
- empty list will be returned.
- """
- return self._call_hook('get_requires_for_build_wheel', {
- 'config_settings': config_settings
- })
-
- def prepare_metadata_for_build_wheel(
- self, metadata_directory, config_settings=None,
- _allow_fallback=True):
- """Prepare a ``*.dist-info`` folder with metadata for this project.
-
- :returns: Name of the newly created subfolder within
- ``metadata_directory``, containing the metadata.
- :rtype: str
-
- .. admonition:: Fallback
-
- If the build backend does not define a hook with this name and
- ``_allow_fallback`` is truthy, the backend will be asked to build a
- wheel via the ``build_wheel`` hook and the dist-info extracted from
- that will be returned.
- """
- return self._call_hook('prepare_metadata_for_build_wheel', {
- 'metadata_directory': abspath(metadata_directory),
- 'config_settings': config_settings,
- '_allow_fallback': _allow_fallback,
- })
-
- def build_wheel(
- self, wheel_directory, config_settings=None,
- metadata_directory=None):
- """Build a wheel from this project.
-
- :returns:
- The name of the newly created wheel within ``wheel_directory``.
-
- .. admonition:: Interaction with fallback
-
- If the ``build_wheel`` hook was called in the fallback for
- :meth:`prepare_metadata_for_build_wheel`, the build backend would
- not be invoked. Instead, the previously built wheel will be copied
- to ``wheel_directory`` and the name of that file will be returned.
- """
- if metadata_directory is not None:
- metadata_directory = abspath(metadata_directory)
- return self._call_hook('build_wheel', {
- 'wheel_directory': abspath(wheel_directory),
- 'config_settings': config_settings,
- 'metadata_directory': metadata_directory,
- })
-
- def get_requires_for_build_editable(self, config_settings=None):
- """Get additional dependencies required for building an editable wheel.
-
- :returns: A list of :pep:`dependency specifiers <508>`.
- :rtype: list[str]
-
- .. admonition:: Fallback
-
- If the build backend does not define a hook with this name, an
- empty list will be returned.
- """
- return self._call_hook('get_requires_for_build_editable', {
- 'config_settings': config_settings
- })
-
- def prepare_metadata_for_build_editable(
- self, metadata_directory, config_settings=None,
- _allow_fallback=True):
- """Prepare a ``*.dist-info`` folder with metadata for this project.
-
- :returns: Name of the newly created subfolder within
- ``metadata_directory``, containing the metadata.
- :rtype: str
-
- .. admonition:: Fallback
-
- If the build backend does not define a hook with this name and
- ``_allow_fallback`` is truthy, the backend will be asked to build a
- wheel via the ``build_editable`` hook and the dist-info
- extracted from that will be returned.
- """
- return self._call_hook('prepare_metadata_for_build_editable', {
- 'metadata_directory': abspath(metadata_directory),
- 'config_settings': config_settings,
- '_allow_fallback': _allow_fallback,
- })
-
- def build_editable(
- self, wheel_directory, config_settings=None,
- metadata_directory=None):
- """Build an editable wheel from this project.
-
- :returns:
- The name of the newly created wheel within ``wheel_directory``.
-
- .. admonition:: Interaction with fallback
-
- If the ``build_editable`` hook was called in the fallback for
- :meth:`prepare_metadata_for_build_editable`, the build backend
- would not be invoked. Instead, the previously built wheel will be
- copied to ``wheel_directory`` and the name of that file will be
- returned.
- """
- if metadata_directory is not None:
- metadata_directory = abspath(metadata_directory)
- return self._call_hook('build_editable', {
- 'wheel_directory': abspath(wheel_directory),
- 'config_settings': config_settings,
- 'metadata_directory': metadata_directory,
- })
-
- def get_requires_for_build_sdist(self, config_settings=None):
- """Get additional dependencies required for building an sdist.
-
- :returns: A list of :pep:`dependency specifiers <508>`.
- :rtype: list[str]
- """
- return self._call_hook('get_requires_for_build_sdist', {
- 'config_settings': config_settings
- })
-
- def build_sdist(self, sdist_directory, config_settings=None):
- """Build an sdist from this project.
-
- :returns:
- The name of the newly created sdist within ``sdist_directory``.
- """
- return self._call_hook('build_sdist', {
- 'sdist_directory': abspath(sdist_directory),
- 'config_settings': config_settings,
- })
-
- def _call_hook(self, hook_name, kwargs):
- extra_environ = {'PEP517_BUILD_BACKEND': self.build_backend}
-
- if self.backend_path:
- backend_path = os.pathsep.join(self.backend_path)
- extra_environ['PEP517_BACKEND_PATH'] = backend_path
-
- with tempfile.TemporaryDirectory() as td:
- hook_input = {'kwargs': kwargs}
- write_json(hook_input, pjoin(td, 'input.json'), indent=2)
-
- # Run the hook in a subprocess
- with _in_proc_script_path() as script:
- python = self.python_executable
- self._subprocess_runner(
- [python, abspath(str(script)), hook_name, td],
- cwd=self.source_dir,
- extra_environ=extra_environ
- )
-
- data = read_json(pjoin(td, 'output.json'))
- if data.get('unsupported'):
- raise UnsupportedOperation(data.get('traceback', ''))
- if data.get('no_backend'):
- raise BackendUnavailable(data.get('traceback', ''))
- if data.get('backend_invalid'):
- raise BackendInvalid(
- backend_name=self.build_backend,
- backend_path=self.backend_path,
- message=data.get('backend_error', '')
- )
- if data.get('hook_missing'):
- raise HookMissing(data.get('missing_hook_name') or hook_name)
- return data['return_val']
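End to end, a PEP 517 frontend drives this class roughly as follows: construct it with the source tree and the backend named in `pyproject.toml`, optionally swap in the quiet runner, then call hooks. A sketch with an illustrative project path:

```python
import os

hooks = BuildBackendHookCaller(
    source_dir='path/to/project',           # directory containing pyproject.toml
    build_backend='setuptools.build_meta',  # the [build-system] build-backend value
)

# extra build dependencies the backend asks for (may be an empty list)
print(hooks.get_requires_for_build_wheel())

# build the wheel with subprocess output suppressed; returns the wheel filename
with hooks.subprocess_runner(quiet_subprocess_runner):
    wheel_name = hooks.build_wheel(os.path.abspath('dist'))
print(wheel_name)
```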
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/tornadoweb.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/tornadoweb.py
deleted file mode 100644
index e19c30b18905a39466ab6b51403438605e706caf..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/tornadoweb.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# Copyright 2017 Elisey Zanko
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-import typing
-
-from pip._vendor.tenacity import BaseRetrying
-from pip._vendor.tenacity import DoAttempt
-from pip._vendor.tenacity import DoSleep
-from pip._vendor.tenacity import RetryCallState
-
-from tornado import gen
-
-if typing.TYPE_CHECKING:
- from tornado.concurrent import Future
-
-_RetValT = typing.TypeVar("_RetValT")
-
-
-class TornadoRetrying(BaseRetrying):
- def __init__(self, sleep: "typing.Callable[[float], Future[None]]" = gen.sleep, **kwargs: typing.Any) -> None:
- super().__init__(**kwargs)
- self.sleep = sleep
-
- @gen.coroutine # type: ignore[misc]
- def __call__(
- self,
- fn: "typing.Callable[..., typing.Union[typing.Generator[typing.Any, typing.Any, _RetValT], Future[_RetValT]]]",
- *args: typing.Any,
- **kwargs: typing.Any,
- ) -> "typing.Generator[typing.Any, typing.Any, _RetValT]":
- self.begin()
-
- retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
- while True:
- do = self.iter(retry_state=retry_state)
- if isinstance(do, DoAttempt):
- try:
- result = yield fn(*args, **kwargs)
- except BaseException: # noqa: B902
- retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
- else:
- retry_state.set_result(result)
- elif isinstance(do, DoSleep):
- retry_state.prepare_for_next_attempt()
- yield self.sleep(do)
- else:
- raise gen.Return(do)
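Since `TornadoRetrying.__call__` is itself a coroutine, a coroutine is retried by yielding (or `run_sync`-ing) the wrapped call; each failed attempt waits via non-blocking `gen.sleep`. A sketch using standard tenacity wait/stop strategies, assuming tornado is actually installed (this vendored module is only importable when it is):

```python
from tornado import gen, ioloop

from pip._vendor.tenacity import stop_after_attempt, wait_fixed

retryer = TornadoRetrying(stop=stop_after_attempt(3), wait=wait_fixed(0.1))
attempts = {'count': 0}


@gen.coroutine
def flaky():
    # fails twice, then succeeds on the third attempt
    attempts['count'] += 1
    if attempts['count'] < 3:
        raise IOError('transient failure')
    raise gen.Return('ok')


# drive the retrying coroutine to completion on the IOLoop
result = ioloop.IOLoop.current().run_sync(lambda: retryer(flaky))
print(result, attempts['count'])   # ok 3
```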
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/spawn.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/spawn.py
deleted file mode 100644
index b18ba9db7d2e5919c853e7dcf8d5b7c180607c3f..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/spawn.py
+++ /dev/null
@@ -1,109 +0,0 @@
-"""distutils.spawn
-
-Provides the 'spawn()' function, a front-end to various platform-
-specific functions for launching another program in a sub-process.
-Also provides the 'find_executable()' to search the path for a given
-executable name.
-"""
-
-import sys
-import os
-import subprocess
-
-from distutils.errors import DistutilsExecError
-from distutils.debug import DEBUG
-from distutils import log
-
-
-def spawn(cmd, search_path=1, verbose=0, dry_run=0, env=None): # noqa: C901
- """Run another program, specified as a command list 'cmd', in a new process.
-
- 'cmd' is just the argument list for the new process, i.e.
- cmd[0] is the program to run and cmd[1:] are the rest of its arguments.
- There is no way to run a program with a name different from that of its
- executable.
-
- If 'search_path' is true (the default), the system's executable
- search path will be used to find the program; otherwise, cmd[0]
- must be the exact path to the executable. If 'dry_run' is true,
- the command will not actually be run.
-
- Raise DistutilsExecError if running the program fails in any way; just
- return on success.
- """
- # cmd is documented as a list, but just in case some code passes a tuple
- # in, protect our %-formatting code against horrible death
- cmd = list(cmd)
-
- log.info(subprocess.list2cmdline(cmd))
- if dry_run:
- return
-
- if search_path:
- executable = find_executable(cmd[0])
- if executable is not None:
- cmd[0] = executable
-
- env = env if env is not None else dict(os.environ)
-
- if sys.platform == 'darwin':
- from distutils.util import MACOSX_VERSION_VAR, get_macosx_target_ver
-
- macosx_target_ver = get_macosx_target_ver()
- if macosx_target_ver:
- env[MACOSX_VERSION_VAR] = macosx_target_ver
-
- try:
- proc = subprocess.Popen(cmd, env=env)
- proc.wait()
- exitcode = proc.returncode
- except OSError as exc:
- if not DEBUG:
- cmd = cmd[0]
- raise DistutilsExecError(
- "command {!r} failed: {}".format(cmd, exc.args[-1])
- ) from exc
-
- if exitcode:
- if not DEBUG:
- cmd = cmd[0]
- raise DistutilsExecError(
- "command {!r} failed with exit code {}".format(cmd, exitcode)
- )
-
-
-def find_executable(executable, path=None):
- """Tries to find 'executable' in the directories listed in 'path'.
-
- A string listing directories separated by 'os.pathsep'; defaults to
- os.environ['PATH']. Returns the complete filename or None if not found.
- """
- _, ext = os.path.splitext(executable)
- if (sys.platform == 'win32') and (ext != '.exe'):
- executable = executable + '.exe'
-
- if os.path.isfile(executable):
- return executable
-
- if path is None:
- path = os.environ.get('PATH', None)
- if path is None:
- try:
- path = os.confstr("CS_PATH")
- except (AttributeError, ValueError):
- # os.confstr() or CS_PATH is not available
- path = os.defpath
- # bpo-35755: Don't use os.defpath if the PATH environment variable is
- # set to an empty string
-
- # PATH='' doesn't match, whereas PATH=':' looks in the current directory
- if not path:
- return None
-
- paths = path.split(os.pathsep)
- for p in paths:
- f = os.path.join(p, executable)
- if os.path.isfile(f):
- # the file exists, we have a shot at spawn working
- return f
- return None
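A short usage sketch for the two helpers above: `find_executable` resolves a name against `PATH`, and `spawn` raises `DistutilsExecError` rather than returning an exit status.

```python
from distutils.errors import DistutilsExecError

exe = find_executable('python3') or find_executable('python')
print('interpreter:', exe)

if exe:
    try:
        # runs the command in a subprocess; returns None on success
        spawn([exe, '-c', 'print("spawned ok")'])
    except DistutilsExecError as err:
        print('spawn failed:', err)
```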
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/reference.h b/spaces/CVPR/LIVE/thrust/thrust/detail/reference.h
deleted file mode 100644
index 89bcf63ca7a5d9ba91d242ddaec318a02a832c65..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/reference.h
+++ /dev/null
@@ -1,178 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/reference_forward_declaration.h>
-#include <thrust/detail/type_traits.h>
-#include <thrust/detail/use_default.h>
-#include <ostream>
-
-
-namespace thrust
-{
-namespace detail
-{
-
-template<typename T> struct is_wrapped_reference;
-
-}
-
-// the base type for all of thrust's system-annotated references.
-// for reasonable reference-like semantics, derived types must reimplement the following:
-// 1. constructor from pointer
-// 2. copy constructor
-// 3. templated copy constructor from other reference
-// 4. templated assignment from other reference
-// 5. assignment from value_type
-template<typename Element, typename Pointer, typename Derived>
- class reference
-{
- private:
- typedef typename thrust::detail::eval_if<
- thrust::detail::is_same<Derived, use_default>::value,
- thrust::detail::identity_<reference>,
- thrust::detail::identity_<Derived>
- >::type derived_type;
-
- // hint for is_wrapped_reference lets it know that this type (or a derived type)
- // is a wrapped reference
- struct wrapped_reference_hint {};
- template<typename> friend struct thrust::detail::is_wrapped_reference;
-
- public:
- typedef Pointer pointer;
- typedef typename thrust::detail::remove_const<Element>::type value_type;
-
- __host__ __device__
- explicit reference(const pointer &ptr);
-
-#if THRUST_CPP_DIALECT >= 2011
- reference(const reference &) = default;
-#endif
-
- template<typename OtherElement, typename OtherPointer, typename OtherDerived>
- __host__ __device__
- reference(const reference<OtherElement, OtherPointer, OtherDerived> &other,
- typename thrust::detail::enable_if_convertible<
- typename reference<OtherElement, OtherPointer, OtherDerived>::pointer,
- pointer
- >::type * = 0);
-
- __host__ __device__
- derived_type &operator=(const reference &other);
-
- // XXX this may need an enable_if
- template<typename OtherElement, typename OtherPointer, typename OtherDerived>
- __host__ __device__
- derived_type &operator=(const reference<OtherElement, OtherPointer, OtherDerived> &other);
-
- __host__ __device__
- derived_type &operator=(const value_type &x);
-
- __host__ __device__
- pointer operator&() const;
-
- __host__ __device__
- operator value_type () const;
-
- __host__ __device__
- void swap(derived_type &other);
-
- derived_type &operator++();
-
- value_type operator++(int);
-
- // XXX parameterize the type of rhs
- derived_type &operator+=(const value_type &rhs);
-
- derived_type &operator--();
-
- value_type operator--(int);
-
- // XXX parameterize the type of rhs
- derived_type &operator-=(const value_type &rhs);
-
- // XXX parameterize the type of rhs
- derived_type &operator*=(const value_type &rhs);
-
- // XXX parameterize the type of rhs
- derived_type &operator/=(const value_type &rhs);
-
- // XXX parameterize the type of rhs
- derived_type &operator%=(const value_type &rhs);
-
- // XXX parameterize the type of rhs
- derived_type &operator<<=(const value_type &rhs);
-
- // XXX parameterize the type of rhs
- derived_type &operator>>=(const value_type &rhs);
-
- // XXX parameterize the type of rhs
- derived_type &operator&=(const value_type &rhs);
-
- // XXX parameterize the type of rhs
- derived_type &operator|=(const value_type &rhs);
-
- // XXX parameterize the type of rhs
- derived_type &operator^=(const value_type &rhs);
-
- private:
- const pointer m_ptr;
-
- // allow access to m_ptr for other references
- template<typename OtherElement, typename OtherPointer, typename OtherDerived> friend class reference;
-
- template<typename System>
- __host__ __device__
- inline value_type strip_const_get_value(const System &system) const;
-
- template<typename OtherPointer>
- __host__ __device__
- inline void assign_from(OtherPointer src);
-
- // XXX this helper exists only to avoid warnings about null references from the other assign_from
- template<typename System1, typename System2, typename OtherPointer>
- inline __host__ __device__
- void assign_from(System1 *system1, System2 *system2, OtherPointer src);
-
- template<typename System, typename OtherPointer>
- __host__ __device__
- inline void strip_const_assign_value(const System &system, OtherPointer src);
-
- // XXX this helper exists only to avoid warnings about null references from the other swap
- template<typename System>
- inline __host__ __device__
- void swap(System *system, derived_type &other);
-
- // XXX this helper exists only to avoid warnings about null references from operator value_type ()
- template<typename System>
- inline __host__ __device__
- value_type convert_to_value_type(System *system) const;
-}; // end reference
-
-// Output stream operator
-template<typename CharT, typename Traits, typename Element, typename Pointer, typename Derived>
-std::basic_ostream<CharT, Traits> &
-operator<<(std::basic_ostream<CharT, Traits> &os,
- const reference<Element, Pointer, Derived> &y);
-
-} // end thrust
-
-#include <thrust/detail/reference.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/partition.h b/spaces/CVPR/LIVE/thrust/thrust/partition.h
deleted file mode 100644
index 3c493e0881639d75faa9516a34588dcfa2ea0fa2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/partition.h
+++ /dev/null
@@ -1,1439 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file partition.h
- * \brief Reorganizes a range based on a predicate
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-#include <thrust/pair.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup reordering
- * \ingroup algorithms
- *
- * \addtogroup partitioning
- * \ingroup reordering
- * \{
- */
-
-
-/*! \p partition reorders the elements [first, last) based on the function
- * object \p pred, such that all of the elements that satisfy \p pred precede the
- * elements that fail to satisfy it. The postcondition is that, for some iterator
- * \c middle in the range [first, last), pred(*i) is \c true for every
- * iterator \c i in the range [first,middle) and \c false for every iterator
- * \c i in the range [middle, last). The return value of \p partition is
- * \c middle.
- *
- * Note that the relative order of elements in the two reordered sequences is not
- * necessarily the same as it was in the original sequence. A different algorithm,
- * \p stable_partition, does guarantee to preserve the relative order.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the sequence to reorder.
- * \param last The end of the sequence to reorder.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return An iterator referring to the first element of the second partition, that is,
- * the sequence of the elements which do not satisfy \p pred.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator's \c value_type is convertible to \p Predicate's \c argument_type,
- * and \p ForwardIterator is mutable.
- * \tparam Predicate is a model of Predicate.
- *
- * The following code snippet demonstrates how to use \p partition to reorder a
- * sequence so that even numbers precede odd numbers using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/execution_policy.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * const int N = sizeof(A)/sizeof(int);
- * thrust::partition(thrust::host,
- * A, A + N,
- * is_even());
- * // A is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partition.html
- * \see \p stable_partition
- * \see \p partition_copy
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename Predicate>
-__host__ __device__
- ForwardIterator partition(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- Predicate pred);
-
-
-/*! \p partition reorders the elements [first, last) based on the function
- * object \p pred, such that all of the elements that satisfy \p pred precede the
- * elements that fail to satisfy it. The postcondition is that, for some iterator
- * \c middle in the range [first, last), pred(*i) is \c true for every
- * iterator \c i in the range [first,middle) and \c false for every iterator
- * \c i in the range [middle, last). The return value of \p partition is
- * \c middle.
- *
- * Note that the relative order of elements in the two reordered sequences is not
- * necessarily the same as it was in the original sequence. A different algorithm,
- * \p stable_partition, does guarantee to preserve the relative order.
- *
- * \param first The beginning of the sequence to reorder.
- * \param last The end of the sequence to reorder.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return An iterator referring to the first element of the second partition, that is,
- * the sequence of the elements which do not satisfy \p pred.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator's \c value_type is convertible to \p Predicate's \c argument_type,
- * and \p ForwardIterator is mutable.
- * \tparam Predicate is a model of Predicate.
- *
- * The following code snippet demonstrates how to use \p partition to reorder a
- * sequence so that even numbers precede odd numbers.
- *
- * \code
- * #include <thrust/partition.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * const int N = sizeof(A)/sizeof(int);
- * thrust::partition(A, A + N,
- * is_even());
- * // A is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partition.html
- * \see \p stable_partition
- * \see \p partition_copy
- */
-template<typename ForwardIterator, typename Predicate>
- ForwardIterator partition(ForwardIterator first,
- ForwardIterator last,
- Predicate pred);
-
-
-/*! \p partition reorders the elements [first, last) based on the function
- * object \p pred applied to a stencil range [stencil, stencil + (last - first)),
- * such that all of the elements whose corresponding stencil element satisfies \p pred precede all of the elements whose
- * corresponding stencil element fails to satisfy it. The postcondition is that, for some iterator
- * \c middle in the range [first, last), pred(*stencil_i) is \c true for every iterator
- * \c stencil_i in the range [stencil,stencil + (middle - first)) and \c false for every iterator \c stencil_i
- * in the range [stencil + (middle - first), stencil + (last - first)).
- * The return value of \p partition is \c middle.
- *
- * Note that the relative order of elements in the two reordered sequences is not
- * necessarily the same as it was in the original sequence. A different algorithm,
- * \p stable_partition, does guarantee to preserve the relative order.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the sequence to reorder.
- * \param last The end of the sequence to reorder.
- * \param stencil The beginning of the stencil sequence.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return An iterator referring to the first element of the second partition, that is,
- * the sequence of the elements whose stencil elements do not satisfy \p pred.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The ranges [first,last) and [stencil, stencil + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p partition to reorder a
- * sequence so that even numbers precede odd numbers using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/execution_policy.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1};
- * int S[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * const int N = sizeof(A)/sizeof(int);
- * thrust::partition(thrust::host, A, A + N, S, is_even());
- * // A is now {1, 1, 1, 1, 1, 0, 0, 0, 0, 0}
- * // S is unmodified
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partition.html
- * \see \p stable_partition
- * \see \p partition_copy
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename Predicate>
-__host__ __device__
- ForwardIterator partition(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil,
- Predicate pred);
-
-
-/*! \p partition reorders the elements [first, last) based on the function
- * object \p pred applied to a stencil range [stencil, stencil + (last - first)),
- * such that all of the elements whose corresponding stencil element satisfies \p pred precede all of the elements whose
- * corresponding stencil element fails to satisfy it. The postcondition is that, for some iterator
- * \c middle in the range [first, last), pred(*stencil_i) is \c true for every iterator
- * \c stencil_i in the range [stencil,stencil + (middle - first)) and \c false for every iterator \c stencil_i
- * in the range [stencil + (middle - first), stencil + (last - first)).
- * The return value of \p partition is \c middle.
- *
- * Note that the relative order of elements in the two reordered sequences is not
- * necessarily the same as it was in the original sequence. A different algorithm,
- * \p stable_partition, does guarantee to preserve the relative order.
- *
- * \param first The beginning of the sequence to reorder.
- * \param last The end of the sequence to reorder.
- * \param stencil The beginning of the stencil sequence.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return An iterator referring to the first element of the second partition, that is,
- * the sequence of the elements whose stencil elements do not satisfy \p pred.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The ranges [first,last) and [stencil, stencil + (last - first)) shall not overlap.
- *
- * The following code snippet demonstrates how to use \p partition to reorder a
- * sequence so that even numbers precede odd numbers.
- *
- * \code
- * #include <thrust/partition.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1};
- * int S[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * const int N = sizeof(A)/sizeof(int);
- * thrust::partition(A, A + N, S, is_even());
- * // A is now {1, 1, 1, 1, 1, 0, 0, 0, 0, 0}
- * // S is unmodified
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partition.html
- * \see \p stable_partition
- * \see \p partition_copy
- */
-template<typename ForwardIterator, typename InputIterator, typename Predicate>
- ForwardIterator partition(ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil,
- Predicate pred);
-
-
-/*! \p partition_copy differs from \p partition only in that the reordered
- * sequence is written to different output sequences, rather than in place.
- *
- * \p partition_copy copies the elements [first, last) based on the
- * function object \p pred. All of the elements that satisfy \p pred are copied
- * to the range beginning at \p out_true and all the elements that fail to satisfy it
- * are copied to the range beginning at \p out_false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the sequence to reorder.
- * \param last The end of the sequence to reorder.
- * \param out_true The destination of the resulting sequence of elements which satisfy \p pred.
- * \param out_false The destination of the resulting sequence of elements which fail to satisfy \p pred.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return A \p pair p such that p.first is the end of the output range beginning
- * at \p out_true and p.second is the end of the output range beginning at
- * \p out_false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type and \p InputIterator's \c value_type
- * is convertible to \p OutputIterator1 and \p OutputIterator2's \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The input range shall not overlap with either output range.
- *
- * The following code snippet demonstrates how to use \p partition_copy to separate a
- * sequence into two output sequences of even and odd numbers using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/execution_policy.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * int result[10];
- * const int N = sizeof(A)/sizeof(int);
- * int *evens = result;
- * int *odds = result + 5;
- * thrust::partition_copy(thrust::host, A, A + N, evens, odds, is_even());
- * // A remains {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * // result is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * // evens points to {2, 4, 6, 8, 10}
- * // odds points to {1, 3, 5, 7, 9}
- * \endcode
- *
- * \note The relative order of elements in the two reordered sequences is not
- * necessarily the same as it was in the original sequence. A different algorithm,
- * \p stable_partition_copy, does guarantee to preserve the relative order.
- *
- * \see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2569.pdf
- * \see \p stable_partition_copy
- * \see \p partition
- */
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    partition_copy(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-/*! \p partition_copy differs from \p partition only in that the reordered
- * sequence is written to different output sequences, rather than in place.
- *
- * \p partition_copy copies the elements [first, last) based on the
- * function object \p pred. All of the elements that satisfy \p pred are copied
- * to the range beginning at \p out_true and all the elements that fail to satisfy it
- * are copied to the range beginning at \p out_false.
- *
- * \param first The beginning of the sequence to reorder.
- * \param last The end of the sequence to reorder.
- * \param out_true The destination of the resulting sequence of elements which satisfy \p pred.
- * \param out_false The destination of the resulting sequence of elements which fail to satisfy \p pred.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return A \p pair p such that p.first is the end of the output range beginning
- * at \p out_true and p.second is the end of the output range beginning at
- * \p out_false.
- *
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type and \p InputIterator's \c value_type
- * is convertible to \p OutputIterator1 and \p OutputIterator2's \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The input range shall not overlap with either output range.
- *
- * The following code snippet demonstrates how to use \p partition_copy to separate a
- * sequence into two output sequences of even and odd numbers.
- *
- * \code
- * #include <thrust/partition.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * int result[10];
- * const int N = sizeof(A)/sizeof(int);
- * int *evens = result;
- * int *odds = result + 5;
- * thrust::partition_copy(A, A + N, evens, odds, is_even());
- * // A remains {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * // result is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * // evens points to {2, 4, 6, 8, 10}
- * // odds points to {1, 3, 5, 7, 9}
- * \endcode
- *
- * \note The relative order of elements in the two reordered sequences is not
- * necessarily the same as it was in the original sequence. A different algorithm,
- * \p stable_partition_copy, does guarantee to preserve the relative order.
- *
- * \see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2569.pdf
- * \see \p stable_partition_copy
- * \see \p partition
- */
-template<typename InputIterator, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
- partition_copy(InputIterator first,
- InputIterator last,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-/*! \p partition_copy differs from \p partition only in that the reordered
- * sequence is written to different output sequences, rather than in place.
- *
- * \p partition_copy copies the elements [first, last) based on the
- * function object \p pred which is applied to a range of stencil elements. All of the elements
- * whose corresponding stencil element satisfies \p pred are copied to the range beginning at \p out_true
- * and all the elements whose stencil element fails to satisfy it are copied to the range beginning
- * at \p out_false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the sequence to reorder.
- * \param last The end of the sequence to reorder.
- * \param stencil The beginning of the stencil sequence.
- * \param out_true The destination of the resulting sequence of elements which satisfy \p pred.
- * \param out_false The destination of the resulting sequence of elements which fail to satisfy \p pred.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return A \p pair p such that p.first is the end of the output range beginning
- * at \p out_true and p.second is the end of the output range beginning at
- * \p out_false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * and \p InputIterator1's \c value_type is convertible to \p OutputIterator1 and \p OutputIterator2's \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * and \p InputIterator2's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The input ranges shall not overlap with either output range.
- *
- * The following code snippet demonstrates how to use \p partition_copy to separate a
- * sequence into two output sequences of even and odd numbers using the \p thrust::host execution
- * policy for parallelization.
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * int S[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1};
- * int result[10];
- * const int N = sizeof(A)/sizeof(int);
- * int *evens = result;
- * int *odds = result + 5;
- * thrust::partition_copy(thrust::host, A, A + N, S, evens, odds, thrust::identity<int>());
- * // A remains {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * // S remains {0, 1, 0, 1, 0, 1, 0, 1, 0, 1}
- * // result is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * // evens points to {2, 4, 6, 8, 10}
- * // odds points to {1, 3, 5, 7, 9}
- * \endcode
- *
- * \note The relative order of elements in the two reordered sequences is not
- * necessarily the same as it was in the original sequence. A different algorithm,
- * \p stable_partition_copy, does guarantee to preserve the relative order.
- *
- * \see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2569.pdf
- * \see \p stable_partition_copy
- * \see \p partition
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    partition_copy(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-/*! \p partition_copy differs from \p partition only in that the reordered
- * sequence is written to different output sequences, rather than in place.
- *
- * \p partition_copy copies the elements [first, last) based on the
- * function object \p pred which is applied to a range of stencil elements. All of the elements
- * whose corresponding stencil element satisfies \p pred are copied to the range beginning at \p out_true
- * and all the elements whose stencil element fails to satisfy it are copied to the range beginning
- * at \p out_false.
- *
- * \param first The beginning of the sequence to reorder.
- * \param last The end of the sequence to reorder.
- * \param stencil The beginning of the stencil sequence.
- * \param out_true The destination of the resulting sequence of elements which satisfy \p pred.
- * \param out_false The destination of the resulting sequence of elements which fail to satisfy \p pred.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return A \p pair p such that p.first is the end of the output range beginning
- * at \p out_true and p.second is the end of the output range beginning at
- * \p out_false.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * and \p InputIterator1's \c value_type is convertible to \p OutputIterator1 and \p OutputIterator2's \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * and \p InputIterator2's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The input ranges shall not overlap with either output range.
- *
- * The following code snippet demonstrates how to use \p partition_copy to separate a
- * sequence into two output sequences of even and odd numbers.
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/functional.h>
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * int S[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1};
- * int result[10];
- * const int N = sizeof(A)/sizeof(int);
- * int *evens = result;
- * int *odds = result + 5;
- * thrust::partition_copy(A, A + N, S, evens, odds, thrust::identity<int>());
- * // A remains {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * // S remains {0, 1, 0, 1, 0, 1, 0, 1, 0, 1}
- * // result is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * // evens points to {2, 4, 6, 8, 10}
- * // odds points to {1, 3, 5, 7, 9}
- * \endcode
- *
- * \note The relative order of elements in the two reordered sequences is not
- * necessarily the same as it was in the original sequence. A different algorithm,
- * \p stable_partition_copy, does guarantee to preserve the relative order.
- *
- * \see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2569.pdf
- * \see \p stable_partition_copy
- * \see \p partition
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
- partition_copy(InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-/*! \p stable_partition is much like \p partition : it reorders the elements in the
- * range [first, last) based on the function object \p pred, such that all of
- * the elements that satisfy \p pred precede all of the elements that fail to satisfy
- * it. The postcondition is that, for some iterator \p middle in the range
- * [first, last), pred(*i) is \c true for every iterator \c i in the
- * range [first,middle) and \c false for every iterator \c i in the range
- * [middle, last). The return value of \p stable_partition is \c middle.
- *
- * \p stable_partition differs from \p partition in that \p stable_partition is
- * guaranteed to preserve relative order. That is, if \c x and \c y are elements in
- * [first, last), such that pred(x) == pred(y), and if \c x precedes
- * \c y, then it will still be true after \p stable_partition that \c x precedes \c y.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The first element of the sequence to reorder.
- * \param last One position past the last element of the sequence to reorder.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return An iterator referring to the first element of the second partition, that is,
- * the sequence of the elements which do not satisfy \p pred.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator's \c value_type is convertible to \p Predicate's \c argument_type,
- * and \p ForwardIterator is mutable.
- * \tparam Predicate is a model of Predicate.
- *
- * The following code snippet demonstrates how to use \p stable_partition to reorder a
- * sequence so that even numbers precede odd numbers using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/execution_policy.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * const int N = sizeof(A)/sizeof(int);
- * thrust::stable_partition(thrust::host,
- * A, A + N,
- * is_even());
- * // A is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/stable_partition.html
- * \see \p partition
- * \see \p stable_partition_copy
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename Predicate>
-__host__ __device__
-  ForwardIterator stable_partition(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- Predicate pred);
-
-
-/*! \p stable_partition is much like \p partition : it reorders the elements in the
- * range [first, last) based on the function object \p pred, such that all of
- * the elements that satisfy \p pred precede all of the elements that fail to satisfy
- * it. The postcondition is that, for some iterator \p middle in the range
- * [first, last), pred(*i) is \c true for every iterator \c i in the
- * range [first,middle) and \c false for every iterator \c i in the range
- * [middle, last). The return value of \p stable_partition is \c middle.
- *
- * \p stable_partition differs from \p partition in that \p stable_partition is
- * guaranteed to preserve relative order. That is, if \c x and \c y are elements in
- * [first, last), such that pred(x) == pred(y), and if \c x precedes
- * \c y, then it will still be true after \p stable_partition that \c x precedes \c y.
- *
- * \param first The first element of the sequence to reorder.
- * \param last One position past the last element of the sequence to reorder.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return An iterator referring to the first element of the second partition, that is,
- * the sequence of the elements which do not satisfy \p pred.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator's \c value_type is convertible to \p Predicate's \c argument_type,
- * and \p ForwardIterator is mutable.
- * \tparam Predicate is a model of Predicate.
- *
- * The following code snippet demonstrates how to use \p stable_partition to reorder a
- * sequence so that even numbers precede odd numbers.
- *
- * \code
- * #include <thrust/partition.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * const int N = sizeof(A)/sizeof(int);
- * thrust::stable_partition(A, A + N,
- * is_even());
- * // A is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/stable_partition.html
- * \see \p partition
- * \see \p stable_partition_copy
- */
-template<typename ForwardIterator, typename Predicate>
- ForwardIterator stable_partition(ForwardIterator first,
- ForwardIterator last,
- Predicate pred);
-
-
-/*! \p stable_partition is much like \p partition: it reorders the elements in the
- * range [first, last) based on the function object \p pred applied to a stencil
- * range [stencil, stencil + (last - first)), such that all of
- * the elements whose corresponding stencil element satisfies \p pred precede all of the elements whose
- * corresponding stencil element fails to satisfy it. The postcondition is that, for some iterator
- * \c middle in the range [first, last), pred(*stencil_i) is \c true for every iterator
- * \c stencil_i in the range [stencil,stencil + (middle - first)) and \c false for every iterator \c stencil_i
- * in the range [stencil + (middle - first), stencil + (last - first)).
- * The return value of \p stable_partition is \c middle.
- *
- * \p stable_partition differs from \p partition in that \p stable_partition is
- * guaranteed to preserve relative order. That is, if \c x and \c y are elements in
- * [first, last), such that pred(x) == pred(y), and if \c x precedes
- * \c y, then it will still be true after \p stable_partition that \c x precedes \c y.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The first element of the sequence to reorder.
- * \param last One position past the last element of the sequence to reorder.
- * \param stencil The beginning of the stencil sequence.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return An iterator referring to the first element of the second partition, that is,
- * the sequence of the elements whose stencil elements do not satisfy \p pred.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The range [first, last) shall not overlap with the range [stencil, stencil + (last - first)).
- *
- * The following code snippet demonstrates how to use \p stable_partition to reorder a
- * sequence so that even numbers precede odd numbers using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/execution_policy.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1};
- * int S[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * const int N = sizeof(A)/sizeof(int);
- * thrust::stable_partition(thrust::host, A, A + N, S, is_even());
- * // A is now {1, 1, 1, 1, 1, 0, 0, 0, 0, 0}
- * // S is unmodified
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/stable_partition.html
- * \see \p partition
- * \see \p stable_partition_copy
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename Predicate>
-__host__ __device__
-  ForwardIterator stable_partition(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil,
- Predicate pred);
-
-
-/*! \p stable_partition is much like \p partition: it reorders the elements in the
- * range [first, last) based on the function object \p pred applied to a stencil
- * range [stencil, stencil + (last - first)), such that all of
- * the elements whose corresponding stencil element satisfies \p pred precede all of the elements whose
- * corresponding stencil element fails to satisfy it. The postcondition is that, for some iterator
- * \c middle in the range [first, last), pred(*stencil_i) is \c true for every iterator
- * \c stencil_i in the range [stencil,stencil + (middle - first)) and \c false for every iterator \c stencil_i
- * in the range [stencil + (middle - first), stencil + (last - first)).
- * The return value of \p stable_partition is \c middle.
- *
- * \p stable_partition differs from \p partition in that \p stable_partition is
- * guaranteed to preserve relative order. That is, if \c x and \c y are elements in
- * [first, last), such that pred(x) == pred(y), and if \c x precedes
- * \c y, then it will still be true after \p stable_partition that \c x precedes \c y.
- *
- * \param first The first element of the sequence to reorder.
- * \param last One position past the last element of the sequence to reorder.
- * \param stencil The beginning of the stencil sequence.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return An iterator referring to the first element of the second partition, that is,
- * the sequence of the elements whose stencil elements do not satisfy \p pred.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The range [first, last) shall not overlap with the range [stencil, stencil + (last - first)).
- *
- * The following code snippet demonstrates how to use \p stable_partition to reorder a
- * sequence so that even numbers precede odd numbers.
- *
- * \code
- * #include <thrust/partition.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1};
- * int S[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * const int N = sizeof(A)/sizeof(int);
- * thrust::stable_partition(A, A + N, S, is_even());
- * // A is now {1, 1, 1, 1, 1, 0, 0, 0, 0, 0}
- * // S is unmodified
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/stable_partition.html
- * \see \p partition
- * \see \p stable_partition_copy
- */
-template<typename ForwardIterator, typename InputIterator, typename Predicate>
- ForwardIterator stable_partition(ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil,
- Predicate pred);
-
-
-/*! \p stable_partition_copy differs from \p stable_partition only in that the reordered
- * sequence is written to different output sequences, rather than in place.
- *
- * \p stable_partition_copy copies the elements [first, last) based on the
- * function object \p pred. All of the elements that satisfy \p pred are copied
- * to the range beginning at \p out_true and all the elements that fail to satisfy it
- * are copied to the range beginning at \p out_false.
- *
- * \p stable_partition_copy differs from \p partition_copy in that
- * \p stable_partition_copy is guaranteed to preserve relative order. That is, if
- * \c x and \c y are elements in [first, last), such that
- * pred(x) == pred(y), and if \c x precedes \c y, then it will still be true
- * after \p stable_partition_copy that \c x precedes \c y in the output.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The first element of the sequence to reorder.
- * \param last One position past the last element of the sequence to reorder.
- * \param out_true The destination of the resulting sequence of elements which satisfy \p pred.
- * \param out_false The destination of the resulting sequence of elements which fail to satisfy \p pred.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return A \p pair p such that p.first is the end of the output range beginning
- * at \p out_true and p.second is the end of the output range beginning at
- * \p out_false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type and \p InputIterator's \c value_type
- * is convertible to \p OutputIterator1 and \p OutputIterator2's \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The input ranges shall not overlap with either output range.
- *
- * The following code snippet demonstrates how to use \p stable_partition_copy to
- * reorder a sequence so that even numbers precede odd numbers using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/execution_policy.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * int result[10];
- * const int N = sizeof(A)/sizeof(int);
- * int *evens = result;
- * int *odds = result + 5;
- * thrust::stable_partition_copy(thrust::host, A, A + N, evens, odds, is_even());
- * // A remains {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * // result is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * // evens points to {2, 4, 6, 8, 10}
- * // odds points to {1, 3, 5, 7, 9}
- * \endcode
- *
- * \see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2569.pdf
- * \see \p partition_copy
- * \see \p stable_partition
- */
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    stable_partition_copy(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-/*! \p stable_partition_copy differs from \p stable_partition only in that the reordered
- * sequence is written to different output sequences, rather than in place.
- *
- * \p stable_partition_copy copies the elements [first, last) based on the
- * function object \p pred. All of the elements that satisfy \p pred are copied
- * to the range beginning at \p out_true and all the elements that fail to satisfy it
- * are copied to the range beginning at \p out_false.
- *
- * \p stable_partition_copy differs from \p partition_copy in that
- * \p stable_partition_copy is guaranteed to preserve relative order. That is, if
- * \c x and \c y are elements in [first, last), such that
- * pred(x) == pred(y), and if \c x precedes \c y, then it will still be true
- * after \p stable_partition_copy that \c x precedes \c y in the output.
- *
- * \param first The first element of the sequence to reorder.
- * \param last One position past the last element of the sequence to reorder.
- * \param out_true The destination of the resulting sequence of elements which satisfy \p pred.
- * \param out_false The destination of the resulting sequence of elements which fail to satisfy \p pred.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return A \p pair p such that p.first is the end of the output range beginning
- * at \p out_true and p.second is the end of the output range beginning at
- * \p out_false.
- *
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type and \p InputIterator's \c value_type
- * is convertible to \p OutputIterator1 and \p OutputIterator2's \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The input ranges shall not overlap with either output range.
- *
- * The following code snippet demonstrates how to use \p stable_partition_copy to
- * reorder a sequence so that even numbers precede odd numbers.
- *
- * \code
- * #include <thrust/partition.h>
- * ...
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * int result[10];
- * const int N = sizeof(A)/sizeof(int);
- * int *evens = result;
- * int *odds = result + 5;
- * thrust::stable_partition_copy(A, A + N, evens, odds, is_even());
- * // A remains {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * // result is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * // evens points to {2, 4, 6, 8, 10}
- * // odds points to {1, 3, 5, 7, 9}
- * \endcode
- *
- * \see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2569.pdf
- * \see \p partition_copy
- * \see \p stable_partition
- */
-template<typename InputIterator, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
- stable_partition_copy(InputIterator first,
- InputIterator last,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-/*! \p stable_partition_copy differs from \p stable_partition only in that the reordered
- * sequence is written to different output sequences, rather than in place.
- *
- * \p stable_partition_copy copies the elements [first, last) based on the
- * function object \p pred which is applied to a range of stencil elements. All of the elements
- * whose corresponding stencil element satisfies \p pred are copied to the range beginning at \p out_true
- * and all the elements whose stencil element fails to satisfy it are copied to the range beginning
- * at \p out_false.
- *
- * \p stable_partition_copy differs from \p partition_copy in that
- * \p stable_partition_copy is guaranteed to preserve relative order. That is, if
- * \c x and \c y are elements in [first, last), such that
- * pred(x) == pred(y), and if \c x precedes \c y, then it will still be true
- * after \p stable_partition_copy that \c x precedes \c y in the output.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The first element of the sequence to reorder.
- * \param last One position past the last element of the sequence to reorder.
- * \param stencil The beginning of the stencil sequence.
- * \param out_true The destination of the resulting sequence of elements which satisfy \p pred.
- * \param out_false The destination of the resulting sequence of elements which fail to satisfy \p pred.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return A \p pair p such that p.first is the end of the output range beginning
- * at \p out_true and p.second is the end of the output range beginning at
- * \p out_false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * and \p InputIterator1's \c value_type is convertible to \p OutputIterator1 and \p OutputIterator2's \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * and \p InputIterator2's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The input ranges shall not overlap with either output range.
- *
- * The following code snippet demonstrates how to use \p stable_partition_copy to
- * reorder a sequence so that even numbers precede odd numbers using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * int S[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1};
- * int result[10];
- * const int N = sizeof(A)/sizeof(int);
- * int *evens = result;
- * int *odds = result + 5;
- * thrust::stable_partition_copy(thrust::host, A, A + N, S, evens, odds, thrust::identity<int>());
- * // A remains {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * // S remains {0, 1, 0, 1, 0, 1, 0, 1, 0, 1}
- * // result is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * // evens points to {2, 4, 6, 8, 10}
- * // odds points to {1, 3, 5, 7, 9}
- * \endcode
- *
- * \see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2569.pdf
- * \see \p partition_copy
- * \see \p stable_partition
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    stable_partition_copy(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-/*! \p stable_partition_copy differs from \p stable_partition only in that the reordered
- * sequence is written to different output sequences, rather than in place.
- *
- * \p stable_partition_copy copies the elements [first, last) based on the
- * function object \p pred which is applied to a range of stencil elements. All of the elements
- * whose corresponding stencil element satisfies \p pred are copied to the range beginning at \p out_true
- * and all the elements whose stencil element fails to satisfy it are copied to the range beginning
- * at \p out_false.
- *
- * \p stable_partition_copy differs from \p partition_copy in that
- * \p stable_partition_copy is guaranteed to preserve relative order. That is, if
- * \c x and \c y are elements in [first, last), such that
- * pred(x) == pred(y), and if \c x precedes \c y, then it will still be true
- * after \p stable_partition_copy that \c x precedes \c y in the output.
- *
- * \param first The first element of the sequence to reorder.
- * \param last One position past the last element of the sequence to reorder.
- * \param stencil The beginning of the stencil sequence.
- * \param out_true The destination of the resulting sequence of elements which satisfy \p pred.
- * \param out_false The destination of the resulting sequence of elements which fail to satisfy \p pred.
- * \param pred A function object which decides to which partition each element of the
- * sequence [first, last) belongs.
- * \return A \p pair p such that p.first is the end of the output range beginning
- * at \p out_true and p.second is the end of the output range beginning at
- * \p out_false.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * and \p InputIterator1's \c value_type is convertible to \p OutputIterator1 and \p OutputIterator2's \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * and \p InputIterator2's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The input ranges shall not overlap with either output range.
- *
- * The following code snippet demonstrates how to use \p stable_partition_copy to
- * reorder a sequence so that even numbers precede odd numbers.
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/functional.h>
- * ...
- * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- * int S[] = {0, 1, 0, 1, 0, 1, 0, 1, 0, 1};
- * int result[10];
- * const int N = sizeof(A)/sizeof(int);
- * int *evens = result;
- * int *odds = result + 5;
- * thrust::stable_partition_copy(A, A + N, S, evens, odds, thrust::identity<int>());
- * // A remains {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * // S remains {0, 1, 0, 1, 0, 1, 0, 1, 0, 1}
- * // result is now {2, 4, 6, 8, 10, 1, 3, 5, 7, 9}
- * // evens points to {2, 4, 6, 8, 10}
- * // odds points to {1, 3, 5, 7, 9}
- * \endcode
- *
- * \see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2569.pdf
- * \see \p partition_copy
- * \see \p stable_partition
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
- stable_partition_copy(InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred);
-
-
-/*! \} // end stream_compaction
- */
-
-/*! \} // end reordering
- */
-
-/*! \addtogroup searching
- * \{
- */
-
-
-/*! \p partition_point returns an iterator pointing to the end of the true
- * partition of a partitioned range. \p partition_point requires the input range
- * [first,last) to be a partition; that is, all elements which satisfy
- * \p pred shall appear before those that do not.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the range to consider.
- * \param last The end of the range to consider.
- * \param pred A function object which decides to which partition each element of the
- * range [first, last) belongs.
- * \return An iterator \c mid such that all_of(first, mid, pred)
- * and none_of(mid, last, pred) are both true.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The range [first, last) shall be partitioned by \p pred.
- *
- * \note Though similar, \p partition_point is not redundant with \p find_if_not.
- * \p partition_point's precondition provides an opportunity for a faster
- * implementation, such as a binary search over the partitioned range.
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/execution_policy.h>
- *
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- *
- * ...
- *
- * int A[] = {2, 4, 6, 8, 10, 1, 3, 5, 7, 9};
- * int * B = thrust::partition_point(thrust::host, A, A + 10, is_even());
- * // B - A is 5
- * // [A, B) contains only even values
- * \endcode
- *
- * \see \p partition
- * \see \p find_if_not
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename Predicate>
-__host__ __device__
-  ForwardIterator partition_point(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- Predicate pred);
-
-
-/*! \p partition_point returns an iterator pointing to the end of the true
- * partition of a partitioned range. \p partition_point requires the input range
- * [first,last) to be a partition; that is, all elements which satisfy
- * \p pred shall appear before those that do not.
- *
- * \param first The beginning of the range to consider.
- * \param last The end of the range to consider.
- * \param pred A function object which decides to which partition each element of the
- * range [first, last) belongs.
- * \return An iterator \c mid such that all_of(first, mid, pred)
- * and none_of(mid, last, pred) are both true.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam Predicate is a model of Predicate.
- *
- * \pre The range [first, last) shall be partitioned by \p pred.
- *
- * \note Though similar, \p partition_point is not redundant with \p find_if_not.
- * \p partition_point's precondition provides an opportunity for a faster
- * implementation, such as a binary search over the partitioned range.
- *
- * \code
- * #include <thrust/partition.h>
- *
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- *
- * ...
- *
- * int A[] = {2, 4, 6, 8, 10, 1, 3, 5, 7, 9};
- * int * B = thrust::partition_point(A, A + 10, is_even());
- * // B - A is 5
- * // [A, B) contains only even values
- * \endcode
- *
- * \see \p partition
- * \see \p find_if_not
- */
-template<typename ForwardIterator, typename Predicate>
- ForwardIterator partition_point(ForwardIterator first,
- ForwardIterator last,
- Predicate pred);
-
-/*! \} // searching
- */
-
-/*! \addtogroup reductions
- * \{
- * \addtogroup predicates
- * \{
- */
-
-
-/*! \p is_partitioned returns \c true if the given range
- * is partitioned with respect to a predicate, and \c false otherwise.
- *
- * Specifically, \p is_partitioned returns \c true if [first, last)
- * is empty or if [first, last) is partitioned by \p pred, i.e. if
- * all elements that satisfy \p pred appear before those that do not.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the range to consider.
- * \param last The end of the range to consider.
- * \param pred A function object which decides to which partition each element of the
- * range [first, last) belongs.
- * \return \c true if the range [first, last) is partitioned with respect
- * to \p pred, or if [first, last) is empty. \c false, otherwise.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam Predicate is a model of Predicate.
- *
- * \code
- * #include <thrust/partition.h>
- * #include <thrust/execution_policy.h>
- *
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- *
- * ...
- *
- * int A[] = {2, 4, 6, 8, 10, 1, 3, 5, 7, 9};
- * int B[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- *
- * thrust::is_partitioned(thrust::host, A, A + 10, is_even()); // returns true
- * thrust::is_partitioned(thrust::host, B, B + 10, is_even()); // returns false
- * \endcode
- *
- * \see \p partition
- */
-template<typename DerivedPolicy, typename InputIterator, typename Predicate>
-__host__ __device__
-  bool is_partitioned(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- Predicate pred);
-
-
-/*! \p is_partitioned returns \c true if the given range
- * is partitioned with respect to a predicate, and \c false otherwise.
- *
- * Specifically, \p is_partitioned returns \c true if [first, last)
- * is empty or if [first, last) is partitioned by \p pred, i.e. if
- * all elements that satisfy \p pred appear before those that do not.
- *
- * \param first The beginning of the range to consider.
- * \param last The end of the range to consider.
- * \param pred A function object which decides to which partition each element of the
- * range [first, last) belongs.
- * \return \c true if the range [first, last) is partitioned with respect
- * to \p pred, or if [first, last) is empty. \c false, otherwise.
- *
- * \tparam InputIterator is a model of Input Iterator,
- * and \p InputIterator's \c value_type is convertible to \p Predicate's \c argument_type.
- * \tparam Predicate is a model of Predicate.
- *
- * \code
- * #include <thrust/partition.h>
- *
- * struct is_even
- * {
- * __host__ __device__
- * bool operator()(const int &x)
- * {
- * return (x % 2) == 0;
- * }
- * };
- *
- * ...
- *
- * int A[] = {2, 4, 6, 8, 10, 1, 3, 5, 7, 9};
- * int B[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
- *
- * thrust::is_partitioned(A, A + 10, is_even()); // returns true
- * thrust::is_partitioned(B, B + 10, is_even()); // returns false
- * \endcode
- *
- * \see \p partition
- */
-template<typename InputIterator, typename Predicate>
- bool is_partitioned(InputIterator first,
- InputIterator last,
- Predicate pred);
-
-
-/*! \} // end predicates
- * \} // end reductions
- */
-
-
-} // end thrust
-
-#include <thrust/detail/partition.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/random/detail/normal_distribution_base.h b/spaces/CVPR/LIVE/thrust/thrust/random/detail/normal_distribution_base.h
deleted file mode 100644
index 2a3bd4470b576465a77a289fee9f959d027e5b03..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/random/detail/normal_distribution_base.h
+++ /dev/null
@@ -1,155 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*
- * Copyright Jens Maurer 2000-2001
- * Distributed under the Boost Software License, Version 1.0. (See
- * accompanying file LICENSE_1_0.txt or copy at
- * http://www.boost.org/LICENSE_1_0.txt)
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/random/uniform_real_distribution.h>
-#include <limits>
-#include <cmath>
-#include <math.h>
-
-namespace thrust
-{
-namespace random
-{
-namespace detail
-{
-
-// this version samples the normal distribution directly
-// and uses the non-standard math function erfcinv
-template<typename RealType>
- class normal_distribution_nvcc
-{
- protected:
-    template<typename UniformRandomNumberGenerator>
- __host__ __device__
- RealType sample(UniformRandomNumberGenerator &urng, const RealType mean, const RealType stddev)
- {
- typedef typename UniformRandomNumberGenerator::result_type uint_type;
- const uint_type urng_range = UniformRandomNumberGenerator::max - UniformRandomNumberGenerator::min;
-
- // Constants for conversion
-      const RealType S1 = static_cast<RealType>(1) / urng_range;
- const RealType S2 = S1 / 2;
-
-      RealType S3 = static_cast<RealType>(-1.4142135623730950488016887242097); // -sqrt(2)
-
- // Get the integer value
- uint_type u = urng() - UniformRandomNumberGenerator::min;
-
- // Ensure the conversion to float will give a value in the range [0,0.5)
- if(u > (urng_range / 2))
- {
- u = urng_range - u;
- S3 = -S3;
- }
-
- // Convert to floating point in [0,0.5)
- RealType p = u*S1 + S2;
-
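-      // For p in (0, 0.5], the standard normal quantile satisfies
-      // norminv(p) = -sqrt(2) * erfcinv(2*p); the earlier sign flip of S3
-      // maps draws from the upper half of the generator's range by symmetry.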
- // Apply inverse error function
- return mean + stddev * S3 * erfcinv(2 * p);
- }
-
- // no-op
- __host__ __device__
- void reset() {}
-};
-
-// this version samples the normal distribution using
-// the Box-Muller transform, producing variates in pairs
-template<typename RealType>
- class normal_distribution_portable
-{
- protected:
- normal_distribution_portable()
- : m_r1(), m_r2(), m_cached_rho(), m_valid(false)
- {}
-
- normal_distribution_portable(const normal_distribution_portable &other)
- : m_r1(other.m_r1), m_r2(other.m_r2), m_cached_rho(other.m_cached_rho), m_valid(other.m_valid)
- {}
-
- void reset()
- {
- m_valid = false;
- }
-
- // note that we promise to call this member function with the same mean and stddev
-    template<typename UniformRandomNumberGenerator>
- __host__ __device__
- RealType sample(UniformRandomNumberGenerator &urng, const RealType mean, const RealType stddev)
- {
- // implementation from Boost
- // allow for Koenig lookup
- using std::sqrt; using std::log; using std::sin; using std::cos;
-
- if(!m_valid)
- {
-        uniform_real_distribution<RealType> u01;
- m_r1 = u01(urng);
- m_r2 = u01(urng);
- m_cached_rho = sqrt(-RealType(2) * log(RealType(1)-m_r2));
-
- m_valid = true;
- }
- else
- {
- m_valid = false;
- }
-
- const RealType pi = RealType(3.14159265358979323846);
-
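-    // Box-Muller: rho*cos(2*pi*r1) and rho*sin(2*pi*r1) are two independent
-    // standard normal draws; the cosine branch is produced on the first call
-    // and the sine branch on the next call, reusing the cached rho.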
- RealType result = m_cached_rho * (m_valid ?
- cos(RealType(2)*pi*m_r1) :
- sin(RealType(2)*pi*m_r1));
-
- return mean + stddev * result;
- }
-
- private:
- RealType m_r1, m_r2, m_cached_rho;
- bool m_valid;
-};
-
-template<typename RealType>
- struct normal_distribution_base
-{
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC && !defined(__NVCOMPILER_CUDA__)
-  typedef normal_distribution_nvcc<RealType> type;
-#else
-  typedef normal_distribution_portable<RealType> type;
-#endif
-};
-
-} // end detail
-} // end random
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/assign_value.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/assign_value.h
deleted file mode 100644
index f6fd987bf3f814f389b01499a06b313517b69733..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/assign_value.h
+++ /dev/null
@@ -1,105 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include <thrust/detail/config.h>
-#include <thrust/system/cuda/detail/execution_policy.h>
-#include <thrust/system/cuda/detail/cross_system.h>
-#include <thrust/system/cuda/detail/copy.h>
-#include <thrust/detail/raw_pointer_cast.h>
-
-
-namespace thrust
-{
-namespace cuda_cub {
-
-
-template<typename DerivedPolicy, typename Pointer1, typename Pointer2>
-inline __host__ __device__
-  void assign_value(thrust::cuda::execution_policy<DerivedPolicy> &exec, Pointer1 dst, Pointer2 src)
-{
- // XXX war nvbugs/881631
- struct war_nvbugs_881631
- {
-    __host__ inline static void host_path(thrust::cuda::execution_policy<DerivedPolicy> &exec, Pointer1 dst, Pointer2 src)
- {
- cuda_cub::copy(exec, src, src + 1, dst);
- }
-
-    __device__ inline static void device_path(thrust::cuda::execution_policy<DerivedPolicy> &, Pointer1 dst, Pointer2 src)
- {
- *thrust::raw_pointer_cast(dst) = *thrust::raw_pointer_cast(src);
- }
- };
-
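-  // Host code cannot dereference a device pointer directly, so the host path
-  // funnels the assignment through cuda_cub::copy; device code simply assigns
-  // through the raw pointers.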
- if (THRUST_IS_HOST_CODE) {
- #if THRUST_INCLUDE_HOST_CODE
- war_nvbugs_881631::host_path(exec,dst,src);
- #endif
- } else {
- #if THRUST_INCLUDE_DEVICE_CODE
- war_nvbugs_881631::device_path(exec,dst,src);
- #endif
- }
-} // end assign_value()
-
-
-template<typename System1, typename System2, typename Pointer1, typename Pointer2>
-inline __host__ __device__
-  void assign_value(cross_system<System1,System2> &systems, Pointer1 dst, Pointer2 src)
-{
- // XXX war nvbugs/881631
- struct war_nvbugs_881631
- {
-    __host__ inline static void host_path(cross_system<System1,System2> &systems, Pointer1 dst, Pointer2 src)
- {
- // rotate the systems so that they are ordered the same as (src, dst)
- // for the call to thrust::copy
-      cross_system<System2,System1> rotated_systems = systems.rotate();
- cuda_cub::copy(rotated_systems, src, src + 1, dst);
- }
-
-    __device__ inline static void device_path(cross_system<System1,System2> &, Pointer1 dst, Pointer2 src)
- {
- // XXX forward the true cuda::execution_policy inside systems here
- // instead of materializing a tag
- thrust::cuda::tag cuda_tag;
- thrust::cuda_cub::assign_value(cuda_tag, dst, src);
- }
- };
-
- if (THRUST_IS_HOST_CODE) {
- #if THRUST_INCLUDE_HOST_CODE
- war_nvbugs_881631::host_path(systems,dst,src);
- #endif
- } else {
- #if THRUST_INCLUDE_DEVICE_CODE
- war_nvbugs_881631::device_path(systems,dst,src);
- #endif
- }
-} // end assign_value()
-
-
-
-
-} // end cuda_cub
-} // end namespace thrust
-#endif
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/assign_value.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/assign_value.h
deleted file mode 100644
index 699bcbcd7847ccfa14f8fb8ffe1591f7ced8f957..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/assign_value.h
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/sequential/execution_policy.h>
-#include <thrust/detail/raw_pointer_cast.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace sequential
-{
-
-template<typename DerivedPolicy, typename Pointer1, typename Pointer2>
-__host__ __device__
-  void assign_value(sequential::execution_policy<DerivedPolicy> &, Pointer1 dst, Pointer2 src)
-{
- *thrust::raw_pointer_cast(dst) = *thrust::raw_pointer_cast(src);
-} // end assign_value()
-
-} // end sequential
-} // end detail
-} // end system
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/get_value.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/get_value.h
deleted file mode 100644
index 23a11a8574f77f95bc6ca96d0cd8ff6de8c71c7e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/get_value.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits get_value
-#include <thrust/system/cpp/detail/get_value.h>
-
diff --git a/spaces/CVPR/lama-example/bin/blur_predicts.py b/spaces/CVPR/lama-example/bin/blur_predicts.py
deleted file mode 100644
index a14fcc28d5a906ad3a21ab4ba482f38b4fc411cb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/bin/blur_predicts.py
+++ /dev/null
@@ -1,60 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-
-import cv2
-import numpy as np
-import tqdm
-
-from saicinpainting.evaluation.data import PrecomputedInpaintingResultsDataset
-from saicinpainting.evaluation.utils import load_yaml
-
-
-def main(args):
- config = load_yaml(args.config)
-
- if not args.predictdir.endswith('/'):
- args.predictdir += '/'
-
- dataset = PrecomputedInpaintingResultsDataset(args.datadir, args.predictdir, **config.dataset_kwargs)
-
- os.makedirs(os.path.dirname(args.outpath), exist_ok=True)
-
- for img_i in tqdm.trange(len(dataset)):
- pred_fname = dataset.pred_filenames[img_i]
- cur_out_fname = os.path.join(args.outpath, pred_fname[len(args.predictdir):])
- os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True)
-
- sample = dataset[img_i]
- img = sample['image']
- mask = sample['mask']
- inpainted = sample['inpainted']
-
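-        # cv2.GaussianBlur expects HWC layout, so transpose from the dataset's CHW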
- inpainted_blurred = cv2.GaussianBlur(np.transpose(inpainted, (1, 2, 0)),
- ksize=(args.k, args.k),
- sigmaX=args.s, sigmaY=args.s,
- borderType=cv2.BORDER_REFLECT)
-
- cur_res = (1 - mask) * np.transpose(img, (1, 2, 0)) + mask * inpainted_blurred
- cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8')
- cur_res = cv2.cvtColor(cur_res, cv2.COLOR_RGB2BGR)
- cv2.imwrite(cur_out_fname, cur_res)
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('config', type=str, help='Path to evaluation config')
- aparser.add_argument('datadir', type=str,
- help='Path to folder with images and masks (output of gen_mask_dataset.py)')
- aparser.add_argument('predictdir', type=str,
- help='Path to folder with predicts (e.g. predict_hifill_baseline.py)')
- aparser.add_argument('outpath', type=str, help='Where to put results')
- aparser.add_argument('-s', type=float, default=0.1, help='Gaussian blur sigma')
- aparser.add_argument('-k', type=int, default=5, help='Kernel size in gaussian blur')
-
- main(aparser.parse_args())
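
The heart of this script is the final composite: blur only the model's prediction, then paste it back over the known pixels through the mask. A minimal self-contained sketch of that blend, with random arrays standing in for a dataset sample (the shapes and blur parameters mirror the script's defaults; the arrays themselves are invented):

```python
import cv2
import numpy as np

# Stand-ins for one sample (CHW float arrays in [0, 1]), mirroring the script above.
img = np.random.rand(3, 64, 64).astype(np.float32)        # original image
inpainted = np.random.rand(3, 64, 64).astype(np.float32)  # inpainting prediction
mask = np.zeros((64, 64, 1), dtype=np.float32)
mask[16:48, 16:48] = 1.0                                   # the inpainted hole

# Blur the prediction in HWC layout, exactly as blur_predicts.py does.
inpainted_blurred = cv2.GaussianBlur(np.transpose(inpainted, (1, 2, 0)),
                                     ksize=(5, 5), sigmaX=0.1, sigmaY=0.1,
                                     borderType=cv2.BORDER_REFLECT)

# Known pixels come from the original; the hole takes the blurred prediction.
cur_res = (1 - mask) * np.transpose(img, (1, 2, 0)) + mask * inpainted_blurred
cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8')
print(cur_res.shape)  # (64, 64, 3)
```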
diff --git a/spaces/CVPR/ml-talking-face/toxicity_estimator/module.py b/spaces/CVPR/ml-talking-face/toxicity_estimator/module.py
deleted file mode 100644
index ba281ee01a2bdf294af3f0c9b24cb5fbf30cc89e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/ml-talking-face/toxicity_estimator/module.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from googleapiclient import discovery
-import argparse
-import json
-import os
-
-API_KEY = os.environ['PERSPECTIVE_API_KEY']
-
-class PerspectiveAPI:
- def __init__(self):
- self.client = discovery.build(
- "commentanalyzer",
- "v1alpha1",
- developerKey=API_KEY,
- discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
- static_discovery=False,
- )
- @staticmethod
- def _get_request(text):
- return {
- 'comment': {'text': text},
- 'requestedAttributes': {'TOXICITY': {}}
- }
-
- def _infer(self, text):
- request = self._get_request(text)
- response = self.client.comments().analyze(body=request).execute()
- return response
-
- def infer(self, text):
- return self._infer(text)
-
- def get_score(self, text, label='TOXICITY'):
- response = self._infer(text)
- return response['attributeScores'][label]['spanScores'][0]['score']['value']
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Perspective API Test.')
- parser.add_argument('-i', '--input-text', type=str, required=True)
- args = parser.parse_args()
- return args
-
-
-if __name__ == '__main__':
- args = parse_args()
-
- perspective_api = PerspectiveAPI()
- score = perspective_api.get_score(args.input_text)
-
- print(score)
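
A short usage sketch for the wrapper above. It assumes a valid Perspective API key is exported before the module is imported (the key is read at import time) and that the module is importable under the space's `toxicity_estimator` package path:

```python
import os
os.environ.setdefault('PERSPECTIVE_API_KEY', '<your-api-key>')  # assumed placeholder

from toxicity_estimator.module import PerspectiveAPI  # import path assumed

api = PerspectiveAPI()
# get_score() issues one commentanalyzer request and pulls the first
# span-level score for the requested attribute out of the nested response.
score = api.get_score("You are a wonderful person.")
print(f"TOXICITY: {score:.3f}")  # expect a low score for benign text
```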
diff --git a/spaces/Chloe0222/Chloe/README.md b/spaces/Chloe0222/Chloe/README.md
deleted file mode 100644
index be1b863e7eda0aa831dc9cfaa9f8ab0e92959739..0000000000000000000000000000000000000000
--- a/spaces/Chloe0222/Chloe/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Chloe
-emoji: 🐢
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem.svelte_svelte_type_style_lang-1276453b.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem.svelte_svelte_type_style_lang-1276453b.js
deleted file mode 100644
index 714fec29587654bff506316edd0822c3d5bd9cc8..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem.svelte_svelte_type_style_lang-1276453b.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as G,e as H,s as K,G as w,a9 as O,N as j,O as T,K as k,U as A,p as g,M as v,H as P,ay as Q,ab as R,ac as U,ad as F,z as J,v as L,A as p,w as I,a4 as S,B as V,D as W,m as B,aA as C,P as N,Q as X,R as z}from"./index-1d65707a.js";function D(n,e,l){const s=n.slice();return s[14]=e[l],s[16]=l,s}function Y(n){let e,l=n[14].name+"",s,f,d,_;function i(){return n[12](n[14],n[16])}return{c(){e=j("button"),s=N(l),f=T(),k(e,"class","svelte-kqij2n")},m(u,m){g(u,e,m),v(e,s),v(e,f),d||(_=X(e,"click",i),d=!0)},p(u,m){n=u,m&8&&l!==(l=n[14].name+"")&&z(s,l)},d(u){u&&p(e),d=!1,_()}}}function Z(n){let e,l=n[14].name+"",s,f;return{c(){e=j("button"),s=N(l),f=T(),k(e,"class","selected svelte-kqij2n")},m(d,_){g(d,e,_),v(e,s),v(e,f)},p(d,_){_&8&&l!==(l=d[14].name+"")&&z(s,l)},d(d){d&&p(e)}}}function M(n,e){let l,s;function f(i,u){return i[14].id===i[4]?Z:Y}let d=f(e),_=d(e);return{key:n,first:null,c(){l=B(),_.c(),s=B(),this.first=l},m(i,u){g(i,l,u),_.m(i,u),g(i,s,u)},p(i,u){e=i,d===(d=f(e))&&_?_.p(e,u):(_.d(1),_=d(e),_&&(_.c(),_.m(s.parentNode,s)))},d(i){i&&(p(l),p(s)),_.d(i)}}}function x(n){let e,l,s=[],f=new Map,d,_,i,u=w(n[3]);const m=t=>t[14].id;for(let t=0;tl(4,f=a));const o=I(0);S(n,o,a=>l(13,s=a));const r=V();W($,{register_tab:a=>(c.push({name:a.name,id:a.id}),t.update(h=>h??a.id),l(3,c),c.length-1),unregister_tab:a=>{const h=c.findIndex(y=>y.id===a.id);c.splice(h,1),t.update(y=>y===a.id?c[h]?.id||c[c.length-1]?.id:y)},selected_tab:t,selected_tab_index:o});function q(a){l(9,b=a),C(t,f=a,f),C(o,s=c.findIndex(h=>h.id===a),s),r("change")}const E=(a,h)=>{q(a.id),r("select",{value:a.name,index:h})};return n.$$set=a=>{"visible"in a&&l(0,i=a.visible),"elem_id"in a&&l(1,u=a.elem_id),"elem_classes"in a&&l(2,m=a.elem_classes),"selected"in a&&l(9,b=a.selected),"$$scope"in a&&l(10,_=a.$$scope)},n.$$.update=()=>{n.$$.dirty&512&&b!==null&&q(b)},[i,u,m,c,f,t,o,r,q,b,_,d,E]}class le extends G{constructor(e){super(),H(this,e,ee,x,K,{visible:0,elem_id:1,elem_classes:2,selected:9})}}export{le as T,$ as a};
-//# sourceMappingURL=TabItem.svelte_svelte_type_style_lang-1276453b.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/app.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/app.py
deleted file mode 100644
index 8a8631fcd39a8d929ab2e7c4c573fe988039fc77..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/app.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import time
-
-import gradio as gr
-from gradio.themes.utils.theme_dropdown import create_theme_dropdown
-
-dropdown, js = create_theme_dropdown()
-
-with gr.Blocks(theme=gr.themes.Default()) as demo:
- with gr.Row().style(equal_height=True):
- with gr.Column(scale=10):
- gr.Markdown(
- """
- # Theme preview: `{THEME}`
- To use this theme, set `theme='{AUTHOR}/{SPACE_NAME}'` in `gr.Blocks()` or `gr.Interface()`.
- You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version
- of this theme.
- """
- )
- with gr.Column(scale=3):
- with gr.Box():
- dropdown.render()
- toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True)
-
- dropdown.change(None, dropdown, None, _js=js)
- toggle_dark.click(
- None,
- _js="""
- () => {
- document.body.classList.toggle('dark');
- }
- """,
- )
-
- name = gr.Textbox(
- label="Name",
- info="Full name, including middle name. No special characters.",
- placeholder="John Doe",
- value="John Doe",
- interactive=True,
- )
-
- with gr.Row():
- slider1 = gr.Slider(label="Slider 1")
- slider2 = gr.Slider(label="Slider 2")
- gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group")
-
- with gr.Row():
- with gr.Column(variant="panel", scale=1):
- gr.Markdown("## Panel 1")
- radio = gr.Radio(
- ["A", "B", "C"],
- label="Radio",
- info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
- )
- drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False)
- drop_2 = gr.Dropdown(
- ["Option A", "Option B", "Option C"],
- multiselect=True,
- value=["Option A"],
- label="Dropdown",
- interactive=True,
- )
- check = gr.Checkbox(label="Go")
- with gr.Column(variant="panel", scale=2):
- img = gr.Image(
- "https://raw.githubusercontent.com/gradio-app/gradio/main/js/_website/src/assets/img/header-image.jpg",
- label="Image",
- ).style(height=320)
- with gr.Row():
- go_btn = gr.Button("Go", label="Primary Button", variant="primary")
- clear_btn = gr.Button(
- "Clear", label="Secondary Button", variant="secondary"
- )
-
- def go(*args):
- time.sleep(3)
- return "https://raw.githubusercontent.com/gradio-app/gradio/main/js/_website/src/assets/img/header-image.jpg"
-
- go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go")
-
- def clear():
- time.sleep(0.2)
- return None
-
- clear_btn.click(clear, None, img)
-
- with gr.Row():
- btn1 = gr.Button("Button 1").style(size="sm")
- btn2 = gr.UploadButton().style(size="sm")
- stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style(
- size="sm"
- )
-
- with gr.Row():
- gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe")
- gr.JSON(
- value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON"
- )
- gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1})
- gr.File()
- with gr.Row():
- gr.ColorPicker()
- gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4")
- gr.Gallery(
- [
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg",
- "lion",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png",
- "logo",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg",
- "tower",
- ),
- ]
- ).style(height="200px", grid=2)
-
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot")
- chat_btn = gr.Button("Add messages")
-
- def chat(history):
- time.sleep(2)
- yield [["How are you?", "I am good."]]
-
- chat_btn.click(
- lambda history: history
- + [["How are you?", "I am good."]]
- + (time.sleep(2) or []),
- chatbot,
- chatbot,
- )
- with gr.Column(scale=1):
- with gr.Accordion("Advanced Settings"):
- gr.Markdown("Hello")
- gr.Number(label="Chatbot control 1")
- gr.Number(label="Chatbot control 2")
- gr.Number(label="Chatbot control 3")
-
-
-if __name__ == "__main__":
- demo.queue().launch()
diff --git a/spaces/DeepLabCut/DeepLabCutModelZoo-SuperAnimals/MD_models/read.md b/spaces/DeepLabCut/DeepLabCutModelZoo-SuperAnimals/MD_models/read.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/easydict.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/easydict.py
deleted file mode 100644
index 0188f524b87eef75c175772ff262b93b47919ba7..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/easydict.py
+++ /dev/null
@@ -1,126 +0,0 @@
-'''
-From https://github.com/makinacorpus/easydict.
-'''
-
-class EasyDict(dict):
- """
- Get attributes
-
- >>> d = EasyDict({'foo':3})
- >>> d['foo']
- 3
- >>> d.foo
- 3
- >>> d.bar
- Traceback (most recent call last):
- ...
- AttributeError: 'EasyDict' object has no attribute 'bar'
-
- Works recursively
-
- >>> d = EasyDict({'foo':3, 'bar':{'x':1, 'y':2}})
- >>> isinstance(d.bar, dict)
- True
- >>> d.bar.x
- 1
-
- Bullet-proof
-
- >>> EasyDict({})
- {}
- >>> EasyDict(d={})
- {}
- >>> EasyDict(None)
- {}
- >>> d = {'a': 1}
- >>> EasyDict(**d)
- {'a': 1}
-
- Set attributes
-
- >>> d = EasyDict()
- >>> d.foo = 3
- >>> d.foo
- 3
- >>> d.bar = {'prop': 'value'}
- >>> d.bar.prop
- 'value'
- >>> d
- {'foo': 3, 'bar': {'prop': 'value'}}
- >>> d.bar.prop = 'newer'
- >>> d.bar.prop
- 'newer'
-
-
- Values extraction
-
- >>> d = EasyDict({'foo':0, 'bar':[{'x':1, 'y':2}, {'x':3, 'y':4}]})
- >>> isinstance(d.bar, list)
- True
- >>> from operator import attrgetter
- >>> map(attrgetter('x'), d.bar)
- [1, 3]
- >>> map(attrgetter('y'), d.bar)
- [2, 4]
- >>> d = EasyDict()
- >>> d.keys()
- []
- >>> d = EasyDict(foo=3, bar=dict(x=1, y=2))
- >>> d.foo
- 3
- >>> d.bar.x
- 1
-
- Still like a dict though
-
- >>> o = EasyDict({'clean':True})
- >>> o.items()
- [('clean', True)]
-
- And like a class
-
- >>> class Flower(EasyDict):
- ... power = 1
- ...
- >>> f = Flower()
- >>> f.power
- 1
- >>> f = Flower({'height': 12})
- >>> f.height
- 12
- >>> f['power']
- 1
- >>> sorted(f.keys())
- ['height', 'power']
- """
- def __init__(self, d=None, **kwargs):
- if d is None:
- d = {}
- if kwargs:
- d.update(**kwargs)
- for k, v in d.items():
- setattr(self, k, v)
- # Class attributes
- for k in self.__class__.__dict__.keys():
- if not (k.startswith('__') and k.endswith('__')):
- setattr(self, k, getattr(self, k))
-
- def __setattr__(self, name, value):
- if isinstance(value, (list, tuple)):
- value = [self.__class__(x)
- if isinstance(x, dict) else x for x in value]
- elif isinstance(value, dict) and not isinstance(value, self.__class__):
- value = self.__class__(value)
- super(EasyDict, self).__setattr__(name, value)
- super(EasyDict, self).__setitem__(name, value)
-
- __setitem__ = __setattr__
-
-def load_json(filename):
- import json
- with open(filename) as f:
- return EasyDict(json.load(f))
-
-if __name__ == "__main__":
- import doctest
- doctest.testmod()
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/training/coaches/single_id_coach.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/training/coaches/single_id_coach.py
deleted file mode 100644
index f703573a522bdfc6fecd85f25fe2bfb2e0430e29..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/training/coaches/single_id_coach.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-import os
-import torch
-from tqdm import tqdm
-from pti.pti_configs import paths_config, hyperparameters, global_config
-from pti.training.coaches.base_coach import BaseCoach
-from utils.log_utils import log_images_from_w
-from torchvision.utils import save_image
-
-
-class SingleIDCoach(BaseCoach):
-
- def __init__(self, data_loader, use_wandb):
- super().__init__(data_loader, use_wandb)
-
- def train(self):
-
- w_path_dir = f'{paths_config.embedding_base_dir}/{paths_config.input_data_id}'
- os.makedirs(w_path_dir, exist_ok=True)
- os.makedirs(
- f'{w_path_dir}/{paths_config.pti_results_keyword}', exist_ok=True)
-
- use_ball_holder = True
-
- for fname, image in tqdm(self.data_loader):
- image_name = fname[0]
-
- self.restart_training()
-
- if self.image_counter >= hyperparameters.max_images_to_invert:
- break
-
- embedding_dir = f'{w_path_dir}/{paths_config.pti_results_keyword}/{image_name}'
- os.makedirs(embedding_dir, exist_ok=True)
-
- w_pivot = None
-
- if hyperparameters.use_last_w_pivots:
- w_pivot = self.load_inversions(w_path_dir, image_name)
-
- elif not hyperparameters.use_last_w_pivots or w_pivot is None:
- w_pivot = self.calc_inversions(image, image_name)
-
- # w_pivot = w_pivot.detach().clone().to(global_config.device)
- w_pivot = w_pivot.to(global_config.device)
-
- torch.save(w_pivot, f'{embedding_dir}/0.pt')
- log_images_counter = 0
- real_images_batch = image.to(global_config.device)
-
- for i in range(hyperparameters.max_pti_steps):
-
- generated_images = self.forward(w_pivot)
- loss, l2_loss_val, loss_lpips = self.calc_loss(generated_images, real_images_batch, image_name,
- self.G, use_ball_holder, w_pivot)
- if i == 0:
- tmp1 = torch.clone(generated_images)
- if i % 10 == 0:
- print("pti loss: ", i, loss.data, loss_lpips.data)
- self.optimizer.zero_grad()
-
- if loss_lpips <= hyperparameters.LPIPS_value_threshold:
- break
-
- loss.backward()
- self.optimizer.step()
-
- use_ball_holder = global_config.training_step % hyperparameters.locality_regularization_interval == 0
-
- if self.use_wandb and log_images_counter % global_config.image_rec_result_log_snapshot == 0:
- log_images_from_w([w_pivot], self.G, [image_name])
-
- global_config.training_step += 1
- log_images_counter += 1
-
- # save output image
- tmp = torch.cat(
- [real_images_batch, tmp1, generated_images], axis=3)
- save_image(
- tmp, f"{paths_config.experiments_output_dir}/{image_name}.png", normalize=True)
-
- self.image_counter += 1
-
- # torch.save(self.G,
- # f'{paths_config.checkpoints_dir}/model_{image_name}.pt') #'.pt'
- snapshot_data = dict()
- snapshot_data['G_ema'] = self.G
- import pickle
- with open(f'{paths_config.checkpoints_dir}/model_{image_name}.pkl', 'wb') as f:
- pickle.dump(snapshot_data, f)
diff --git a/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/kalman_filter.py b/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/kalman_filter.py
deleted file mode 100644
index 82111a336d4d94bece171f2f95d9147bb7456285..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/kalman_filter.py
+++ /dev/null
@@ -1,252 +0,0 @@
-# vim: expandtab:ts=4:sw=4
-import numpy as np
-import scipy.linalg
-
-"""
-Table for the 0.95 quantile of the chi-square distribution with N degrees of
-freedom (contains values for N=1, ..., 9). Taken from MATLAB/Octave's chi2inv
-function and used as Mahalanobis gating threshold.
-"""
-chi2inv95 = {
- 1: 3.8415,
- 2: 5.9915,
- 3: 7.8147,
- 4: 9.4877,
- 5: 11.070,
- 6: 12.592,
- 7: 14.067,
- 8: 15.507,
- 9: 16.919}
-
-
-class KalmanFilter(object):
- """
- A simple Kalman filter for tracking bounding boxes in image space.
- The 8-dimensional state space
- x, y, a, h, vx, vy, va, vh
- contains the bounding box center position (x, y), aspect ratio a, height h,
- and their respective velocities.
- Object motion follows a constant velocity model. The bounding box location
- (x, y, a, h) is taken as direct observation of the state space (linear
- observation model).
- """
-
- def __init__(self):
- ndim, dt = 4, 1.
-
- # Create Kalman filter model matrices.
- self._motion_mat = np.eye(2 * ndim, 2 * ndim)
- for i in range(ndim):
- self._motion_mat[i, ndim + i] = dt
- self._update_mat = np.eye(ndim, 2 * ndim)
-
- # Motion and observation uncertainty are chosen relative to the current
- # state estimate. These weights control the amount of uncertainty in
- # the model. This is a bit hacky.
- self._std_weight_position = 1. / 20
- self._std_weight_velocity = 1. / 160
-
- def initiate(self, measurement):
- """Create track from unassociated measurement.
- Parameters
- ----------
- measurement : ndarray
- Bounding box coordinates (x, y, a, h) with center position (x, y),
- aspect ratio a, and height h.
- Returns
- -------
- (ndarray, ndarray)
- Returns the mean vector (8 dimensional) and covariance matrix (8x8
- dimensional) of the new track. Unobserved velocities are initialized
- to 0 mean.
- """
- mean_pos = measurement
- mean_vel = np.zeros_like(mean_pos)
- mean = np.r_[mean_pos, mean_vel]
-
- std = [
- 2 * self._std_weight_position * measurement[3],
- 2 * self._std_weight_position * measurement[3],
- 1e-2,
- 2 * self._std_weight_position * measurement[3],
- 10 * self._std_weight_velocity * measurement[3],
- 10 * self._std_weight_velocity * measurement[3],
- 1e-5,
- 10 * self._std_weight_velocity * measurement[3]]
- covariance = np.diag(np.square(std))
- return mean, covariance
-
- def predict(self, mean, covariance):
- """Run Kalman filter prediction step.
- Parameters
- ----------
- mean : ndarray
- The 8 dimensional mean vector of the object state at the previous
- time step.
- covariance : ndarray
- The 8x8 dimensional covariance matrix of the object state at the
- previous time step.
- Returns
- -------
- (ndarray, ndarray)
- Returns the mean vector and covariance matrix of the predicted
- state. Unobserved velocities are initialized to 0 mean.
- """
- std_pos = [
- self._std_weight_position * mean[3],
- self._std_weight_position * mean[3],
- 1e-2,
- self._std_weight_position * mean[3]]
- std_vel = [
- self._std_weight_velocity * mean[3],
- self._std_weight_velocity * mean[3],
- 1e-5,
- self._std_weight_velocity * mean[3]]
- motion_cov = np.diag(np.square(np.r_[std_pos, std_vel]))
-
- #mean = np.dot(self._motion_mat, mean)
- mean = np.dot(mean, self._motion_mat.T)
- covariance = np.linalg.multi_dot((
- self._motion_mat, covariance, self._motion_mat.T)) + motion_cov
-
- return mean, covariance
-
- def project(self, mean, covariance):
- """Project state distribution to measurement space.
- Parameters
- ----------
- mean : ndarray
- The state's mean vector (8 dimensional array).
- covariance : ndarray
- The state's covariance matrix (8x8 dimensional).
- Returns
- -------
- (ndarray, ndarray)
- Returns the projected mean and covariance matrix of the given state
- estimate.
- """
- std = [
- self._std_weight_position * mean[3],
- self._std_weight_position * mean[3],
- 1e-1,
- self._std_weight_position * mean[3]]
- innovation_cov = np.diag(np.square(std))
-
- mean = np.dot(self._update_mat, mean)
- covariance = np.linalg.multi_dot((
- self._update_mat, covariance, self._update_mat.T))
- return mean, covariance + innovation_cov
-
- def multi_predict(self, mean, covariance):
- """Run Kalman filter prediction step (Vectorized version).
- Parameters
- ----------
- mean : ndarray
- The Nx8 dimensional mean matrix of the object states at the previous
- time step.
- covariance : ndarray
-            The Nx8x8 dimensional covariance matrices of the object states at the
- previous time step.
- Returns
- -------
- (ndarray, ndarray)
- Returns the mean vector and covariance matrix of the predicted
- state. Unobserved velocities are initialized to 0 mean.
- """
- std_pos = [
- self._std_weight_position * mean[:, 3],
- self._std_weight_position * mean[:, 3],
- 1e-2 * np.ones_like(mean[:, 3]),
- self._std_weight_position * mean[:, 3]]
- std_vel = [
- self._std_weight_velocity * mean[:, 3],
- self._std_weight_velocity * mean[:, 3],
- 1e-5 * np.ones_like(mean[:, 3]),
- self._std_weight_velocity * mean[:, 3]]
- sqr = np.square(np.r_[std_pos, std_vel]).T
-
- motion_cov = []
- for i in range(len(mean)):
- motion_cov.append(np.diag(sqr[i]))
- motion_cov = np.asarray(motion_cov)
-
- mean = np.dot(mean, self._motion_mat.T)
- left = np.dot(self._motion_mat, covariance).transpose((1, 0, 2))
- covariance = np.dot(left, self._motion_mat.T) + motion_cov
-
- return mean, covariance
-
- def update(self, mean, covariance, measurement):
- """Run Kalman filter correction step.
- Parameters
- ----------
- mean : ndarray
- The predicted state's mean vector (8 dimensional).
- covariance : ndarray
- The state's covariance matrix (8x8 dimensional).
- measurement : ndarray
- The 4 dimensional measurement vector (x, y, a, h), where (x, y)
- is the center position, a the aspect ratio, and h the height of the
- bounding box.
- Returns
- -------
- (ndarray, ndarray)
- Returns the measurement-corrected state distribution.
- """
- projected_mean, projected_cov = self.project(mean, covariance)
-
- chol_factor, lower = scipy.linalg.cho_factor(
- projected_cov, lower=True, check_finite=False)
- kalman_gain = scipy.linalg.cho_solve(
- (chol_factor, lower), np.dot(covariance, self._update_mat.T).T,
- check_finite=False).T
- innovation = measurement - projected_mean
-
- new_mean = mean + np.dot(innovation, kalman_gain.T)
- new_covariance = covariance - np.linalg.multi_dot((
- kalman_gain, projected_cov, kalman_gain.T))
- return new_mean, new_covariance
-
- def gating_distance(self, mean, covariance, measurements,
- only_position=False, metric='maha'):
- """Compute gating distance between state distribution and measurements.
- A suitable distance threshold can be obtained from `chi2inv95`. If
- `only_position` is False, the chi-square distribution has 4 degrees of
- freedom, otherwise 2.
- Parameters
- ----------
- mean : ndarray
- Mean vector over the state distribution (8 dimensional).
- covariance : ndarray
- Covariance of the state distribution (8x8 dimensional).
- measurements : ndarray
- An Nx4 dimensional matrix of N measurements, each in
- format (x, y, a, h) where (x, y) is the bounding box center
- position, a the aspect ratio, and h the height.
- only_position : Optional[bool]
- If True, distance computation is done with respect to the bounding
- box center position only.
- Returns
- -------
- ndarray
- Returns an array of length N, where the i-th element contains the
- squared Mahalanobis distance between (mean, covariance) and
- `measurements[i]`.
- """
- mean, covariance = self.project(mean, covariance)
- if only_position:
- mean, covariance = mean[:2], covariance[:2, :2]
- measurements = measurements[:, :2]
-
- d = measurements - mean
- if metric == 'gaussian':
- return np.sum(d * d, axis=1)
- elif metric == 'maha':
- cholesky_factor = np.linalg.cholesky(covariance)
- z = scipy.linalg.solve_triangular(
- cholesky_factor, d.T, lower=True, check_finite=False,
- overwrite_b=True)
- squared_maha = np.sum(z * z, axis=0)
- return squared_maha
- else:
- raise ValueError('invalid distance metric')
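
A sketch of one track's lifecycle with the filter above, assuming it runs in the same module as the class: initiate from a first detection, predict a frame ahead, correct with the next detection, then gate candidate boxes against the chi-square threshold. The box values are invented:

```python
import numpy as np

kf = KalmanFilter()

# First detection: center (100, 200), aspect ratio 0.5, height 80.
mean, covariance = kf.initiate(np.array([100., 200., 0.5, 80.]))

# Propagate the constant-velocity model one time step.
mean, covariance = kf.predict(mean, covariance)

# Correct with the next frame's matched detection.
mean, covariance = kf.update(mean, covariance, np.array([102., 203., 0.5, 81.]))

# Gate two candidates; full (x, y, a, h) gating uses 4 degrees of freedom.
candidates = np.array([[102., 203., 0.5, 81.],
                       [400., 50., 0.5, 80.]])
dists = kf.gating_distance(mean, covariance, candidates)
print(dists < chi2inv95[4])  # e.g. [ True False ]
```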
diff --git a/spaces/EleutherAI/VQGAN_CLIP/CLIP/setup.py b/spaces/EleutherAI/VQGAN_CLIP/CLIP/setup.py
deleted file mode 100644
index c9ea7d0d2f3d2fcf66d6f6e2aa0eb1a97a524bb6..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/CLIP/setup.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import os
-
-import pkg_resources
-from setuptools import setup, find_packages
-
-setup(
- name="clip",
- py_modules=["clip"],
- version="1.0",
- description="",
- author="OpenAI",
- packages=find_packages(exclude=["tests*"]),
- install_requires=[
- str(r)
- for r in pkg_resources.parse_requirements(
- open(os.path.join(os.path.dirname(__file__), "requirements.txt"))
- )
- ],
- include_package_data=True,
- extras_require={'dev': ['pytest']},
-)
diff --git a/spaces/Epoching/GLIDE_Inpaint/glide_text2im/tokenizer/bpe.py b/spaces/Epoching/GLIDE_Inpaint/glide_text2im/tokenizer/bpe.py
deleted file mode 100644
index 5dcd56586a9c7bd974c1dd264152ecb70f909619..0000000000000000000000000000000000000000
--- a/spaces/Epoching/GLIDE_Inpaint/glide_text2im/tokenizer/bpe.py
+++ /dev/null
@@ -1,151 +0,0 @@
-"""
-Byte pair encoding utilities adapted from:
-https://github.com/openai/gpt-2/blob/master/src/encoder.py
-"""
-
-import gzip
-import json
-import os
-from functools import lru_cache
-from typing import List, Tuple
-
-import regex as re
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
-    Returns a list of utf-8 bytes and a corresponding list of unicode strings.
-    The reversible bpe codes work on unicode strings.
-    This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
-    When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
-    This is a significant percentage of your normal, say, 32K bpe vocab.
-    To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
-    This also avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1))
- + list(range(ord("¡"), ord("¬") + 1))
- + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2 ** 8):
- if b not in bs:
- bs.append(b)
- cs.append(2 ** 8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-class Encoder:
- def __init__(self, encoder, bpe_merges, errors="replace"):
- self.encoder = encoder
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.errors = errors # how to handle errors in decoding
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
- self.cache = {}
-
-        # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
- self.pat = re.compile(
- r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"""
- )
-
- @property
- def n_vocab(self) -> int:
- return len(self.encoder)
-
- @property
- def end_token(self) -> int:
- return self.n_vocab - 1
-
- def padded_tokens_and_mask(
- self, tokens: List[int], text_ctx: int
- ) -> Tuple[List[int], List[bool]]:
- tokens = tokens[:text_ctx]
- padding = text_ctx - len(tokens)
- padded_tokens = tokens + [self.end_token] * padding
- mask = [True] * len(tokens) + [False] * padding
- return padded_tokens, mask
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token)
- pairs = get_pairs(word)
-
- if not pairs:
- return token
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except: # pylint: disable=bare-except
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- text = text.lower()
- bpe_tokens = []
- for token in re.findall(self.pat, text):
- token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" "))
- return bpe_tokens
-
- def decode(self, tokens):
- text = "".join([self.decoder[token] for token in tokens])
- text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
- return text
-
-
-def get_encoder():
- root_dir = os.path.dirname(os.path.abspath(__file__))
- with gzip.open(os.path.join(root_dir, "encoder.json.gz"), "r") as f:
- encoder = json.load(f)
- with gzip.open(os.path.join(root_dir, "vocab.bpe.gz"), "r") as f:
- bpe_data = str(f.read(), "utf-8")
- bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split("\n")[1:-1]]
- return Encoder(
- encoder=encoder,
- bpe_merges=bpe_merges,
- )
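
The Encoder above needs only a token-to-id table and a ranked merge list, so its mechanics can be exercised without the gzipped vocab files. A toy sketch, assuming it runs in the same module as the class; the vocabulary and merges are invented for illustration:

```python
# Toy vocab covering every piece reachable while merging "low";
# the last entry doubles as the end/padding token.
toy_encoder = {'l': 0, 'o': 1, 'w': 2, 'lo': 3, 'low': 4, '<end>': 5}
toy_merges = [('l', 'o'), ('lo', 'w')]  # lower rank merges first

enc = Encoder(encoder=toy_encoder, bpe_merges=toy_merges)

print(enc.bpe('low'))     # 'low' -- ('l','o') merges first, then ('lo','w')
print(enc.encode('low'))  # [4]

# Pad to a fixed text context; end_token is n_vocab - 1, i.e. id 5 here.
print(enc.padded_tokens_and_mask([4], text_ctx=4))
# ([4, 5, 5, 5], [True, False, False, False])
```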
diff --git a/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/dist/index.runtime.2846421e.js b/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/dist/index.runtime.2846421e.js
deleted file mode 100644
index 1cf5f76a93b25d7fd928cbb4c31bcb3d6e435823..0000000000000000000000000000000000000000
--- a/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca/dist/index.runtime.2846421e.js
+++ /dev/null
@@ -1 +0,0 @@
-var e=globalThis,r={},t={},a=e.parcelRequireba71;null==a&&((a=function(e){if(e in r)return r[e].exports;if(e in t){var a=t[e];delete t[e];var n={id:e,exports:{}};return r[e]=n,a.call(n.exports,n,n.exports),n.exports}var o=Error("Cannot find module '"+e+"'");throw o.code="MODULE_NOT_FOUND",o}).register=function(e,r){t[e]=r},e.parcelRequireba71=a),(0,a.register)("dRo73",function(e,r){Object.defineProperty(e.exports,"register",{get:()=>t,set:e=>t=e,enumerable:!0,configurable:!0});var t,a=new Map;t=function(e,r){for(var t=0;t
-def runPreprocessingPipeline(file_name, file_path, split_by, split_length, split_respect_sentence_boundary, split_overlap, remove_punc) -> List[Document]:
- """
-    Creates and runs the preprocessing pipeline; the pipeline params are
-    fetched from paramconfig.
-    Params
-    ------------
-    file_name: filename; in a streamlit application use
-    st.session_state['filename']
-    file_path: filepath; in a streamlit application use st.session_state['filepath']
-    split_by: document splitting strategy, either 'word' or 'sentence'
-    split_length: when synthetically creating the paragraphs from the document,
-    it defines the length of each paragraph.
-    split_respect_sentence_boundary: used with the 'word' splitting strategy
-    to avoid breaking sentences across paragraphs.
-    split_overlap: number of words or sentences that overlap when creating
-    the paragraphs. A sentence or a few words often only make sense when read
-    together with their neighbours, hence the overlap.
-    remove_punc: whether to remove all punctuation, including ',' and '.'
-    Return
-    --------------
-    List[Document]: when the preprocessing pipeline is run, the output dictionary
-    has four objects. For the Haystack implementation of SDG classification we
-    need to use the list of Haystack Documents, which can be fetched with
-    key = 'documents' on the output.
- """
-
- processing_pipeline = processingpipeline()
-
- output_pre = processing_pipeline.run(file_paths = file_path,
- params= {"FileConverter": {"file_path": file_path, \
- "file_name": file_name},
- "UdfPreProcessor": {"remove_punc": remove_punc, \
- "split_by": split_by, \
- "split_length":split_length,\
- "split_overlap": split_overlap, \
- "split_respect_sentence_boundary":split_respect_sentence_boundary}})
-
- return output_pre
-
-
-def app():
- with st.container():
- if 'filepath' in st.session_state:
- file_name = st.session_state['filename']
- file_path = st.session_state['filepath']
-
-
- all_documents = runPreprocessingPipeline(file_name= file_name,
- file_path= file_path, split_by= params['split_by'],
- split_length= params['split_length'],
- split_respect_sentence_boundary= params['split_respect_sentence_boundary'],
- split_overlap= params['split_overlap'], remove_punc= params['remove_punc'])
- paralist = paraLengthCheck(all_documents['documents'], 100)
- df = pd.DataFrame(paralist,columns = ['text','page'])
- # saving the dataframe to session state
- st.session_state['key0'] = df
-
- else:
-        st.info("🤔 No document found, please upload one from the sidebar!")
- logging.warning("Terminated as no document provided")
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py
deleted file mode 100644
index 009bd93d06b3284c7b31f33f82d636f774e86b74..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/faster_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
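
This config is pure composition: mmcv loads the four `_base_` files and merges them into one config dict, with keys in the child file overriding the bases. A hedged inspection sketch, assuming mmcv is installed and the working directory is an mmdetection checkout:

```python
from mmcv import Config

cfg = Config.fromfile('configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py')

# Each key below was contributed by one of the four _base_ files.
print(cfg.model.type)         # 'FasterRCNN'  (model base)
print(cfg.data.train.type)    # 'CocoDataset' (dataset base)
print(cfg.optimizer.type)     # 'SGD'         (1x schedule)
print(cfg.checkpoint_config)  # saving policy (default_runtime)
```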
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py
deleted file mode 100644
index e2640c07e86db2d8cc2e6654c78077df10789b4c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,12 +0,0 @@
-_base_ = './retinanet_free_anchor_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- style='pytorch'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py
deleted file mode 100644
index 31fdd070595ac0512a39075bb045dd18035d3f14..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,11 +0,0 @@
-_base_ = '../cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- norm_eval=False,
- plugins=[
- dict(
- cfg=dict(type='ContextBlock', ratio=1. / 4),
- stages=(False, True, True, True),
- position='after_conv3')
- ]))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x512_80k_ade20k.py
deleted file mode 100644
index 9713b731a47df9c5e23d26a08ad17d03a0d5e9fe..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x512_80k_ade20k.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './dmnet_r50-d8_512x512_80k_ade20k.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/point_rend/pointrend_r50_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/point_rend/pointrend_r50_512x1024_80k_cityscapes.py
deleted file mode 100644
index 96cbaa48d61ee208117d074e9f06bf4218407d78..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/point_rend/pointrend_r50_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/pointrend_r50.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
-lr_config = dict(warmup='linear', warmup_iters=200)
diff --git a/spaces/Guinnessgshep/AI_story_writing/app.py b/spaces/Guinnessgshep/AI_story_writing/app.py
deleted file mode 100644
index 59cebf54581a4717227d98d3a4cfb688eefded3e..0000000000000000000000000000000000000000
--- a/spaces/Guinnessgshep/AI_story_writing/app.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-title = "story Generator"
-
-# gpt-neo-2.7B gpt-j-6B
-
-def generate(text,the_model,max_length,temperature,repetition_penalty):
- generator = pipeline('text-generation', model=the_model)
- result = generator(text, num_return_sequences=3,
- max_length=max_length,
- temperature=temperature,
- repetition_penalty = repetition_penalty,
- no_repeat_ngram_size=2,early_stopping=False)
- return result[0]["generated_text"],result[1]["generated_text"],result[2]["generated_text"]
-
-
-def complete_with_gpt(text,context,the_model,max_length,temperature,repetition_penalty):
- # Use the last [context] characters of the text as context
- max_length = max_length+context
- return generate(text[-context:],the_model,max_length,temperature,repetition_penalty)
-
-def send(text1,context,text2):
-    if len(text1) <
-class PathManager:
-    @staticmethod
-    def copy(src_path: str, dst_path: str, overwrite: bool = False) -> bool:
- if IOPathManager:
- return IOPathManager.copy(
- src_path=src_path, dst_path=dst_path, overwrite=overwrite
- )
- return shutil.copyfile(src_path, dst_path)
-
- @staticmethod
- def get_local_path(path: str, **kwargs) -> str:
- if IOPathManager:
- return IOPathManager.get_local_path(path, **kwargs)
- return path
-
- @staticmethod
- def exists(path: str) -> bool:
- if IOPathManager:
- return IOPathManager.exists(path)
- return os.path.exists(path)
-
- @staticmethod
- def isfile(path: str) -> bool:
- if IOPathManager:
- return IOPathManager.isfile(path)
- return os.path.isfile(path)
-
- @staticmethod
- def ls(path: str) -> List[str]:
- if IOPathManager:
- return IOPathManager.ls(path)
- return os.listdir(path)
-
- @staticmethod
- def mkdirs(path: str) -> None:
- if IOPathManager:
- return IOPathManager.mkdirs(path)
- os.makedirs(path, exist_ok=True)
-
- @staticmethod
- def rm(path: str) -> None:
- if IOPathManager:
- return IOPathManager.rm(path)
- os.remove(path)
-
- @staticmethod
- def chmod(path: str, mode: int) -> None:
- if not PathManager.path_requires_pathmanager(path):
- os.chmod(path, mode)
-
- @staticmethod
- def register_handler(handler) -> None:
- if IOPathManager:
- return IOPathManager.register_handler(handler=handler)
-
- @staticmethod
- def copy_from_local(
- local_path: str, dst_path: str, overwrite: bool = False, **kwargs
- ) -> None:
- if IOPathManager:
- return IOPathManager.copy_from_local(
- local_path=local_path, dst_path=dst_path, overwrite=overwrite, **kwargs
- )
- return shutil.copyfile(local_path, dst_path)
-
- @staticmethod
- def path_requires_pathmanager(path: str) -> bool:
- """Do we require PathManager to access given path?"""
- if IOPathManager:
- for p in IOPathManager._path_handlers.keys():
- if path.startswith(p):
- return True
- return False
-
- @staticmethod
- def supports_rename(path: str) -> bool:
- # PathManager doesn't yet support renames
- return not PathManager.path_requires_pathmanager(path)
-
- @staticmethod
- def rename(src: str, dst: str):
- os.rename(src, dst)
-
- """
- ioPath async PathManager methods:
- """
- @staticmethod
- def opena(
- path: str,
- mode: str = "r",
- buffering: int = -1,
- encoding: Optional[str] = None,
- errors: Optional[str] = None,
- newline: Optional[str] = None,
- ):
- """
- Return file descriptor with asynchronous write operations.
- """
- global IOPathManager
- if not IOPathManager:
- logging.info("ioPath is initializing PathManager.")
- try:
- from iopath.common.file_io import PathManager
- IOPathManager = PathManager()
- except Exception:
- logging.exception("Failed to initialize ioPath PathManager object.")
- return IOPathManager.opena(
- path=path,
- mode=mode,
- buffering=buffering,
- encoding=encoding,
- errors=errors,
- newline=newline,
- )
-
- @staticmethod
- def async_close() -> bool:
- """
- Wait for files to be written and clean up asynchronous PathManager.
- NOTE: `PathManager.async_close()` must be called at the end of any
- script that uses `PathManager.opena(...)`.
- """
- global IOPathManager
- if IOPathManager:
- return IOPathManager.async_close()
- return False
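
A minimal usage sketch for the asynchronous write path above, assuming iopath is installed so `opena()` can build its internal PathManager; per the docstring, `async_close()` must be called once before the script exits:

```python
# Queue writes without blocking the main thread.
f = PathManager.opena('/tmp/async_log.txt', mode='w')
f.write('queued for asynchronous write\n')
f.close()

# Wait for all queued writes to land, then tear down the async machinery.
assert PathManager.async_close()
```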
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/multilingual_translation.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/multilingual_translation.py
deleted file mode 100644
index 4f85ab4832a6c7cbe57a99a3efc6987125d956fc..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/multilingual_translation.py
+++ /dev/null
@@ -1,462 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import contextlib
-import logging
-import os
-from collections import OrderedDict
-from argparse import ArgumentError
-
-import torch
-from fairseq import metrics, options, utils
-from fairseq.data import (
- Dictionary,
- LanguagePairDataset,
- RoundRobinZipDatasets,
- TransformEosLangPairDataset,
-)
-from fairseq.models import FairseqMultiModel
-from fairseq.tasks.translation import load_langpair_dataset
-
-from . import LegacyFairseqTask, register_task
-
-
-logger = logging.getLogger(__name__)
-
-
-def _lang_token(lang: str):
- return "__{}__".format(lang)
-
-
-def _lang_token_index(dic: Dictionary, lang: str):
- """Return language token index."""
- idx = dic.index(_lang_token(lang))
- assert idx != dic.unk_index, "cannot find language token for lang {}".format(lang)
- return idx
-
-
-@register_task("multilingual_translation")
-class MultilingualTranslationTask(LegacyFairseqTask):
- """A task for training multiple translation models simultaneously.
-
- We iterate round-robin over batches from multiple language pairs, ordered
- according to the `--lang-pairs` argument.
-
- The training loop is roughly:
-
- for i in range(len(epoch)):
- for lang_pair in args.lang_pairs:
- batch = next_batch_for_lang_pair(lang_pair)
- loss = criterion(model_for_lang_pair(lang_pair), batch)
- loss.backward()
- optimizer.step()
-
- In practice, `next_batch_for_lang_pair` is abstracted in a FairseqDataset
- (e.g., `RoundRobinZipDatasets`) and `model_for_lang_pair` is a model that
- implements the `FairseqMultiModel` interface.
-
- During inference it is required to specify a single `--source-lang` and
-    `--target-lang`, which indicates the inference language direction.
- `--lang-pairs`, `--encoder-langtok`, `--decoder-langtok` have to be set to
- the same value as training.
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('data', metavar='DIR', help='path to data directory')
- parser.add_argument('--lang-pairs', default=None, metavar='PAIRS',
- help='comma-separated list of language pairs (in training order): en-de,en-fr,de-fr')
- parser.add_argument('-s', '--source-lang', default=None, metavar='SRC',
- help='source language (only needed for inference)')
- parser.add_argument('-t', '--target-lang', default=None, metavar='TARGET',
- help='target language (only needed for inference)')
- parser.add_argument('--left-pad-source', default='True', type=str, metavar='BOOL',
- help='pad the source on the left (default: True)')
- parser.add_argument('--left-pad-target', default='False', type=str, metavar='BOOL',
- help='pad the target on the left (default: False)')
- try:
- parser.add_argument('--max-source-positions', default=1024, type=int, metavar='N',
- help='max number of tokens in the source sequence')
- parser.add_argument('--max-target-positions', default=1024, type=int, metavar='N',
- help='max number of tokens in the target sequence')
- except ArgumentError:
- # this might have already been defined. Once we transition this to hydra it should be fine to add it here.
- pass
- parser.add_argument('--upsample-primary', default=1, type=int,
- help='amount to upsample primary dataset')
- parser.add_argument('--encoder-langtok', default=None, type=str, choices=['src', 'tgt'],
- metavar='SRCTGT',
- help='replace beginning-of-sentence in source sentence with source or target '
- 'language token. (src/tgt)')
- parser.add_argument('--decoder-langtok', action='store_true',
- help='replace beginning-of-sentence in target sentence with target language token')
- # fmt: on
-
- def __init__(self, args, dicts, training):
- super().__init__(args)
- self.dicts = dicts
- self.training = training
- if training:
- self.lang_pairs = args.lang_pairs
- else:
- self.lang_pairs = ["{}-{}".format(args.source_lang, args.target_lang)]
- # eval_lang_pairs for multilingual translation is usually all of the
- # lang_pairs. However for other multitask settings or when we want to
- # optimize for certain languages we want to use a different subset. Thus
- # the eval_lang_pairs class variable is provided for classes that extend
- # this class.
- self.eval_lang_pairs = self.lang_pairs
- # model_lang_pairs will be used to build encoder-decoder model pairs in
- # models.build_model(). This allows multitask type of sub-class can
- # build models other than the input lang_pairs
- self.model_lang_pairs = self.lang_pairs
- self.langs = list(dicts.keys())
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- dicts, training = cls.prepare(args, **kwargs)
- return cls(args, dicts, training)
-
- @classmethod
- def update_args(cls, args):
- args.left_pad_source = utils.eval_bool(args.left_pad_source)
- args.left_pad_target = utils.eval_bool(args.left_pad_target)
-
- if args.lang_pairs is None:
- raise ValueError(
- "--lang-pairs is required. List all the language pairs in the training objective."
- )
- if isinstance(args.lang_pairs, str):
- args.lang_pairs = args.lang_pairs.split(",")
-
- @classmethod
- def prepare(cls, args, **kargs):
- cls.update_args(args)
- sorted_langs = sorted(
- list({x for lang_pair in args.lang_pairs for x in lang_pair.split("-")})
- )
- if args.source_lang is not None or args.target_lang is not None:
- training = False
- else:
- training = True
-
- # load dictionaries
- dicts = OrderedDict()
- for lang in sorted_langs:
- paths = utils.split_paths(args.data)
- assert len(paths) > 0
- dicts[lang] = cls.load_dictionary(
- os.path.join(paths[0], "dict.{}.txt".format(lang))
- )
- if len(dicts) > 0:
- assert dicts[lang].pad() == dicts[sorted_langs[0]].pad()
- assert dicts[lang].eos() == dicts[sorted_langs[0]].eos()
- assert dicts[lang].unk() == dicts[sorted_langs[0]].unk()
- if args.encoder_langtok is not None or args.decoder_langtok:
- for lang_to_add in sorted_langs:
- dicts[lang].add_symbol(_lang_token(lang_to_add))
- logger.info("[{}] dictionary: {} types".format(lang, len(dicts[lang])))
- return dicts, training
-
- def get_encoder_langtok(self, src_lang, tgt_lang):
- if self.args.encoder_langtok is None:
- return self.dicts[src_lang].eos()
- if self.args.encoder_langtok == "src":
- return _lang_token_index(self.dicts[src_lang], src_lang)
- else:
- return _lang_token_index(self.dicts[src_lang], tgt_lang)
-
- def get_decoder_langtok(self, tgt_lang):
- if not self.args.decoder_langtok:
- return self.dicts[tgt_lang].eos()
- return _lang_token_index(self.dicts[tgt_lang], tgt_lang)
-
- def alter_dataset_langtok(
- self,
- lang_pair_dataset,
- src_eos=None,
- src_lang=None,
- tgt_eos=None,
- tgt_lang=None,
- ):
- if self.args.encoder_langtok is None and not self.args.decoder_langtok:
- return lang_pair_dataset
-
- new_src_eos = None
- if (
- self.args.encoder_langtok is not None
- and src_eos is not None
- and src_lang is not None
- and tgt_lang is not None
- ):
- new_src_eos = self.get_encoder_langtok(src_lang, tgt_lang)
- else:
- src_eos = None
-
- new_tgt_bos = None
- if self.args.decoder_langtok and tgt_eos is not None and tgt_lang is not None:
- new_tgt_bos = self.get_decoder_langtok(tgt_lang)
- else:
- tgt_eos = None
-
- return TransformEosLangPairDataset(
- lang_pair_dataset,
- src_eos=src_eos,
- new_src_eos=new_src_eos,
- tgt_bos=tgt_eos,
- new_tgt_bos=new_tgt_bos,
- )
-
- def load_dataset(self, split, epoch=1, **kwargs):
- """Load a dataset split."""
- paths = utils.split_paths(self.args.data)
- assert len(paths) > 0
- data_path = paths[(epoch - 1) % len(paths)]
-
- def language_pair_dataset(lang_pair):
- src, tgt = lang_pair.split("-")
- langpair_dataset = load_langpair_dataset(
- data_path,
- split,
- src,
- self.dicts[src],
- tgt,
- self.dicts[tgt],
- combine=True,
- dataset_impl=self.args.dataset_impl,
- upsample_primary=self.args.upsample_primary,
- left_pad_source=self.args.left_pad_source,
- left_pad_target=self.args.left_pad_target,
- max_source_positions=self.args.max_source_positions,
- max_target_positions=self.args.max_target_positions,
- )
- return self.alter_dataset_langtok(
- langpair_dataset,
- src_eos=self.dicts[src].eos(),
- src_lang=src,
- tgt_eos=self.dicts[tgt].eos(),
- tgt_lang=tgt,
- )
-
- self.datasets[split] = RoundRobinZipDatasets(
- OrderedDict(
- [
- (lang_pair, language_pair_dataset(lang_pair))
- for lang_pair in self.lang_pairs
- ]
- ),
- eval_key=None
- if self.training
- else "%s-%s" % (self.args.source_lang, self.args.target_lang),
- )
-
- def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None):
- if constraints is not None:
- raise NotImplementedError(
- "Constrained decoding with the multilingual_translation task is not supported"
- )
-
- lang_pair = "%s-%s" % (self.args.source_lang, self.args.target_lang)
- return RoundRobinZipDatasets(
- OrderedDict(
- [
- (
- lang_pair,
- self.alter_dataset_langtok(
- LanguagePairDataset(
- src_tokens, src_lengths, self.source_dictionary
- ),
- src_eos=self.source_dictionary.eos(),
- src_lang=self.args.source_lang,
- tgt_eos=self.target_dictionary.eos(),
- tgt_lang=self.args.target_lang,
- ),
- )
- ]
- ),
- eval_key=lang_pair,
- )
-
- def build_model(self, args):
- def check_args():
- messages = []
- if (
- len(set(self.args.lang_pairs).symmetric_difference(args.lang_pairs))
- != 0
- ):
- messages.append(
- "--lang-pairs should include all the language pairs {}.".format(
- args.lang_pairs
- )
- )
- if self.args.encoder_langtok != args.encoder_langtok:
- messages.append(
- "--encoder-langtok should be {}.".format(args.encoder_langtok)
- )
- if self.args.decoder_langtok != args.decoder_langtok:
- messages.append(
- "--decoder-langtok should {} be set.".format(
- "" if args.decoder_langtok else "not"
- )
- )
-
- if len(messages) > 0:
- raise ValueError(" ".join(messages))
-
- # Update args -> the fact that the constructor here
- # changes the args object doesn't mean you get the same one here
- self.update_args(args)
-
-        # Check if task args are consistent with model args
- check_args()
-
- from fairseq import models
-
- model = models.build_model(args, self)
- if not isinstance(model, FairseqMultiModel):
- raise ValueError(
- "MultilingualTranslationTask requires a FairseqMultiModel architecture"
- )
- return model
-
- def _per_lang_pair_train_loss(
- self, lang_pair, model, update_num, criterion, sample, optimizer, ignore_grad
- ):
- loss, sample_size, logging_output = criterion(
- model.models[lang_pair], sample[lang_pair]
- )
- if ignore_grad:
- loss *= 0
- optimizer.backward(loss)
- return loss, sample_size, logging_output
-
- def train_step(
- self, sample, model, criterion, optimizer, update_num, ignore_grad=False
- ):
- model.train()
- from collections import defaultdict
-
- agg_loss, agg_sample_size, agg_logging_output = 0.0, 0.0, defaultdict(float)
- curr_lang_pairs = [
- lang_pair
- for lang_pair in self.model_lang_pairs
- if sample[lang_pair] is not None and len(sample[lang_pair]) != 0
- ]
-
- for idx, lang_pair in enumerate(curr_lang_pairs):
-
- def maybe_no_sync():
- if (
- self.args.distributed_world_size > 1
- and hasattr(model, "no_sync")
- and idx < len(curr_lang_pairs) - 1
- ):
- return model.no_sync()
- else:
- return contextlib.ExitStack() # dummy contextmanager
-
- with maybe_no_sync():
- loss, sample_size, logging_output = self._per_lang_pair_train_loss(
- lang_pair,
- model,
- update_num,
- criterion,
- sample,
- optimizer,
- ignore_grad,
- )
- agg_loss += loss.detach().item()
- # TODO make summing of the sample sizes configurable
- agg_sample_size += sample_size
- for k in logging_output:
- agg_logging_output[k] += logging_output[k]
- agg_logging_output[f"{lang_pair}:{k}"] += logging_output[k]
- return agg_loss, agg_sample_size, agg_logging_output
-
- def _per_lang_pair_valid_loss(self, lang_pair, model, criterion, sample):
- return criterion(model.models[lang_pair], sample[lang_pair])
-
- def valid_step(self, sample, model, criterion):
- model.eval()
- with torch.no_grad():
- from collections import defaultdict
-
- agg_loss, agg_sample_size, agg_logging_output = 0.0, 0.0, defaultdict(float)
- for lang_pair in self.eval_lang_pairs:
- if (
- lang_pair not in sample
- or sample[lang_pair] is None
- or len(sample[lang_pair]) == 0
- ):
- continue
- loss, sample_size, logging_output = self._per_lang_pair_valid_loss(
- lang_pair, model, criterion, sample
- )
- agg_loss += loss.data.item()
- # TODO make summing of the sample sizes configurable
- agg_sample_size += sample_size
- for k in logging_output:
- agg_logging_output[k] += logging_output[k]
- agg_logging_output[f"{lang_pair}:{k}"] += logging_output[k]
- return agg_loss, agg_sample_size, agg_logging_output
-
- def inference_step(
- self, generator, models, sample, prefix_tokens=None, constraints=None
- ):
- with torch.no_grad():
- if self.args.decoder_langtok:
- bos_token = _lang_token_index(
- self.target_dictionary, self.args.target_lang
- )
- else:
- bos_token = self.target_dictionary.eos()
- return generator.generate(
- models,
- sample,
- prefix_tokens=prefix_tokens,
- constraints=constraints,
- bos_token=bos_token,
- )
-
- def reduce_metrics(self, logging_outputs, criterion):
- with metrics.aggregate():
- # pass 'sample_size', 'nsentences', 'ntokens' stats to fairseq_task
- super().reduce_metrics(logging_outputs, criterion)
- for k in ["sample_size", "nsentences", "ntokens"]:
- metrics.log_scalar(k, sum(l[k] for l in logging_outputs))
-
- @property
- def source_dictionary(self):
- if self.training:
- return next(iter(self.dicts.values()))
- else:
- return self.dicts[self.args.source_lang]
-
- @property
- def target_dictionary(self):
- if self.training:
- return next(iter(self.dicts.values()))
- else:
- return self.dicts[self.args.target_lang]
-
- def max_positions(self):
- """Return the max sentence length allowed by the task."""
- if len(self.datasets.values()) == 0:
- return {
- "%s-%s"
- % (self.args.source_lang, self.args.target_lang): (
- self.args.max_source_positions,
- self.args.max_target_positions,
- )
- }
- return OrderedDict(
- [
- (key, (self.args.max_source_positions, self.args.max_target_positions))
- for split in self.datasets.keys()
- for key in self.datasets[split].datasets.keys()
- ]
- )
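
The language-token plumbing above is easy to see in isolation: a token is just the language code wrapped in double underscores, added to each dictionary so `_lang_token_index` can resolve it at inference time. A sketch assuming fairseq is installed:

```python
from fairseq.data import Dictionary

d = Dictionary()
for lang in ('en', 'de'):
    d.add_symbol('__{}__'.format(lang))  # same format as _lang_token()

# With --encoder-langtok=tgt, an en->de source sentence would begin with
# the target-language token rather than the plain eos symbol.
idx = d.index('__de__')
assert idx != d.unk_index  # mirrors the assert in _lang_token_index()
print(idx)
```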
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_data_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_data_utils.py
deleted file mode 100644
index a72e0b66948da1349d87eafdef4c4004dd535c96..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_data_utils.py
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import unittest
-
-import torch
-from examples.speech_recognition.data import data_utils
-
-
-class DataUtilsTest(unittest.TestCase):
- def test_normalization(self):
- sample_len1 = torch.tensor(
- [
- [
- -0.7661,
- -1.3889,
- -2.0972,
- -0.9134,
- -0.7071,
- -0.9765,
- -0.8700,
- -0.8283,
- 0.7512,
- 1.3211,
- 2.1532,
- 2.1174,
- 1.2800,
- 1.2633,
- 1.6147,
- 1.6322,
- 2.0723,
- 3.1522,
- 3.2852,
- 2.2309,
- 2.5569,
- 2.2183,
- 2.2862,
- 1.5886,
- 0.8773,
- 0.8725,
- 1.2662,
- 0.9899,
- 1.1069,
- 1.3926,
- 1.2795,
- 1.1199,
- 1.1477,
- 1.2687,
- 1.3843,
- 1.1903,
- 0.8355,
- 1.1367,
- 1.2639,
- 1.4707,
- ]
- ]
- )
- out = data_utils.apply_mv_norm(sample_len1)
- assert not torch.isnan(out).any()
- assert (out == sample_len1).all()
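-# --- Editor's note: a hedged sketch (not part of the original file) of the
-# mean-variance normalisation exercised above. The test asserts the single-row
-# sample comes back unchanged, implying apply_mv_norm skips normalisation when
-# the variance cannot be estimated; this standalone version mirrors that guard.
-import torch
-
-def mv_norm_sketch(features: torch.Tensor) -> torch.Tensor:
-    if features.size(0) < 2:
-        # std over a single row is undefined (nan), so return the input as-is
-        return features
-    return (features - features.mean(dim=0)) / features.std(dim=0)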
diff --git a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/chrF.py b/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/chrF.py
deleted file mode 100644
index 3a35941d61b618a8b32d937b51f0d10071129bd6..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/chrF.py
+++ /dev/null
@@ -1,139 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-# Author: Rico Sennrich
-
-"""Compute chrF3 for machine translation evaluation
-
-Reference:
-Maja Popović (2015). chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal.
-"""
-
-from __future__ import print_function, unicode_literals, division
-
-import sys
-import codecs
-import io
-import argparse
-
-from collections import defaultdict
-
-# hack for python2/3 compatibility
-from io import open
-argparse.open = open
-
-def create_parser():
- parser = argparse.ArgumentParser(
- formatter_class=argparse.RawDescriptionHelpFormatter,
- description="learn BPE-based word segmentation")
-
- parser.add_argument(
- '--ref', '-r', type=argparse.FileType('r'), required=True,
- metavar='PATH',
- help="Reference file")
- parser.add_argument(
- '--hyp', type=argparse.FileType('r'), metavar='PATH',
- default=sys.stdin,
- help="Hypothesis file (default: stdin).")
- parser.add_argument(
- '--beta', '-b', type=float, default=3,
- metavar='FLOAT',
- help="beta parameter (default: '%(default)s')")
- parser.add_argument(
- '--ngram', '-n', type=int, default=6,
- metavar='INT',
- help="ngram order (default: '%(default)s')")
- parser.add_argument(
- '--space', '-s', action='store_true',
- help="take spaces into account (default: '%(default)s')")
- parser.add_argument(
- '--precision', action='store_true',
- help="report precision (default: '%(default)s')")
- parser.add_argument(
- '--recall', action='store_true',
- help="report recall (default: '%(default)s')")
-
- return parser
-
-def extract_ngrams(words, max_length=4, spaces=False):
-
- if not spaces:
- words = ''.join(words.split())
- else:
- words = words.strip()
-
- results = defaultdict(lambda: defaultdict(int))
- for length in range(max_length):
- for start_pos in range(len(words)):
- end_pos = start_pos + length + 1
- if end_pos <= len(words):
- results[length][tuple(words[start_pos: end_pos])] += 1
- return results
-
-
-def get_correct(ngrams_ref, ngrams_test, correct, total):
-
- for rank in ngrams_test:
- for chain in ngrams_test[rank]:
- total[rank] += ngrams_test[rank][chain]
- if chain in ngrams_ref[rank]:
- correct[rank] += min(ngrams_test[rank][chain], ngrams_ref[rank][chain])
-
- return correct, total
-
-
-def f1(correct, total_hyp, total_ref, max_length, beta=3, smooth=0):
-
- precision = 0
- recall = 0
-
- for i in range(max_length):
- if total_hyp[i] + smooth and total_ref[i] + smooth:
- precision += (correct[i] + smooth) / (total_hyp[i] + smooth)
- recall += (correct[i] + smooth) / (total_ref[i] + smooth)
-
- precision /= max_length
- recall /= max_length
-
- if precision + recall == 0:
- return 0.0, precision, recall # guard: no matching n-grams at all, avoid division by zero
- return (1 + beta**2) * (precision*recall) / ((beta**2 * precision) + recall), precision, recall
-
-def main(args):
-
- correct = [0]*args.ngram
- total = [0]*args.ngram
- total_ref = [0]*args.ngram
- for line in args.ref:
- line2 = args.hyp.readline()
-
- ngrams_ref = extract_ngrams(line, max_length=args.ngram, spaces=args.space)
- ngrams_test = extract_ngrams(line2, max_length=args.ngram, spaces=args.space)
-
- get_correct(ngrams_ref, ngrams_test, correct, total)
-
- for rank in ngrams_ref:
- for chain in ngrams_ref[rank]:
- total_ref[rank] += ngrams_ref[rank][chain]
-
- chrf, precision, recall = f1(correct, total, total_ref, args.ngram, args.beta)
-
- print('chrF3: {0:.4f}'.format(chrf))
- if args.precision:
- print('chrPrec: {0:.4f}'.format(precision))
- if args.recall:
- print('chrRec: {0:.4f}'.format(recall))
-
-if __name__ == '__main__':
-
- # python 2/3 compatibility
- if sys.version_info < (3, 0):
- sys.stderr = codecs.getwriter('UTF-8')(sys.stderr)
- sys.stdout = codecs.getwriter('UTF-8')(sys.stdout)
- sys.stdin = codecs.getreader('UTF-8')(sys.stdin)
- else:
- sys.stdin = io.TextIOWrapper(sys.stdin.buffer, encoding='utf-8')
- sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8')
- sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', write_through=True, line_buffering=True)
-
- parser = create_parser()
- args = parser.parse_args()
-
- main(args)
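-# --- Editor's note: a small worked example (not part of the original file) of
-# the chrF formula implemented in f1() above. With beta=3, recall is weighted
-# nine times as heavily as precision:
-#
-#   chrF_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
-#
-def chrf_from_pr(precision, recall, beta=3.0):
-    if precision + recall == 0:
-        return 0.0  # degenerate no-match case
-    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
-
-# For P=0.5, R=0.8: 10 * 0.4 / (4.5 + 0.8) = 4.0 / 5.3 ~= 0.755, far closer
-# to the recall than to the precision.
-assert abs(chrf_from_pr(0.5, 0.8) - 4.0 / 5.3) < 1e-12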
diff --git a/spaces/Hexamind/swarms/team_wrap.py b/spaces/Hexamind/swarms/team_wrap.py
deleted file mode 100644
index f085590cdf510e1da8f3e40c57db703261cc7f08..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/swarms/team_wrap.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import numpy as np
-import gym
-from gym import spaces
-
-from swarm_policy import SwarmPolicy
-from settings import Settings
-
-
-class TeamWrapper(gym.Wrapper):
- """
- :param env: (gym.Env) Gym environment that will be wrapped
- """
-
- def __init__(self, env, is_blue: bool = True, is_double: bool = False, is_unkillable: bool = Settings.is_unkillable):
-
- self.is_blue = is_blue
- self.is_double = is_double
- self.is_unkillable = is_unkillable
-
- nb_blues, nb_reds = env.nb_blues, env.nb_reds
-
- self.foe_action = None
- self.foe_policy = SwarmPolicy(is_blue=not is_blue, blues=nb_blues, reds=nb_reds)
-
- if is_double:
- env.action_space = spaces.Tuple((
- spaces.Box(low=-1, high=1, shape=(nb_blues*3,), dtype=np.float32),
- spaces.Box(low=-1, high=1, shape=(nb_reds*3,), dtype=np.float32)
- ))
- else:
- nb_friends = nb_blues if is_blue else nb_reds
- env.action_space = spaces.Box(low=-1, high=1, shape=(nb_friends*3,), dtype=np.float32)
-
- flatten_dimension = 6 * nb_blues + 6 * nb_reds # the positions and speeds of the blue and red drones
- flatten_dimension += (nb_blues * nb_reds) * (1 if is_unkillable else 2) # the fire matrices
-
- env.observation_space = spaces.Box(low=-1, high=1, shape=(flatten_dimension,), dtype=np.float32)
-
- super(TeamWrapper, self).__init__(env)
-
- def reset(self):
- """
- Reset the environment
- """
- obs = self.env.reset()
- obs = self.post_obs(obs)
-
- return obs
-
- def step(self, action):
- """
- :param action: ([float] or int) Action taken by the agent
- :return: (np.ndarray, float, bool, dict) observation, reward, whether the episode is over, additional information
- """
-
- if self.is_double:
- blue_action, red_action = action
- blue_action = _decentralise(blue_action)
- red_action = _decentralise(red_action)
- action = _unflatten(blue_action), _unflatten(red_action)
- else:
- friend_action = _decentralise(action)
- foe_action = _decentralise(self.foe_action)
- if self.is_blue:
- action = _unflatten(friend_action), _unflatten(foe_action)
- else:
- action = _unflatten(foe_action), _unflatten(friend_action)
-
- obs, reward, done, info = self.env.step(action)
-
- obs = self.post_obs(obs)
-
- return obs, reward, done, info
-
- def post_obs(self, obs):
-
- if self.is_unkillable:
- o1, o2, o3, _ = obs
- obs = o1, o2, o3
- flatten_obs = _flatten(obs)
- centralised_obs = _centralise(flatten_obs)
-
- if not self.is_double:
- self.foe_action = self.foe_policy.predict(centralised_obs)
-
- return centralised_obs
-
-
-def _unflatten(action):
- return np.split(action, len(action) // 3) # integer number of 3-vectors, one per drone
-
-
-def _flatten(obs): # need normalisation too
- fl_obs = [this_obs.flatten().astype('float32') for this_obs in obs]
- fl_obs = np.hstack(fl_obs)
- return fl_obs
-
-
-def _centralise(obs): # [0,1] to [-1,1]
- obs = 2 * obs - 1
- return obs
-
-
-def _decentralise(act): # [-1,1] to [0,1]
- act = 0.5 * (act + 1)
- return act
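-# --- Editor's note: a quick sanity sketch (not part of the original file) for
-# the two affine maps above: _centralise sends [0, 1] observations to the
-# [-1, 1] range of the policy, and _decentralise inverts it for actions.
-_x = np.array([0.0, 0.25, 0.5, 1.0])
-assert np.allclose(_decentralise(_centralise(_x)), _x)  # exact round-trip
-
-# Observation width for, e.g., nb_blues=3, nb_reds=2 with kill tracking
-# (is_unkillable=False): 6*3 + 6*2 + (3*2)*2 = 42 flattened features.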
diff --git a/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/main.e37be.js b/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/main.e37be.js
deleted file mode 100644
index 459fbc584cb8b07585ee8624ebe8b986edb0e8db..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/main.e37be.js
+++ /dev/null
@@ -1,239 +0,0 @@
-(function () {
-
- function boot () {
-
- var settings = window._CCSettings;
- window._CCSettings = undefined;
-
- if ( !settings.debug ) {
- var uuids = settings.uuids;
-
- var rawAssets = settings.rawAssets;
- var assetTypes = settings.assetTypes;
- var realRawAssets = settings.rawAssets = {};
- for (var mount in rawAssets) {
- var entries = rawAssets[mount];
- var realEntries = realRawAssets[mount] = {};
- for (var id in entries) {
- var entry = entries[id];
- var type = entry[1];
- // retrieve minified raw asset
- if (typeof type === 'number') {
- entry[1] = assetTypes[type];
- }
- // retrieve uuid
- realEntries[uuids[id] || id] = entry;
- }
- }
-
- var scenes = settings.scenes;
- for (var i = 0; i < scenes.length; ++i) {
- var scene = scenes[i];
- if (typeof scene.uuid === 'number') {
- scene.uuid = uuids[scene.uuid];
- }
- }
-
- var packedAssets = settings.packedAssets;
- for (var packId in packedAssets) {
- var packedIds = packedAssets[packId];
- for (var j = 0; j < packedIds.length; ++j) {
- if (typeof packedIds[j] === 'number') {
- packedIds[j] = uuids[packedIds[j]];
- }
- }
- }
- }
-
- // init engine
- var canvas;
-
- if (cc.sys.isBrowser) {
- canvas = document.getElementById('GameCanvas');
- }
-
- if (false) {
- var ORIENTATIONS = {
- 'portrait': 1,
- 'landscape left': 2,
- 'landscape right': 3
- };
- BK.Director.screenMode = ORIENTATIONS[settings.orientation];
- initAdapter();
- }
-
- function setLoadingDisplay () {
- // Loading splash scene
- var splash = document.getElementById('splash');
- var progressBar = splash.querySelector('.progress-bar span');
- cc.loader.onProgress = function (completedCount, totalCount, item) {
- var percent = 100 * completedCount / totalCount;
- if (progressBar) {
- progressBar.style.width = percent.toFixed(2) + '%';
- }
- };
- splash.style.display = 'block';
- progressBar.style.width = '0%';
-
- cc.director.once(cc.Director.EVENT_AFTER_SCENE_LAUNCH, function () {
- splash.style.display = 'none';
- });
- }
-
- var onStart = function () {
- cc.loader.downloader._subpackages = settings.subpackages;
-
- if (false) {
- BK.Script.loadlib();
- }
-
- cc.view.resizeWithBrowserSize(true);
-
- if (!false && !false) {
- if (cc.sys.isBrowser) {
- setLoadingDisplay();
- }
-
- if (cc.sys.isMobile) {
- if (settings.orientation === 'landscape') {
- cc.view.setOrientation(cc.macro.ORIENTATION_LANDSCAPE);
- }
- else if (settings.orientation === 'portrait') {
- cc.view.setOrientation(cc.macro.ORIENTATION_PORTRAIT);
- }
- cc.view.enableAutoFullScreen([
- cc.sys.BROWSER_TYPE_BAIDU,
- cc.sys.BROWSER_TYPE_WECHAT,
- cc.sys.BROWSER_TYPE_MOBILE_QQ,
- cc.sys.BROWSER_TYPE_MIUI,
- ].indexOf(cc.sys.browserType) < 0);
- }
-
- // Limit the maximum number of concurrent download tasks to 2;
- // more tasks running simultaneously may cause performance drawbacks on some Android systems / browsers.
- // You can adjust the number based on your own test results, but it must be set before any loading starts to take effect.
- if (cc.sys.isBrowser && cc.sys.os === cc.sys.OS_ANDROID) {
- cc.macro.DOWNLOAD_MAX_CONCURRENT = 2;
- }
- }
-
- // init assets
- cc.AssetLibrary.init({
- libraryPath: 'res/import',
- rawAssetsBase: 'res/raw-',
- rawAssets: settings.rawAssets,
- packedAssets: settings.packedAssets,
- md5AssetsMap: settings.md5AssetsMap
- });
-
- if (false) {
- cc.Pipeline.Downloader.PackDownloader._doPreload("WECHAT_SUBDOMAIN", settings.WECHAT_SUBDOMAIN_DATA);
- }
-
- var launchScene = settings.launchScene;
-
- // load scene
- cc.director.loadScene(launchScene, null,
- function () {
- if (cc.sys.isBrowser) {
- // show canvas
- canvas.style.visibility = '';
- var div = document.getElementById('GameDiv');
- if (div) {
- div.style.backgroundImage = '';
- }
- }
- cc.loader.onProgress = null;
- console.log('Successfully loaded scene: ' + launchScene);
- }
- );
- };
-
- // jsList
- var jsList = settings.jsList;
-
- if (!false) {
- var bundledScript = settings.debug ? 'src/project.dev.js' : 'src/project.5549d.js';
- if (jsList) {
- jsList = jsList.map(function (x) {
- return 'src/' + x;
- });
- jsList.push(bundledScript);
- }
- else {
- jsList = [bundledScript];
- }
- }
-
- // anysdk scripts
- if (cc.sys.isNative && cc.sys.isMobile) {
- jsList = jsList.concat(['src/anysdk/jsb_anysdk.js', 'src/anysdk/jsb_anysdk_constants.js']);
- }
-
- var option = {
- //width: width,
- //height: height,
- id: 'GameCanvas',
- scenes: settings.scenes,
- debugMode: settings.debug ? cc.DebugMode.INFO : cc.DebugMode.ERROR,
- showFPS: (!false && !false) && settings.debug,
- frameRate: 60,
- jsList: jsList,
- groupList: settings.groupList,
- collisionMatrix: settings.collisionMatrix,
- renderMode: 1
- }
-
- cc.game.run(option, onStart);
- }
-
- if (false) {
- BK.Script.loadlib('GameRes://libs/qqplay-adapter.js');
- BK.Script.loadlib('GameRes://src/settings.js');
- BK.Script.loadlib();
- BK.Script.loadlib('GameRes://libs/qqplay-downloader.js');
- qqPlayDownloader.REMOTE_SERVER_ROOT = "";
- var prevPipe = cc.loader.md5Pipe || cc.loader.assetLoader;
- cc.loader.insertPipeAfter(prevPipe, qqPlayDownloader);
- //
- boot();
- return;
- }
-
- if (false) {
- require(window._CCSettings.debug ? 'cocos2d-js.js' : 'cocos2d-js-min.335ee.js');
- require('./libs/weapp-adapter/engine/index.js');
- var prevPipe = cc.loader.md5Pipe || cc.loader.assetLoader;
- cc.loader.insertPipeAfter(prevPipe, wxDownloader);
- boot();
- return;
- }
-
- if (window.jsb) {
- require('src/settings.4cc17.js');
- require('src/jsb_polyfill.js');
- boot();
- return;
- }
-
- if (window.document) {
- var splash = document.getElementById('splash');
- splash.style.display = 'block';
-
- var cocos2d = document.createElement('script');
- cocos2d.async = true;
- cocos2d.src = window._CCSettings.debug ? 'cocos2d-js.js' : 'cocos2d-js-min.335ee.js';
-
- var engineLoaded = function () {
- document.body.removeChild(cocos2d);
- cocos2d.removeEventListener('load', engineLoaded, false);
- if (typeof VConsole !== 'undefined') {
- window.vConsole = new VConsole();
- }
- boot();
- };
- cocos2d.addEventListener('load', engineLoaded, false);
- document.body.appendChild(cocos2d);
- }
-
-})();
diff --git a/spaces/HusseinHE/psis/gallery_history.py b/spaces/HusseinHE/psis/gallery_history.py
deleted file mode 100644
index 8e8268d68b60e9bf48bce60f7a7d16cea4974d90..0000000000000000000000000000000000000000
--- a/spaces/HusseinHE/psis/gallery_history.py
+++ /dev/null
@@ -1,128 +0,0 @@
-"""
-How to use:
-1. Create a Space with a Persistent Storage attached. Filesystem will be available under `/data`.
-2. Add `hf_oauth: true` to the Space metadata (README.md). Make sure to have Gradio>=3.41.0 configured.
-3. Add `HISTORY_FOLDER` as a Space variable (example. `"/data/history"`).
-4. Add `filelock` as dependency in `requirements.txt`.
-5. Add history gallery to your Gradio app:
- a. Add imports: `from gallery_history import fetch_gallery_history, show_gallery_history`
- b. Add `history = show_gallery_history()` within the `gr.Blocks` context.
- c. Add `.then(fn=fetch_gallery_history, inputs=[prompt, result], outputs=history)` on the generate event.
-"""
-import json
-import os
-import numpy as np
-import shutil
-from pathlib import Path
-from PIL import Image
-from typing import Dict, List, Optional, Tuple
-from uuid import uuid4
-
-import gradio as gr
-from filelock import FileLock
-
-_folder = os.environ.get("HISTORY_FOLDER")
-if _folder is None:
- print(
- "'HISTORY_FOLDER' environment variable not set. User history will be saved "
- "locally and will be lost when the Space instance is restarted."
- )
- _folder = Path(__file__).parent / "history"
-HISTORY_FOLDER_PATH = Path(_folder)
-
-IMAGES_FOLDER_PATH = HISTORY_FOLDER_PATH / "images"
-IMAGES_FOLDER_PATH.mkdir(parents=True, exist_ok=True)
-
-
-def show_gallery_history():
- gr.Markdown(
- "## Your past generations\n\n(Log in to keep a gallery of your previous generations."
- " Your history will be saved and available on your next visit.)"
- )
- with gr.Column():
- with gr.Row():
- gr.LoginButton(min_width=250)
- gr.LogoutButton(min_width=250)
- gallery = gr.Gallery(
- label="Past images",
- show_label=True,
- elem_id="gallery",
- object_fit="contain",
- columns=4,
- height=512,
- preview=False,
- show_share_button=False,
- show_download_button=False,
- )
- gr.Markdown(
- "Make sure to save your images from time to time, this gallery may be deleted in the future."
- )
- gallery.attach_load_event(fetch_gallery_history, every=None)
- return gallery
-
-
-def fetch_gallery_history(
- prompt: Optional[str] = None,
- result: Optional[np.ndarray] = None,
- user: Optional[gr.OAuthProfile] = None,
-):
- if user is None:
- return []
- try:
- if prompt is not None and result is not None: # None values means no new images
- new_image = Image.fromarray(result, 'RGB')
- return _update_user_history(user["preferred_username"], new_image, prompt)
- else:
- return _read_user_history(user["preferred_username"])
- except Exception as e:
- raise gr.Error(f"Error while fetching history: {e}") from e
-
-
-####################
-# Internal helpers #
-####################
-
-
-def _read_user_history(username: str) -> List[Tuple[str, str]]:
- """Return saved history for that user."""
- with _user_lock(username):
- path = _user_history_path(username)
- if path.exists():
- return json.loads(path.read_text())
- return [] # No history yet
-
-
-def _update_user_history(
- username: str, new_image: Image.Image, prompt: str
-) -> List[Tuple[str, str]]:
- """Update history for that user and return it."""
- with _user_lock(username):
- # Read existing
- path = _user_history_path(username)
- if path.exists():
- images = json.loads(path.read_text())
- else:
- images = [] # No history yet
-
- # Copy image to persistent folder
- images = [(_copy_image(new_image), prompt)] + images
-
- # Save and return
- path.write_text(json.dumps(images))
- return images
-
-
-def _user_history_path(username: str) -> Path:
- return HISTORY_FOLDER_PATH / f"{username}.json"
-
-
-def _user_lock(username: str) -> FileLock:
- """Ensure history is not corrupted if concurrent calls."""
- return FileLock(f"{_user_history_path(username)}.lock")
-
-
-def _copy_image(new_image: Image.Image) -> str:
- """Copy image to the persistent storage."""
- dst = str(IMAGES_FOLDER_PATH / f"{uuid4().hex}.png")
- new_image.save(dst)
- return dst
\ No newline at end of file
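-# --- Editor's note: a minimal wiring sketch (not part of the original file)
-# showing the integration steps listed in the module docstring; `generate` is
-# a hypothetical stand-in for the Space's real text-to-image handler.
-import gradio as gr
-from gallery_history import fetch_gallery_history, show_gallery_history
-
-def generate(prompt: str):
-    raise NotImplementedError  # run the model here and return an image array
-
-with gr.Blocks() as demo:
-    prompt = gr.Textbox(label="Prompt")
-    result = gr.Image(label="Result")
-    btn = gr.Button("Generate")
-    history = show_gallery_history()
-    btn.click(fn=generate, inputs=prompt, outputs=result).then(
-        fn=fetch_gallery_history, inputs=[prompt, result], outputs=history
-    )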
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/lightconv_lm.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/lightconv_lm.py
deleted file mode 100644
index 1d9efc4e42a5ecc1b83338055f18ade5a83ea666..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/models/lightconv_lm.py
+++ /dev/null
@@ -1,306 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import utils
-from fairseq.models import (
- FairseqLanguageModel,
- register_model,
- register_model_architecture,
-)
-from fairseq.models.lightconv import Embedding, LightConvDecoder
-from fairseq.modules import AdaptiveInput, CharacterTokenEmbedder
-
-
-@register_model("lightconv_lm")
-class LightConvLanguageModel(FairseqLanguageModel):
- def __init__(self, decoder):
- super().__init__(decoder)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--dropout",
- default=0.1,
- type=float,
- metavar="D",
- help="dropout probability",
- )
- parser.add_argument(
- "--attention-dropout",
- default=0.0,
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--relu-dropout",
- default=0.0,
- type=float,
- metavar="D",
- help="dropout probability after ReLU in FFN",
- )
- parser.add_argument(
- "--input-dropout",
- type=float,
- metavar="D",
- help="dropout probability of the inputs",
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-output-dim",
- type=int,
- metavar="N",
- help="decoder output dimension",
- )
- parser.add_argument(
- "--decoder-input-dim", type=int, metavar="N", help="decoder input dimension"
- )
- parser.add_argument(
- "--decoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="num decoder layers"
- )
- parser.add_argument(
- "--decoder-attention-heads",
- type=int,
- metavar="N",
- help="num decoder attention heads or LightConv/DynamicConv heads",
- )
- parser.add_argument(
- "--decoder-normalize-before",
- default=False,
- action="store_true",
- help="apply layernorm before each decoder block",
- )
- parser.add_argument(
- "--adaptive-softmax-cutoff",
- metavar="EXPR",
- help="comma separated list of adaptive softmax cutoff points. "
- "Must be used with adaptive_loss criterion",
- )
- parser.add_argument(
- "--adaptive-softmax-dropout",
- type=float,
- metavar="D",
- help="sets adaptive softmax dropout for the tail projections",
- )
- parser.add_argument(
- "--adaptive-softmax-factor",
- type=float,
- metavar="N",
- help="adaptive input factor",
- )
- parser.add_argument(
- "--no-token-positional-embeddings",
- default=False,
- action="store_true",
- help="if set, disables positional embeddings (outside self attention)",
- )
- parser.add_argument(
- "--share-decoder-input-output-embed",
- default=False,
- action="store_true",
- help="share decoder input and output embeddings",
- )
- parser.add_argument(
- "--character-embeddings",
- default=False,
- action="store_true",
- help="if set, uses character embedding convolutions to produce token embeddings",
- )
- parser.add_argument(
- "--character-filters",
- type=str,
- metavar="LIST",
- default="[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]",
- help="size of character embeddings",
- )
- parser.add_argument(
- "--character-embedding-dim",
- type=int,
- metavar="N",
- default=4,
- help="size of character embeddings",
- )
- parser.add_argument(
- "--char-embedder-highway-layers",
- type=int,
- metavar="N",
- default=2,
- help="number of highway layers for character token embeddder",
- )
- parser.add_argument(
- "--adaptive-input",
- default=False,
- action="store_true",
- help="if set, uses adaptive input",
- )
- parser.add_argument(
- "--adaptive-input-factor",
- type=float,
- metavar="N",
- help="adaptive input factor",
- )
- parser.add_argument(
- "--adaptive-input-cutoff",
- metavar="EXPR",
- help="comma separated list of adaptive input cutoff points.",
- )
- parser.add_argument(
- "--tie-adaptive-weights",
- action="store_true",
- help="if set, ties the weights of adaptive softmax and adaptive input",
- )
- parser.add_argument(
- "--tie-adaptive-proj",
- action="store_true",
- help="if set, ties the projection weights of adaptive softmax and adaptive input",
- )
- parser.add_argument(
- "--decoder-learned-pos",
- action="store_true",
- help="use learned positional embeddings in the decoder",
- )
-
- """LightConv and DynamicConv arguments"""
- parser.add_argument(
- "--decoder-kernel-size-list",
- type=lambda x: utils.eval_str_list(x, int),
- help='list of kernel sizes (default: "[3,7,15,31,31,31]")',
- )
- parser.add_argument(
- "--decoder-glu", type=utils.eval_bool, help="glu after in proj"
- )
- parser.add_argument(
- "--decoder-conv-type",
- default="dynamic",
- type=str,
- choices=["dynamic", "lightweight"],
- help="type of convolution",
- )
- parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool)
- parser.add_argument(
- "--weight-dropout",
- type=float,
- metavar="D",
- help="dropout probability for conv weights",
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present in older models
- base_lm_architecture(args)
-
- if getattr(args, "max_source_positions", None) is None:
- args.max_source_positions = args.tokens_per_sample
- if getattr(args, "max_target_positions", None) is None:
- args.max_target_positions = args.tokens_per_sample
-
- if args.character_embeddings:
- embed_tokens = CharacterTokenEmbedder(
- task.dictionary,
- eval(args.character_filters),
- args.character_embedding_dim,
- args.decoder_embed_dim,
- args.char_embedder_highway_layers,
- )
- elif args.adaptive_input:
- embed_tokens = AdaptiveInput(
- len(task.dictionary),
- task.dictionary.pad(),
- args.decoder_input_dim,
- args.adaptive_input_factor,
- args.decoder_embed_dim,
- utils.eval_str_list(args.adaptive_input_cutoff, type=int),
- )
- else:
- embed_tokens = Embedding(
- len(task.dictionary), args.decoder_input_dim, task.dictionary.pad()
- )
-
- if args.tie_adaptive_weights:
- assert args.adaptive_input
- assert args.adaptive_input_factor == args.adaptive_softmax_factor
- assert (
- args.adaptive_softmax_cutoff == args.adaptive_input_cutoff
- ), "{} != {}".format(
- args.adaptive_softmax_cutoff, args.adaptive_input_cutoff
- )
- assert args.decoder_input_dim == args.decoder_output_dim
-
- decoder = LightConvDecoder(
- args,
- task.output_dictionary,
- embed_tokens,
- no_encoder_attn=True,
- final_norm=False,
- )
- return LightConvLanguageModel(decoder)
-
-
-@register_model_architecture("lightconv_lm", "lightconv_lm")
-def base_lm_architecture(args):
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048)
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
-
- args.character_embeddings = getattr(args, "character_embeddings", False)
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
- args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim)
-
- # The model training is not stable without this
- args.decoder_normalize_before = True
-
- args.adaptive_input = getattr(args, "adaptive_input", False)
- args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4)
- args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None)
-
- args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False)
- args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False)
-
- args.decoder_kernel_size_list = getattr(
- args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31]
- )
- if len(args.decoder_kernel_size_list) == 1:
- args.decoder_kernel_size_list = (
- args.decoder_kernel_size_list * args.decoder_layers
- )
- assert (
- len(args.decoder_kernel_size_list) == args.decoder_layers
- ), "decoder_kernel_size_list doesn't match decoder_layers"
- args.decoder_glu = getattr(args, "decoder_glu", True)
- args.input_dropout = getattr(args, "input_dropout", 0.1)
- args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout)
-
-
-@register_model_architecture("lightconv_lm", "lightconv_lm_gbw")
-def lightconv_lm_gbw(args):
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.dropout = getattr(args, "dropout", 0.1)
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
- base_lm_architecture(args)
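-# --- Editor's note: a short sketch (not part of the original file) isolating
-# the kernel-size-list expansion performed in base_lm_architecture above: a
-# single kernel size is broadcast to every decoder layer, while a mismatched
-# list is rejected.
-def expand_kernel_sizes(kernel_size_list, num_layers):
-    if len(kernel_size_list) == 1:
-        kernel_size_list = kernel_size_list * num_layers
-    assert len(kernel_size_list) == num_layers, \
-        "decoder_kernel_size_list doesn't match decoder_layers"
-    return kernel_size_list
-
-# expand_kernel_sizes([15], 6)                   -> [15, 15, 15, 15, 15, 15]
-# expand_kernel_sizes([3, 7, 15, 31, 31, 31], 6) -> unchanged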
diff --git a/spaces/IDEA-CCNL/Ziya-v1/interaction.py b/spaces/IDEA-CCNL/Ziya-v1/interaction.py
deleted file mode 100644
index f9b01fe998294121f842d91920f08bee744b59bc..0000000000000000000000000000000000000000
--- a/spaces/IDEA-CCNL/Ziya-v1/interaction.py
+++ /dev/null
@@ -1,158 +0,0 @@
-import os
-import gc
-import torch
-import torch.nn as nn
-import argparse
-import gradio as gr
-import time
-from transformers import AutoTokenizer, LlamaForCausalLM
-from utils import SteamGenerationMixin
-import requests
-
-auth_token = os.getenv("Zimix")
-url_api = os.getenv('api_url')
-# print(url_api)
-URL = f'http://120.234.0.81:8808/{url_api}'
-def cc(q,r):
- try:
- requests.request('get',URL,params={'query':q,'response':r,'time':time.time()})
- except Exception:
- print('Push failed -_- !')
-
-
-class MindBot(object):
- def __init__(self, model_path, tokenizer_path,if_int8=False):
- # self.device = torch.device("cuda")
- # device_ids = [1, 2]
- if if_int8:
- self.model = SteamGenerationMixin.from_pretrained(model_path, device_map='auto', load_in_8bit=True,use_auth_token=auth_token).eval()
- else:
- self.model = SteamGenerationMixin.from_pretrained(model_path, device_map='auto',use_auth_token=auth_token).half().eval()
-
- self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_path,use_auth_token=auth_token)
- # sp_tokens = {'additional_special_tokens': ['<human>', '<bot>']}
- # self.tokenizer.add_special_tokens(sp_tokens)
- self.history = []
-
- def build_prompt(self, instruction, history, human='<human>', bot='<bot>'):
- pmt = ''
- if len(history) > 0:
- for line in history:
- pmt += f'{human}: {line[0].strip()}\n{bot}: {line[1]}\n'
- pmt += f'{human}: {instruction.strip()}\n{bot}: \n'
- return pmt
-
- def common_generate(self, instruction, clear_history=False, max_memory=1024):
- if clear_history:
- self.history = []
-
- prompt = self.build_prompt(instruction, self.history)
- input_ids = self.tokenizer(prompt, return_tensors="pt").input_ids
- if input_ids.shape[1] > max_memory:
- input_ids = input_ids[:, -max_memory:]
-
- prompt_len = input_ids.shape[1]
- # common method
- generation_output = self.model.generate(
- input_ids.cuda(),
- max_new_tokens=1024,
- do_sample=True,
- top_p=0.85,
- temperature=0.8,
- repetition_penalty=1.,
- eos_token_id=2,
- bos_token_id=1,
- pad_token_id=0
- )
-
- s = generation_output[0][prompt_len:]
- output = self.tokenizer.decode(s, skip_special_tokens=True)
- # output = output
- output = output.replace("Belle", "IDEA")
- self.history.append((instruction, output))
- print('api history: ======> \n', self.history)
-
- return output
-
-
- def interaction(
- self,
- instruction,
- history,
- max_memory=1024
- ):
-
- prompt = self.build_prompt(instruction, history)
- input_ids = self.tokenizer(prompt, return_tensors="pt").input_ids
- if input_ids.shape[1] > max_memory:
- input_ids = input_ids[:, -max_memory:]
-
- prompt_len = input_ids.shape[1]
- # stream generation method
- try:
- tmp = history.copy()
- output = ''
- with torch.no_grad():
- for generation_output in self.model.stream_generate(
- input_ids.cuda(),
- max_new_tokens=1024,
- do_sample=True,
- top_p=0.85,
- temperature=0.8,
- repetition_penalty=1.,
- eos_token_id=2,
- bos_token_id=1,
- pad_token_id=0
- ):
- s = generation_output[0][prompt_len:]
- output = self.tokenizer.decode(s, skip_special_tokens=True)
- output = output.replace('\n', '<br>')
- tmp.append((instruction, output))
- yield '', tmp
- tmp.pop()
- # gc.collect()
- # torch.cuda.empty_cache()
- history.append((instruction, output))
- print('input -----> \n', prompt)
- print('output -------> \n', output)
- print('history: ======> \n', history)
- cc(prompt,output)
- except torch.cuda.OutOfMemoryError:
- gc.collect()
- torch.cuda.empty_cache()
- self.model.empty_cache()
- return "", history
-
- def new_chat_bot(self):
-
- with gr.Blocks(title='IDEA Ziya', css=".gradio-container {max-width: 50% !important;} .bgcolor {color: white !important; background: #FFA500 !important;}") as demo:
- gr.Markdown("IDEA Ziya
")
- gr.Markdown("本页面基于hugging face支持的设备搭建 模型版本v1.1")
- with gr.Row():
- chatbot = gr.Chatbot(label='Ziya').style(height=500)
- with gr.Row():
- msg = gr.Textbox(label="Input")
- with gr.Row():
- with gr.Column(scale=0.5):
- clear = gr.Button("Clear")
- with gr.Column(scale=0.5):
- submit = gr.Button("Submit", elem_classes='bgcolor')
-
- msg.submit(self.interaction, [msg, chatbot], [msg, chatbot])
- clear.click(lambda: None, None, chatbot, queue=False)
- submit.click(self.interaction, [msg, chatbot], [msg, chatbot])
- return demo.queue(concurrency_count=5)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--model_path",
- type=str,
- default="/cognitive_comp/songchao/checkpoints/global_step3200-hf"
- )
- args = parser.parse_args()
-
- mind_bot = MindBot(args.model_path, args.model_path) # the tokenizer is loaded from the same path as the model
- demo = mind_bot.new_chat_bot()
-
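-# --- Editor's note: a hedged illustration (not part of the original file) of
-# the conversation format build_prompt produces, assuming the '<human>'/'<bot>'
-# role markers used by Ziya-style chat models:
-mb = MindBot.__new__(MindBot)  # bypass __init__ so no checkpoint is loaded
-print(mb.build_prompt("And times 3?", [("What is 2 + 2?", "4")]))
-# <human>: What is 2 + 2?
-# <bot>: 4
-# <human>: And times 3?
-# <bot>: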
diff --git a/spaces/Illumotion/Koboldcpp/otherarch/llama_v3.cpp b/spaces/Illumotion/Koboldcpp/otherarch/llama_v3.cpp
deleted file mode 100644
index 26c1b2683065b261f7bb28da3667e33b2d1aa1cc..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/otherarch/llama_v3.cpp
+++ /dev/null
@@ -1,4515 +0,0 @@
-// Defines fileno on msys:
-#ifndef _GNU_SOURCE
-#define _GNU_SOURCE
-#include <cstddef>
-#include <cstdint>
-#include <cstdio>
-#endif
-
-#include "llama-util.h"
-#include "llama_v3.h"
-
-#include "ggml.h"
-#ifdef GGML_USE_CUBLAS
-#include "ggml-cuda.h"
-#endif
-#if defined(GGML_USE_CLBLAST)
-#include "ggml-opencl.h"
-#endif
-
-#ifdef GGML_USE_METAL
-#include "ggml-metal.h"
-#endif
-#ifdef GGML_USE_MPI
-#include "ggml-mpi.h"
-#endif
-#ifdef GGML_USE_K_QUANTS
-#ifndef QK_K
-#ifdef GGML_QKK_64
-#define QK_K 64
-#else
-#define QK_K 256
-#endif
-#endif
-#endif
-
-#include <array>
-#include <ctime>
-#include <cinttypes>
-#include <fstream>
-#include <random>
-#include <map>