diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Audition CC 2019 Crack With Activation Key Tips and Tricks to Enhance Your Audio Projects.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Audition CC 2019 Crack With Activation Key Tips and Tricks to Enhance Your Audio Projects.md
deleted file mode 100644
index 088000fc0a8f30ab8566e9b97672da155c7ced5a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Audition CC 2019 Crack With Activation Key Tips and Tricks to Enhance Your Audio Projects.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Adobe Audition CC 2019 Crack With Activation Key
-
If you are looking for a powerful and professional audio editing software, you might have heard of Adobe Audition CC 2019. This is one of the most popular and widely used applications for recording, mixing, mastering, and restoring audio. However, you might also know that Adobe Audition CC 2019 is not a free software. You need to pay a monthly or yearly subscription fee to use it. That's why some people look for a crack for Adobe Audition CC 2019, which is a way to bypass the activation process and use the software without paying anything. But is it worth it? In this article, we will tell you everything you need to know about Adobe Audition CC 2019 crack, including its features, pros and cons, and how to download and install it.
Adobe Audition CC 2019 is the latest version of Adobe's audio editing software. It is part of the Adobe Creative Cloud suite, which means you can access it online or offline, and sync your projects across different devices. Adobe Audition CC 2019 allows you to create, edit, mix, and enhance audio for various purposes, such as music production, podcasting, video editing, radio broadcasting, and more. It has a user-friendly interface that lets you work with multiple tracks, clips, and effects in a flexible and intuitive way. It also has a rich collection of tools and features that can help you improve the quality and clarity of your audio, such as noise reduction, spectral editing, pitch correction, compression, EQ, reverb, and more.
-
Why do you need a crack for Adobe Audition CC 2019?
-
As mentioned earlier, Adobe Audition CC 2019 is not a free software. You need to pay a subscription fee to use it. The fee varies depending on the plan you choose, but it can range from $20.99 to $52.99 per month. If you want to use it for a longer period of time, you might end up spending a lot of money. That's why some people look for a crack for Adobe Audition CC 2019. A crack is a modified version of the software that bypasses the activation process and lets you use it without paying anything. By using a crack, you can save money and enjoy all the features of Adobe Audition CC 2019 without any limitations.
-
How to download and install Adobe Audition CC 2019 crack?
-
If you want to download and install Adobe Audition CC 2019 crack, you need to follow these steps:
-
-
Go to a reliable website that offers the crack file. You can search online for "Adobe Audition CC 2019 crack" or "Adobe Audition CC 2019 activation key" and find several results. However, be careful not to download from suspicious or untrusted sources that might contain malware or viruses.
-
Download the crack file to your computer. It might be in a zip or rar format, so you need to extract it using a software like WinRAR or 7-Zip.
-
Turn off your internet connection and antivirus software temporarily. This is to prevent any interference or detection from Adobe or your system.
-
Run the setup file of Adobe Audition CC 2019 and follow the instructions to install it on your computer.
-
Copy the crack file or the activation key from the folder where you extracted it and paste it into the installation directory of Adobe Audition CC 2019. This is usually located in C:\Program Files\Adobe\Adobe Audition CC 2019.
-
Launch Adobe Audition CC 2019 and enjoy using it without any restrictions.
-
-
Features of Adobe Audition CC 2019 Crack
-
Multitrack editing and mixing
-
One of the main features of Adobe Audition CC 2019 crack is that it allows you to work with multiple tracks and clips in a multitrack session. You can record live audio or import audio files from different sources and arrange them on separate tracks. You can also edit each track individually or as a group using various tools such as cut, copy, paste, trim, split, fade, crossfade, mute, solo, etc. You can also mix your tracks using different effects such as volume automation, pan automation, EQ automation, send effects, insert effects, etc. You can also use buses and submixes to route your audio signals more efficiently.
-
Audio restoration and enhancement
Another feature of Adobe Audition CC 2019 crack is that it allows you to restore and enhance your audio quality using various tools and features. For example,
You can use the noise reduction effect to remove unwanted background noise such as hiss, hum, clicks, pops, etc.
You can use the spectral editing mode to view your audio in a frequency-based display and make precise edits using tools such as lasso tool, brush tool, spot healing tool, etc.
You can use the pitch correction effect to adjust the pitch of your audio automatically or manually using tools such as auto mode, graph mode, varispeed mode, etc.
You can use the compression effect to reduce the dynamic range of your audio and make it more consistent in volume level.
You can use the EQ effect to adjust the frequency balance of your audio using tools such as parametric EQ, graphic EQ, filter bank, etc.
You can use the reverb effect to add depth and space to your audio using tools such as convolution reverb, studio reverb, reverb presets, etc.
-
Sound design and effects
A third feature of Adobe Audition CC 2019 crack is that it allows you to design and create sound effects using various tools and features. For example,
You can use the generate menu to create synthetic sounds such as tones, noise, DTMF tones, etc.
You can use the effects menu to apply different effects such as distortion, delay, modulation, flanger, chorus, phaser, etc.
You can use the favorites menu to apply preset combinations of effects such as telephone voice, radio voice, robot voice, etc.
You can use the amplitude statistics panel to analyze your audio in terms of peak amplitude, average amplitude, RMS amplitude, etc.
You can use the match loudness panel to adjust your audio levels according to different standards such as LUFS, LKFS, dBFS, etc.
-
Podcasting and narration
A fourth feature of Adobe Audition CC 2019 crack is that it allows you to create podcasts and narrations using various tools and features. For example,
-
-
You can use the essential sound panel to quickly and easily adjust audio parameters such as loudness, clarity, dynamics, and tone using sliders and presets.
You can use the podcast template to start a new multitrack session with predefined tracks and settings for podcasting. You can also customize the template according to your needs.
You can use the punch-and-roll recording mode to record narration with pre-roll and post-roll playback. You can also fix mistakes on the fly using keyboard shortcuts.
You can use the auto-ducking feature to automatically lower the volume of background music and sound effects when speech is detected. You can also adjust the sensitivity and fade duration of auto-ducking.
-
Integration with other Adobe products
-
A fifth feature of Adobe Audition CC 2019 crack is that it allows you to integrate it with other Adobe products, such as Premiere Pro, After Effects, Media Encoder, Photoshop, Illustrator, and more. You can easily import and export audio files between these applications using the dynamic link feature. You can also use the essential graphics panel to create and edit motion graphics templates for your videos. You can also use the Adobe Stock service to access millions of royalty-free assets, such as music, sound effects, images, videos, and more.
-
Pros and Cons of Adobe Audition CC 2019 Crack
-
Pros
-
Some of the advantages of using Adobe Audition CC 2019 crack are:
-
-
You can access all the features and tools of Adobe Audition CC 2019 without paying anything.
-
You do not need to subscribe or register to use the software.
-
You can use the software on both Windows and Mac OS devices.
-
-
Cons
-
Some of the disadvantages of using Adobe Audition CC 2019 crack are:
-
-
You are violating the terms and conditions of Adobe by using a cracked version of their software.
-
You are exposing your computer and data to potential risks of malware and viruses that might be hidden in the crack file.
-
You will not receive any updates or technical support from Adobe for the software.
-
-
Conclusion
-
In conclusion, Adobe Audition CC 2019 crack is a way to use Adobe's audio editing software without paying anything. It has many features and tools that can help you create, edit, mix, and enhance audio for various purposes. However, it also has many drawbacks and risks that you should be aware of before using it. It is illegal and unethical to use a cracked version of software that belongs to another company. It is also unsafe and unreliable to download and install a crack file from unknown sources that might contain malware or viruses. And it is unwise to use software that receives no updates or technical support from its developers. Therefore, we do not recommend using Adobe Audition CC 2019 crack. Instead, we suggest that you buy a legitimate copy of Adobe Audition CC 2019 from Adobe's official website or use a free or cheaper alternative audio editing program.
-
FAQs
-
Here are some frequently asked questions about Adobe Audition CC 2019 crack:
-
-
Q: Is Adobe Audition CC 2019 crack safe to use? A: No, it is not safe to use. It might contain malware or viruses that can harm your computer and data. It might also cause errors or crashes in your system.
-
Q: Is Adobe Audition CC 2019 crack legal to use? A: No, it is not legal to use. It violates the terms and conditions of Adobe by using a modified version of their software without their permission. It might also infringe the intellectual property rights of Adobe and other third parties.
-
Q: Is Adobe Audition CC 2019 crack worth it? A: No, it is not worth it. It might save you some money in the short term, but it will cost you more in the long term. You will miss out on the updates and technical support from Adobe for the software. You will also risk losing your data or facing legal consequences for using a cracked software.
-
Q: Where can I download Adobe Audition CC 2019 crack? A: We do not recommend downloading Adobe Audition CC 2019 crack from any source. It is unsafe and illegal to do so. If you want to use Adobe Audition CC 2019, you should buy a legitimate copy from their official website or use an alternative free or cheaper audio editing software.
-
Q: How can I activate Adobe Audition CC 2019 without a crack? A: You can activate Adobe Audition CC 2019 without a crack by following these steps:
-
-
Buy a subscription plan for Adobe Audition CC 2019 from their official website.
-
Download and install the software on your computer.
-
Sign in with your Adobe ID and password.
-
Enter your payment details and confirm your purchase.
-
Enjoy using Adobe Audition CC 2019 with all its features and benefits.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Deep English Course Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Deep English Course Torrent.md
deleted file mode 100644
index 92814b6317fe26552cb042bf89c4ed011e437ff3..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Deep English Course Torrent.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
How to Download Deep English Course Torrent for Free
-
If you want to improve your English listening and speaking skills, you might be interested in the Deep English course. This course is based on the Deep English method of language learning, which uses interesting stories about amazing people to help you speak more fluently and confidently.
But how can you get access to this course without paying anything? One way is to download the Deep English course torrent for free. A torrent is a file that contains information about other files that you can download from other users on the internet. By using a torrent client software, you can download the files you want from the torrent.
-
However, before you download the Deep English course torrent, you should be aware of some risks and disadvantages. First of all, downloading torrents is illegal in some countries and regions, and you might face legal consequences if you are caught. Second, downloading torrents can expose your computer to viruses and malware that can harm your system or steal your personal information. Third, downloading torrents can be slow and unreliable, depending on the availability and speed of other users who are sharing the files.
-
Therefore, we do not recommend downloading the Deep English course torrent for free. Instead, we suggest that you visit the official website of Deep English and sign up for their free 7-day English course. This way, you can get a taste of their method and see if it works for you. You can also learn more about their True Stories English Fluency Course, which is designed to improve your listening and speaking skills with true stories about amazing people.
-
So don't waste your time and risk your security by downloading the Deep English course torrent for free. Go to deepenglish.com and start learning English with interesting stories today!
-
-
-
But what are the benefits of the Deep English course? Why should you choose it over other English courses? Here are some of the reasons why Deep English can help you achieve your English fluency goals.
-
-
It forces you to think deeply. Learning a second language with Deep English challenges you to think in new ways, express concepts in different words, and solve problems from new perspectives. This improves your cognitive skills and your creativity.
-
It uses real English. Deep English lessons are based on real stories about real people. You will learn how to understand and use natural English expressions, slang, idioms, and conversation techniques that native speakers use every day.
-
It stimulates your mind. Deep English lessons are not boring or repetitive. They are interesting and engaging, covering topics that make you curious and inspired. You will learn about amazing people, places, events, and ideas that will expand your knowledge and worldview.
-
It helps you speak more confidently. Deep English lessons are designed to help you speak more fluently and confidently. You will practice speaking with our speaking story lessons, our speak-out-loud AI chatbot, and our live discussions on Zoom. You will also get feedback and support from our teachers and community.
-
-
So if you are looking for a course that can help you improve your English listening and speaking skills in a fun and effective way, you should give Deep English a try. You can start with their free 7-day English course and see if it works for you. You can also check out their True Stories English Fluency Course, which is their premium course that offers more features and benefits.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 6.02 (64-bit) for Free and Compress Your Files Easily.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 6.02 (64-bit) for Free and Compress Your Files Easily.md
deleted file mode 100644
index 014a1b1e15bc54b6a02eba2233bd2cb596c65163..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 6.02 (64-bit) for Free and Compress Your Files Easily.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
WinRAR 6.02 (64-bit): How to Download and Install the Latest Version of the Popular Compression Tool
-
WinRAR is the 64-bit Windows version of RAR Archiver, the powerful compression tool that can back up your data, reduce the size of email attachments, decompress RAR, ZIP, and other files downloaded from the Internet, and create new archives in the RAR and ZIP file formats. WinRAR 6.02 (64-bit) is the latest version of WinRAR, released on June 14th, 2021. It offers several improvements and bug fixes over previous versions, such as:
-
-
Added extraction support for GZIP archives with optional header checksum field.
-
Added extraction support for RAR5 archives with encrypted file names.
-
Improved performance when processing a large number of small archives in Windows 10.
-
Fixed a security vulnerability when processing malicious RAR archives.
-
Fixed a compatibility issue with Windows 11.
-
-
In this article, we will show you how to download and install WinRAR 6.02 (64-bit) on your Windows computer.
Step 1: Download WinRAR 6.02 (64-bit) from the Official Website or FileHorse.com
-
The first step to download and install WinRAR 6.02 (64-bit) is to download the setup file from the official website or FileHorse.com. The official website is https://www.win-rar.com/download.html, where you can select your language and platform and click on the "Download WinRAR" button. Alternatively, you can download WinRAR 6.02 (64-bit) from FileHorse.com, a trusted website that offers free software downloads. You can click on the "Download Now" button or use this direct link: https://www.filehorse.com/download-winrar-64/62528/. The setup file is about 3.2 MB in size and has a .exe extension.
-
Step 2: Run the Setup File and Follow the Instructions
-
The next step to download and install WinRAR 6.02 (64-bit) is to run the setup file and follow the instructions. You can double-click on the setup file or right-click on it and choose "Run as administrator" from the context menu. You may see a User Account Control prompt asking you to confirm if you want to allow the app to make changes to your device. Click on "Yes" to proceed. You will then see a welcome screen with the WinRAR logo and version number. Click on "Install" to start the installation process.
-
You will then see a screen where you can choose the destination folder for the WinRAR installation. The default folder is C:\Program Files\WinRAR, but you can change it by clicking on the "Browse" button and selecting another folder. You can also choose whether to create a desktop icon or a start menu icon, and whether to associate WinRAR with RAR and ZIP files. Check or uncheck the boxes according to your preferences and click on "OK" to continue.
-
You will then see a screen where you can choose which interface languages you want to install for WinRAR. The default language is English, but you can select other languages from the list by checking or unchecking the boxes. You can also choose whether to install WinRAR themes, which are optional graphical skins for WinRAR interface. Click on "OK" to continue.
-
You will then see a screen where you can choose which user interface options you want to use for WinRAR. You can choose between shell integration, which lets you access WinRAR functions from the Windows Explorer context menu, and the classic interface, which lets you use WinRAR as a standalone application with its own window and menu bar. You can also choose whether to use the wizard interface, which guides you through basic compression and extraction tasks, or the command-line interface, which gives you access to advanced options and parameters for WinRAR commands. Check or uncheck the boxes according to your preferences and click on "OK" to continue.
-
You will then see a screen where you can review your installation settings and make any changes if needed. You can also choose whether to read the WinRAR license agreement, view the WinRAR help file, or run WinRAR right after installation. Click on "Done" to complete the installation.
- ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download [Extra Quality] Pashto Phonetic Keyboard For Windows 7 33.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download [Extra Quality] Pashto Phonetic Keyboard For Windows 7 33.md
deleted file mode 100644
index 7bc7a3f17dcc8e1644452409ff20c6ed170efb11..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download [Extra Quality] Pashto Phonetic Keyboard For Windows 7 33.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Download Pashto Phonetic Keyboard For Windows 7 33
-
-by SL Hotel · 2011 — The existing on-screen Urdu keyboard is a replica of the Microsoft Windows QWERTY keyboard. For mobile phones, multi-tap T9 replica keypads are ... 1fdad05405
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us Apk Eski Srm Farklar - Hangi Srm Semelisin?.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us Apk Eski Srm Farklar - Hangi Srm Semelisin?.md
deleted file mode 100644
index 86cc15000258ac664179862dff93b301ae0d6618..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us Apk Eski Srm Farklar - Hangi Srm Semelisin?.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
Among Us APK Old Version: How to Download and Play It?
-
Among Us is an online multiplayer game that has become very popular recently. If you want to play it on your Android device, you can download it for free from the Google Play Store. However, some players prefer its older versions and look for APK files to get them. So what is an old version of the Among Us APK, why do people look for it, and how do you download and play it? In this article, you will find the answers to these questions.
-
What Is Among Us?
-
Among Us is an online multiplayer social deduction game developed and published by Innersloth in 2018. In this game, while you try to get your spaceship ready for departure, there are one or two impostors among the 4-15 players. While the impostors try to destroy the ship by killing your crewmates or sabotaging it, you try to win by completing tasks or by finding the impostors and voting them out.
To play Among Us, you first need to join a game room or create one yourself. Once you have joined a room, you can customize your character, choose the game mode, and change the game settings. When the game starts, you will learn your role (crewmate or impostor). You will have different tasks depending on your role.
-
As a crewmate, you need to complete the tasks on the ship or find the impostors and vote them out. The tasks consist of simple mini-games and are located in different areas of the ship. To find the impostors, you can report bodies, call an emergency meeting, or chat with the other players. During the vote, you should accuse the impostors convincingly or defend yourself.
As an impostor, you need to kill your crewmates or sabotage the ship. To kill, you can tap a nearby player or use the vents to move between areas. To sabotage, you can press the sabotage button on the map and break different systems of the ship. When other players try to find the impostors, you have to clear yourself by lying or shifting the blame onto someone else.
-
Why Did Among Us Become So Popular?
-
Although Among Us came out in 2018, its popularity began to rise in 2020. The reason was that famous Twitch streamers and YouTubers started playing the game and brought it to millions of viewers. Another factor was that people stuck at home because of the COVID-19 pandemic chose this game as a way to socialize. Among Us became hugely popular because it is simple, fun, and ideal for playing with friends.
-
Why Do People Look for Old Versions of the Among Us APK?
-
An old version of the Among Us APK means a version of the game that is older than the current one on the Google Play Store. APK stands for Android Package Kit and is the file format of applications that run on Android devices. Players who look for old versions of the Among Us APK have several reasons for doing so. Some of them are the following:
-
Advantages of Old Versions of the Among Us APK
-
-
Players who download an old version of the Among Us APK can play a version of the game with fewer bugs and issues. Some players complain about problems they run into in the current version, such as connection issues, graphics glitches, or poor performance. These kinds of problems are less common in older versions.
-
Players who download an old version of the Among Us APK can play a version of the game that offers more features and options. Some players miss features that were removed or changed in the current version. For example, in the old version the number of impostors could go up to 3, while the current version allows at most 2. Having more impostors made the game more challenging and exciting.
-
Players who download an old version of the Among Us APK can play a version of the game that consumes fewer resources. Some players' devices may not meet the system requirements of the current version, and in that case an older version may be a better fit. Older versions use fewer resources thanks to factors such as lower resolution, fewer animations, and less detail.
-
-
Disadvantages of Old Versions of the Among Us APK
-
-
Players who download an old version of the Among Us APK may end up playing a version of the game that contains security risks. Security holes or cheats that were fixed in the current version may still exist in the old one, and you may run into them in the game. For example, you may come across cheaters who can ruin your game, kick you out of it, or steal your personal information.
-
Players who download an old version of the Among Us APK are playing an out-of-date version of the game. Features, improvements, or fixes added in the current version may be missing from the old one, so your game experience may feel incomplete or poor. For example, new maps, cosmetics, tasks, or modes may have been added in the current version, but you will not be able to use them in the old one.
-
Players who download an old version of the Among Us APK may end up with an incompatible version of the game. There can be compatibility problems between the current version and the old one. In that case, you may not be able to start the game, join game rooms, or keep your connection during a match. You may also have trouble communicating or playing with players who use the current version.
-
-
How to Download an Old Version of the Among Us APK?
-
To download and play an old version of the Among Us APK, you can follow these steps:
-
Step 1: Find the APK File from a Trusted Source
-
First, you need to find a trusted source from which you can download an old version of the Among Us APK. There are many APK download sites on the internet, but not all of them are safe. Some sites may offer infected, malicious, or fake APK files, so it is important to check the site and read the reviews before downloading the APK file. You should also check the version number and the file size you are looking for.
-
Step 2: Allow App Installation from Unknown Sources
-
Second, you need to allow app installation from unknown sources on your Android device. This permission lets you install apps from sources other than the Google Play Store. To grant it, follow these steps:
-
-
Go to Settings.
-
Tap Security or Privacy.
-
Find the Unknown Sources or Unknown Apps option and turn it on.
-
Select the source you downloaded from and allow it.
-
-
Step 3: Download and Install the APK File
-
Third, go to the source, find the APK file, and tap the download button. When the download is complete, open the file and start the installation. The installation may take a few minutes. When it is finished, you will see that the Among Us app has been installed on your device.
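If you prefer to download the APK file on a computer rather than on the phone itself, sideloading it over USB with Android's adb tool is another option. The snippet below is only a rough sketch of that route: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and the file name is a placeholder rather than a real download.

```python
import subprocess
from pathlib import Path

# Placeholder file name; replace it with the APK you actually downloaded.
APK_PATH = Path("among-us-old-version.apk")

def sideload(apk: Path) -> None:
    """Install an APK over USB via adb; -r reinstalls the app while keeping its data."""
    if not apk.is_file():
        raise FileNotFoundError(f"APK not found: {apk}")
    # Requires adb on PATH and USB debugging enabled on the connected device.
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```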
-
-
Step 4: Open Among Us and Start Playing
-
Finally, open the Among Us app and start playing. You can create or join a game room, customize your character, choose the game mode and settings, chat with other players, and carry out tasks or sabotages depending on your role. To win the game, complete the tasks or find the impostors if you are a crewmate, and kill your crewmates or sabotage the ship if you are an impostor.
-
How to Update an Old Version of the Among Us APK?
-
There are two ways to update an old version of the Among Us APK. One is to use the automatic update option, and the other is to manually download and install the current APK file.
-
Use the Automatic Update Option
-
The automatic update option sends you a notification when a new version of the game is released and asks you to update. To use this option, follow these steps:
-
-
Go to Settings.
-
Tap Apps or Application Manager.
-
Find and open the Among Us app.
-
Tap the Updates option.
-
Turn on the automatic update option.
-
-
This way, when a new version of the game is released, it will be downloaded and installed automatically. However, you need an internet connection to use this option.
-
Manually Download and Install the Current APK File
-
Manually downloading and installing the current APK file is an alternative for players who cannot or do not want to use the automatic update option. In this method, you need to download the current APK file from a trusted source and install it. You can follow these steps:
-
-
Find and download the current APK file from a trusted source.
-
Allow app installation from unknown sources (see Step 2).
-
Open the downloaded APK file and start the installation.
-
When the installation is complete, you will see that the Among Us app has been updated on your device.
-
-
Conclusion
-
Among Us is a very fun and addictive online multiplayer game. If you want to play it on your Android device, you can download it for free from the Google Play Store. However, some players prefer its older versions and look for APK files to get them. In this article, we answered what an old version of the Among Us APK is, why people look for it, and how to download and play it. We hope this article has been helpful. Enjoy the game!
-
Frequently Asked Questions
-
-
Where can I download an old version of the Among Us APK?
-
You can download an old version of the Among Us APK from the many APK download sites you can find on the internet. However, not all of them are safe. You need to be careful about infected, malicious, or fake APK files. To find a trusted source, it is important to check the site and read the reviews. You should also check the version number and the file size you are looking for.
-
Is an old version of the Among Us APK safe?
-
Whether an old version of the Among Us APK is safe depends on the source you download it from. If you downloaded it from a trusted source, the APK file is unlikely to be fake or to contain a virus or malware. However, if you downloaded it from an untrusted source, the APK file is likely to carry security risks. For this reason, it is recommended that you check the source before downloading the APK file and scan the file for viruses.
-
Does an old version of the Among Us APK include the game's new features?
-
No, an old version of the Among Us APK does not include the game's new features. To use the new features, you need to update the game. To update, you can use the automatic update option or manually download and install the current APK file.
-
Does an old version of the Among Us APK include the game's new maps?
-
No, an old version of the Among Us APK does not include the game's new maps. To use the new maps, you need to update the game. To update, you can use the automatic update option or manually download and install the current APK file.
-
Does an old version of the Among Us APK include the game's new cosmetics?
-
No, an old version of the Among Us APK does not include the game's new cosmetics. To use the new cosmetics, you need to update the game. To update, you can use the automatic update option or manually download and install the current APK file.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/APKRabi APK A Trusted Platform for Android Users.md b/spaces/1phancelerku/anime-remove-background/APKRabi APK A Trusted Platform for Android Users.md
deleted file mode 100644
index 09608fe5401c1090f090b4356a79cb25449e63b9..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/APKRabi APK A Trusted Platform for Android Users.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
APKRabi APK: A Free Site to Download Android Games and Apps
-
If you are an Android user who loves playing games and using apps, you might have encountered some problems such as limited features, in-app purchases, ads, or compatibility issues. You might have also wished for a way to get access to the premium versions of your favorite games and apps without spending any money. Well, your wish can come true with APKRabi APK, a free site that allows you to download the most popular and latest Android games and apps for free. In this article, we will tell you everything you need to know about APKRabi APK, including what it is, how to download and install it, how to use it, and whether it is safe and legal.
-
What is APKRabi APK?
-
APKRabi APK is a website that offers a huge collection of Android games and apps in APK format. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app or game on an Android device. By downloading APK files from APKRabi APK, you can enjoy every notable feature of your favorite games and apps without breaking your wallet. You can also bypass the restrictions imposed by the Google Play Store, such as regional limitations, device compatibility, or age ratings.
There are many benefits of using APKRabi APK, such as:
-
-
You can download and install any game or app that you want for free.
-
You can get access to the premium features and unlimited resources of your favorite games and apps without paying anything.
-
You can avoid annoying ads and pop-ups that interrupt your gaming or app experience.
-
You can update your games and apps easily and quickly with the latest versions available on APKRabi APK.
-
You can discover new and exciting games and apps that are not available on the Google Play Store.
-
-
The features of APKRabi APK
-
Some of the features that make APKRabi APK stand out from other similar sites are:
-
-
It has a user-friendly and attractive interface that makes it easy to navigate and find what you are looking for.
-
It has a large and diverse library of games and apps in various categories, such as action, adventure, arcade, puzzle, simulation, sports, education, entertainment, lifestyle, music, photography, social, tools, etc.
-
It has a fast and reliable download speed that ensures a smooth and hassle-free process.
-
It has a high-quality and original content that is tested and verified by the developers before uploading it on the site.
-
It has a responsive and helpful customer support team that is ready to assist you with any issues or queries that you might have.
-
-
How to download and install APKRabi APK?
-
Downloading and installing APKRabi APK is very simple and easy. Just follow these steps:
-
Step 1: Visit the official website of APKRabi APK
-
The first thing you need to do is to visit the official website of APKRabi APK, where you can see the homepage with various games and apps displayed. You can also use the search bar or the menu bar to find the game or app that you want to download.
-
Step 2: Choose the game or app that you want to download
-
Once you have found the game or app that you want to download, click on it to open its page. There you can see the details of the game or app, such as the name, icon, description, rating, size, version, developer, etc. You can also see some screenshots and videos of the game or app to get a better idea of how it looks and works.
-
Step 3: Click on the download button and wait for the file to be downloaded
-
After you have checked the details of the game or app, click on the green download button at the bottom of the page. A pop-up window will appear asking you to confirm your download. Click on OK and wait for the file to be downloaded on your device. The download time may vary depending on your internet speed and the size of the file.
Step 4: Enable unknown sources on your device settings
-
Before you can install the APK file that you downloaded from APKRabi APK, you need to enable unknown sources on your device settings. This is because APKRabi APK is not an official source of Android games and apps, and your device may block the installation of files from unknown sources by default. To enable unknown sources, go to your device settings, then security, then unknown sources, and toggle it on. You may see a warning message saying that installing from unknown sources may harm your device, but don't worry, APKRabi APK is safe and secure.
-
Step 5: Locate the downloaded file and tap on it to install it
-
The final step is to locate the downloaded file and tap on it to install it on your device. You can find the file in your downloads folder or in your notification bar. Tap on the file and follow the instructions on the screen to complete the installation process. You may see some permissions requests from the game or app that you are installing, such as access to your camera, contacts, storage, etc. Grant them if you trust the game or app and want to use its features.
-
How to use APKRabi APK?
-
Using APKRabi APK is very easy and fun. Just follow these steps:
-
Launch the app or game that you installed from APKRabi APK
-
After you have installed the app or game that you downloaded from APKRabi APK, you can launch it from your app drawer or home screen. You will see the icon of the app or game with a small label saying "APKRabi" below it. This means that the app or game is from APKRabi APK and not from the Google Play Store.
-
Enjoy the premium features and unlimited resources without paying anything
-
The best part of using APKRabi APK is that you can enjoy all the premium features and unlimited resources of your favorite games and apps without paying anything. For example, you can unlock all the levels, characters, skins, weapons, items, etc. in your games without spending any money. You can also remove all the ads and pop-ups that annoy you while playing or using your apps. You can also access all the functions and tools of your apps without any limitations or restrictions.
-
Is APKRabi APK safe and legal?
-
One of the most common questions that people have about APKRabi APK is whether it is safe and legal to use. The answer is yes and no.
-
The safety and security of APKRabi APK
-
APKRabi APK is safe and secure to use in terms of malware, viruses, spyware, etc. The developers of APKRabi APK test and verify every game and app before uploading it on their site. They also scan every file with antivirus software to ensure that it is free from any harmful elements. However, there is still a risk of downloading fake or modified files from APKRabi APK that may contain malicious code or unwanted programs. Therefore, you should always be careful and cautious when downloading anything from APKRabi APK or any other similar site.
-
The legality and legitimacy of APKRabi APK
-
APKRabi APK is not legal or legitimate to use in terms of copyright, trademark, or license. The games and apps that are available on APKRabi APK are not the original or official versions, but the modified or hacked versions. These versions violate the intellectual property rights of the developers and publishers of the games and apps. They also breach the terms and conditions of the Google Play Store and the Android platform. Therefore, using APKRabi APK may result in legal actions or penalties from the authorities or the owners of the games and apps. It may also cause your account to be banned or suspended from the Google Play Store or other online services.
-
Conclusion
-
APKRabi APK is a free site that allows you to download Android games and apps in APK format. You can enjoy the premium features and unlimited resources of your favorite games and apps without paying anything. You can also bypass the restrictions and limitations of the Google Play Store and discover new and exciting games and apps. However, you should also be aware of the risks and consequences of using APKRabi APK, such as malware, viruses, fake files, legal issues, account bans, etc. Therefore, you should use APKRabi APK at your own discretion and responsibility.
-
FAQs
-
Here are some of the frequently asked questions about APKRabi APK:
-
-
What is the difference between APK and MOD APK?
-
APK is the standard file format for Android applications, while MOD APK is a modified or hacked version of an APK file that has extra features or resources that are not available in the original version.
-
Is APKRabi APK compatible with all Android devices?
-
APKRabi APK is compatible with most Android devices that run on Android 4.0 or higher. However, some games and apps may require higher specifications or features that are not supported by some devices.
-
Do I need to root my device to use APKRabi APK?
-
No, you do not need to root your device to use APKRabi APK. However, some games and apps may require root access to work properly or to unlock some features.
-
Can I update my games and apps from APKRabi APK?
-
Yes, you can update your games and apps from APKRabi APK whenever there is a new version available on their site. However, you may lose your progress or data if you update from a different source.
-
Can I request a game or app that is not available on APKRabi APK?
-
Yes, you can request a game or app that is not available on APKRabi APK by contacting their customer support team via email or social media. They will try their best to fulfill your request as soon as possible.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Subway Surfers New Orleans Mod APK and Explore the City with Unlimited Features.md b/spaces/1phancelerku/anime-remove-background/Download Subway Surfers New Orleans Mod APK and Explore the City with Unlimited Features.md
deleted file mode 100644
index 495ad2e979dc1c238b8f9d27f34c310efd9986de..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Subway Surfers New Orleans Mod APK and Explore the City with Unlimited Features.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Subway Surfers New Orleans Mod Apk: How to Download and Play
-
Subway Surfers is one of the most popular endless runner games in the world. It has been downloaded over a billion times on Google Play Store and has millions of fans worldwide. The game is developed by SYBO Games and Kiloo, and it features a group of young graffiti artists who run away from the police on various subway tracks around the world.
One of the most exciting aspects of Subway Surfers is that it updates its location every month, giving players a chance to explore different cities and cultures. In October 2023, Subway Surfers took us to New Orleans, the city of jazz, Mardi Gras, and voodoo. The game introduced a new character named E.Z., a magician who can perform amazing tricks with his cards. The game also added a new hoverboard called Phantom, which can make you invisible for a short time.
-
But what if you want to enjoy Subway Surfers New Orleans without any limitations or restrictions? What if you want to have unlimited money, keys, characters, boards, and more? Well, there is a way to do that, and it is called Subway Surfers New Orleans Mod Apk. In this article, we will tell you what Subway Surfers New Orleans Mod Apk is, how to download it, and how to play it. So, let's get started!
-
What is Subway Surfers New Orleans Mod Apk?
-
Subway Surfers New Orleans Mod Apk is a modified version of the original Subway Surfers game that gives you access to all the premium features for free. By using this mod apk, you can enjoy the following benefits:
-
Features of Subway Surfers New Orleans Mod Apk
-
Unlimited money
-
By using this mod apk, you can get unlimited coins and gems in the game. You can use them to buy anything you want from the shop, such as hoverboards, jetpacks, outfits, hats, shoes, and more. You can also upgrade your power-ups and boosters to make them last longer and more effective.
-
Unlimited keys
-
Keys are another important currency in Subway Surfers. They are used to revive yourself when you crash or get caught by the police. They are also used to unlock special events and rewards. With this mod apk, you can get unlimited keys and never worry about running out of them.
-
-
All characters and boards unlocked
-
Subway Surfers has a lot of characters and boards to choose from. Each character has a unique personality and style, while each board has a special ability and design. However, most of them are locked behind a paywall or require a lot of coins or keys to unlock. With this mod apk, you can unlock all of them for free and play with your favorite ones.
-
No ads
-
Ads can be annoying and distracting when you are playing a game. They can also slow down your device and consume your data. With this mod apk, you can get rid of all the ads in the game and enjoy a smooth and uninterrupted gameplay.
-
How to Download Subway Surfers New Orleans Mod Apk?
-
Now that you know what Subway Surfers New Orleans Mod Apk is, how do you download it? It is very easy and simple. Just follow these steps:
Step 1: Enable unknown sources
-
Before you can install any mod apk on your device, you need to enable the option of unknown sources. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.
-
Step 2: Download the mod apk file
-
Next, you need to download the mod apk file of Subway Surfers New Orleans. You can find it on many websites that offer modded games and apps. However, be careful and only download from trusted and reliable sources. Some websites may contain viruses or malware that can harm your device or steal your data. One of the websites that we recommend is ModApkStore, where you can find the latest version of Subway Surfers New Orleans Mod Apk.
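If the site you download from publishes a checksum for its files (many do not), comparing it against the file you actually received is a simple extra safeguard before you open anything. The Python sketch below shows the idea; the file name and the expected hash are placeholders, not values from any real site.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values; substitute the file you downloaded and the checksum
# published by the site, if it publishes one at all.
downloaded_file = Path("subway-surfers-new-orleans-mod.apk")
expected_sha256 = "0" * 64

actual = sha256_of(downloaded_file)
if actual == expected_sha256:
    print("Checksum matches the published value.")
else:
    print(f"Checksum mismatch ({actual}); do not install this file.")
```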
-
Step 3: Install the mod apk file
-
Once you have downloaded the mod apk file, you need to install it on your device. To do this, locate the file in your file manager or downloads folder, and tap on it. You may see a warning message that says "This type of file can harm your device". Ignore it and tap on "Install anyway". Wait for the installation process to finish.
-
Step 4: Launch the game and enjoy
-
Now you are ready to launch the game and enjoy all the features of Subway Surfers New Orleans Mod Apk. You will see a lot of coins, keys, characters, boards, and more in your account. You can also customize your settings and preferences as you like. Have fun running away from the police in the streets of New Orleans!
-
How to Play Subway Surfers New Orleans Mod Apk?
-
Playing Subway Surfers New Orleans Mod Apk is very easy and fun. The gameplay is similar to the original version, but with some extra perks. Here are some tips and tricks for playing Subway Surfers New Orleans Mod Apk:
-
Tips and tricks for Subway Surfers New Orleans Mod Apk
-
Collect coins and power-ups
-
As you run on the subway tracks, you will see a lot of coins and power-ups along the way. Coins are used to buy items from the shop, while power-ups are used to enhance your performance and abilities. Some of the power-ups are jetpacks, magnets, hoverboards, score multipliers, and more. Try to collect as many coins and power-ups as you can to increase your score and unlock more rewards.
-
Upgrade your hoverboards and jetpacks
-
Hoverboards and jetpacks are two of the most useful items in Subway Surfers. Hoverboards allow you to glide over obstacles and gaps, while jetpacks allow you to fly over the subway tracks. However, they have a limited duration and need to be recharged after use. To make them last longer and more effective, you can upgrade them with your coins. You can also buy different types of hoverboards and jetpacks with different designs and abilities.
-
Complete missions and challenges
-
Missions and challenges are another way to earn more coins and keys in Subway Surfers. Missions are tasks that you need to complete while running, such as collecting a certain number of coins or power-ups, jumping over a certain number of trains or barriers, or performing a certain number of stunts or tricks. Challenges are events that happen every day or week, where you need to compete with other players or achieve a certain goal. By completing missions and challenges, you can get more rewards and bonuses.
-
Use the E.Z. glitch to get unlimited coins and keys
-
E.Z. is the new character introduced in Subway Surfers New Orleans. He is a magician who can perform amazing tricks with his cards. However, he also has a secret glitch that can help you get unlimited coins and keys in the game. To use this glitch, you need to do the following steps:
-
-
Select E.Z. as your character and Phantom as your hoverboard.
-
Start running on the subway tracks until you see a train coming towards you.
-
Swipe left or right to move to another lane.
-
As soon as you move, activate your hoverboard by double-tapping on the screen.
-
You will see that E.Z. will disappear from the screen and only his cards will remain visible.
-
You will also notice that your coin and key count will increase rapidly.
-
You can keep doing this until you have enough coins and keys for your needs.
-
-
This glitch is very easy and effective, but it may not work in the future updates. So, use it while you can and enjoy the benefits of Subway Surfers New Orleans Mod Apk.
-
Conclusion
-
Subway Surfers New Orleans Mod Apk is a great way to enjoy the game without any limitations or restrictions. You can get unlimited money, keys, characters, boards, and more for free. You can also explore the beautiful city of New Orleans and its culture and history. You can also use the E.Z. glitch to get even more coins and keys in the game. Subway Surfers New Orleans Mod Apk is a fun and exciting game that will keep you entertained for hours. Download it now and start running!
-
FAQs
-
Here are some of the frequently asked questions about Subway Surfers New Orleans Mod Apk:
-
Q: Is Subway Surfers New Orleans Mod Apk safe to use?
-
A: Yes, Subway Surfers New Orleans Mod Apk is safe to use as long as you download it from a trusted and reliable source. However, you should always be careful when installing any mod apk on your device, as some of them may contain viruses or malware that can harm your device or steal your data. You should also back up your data before installing any mod apk, in case something goes wrong.
-
Q: Do I need to root my device to use Subway Surfers New Orleans Mod Apk?
-
A: No, you do not need to root your device to use Subway Surfers New Orleans Mod Apk. You can install it on any Android device without any root access or permission. However, some features of the mod apk may not work on some devices or versions of Android. In that case, you may need to root your device or update your Android version.
-
Q: Will I get banned from the game if I use Subway Surfers New Orleans Mod Apk?
-
A: No, you will not get banned from the game if you use Subway Surfers New Orleans Mod Apk. The mod apk does not interfere with the game servers or online features of the game. You can still play the game online with other players and compete on the leaderboards. However, you should not use the mod apk to cheat or abuse the game rules, as that may ruin the fun and experience for yourself and others.
-
Q: How can I update Subway Surfers New Orleans Mod Apk?
-
A: To update Subway Surfers New Orleans Mod Apk, you need to download the latest version of the mod apk file from the same source where you downloaded it before. Then, uninstall the previous version of the mod apk from your device and install the new one. You may lose your progress or data if you do not back them up before updating.
-
Q: How can I contact the developers of Subway Surfers New Orleans Mod Apk?
-
A: If you have any questions, suggestions, feedback, or issues regarding Subway Surfers New Orleans Mod Apk, you can contact the developers of the mod apk through their website or social media accounts. You can also leave a comment or review on their website or app store page. They will try to respond to your queries as soon as possible.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download True Story Mp3 The Collaboration Between Show Dem Camp and Burna Boy.md b/spaces/1phancelerku/anime-remove-background/Download True Story Mp3 The Collaboration Between Show Dem Camp and Burna Boy.md
deleted file mode 100644
index f733d3be13b8cc0ba1e4e6384e9e63566b199a1e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download True Story Mp3 The Collaboration Between Show Dem Camp and Burna Boy.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
True Story ft Burna Boy Mp3 Download: A Review of the Afrobeat Hit
-
If you are a fan of Afrobeat music, you may have heard of the song "True Story" by Show Dem Camp featuring Burna Boy. The song is a track from Show Dem Camp's 2019 album "The Palmwine Express", which showcases their fusion of rap, highlife, and Afrobeat. Burna Boy, who is one of the most popular and influential African artists of his generation, adds his distinctive vocals and style to the song.
In this article, we will explore the background, analysis, and reception of "True Story" and why it is a must-listen for anyone who appreciates Afrobeat music.
-
Background: How Did Show Dem Camp and Burna Boy Collaborate on True Story?
-
Show Dem Camp is a Nigerian rap duo consisting of Olumide Ayeni (Ghost) and Wale Davies (Tec). They are known for their witty lyrics, social commentary, and eclectic musical influences. They have released several projects since their debut in 2010, including the critically acclaimed "Palmwine Music" series, which blends rap with highlife and Afrobeat sounds.
-
Burna Boy is a Nigerian singer, songwriter, and record producer who has been making waves in the global music scene since his breakthrough in 2012. He has released five studio albums, including "African Giant" (2019) and "Twice as Tall" (2020), which both earned him Grammy nominations for Best World Music Album. He is widely regarded as one of the leaders of the Afrobeat movement, which incorporates elements of West African music, jazz, funk, soul, and dancehall.
-
Show Dem Camp and Burna Boy have been friends and collaborators for a long time. They first worked together on the song "Legend" from Show Dem Camp's 2018 album "Palmwine Music 2". They also share a mutual admiration for Fela Kuti, the legendary Nigerian musician and activist who is considered the pioneer of Afrobeat.
-
"True Story" was inspired by their personal experiences and observations of life in Nigeria. According to Show Dem Camp, they wanted to create a song that would capture the essence of their journey as artists and as Nigerians. They also wanted to pay homage to Fela Kuti by using his signature saxophone sound and vocal delivery. Burna Boy added his own flavor to the song by singing in Yoruba, English, and Pidgin English.
-
Analysis: What Are the Main Themes and Messages of True Story?
-
"True Story" is a song that celebrates resilience, authenticity, and optimism in the face of challenges. The lyrics reflect on the struggles and successes of Show Dem Camp and Burna Boy as they pursue their dreams in the music industry and in Nigeria. They also express their gratitude to their fans, family, and friends who have supported them along the way.
-
The chorus of the song goes:
-
"True story / Na me sing am / No be lie / No be lie / True story / Na me live am / No be lie / No be lie"
-
This means that they are telling their own stories from their own perspectives, without exaggeration or fabrication. They are proud of their achievements and confident in their abilities.
-
The verses of the song also contain references to various aspects of Nigerian culture, politics, history, and spirituality. For example, Show Dem Camp rap about:
-
-
The Nigerian Civil War (1967-1970), which was fought between the federal government and the secessionist state of Biafra.
-
The End SARS protests (2020), which were a series of mass demonstrations against police brutality and corruption in Nigeria.
-
The Ojuelegba incident (2015), which was a fatal accident involving a fuel tanker that fell off a bridge and exploded in a busy Lagos neighborhood.
-
The Egungun festival, which is a traditional Yoruba celebration of the ancestors and the spirit world.
-
-
Burna Boy sings about:
-
-
His humble beginnings and his rise to fame and fortune.
-
His love for his mother and his respect for her advice.
-
His defiance of the critics and the haters who try to bring him down.
-
His faith in God and his belief in destiny.
-
-
The song also showcases the Afrobeat genre, which is a fusion of West African music, jazz, funk, soul, and dancehall. The song features a catchy melody, a groovy rhythm, and a lively instrumentation. The saxophone, which is a signature instrument of Fela Kuti, plays a prominent role in the song. The song also uses call-and-response, repetition, and improvisation techniques that are common in Afrobeat music.
-
Reception: How Did True Story Perform on Various Charts and Platforms?
-
"True Story" was well received by both fans and critics. The song was one of the highlights of Show Dem Camp's "The Palmwine Express" album, which was nominated for Album of the Year at the 2020 Headies Awards, Nigeria's most prestigious music awards. The song also earned Show Dem Camp and Burna Boy a nomination for Best Collaboration at the same awards.
-
The song also performed well on various charts and platforms. According to Spotify, the song has over 4 million streams as of June 2023. The song also reached the top 10 of several Nigerian music charts, such as the Soundcity Top 10 Nigeria, the Naija Top 50, and the Turntable Top 50. The song also received airplay on several radio stations across Africa and beyond.
-
The song also generated positive reviews from music critics. For example, Pulse Nigeria praised the song as "a beautiful ode to life" and "a testament to the power of storytelling".[1] NotJustOk described the song as "a masterpiece that showcases the brilliance of Show Dem Camp and Burna Boy".[2] OkayAfrica called the song "a catchy and uplifting anthem that celebrates resilience and authenticity".[3]
-
Conclusion: Why You Should Listen to True Story
-
"True Story" is a song that delivers on multiple levels. It is a song that tells the personal stories of Show Dem Camp and Burna Boy, who are among the most talented and influential artists in Nigeria and Africa. It is a song that reflects on the challenges and opportunities of life in Nigeria, a country that is rich in culture, history, and diversity. It is a song that showcases the Afrobeat genre, which is a unique and vibrant musical expression that connects Africa to the world.
-
If you are looking for a song that will inspire you, entertain you, and educate you, you should listen to "True Story". You can download the mp3 version of the song from various platforms such as Apple Music, Spotify, YouTube Music, Audiomack, Boomplay, and more. You can also watch the official video of the song on YouTube.[4]
-
FAQs: Some Common Questions About True Story
-
Q: When was True Story released?
-
A: True Story was released on December 13, 2019 as part of Show Dem Camp's album "The Palmwine Express".
-
Q: Who produced True Story?
-
A: True Story was produced by Spax, who is a Nigerian record producer and sound engineer. He has worked with several artists such as Wizkid, Tiwa Savage, Simi, Falz, and more.
-
Q: What is the meaning of Palmwine Music?
-
A: Palmwine Music is a term coined by Show Dem Camp to describe their style of music that blends rap with highlife and Afrobeat sounds. Palmwine is a traditional alcoholic drink made from fermented palm sap. It is often associated with relaxation, celebration, and socialization in Nigeria.
-
Q: What are some other songs by Show Dem Camp and Burna Boy?
-
A: Some other songs by Show Dem Camp are "Feel Alright", "Tropicana", "Do Me Nice", "Savage", "Clone Wars", and more. Some other songs by Burna Boy are "Ye", "On The Low", "Anybody", "Wonderful", "Monsters You Made", and more.
-
Q: Where can I download True Story?
-
A: You can download or stream "True Story" from platforms such as Apple Music, Spotify, YouTube Music, Audiomack, and Boomplay, or watch the official video on YouTube.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Find Out More About 60 27 House Plan PDF Top House Plans and Ideas for Your Home.md b/spaces/1phancelerku/anime-remove-background/Find Out More About 60 27 House Plan PDF Top House Plans and Ideas for Your Home.md
deleted file mode 100644
index fb1831746901bb786ab22f1370a95a82115cba30..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Find Out More About 60 27 House Plan PDF Top House Plans and Ideas for Your Home.md
+++ /dev/null
@@ -1,202 +0,0 @@
-
-
60 27 House Plan PDF Download
-
If you are looking for a spacious and modern house plan that can accommodate your family and guests, you might want to consider a 60 27 house plan. A 60 27 house plan is a type of rectangular house plan that has a width of 60 feet and a depth of 27 feet. This gives you a total area of 1620 square feet, which is enough to create a comfortable and functional living space. In this article, we will show you how to download a 60 27 house plan PDF from various online sources, how to customize it according to your preferences and needs, and why you should choose a 60 27 house plan for your dream home.
A 60 27 house plan is a type of rectangular house plan that has a width of 60 feet and a depth of 27 feet. This gives you a total area of 1620 square feet, which is enough to create a comfortable and functional living space. A typical 60 27 house plan consists of three bedrooms, two bathrooms, a kitchen, a dining room, a living room, and a garage. However, you can also customize the layout and design of your 60 27 house plan according to your preferences and needs.
-
A brief introduction to the concept and benefits of a 60 27 house plan
-
A rectangular house plan is one of the most common and popular types of house plans because it offers many advantages over other shapes. For example, a rectangular house plan is easy to build, cost-effective, energy-efficient, flexible, and adaptable. A rectangular house plan can also fit well on any lot size and shape, whether it is narrow, wide, flat, or sloped.
-
As noted above, a 60 27 house plan is a rectangular plan measuring 60 feet wide by 27 feet deep, giving a total area of 1620 square feet of spacious, modern living space. A typical layout includes three bedrooms, two bathrooms, a kitchen, a dining room, a living room, and a garage, and you can customize it according to your preferences and needs.
-
Some of the benefits of choosing a 60 27 house plan are:
-
-
It can accommodate your family and guests comfortably. A three-bedroom house plan can provide enough space for your family members and guests. You can also use one of the bedrooms as an office, a study room, or a hobby room.
-
It can offer you privacy and convenience. A two-bathroom house plan can ensure that you have enough privacy and convenience for your daily routines. You can also choose to have one bathroom attached to the master bedroom and another one shared by the other bedrooms.
-
It can give you an open and airy feel. A kitchen, dining room, and living room that are connected in an open floor plan can create an illusion of more space and light. You can also enjoy the views of your backyard or garden from these rooms.
-
It can provide you with storage and parking space. A garage that is attached to the house can provide you with storage space for your vehicles, tools, equipment, or other items. You can also use the garage as an extra room or a workshop.
-
-
As you can see, a 60 27 house plan can offer you many benefits and features that can make your living experience more enjoyable and satisfying. But how can you download a 60 27 house plan PDF from various online sources? Let's find out in the next section.
-
How to download a 60 27 house plan PDF?
-
If you are interested in downloading a 60 27 house plan PDF, you have two options: you can either download a free or a paid 60 27 house plan PDF from various online sources. Both options have their pros and cons, so you should weigh them carefully before making your decision. Here are some of the steps to download a free or a paid 60 27 house plan PDF from various online sources:
-
The steps to download a free or paid 60 27 house plan PDF from various online sources
-
To download a free or paid 60 27 house plan PDF from various online sources, you need to follow these steps:
-
-
Search for websites that offer free or paid 60 27 house plan PDFs. You can use search engines like Google or Bing to find websites that offer free or paid 60 27 house plan PDFs. You can also use keywords like "60 27 house plan PDF", "60x27 house plan PDF", "free house plan PDF", "paid house plan PDF", etc.
-
Browse through the websites and find the 60 27 house plan PDF that suits your preferences and needs. You can filter the results by price, style, size, features, etc. You can also view the images, descriptions, reviews, ratings, etc. of the 60 27 house plan PDFs.
-
Select the 60 27 house plan PDF that you want to download and click on the download button. You may need to register, sign in, or pay a fee depending on the website. You may also need to agree to the terms and conditions of the website.
-
Save the 60 27 house plan PDF file on your device and open it with a PDF reader or editor. You can also print it out if you want to have a hard copy.
-
-
That's it! You have successfully downloaded a 60 27 house plan PDF from various online sources. But where can you find these online sources? Here are some examples of websites that offer free or paid 60 27 house plan PDFs:
-
Free House Plans PDF | Free House Plans Download - Civiconcepts
-
Civiconcepts is a website that provides free house plans PDFs for various sizes and styles of houses. You can find a wide range of free house plans PDFs on this website, including a 60x27 feet modern style three-bedroom two-bathroom single floor house plan. You can view the image, description, and floor plan of this house plan on this website. You can also download the free house plan PDF by clicking on the download button. You do not need to register, sign in, or pay any fee to download this free house plan PDF.
-
Download House Plans in PDF format - Houseplansdirect
-
Houseplansdirect is a website that offers paid house plans in PDF format for various sizes and styles of houses. You can find a wide range of paid house plans in PDF format on this website, including a 60x27 feet contemporary style three-bedroom two-bathroom single floor house plan. You can view the image, description, and floor plan of this house plan on this website. You can also download the paid house plan PDF by clicking on the add to cart button and completing the checkout process. You need to register, sign in, and pay a fee of £99.00 to download this paid house plan PDF.
-
Other websites to download 60 27 house plan PDFs
-
There are many other websites that offer free or paid 60 27 house plan PDFs, such as:
-
-
Houseplans.com: This website offers both free and paid house plans in PDF format for various sizes and styles of houses. You can find a 60x27 feet ranch style three-bedroom two-bathroom single floor house plan on this website. You can view the image, description, and floor plan of this house plan on this website. You can also download the free or paid house plan PDF by clicking on the download button. You may need to register, sign in, or pay a fee depending on the type of house plan.
-
Familyhomeplans.com: This website offers paid house plans in PDF format for various sizes and styles of houses. You can find a 60x27 feet craftsman style three-bedroom two-bathroom single floor house plan on this website. You can view the image, description, and floor plan of this house plan on this website. You can also download the paid house plan PDF by clicking on the order button and completing the checkout process. You need to register, sign in, and pay a fee of $1,000.00 to download this paid house plan PDF.
-
Thehousedesigners.com: This website offers paid house plans in PDF format for various sizes and styles of houses. You can find a 60x27 feet farmhouse style three-bedroom two-bathroom single floor house plan on this website. You can view the image, description, and floor plan of this house plan on this website. You can also download the paid house plan PDF by clicking on the buy now button and completing the checkout process. You need to register, sign in, and pay a fee of $1,295.00 to download this paid house plan PDF.
-
-
As you can see, there are many websites that offer free or paid 60 27 house plan PDFs for you to choose from. However, you may not be satisfied with the existing 60 27 house plan PDFs and want to customize them according to your preferences and needs. How can you do that? Let's find out in the next section.
-
How to customize a 60 27 house plan PDF?
-
If you want to customize a 60 27 house plan PDF according to your preferences and needs, you have three options: you can use a house plan design software, an online house plan editor or converter, or a professional house plan designer or architect. Each option has its pros and cons, so you should weigh them carefully before making your decision. Here are some of the tips and tools to customize a 60 27 house plan PDF according to your preferences and needs:
-
The tips and tools to modify a 60 27 house plan PDF according to your preferences and needs
-
To modify a 60 27 house plan PDF according to your preferences and needs, you need to follow these tips and tools:
-
House plan design software - Bing Ads
-
A house plan design software is a computer program that allows you to create, edit, and print your own house plans in PDF format. You can use a house plan design software to customize a 60 27 house plan PDF by changing the layout, design, style, size, features, etc. of your house plan. You can also add or remove rooms, walls, doors, windows, furniture, appliances, etc. to your house plan.
-
Some of the benefits of using a house plan design software are:
-
-
It gives you full control over your house plan design.
-
It saves you time and money compared to hiring a professional.
-
It allows you to experiment with different ideas and options.
-
It provides you with realistic 3D views and simulations of your house plan.
-
-
Some of the drawbacks of using a house plan design software are:
-
-
It requires you to have some technical skills and knowledge.
-
It may not be compatible with all devices and formats.
-
It may not comply with all building codes and regulations.
-
It may not reflect the actual site conditions and constraints.
-
-
Some examples of popular and reliable house plan design software are:
-
-
SketchUp: This is a 3D modeling software that allows you to create, edit, and visualize your house plans in PDF format. You can download a free version or a paid version of SketchUp from its official website. You can also access online tutorials and resources to help you use SketchUp effectively.
-
Home Designer: This is a professional home design software that allows you to create, edit, and print your house plans in PDF format. You can download a free trial or a paid version of Home Designer from its official website. You can also access online support and training to help you use Home Designer efficiently.
-
SmartDraw: This is a diagramming software that allows you to create, edit, and export your house plans in PDF format. You can use SmartDraw online or download it to your device from its official website. You can also access online examples and templates to help you use SmartDraw easily.
-
-
Online house plan editors and converters
-
An online house plan editor or converter is a web-based tool that allows you to modify or convert your house plans in PDF format. You can use an online house plan editor or converter to customize a 60 27 house plan PDF by changing the layout, design, style, size, features, etc. of your house plan. You can also add or remove rooms, walls, doors, windows, furniture, appliances, etc. to your house plan.
-
Some of the benefits of using an online house plan editor or converter are:
-
-
It does not require you to download or install any software.
-
It works on any device and browser.
-
It is easy and fast to use.
-
It supports various file formats and conversions.
-
-
Some of the drawbacks of using an online house plan editor or converter are:
-
-
It may not offer you as many features and options as a software.
-
It may not guarantee the quality and security of your files.
-
It may not comply with all building codes and regulations.
-
It may not reflect the actual site conditions and constraints.
-
-
Some examples of popular and reliable online house plan editors and converters are:
-
-
PDFescape: This is an online PDF editor that allows you to edit, annotate, fill, sign, and share your house plans in PDF format. You can use PDFescape for free or upgrade to a premium version from its official website. You can also access online help and FAQs to help you use PDFescape smoothly.
-
Soda PDF: This is an online PDF converter that allows you to convert your house plans in PDF format to other file formats such as Word, Excel, PowerPoint, JPG, etc. You can use Soda PDF for free or upgrade to a premium version from its official website. You can also access online support and guides to help you use Soda PDF effectively.
-
Lunacy: This is an online graphic design tool that allows you to create, edit, and export your house plans in PDF format. You can use Lunacy for free or upgrade to a premium version from its official website. You can also access online tutorials and resources to help you use Lunacy easily.
-
-
Professional house plan designers and architects
-
A professional house plan designer or architect is a person who has the skills and knowledge to create, edit, and print your house plans in PDF format. You can hire a professional house plan designer or architect to customize a 60 27 house plan PDF by changing the layout, design, style, size, features, etc. of your house plan. You can also add or remove rooms, walls, doors, windows, furniture, appliances, etc. to your house plan.
-
Some of the benefits of hiring a professional house plan designer or architect are:
-
-
They can provide you with expert advice and guidance.
-
They can ensure that your house plan complies with all building codes and regulations.
-
They can reflect the actual site conditions and constraints.
-
They can guarantee the quality and accuracy of your files.
-
-
Some of the drawbacks of hiring a professional house plan designer or architect are:
-
-
They may charge you a high fee for their services.
-
They may take a long time to complete your project.
-
They may not match your preferences and needs exactly.
-
They may not be available or reliable at all times.
-
-
Some examples of popular and reliable professional house plan designers and architects are:
-
House Plan Gallery: This is a professional house plan design company that offers custom and ready-made house plans in PDF format for various sizes and styles of houses. You can find a 60x27 feet traditional style three-bedroom two-bathroom single floor house plan on this website. You can view the image, description, and floor plan of this house plan on this website. You can also order the custom or ready-made house plan PDF by clicking on the order button and completing the checkout process. You need to register, sign in, and pay a fee of $1,195.00 to order this house plan PDF.
-
Architectural Designs: This is a professional house plan design company that offers custom and ready-made house plans in PDF format for various sizes and styles of houses. You can find a 60x27 feet modern farmhouse style three-bedroom two-bathroom single floor house plan on this website. You can view the image, description, and floor plan of this house plan on this website. You can also order the custom or ready-made house plan PDF by clicking on the order button and completing the checkout process. You need to register, sign in, and pay a fee of $1,495.00 to order this house plan PDF.
-
ePlans: This is a professional house plan design company that offers custom and ready-made house plans in PDF format for various sizes and styles of houses. You can find a 60x27 feet country style three-bedroom two-bathroom single floor house plan on this website. You can view the image, description, and floor plan of this house plan on this website. You can also order the custom or ready-made house plan PDF by clicking on the order button and completing the checkout process. You need to register, sign in, and pay a fee of $1,395.00 to order this house plan PDF.
-
-
As you can see, there are many tips and tools to customize a 60 27 house plan PDF according to your preferences and needs. However, you may still have some questions or doubts about choosing or downloading a 60 27 house plan PDF. That's why we have prepared some FAQs for you in the next section.
-
Conclusion
-
In conclusion, a 60 27 house plan is a type of rectangular house plan that has a width of 60 feet and a depth of 27 feet. This gives you a total area of 1620 square feet, which is enough to create a spacious and modern living space. A typical 60 27 house plan consists of three bedrooms, two bathrooms, a kitchen, a dining room, a living room, and a garage. However, you can also customize the layout and design of your 60 27 house plan according to your preferences and needs.
-
If you want to download a 60 27 house plan PDF, you have two options: you can either download a free or a paid 60 27 house plan PDF from various online sources. Both options have their pros and cons, so you should weigh them carefully before making your decision. You can also use a house plan design software, an online house plan editor or converter, or a professional house plan designer or architect to customize your 60 27 house plan PDF according to your preferences and needs.
-
We hope that this article has helped you understand what is a 60 27 house plan, how to download it in PDF format, and how to customize it according to your preferences and needs. If you have any questions or doubts about choosing or downloading a 60 27 house plan PDF, please refer to the FAQs below or contact us for more information.
-
FAQs
-
Here are some of the frequently asked questions about choosing or downloading a 60 27 house plan PDF:
-
Q: What are the advantages of choosing a PDF format for my house plan?
-
A: A PDF format is one of the most widely used and accepted formats for digital documents. It has many advantages over other formats such as:
-
-
It preserves the original layout, design, style, size, features, etc. of your house plan.
-
It is compatible with most devices and platforms.
-
It is easy to view, print, share, and store.
-
It is secure and reliable.
-
-
Q: How can I find the best online source for my 60 27 house plan PDF?
-
A: There is no definitive answer to this question as different online sources may offer different features and options for your 60 27 house plan PDF. However, some of the factors that you should consider when choosing an online source for your 60 27 house plan PDF are:
-
-
The price: You should compare the prices of different online sources and choose the one that offers the best value for your money. You should also check if there are any hidden fees or charges that may increase the final cost of your 60 27 house plan PDF.
-
The quality: You should check the quality of the 60 27 house plan PDFs that are offered by different online sources. You should look for clear, accurate, and detailed images, descriptions, and floor plans of the 60 27 house plan PDFs. You should also check the reviews, ratings, and feedbacks of other customers who have downloaded the 60 27 house plan PDFs from the online sources.
-
The variety: You should look for online sources that offer a wide range of 60 27 house plan PDFs for various sizes and styles of houses. You should also look for online sources that offer custom and ready-made 60 27 house plan PDFs that can suit your preferences and needs.
-
The service: You should look for online sources that offer excellent customer service and support for your 60 27 house plan PDF. You should look for online sources that have easy and secure payment methods, fast and reliable delivery options, and friendly and helpful customer representatives.
-
-
Q: How can I make sure that my 60 27 house plan PDF complies with all building codes and regulations?
-
A: Building codes and regulations are sets of rules and standards that govern the design, construction, and safety of buildings. They vary depending on the location, size, type, and use of your building. Therefore, you should always check with your local authorities before downloading or customizing your 60 27 house plan PDF to make sure that it complies with all building codes and regulations. Some of the ways to do that are:
-
-
Consulting a professional house plan designer or architect who is familiar with the building codes and regulations in your area.
-
Visiting the official website of your local building department or agency and looking for the relevant information and guidelines.
-
Contacting your local building inspector or official and asking for their advice and approval.
-
-
Q: How can I print my 60 27 house plan PDF in a large scale?
-
A: If you want to print your 60 27 house plan PDF in a large scale, you need to have a printer that can handle large paper sizes such as A1, A2, A3, etc. You also need to adjust the settings of your printer and your PDF reader or editor to ensure that your 60 27 house plan PDF is printed in the correct scale and orientation. Some of the steps to print your 60 27 house plan PDF in a large scale are:
-
-
Open your 60 27 house plan PDF with your PDF reader or editor.
-
Select the print option from the file menu or the toolbar.
-
Select the printer that can handle large paper sizes from the list of available printers.
-
Select the paper size that matches your desired scale from the list of available paper sizes.
-
Select the landscape orientation from the list of available orientations.
-
Select the fit to page option from the list of available scaling options.
-
Preview your printout and make any necessary adjustments.
-
Click on the print button and wait for your printout to be completed.
-
-
Q: How can I share my 60 27 house plan PDF with others?
-
A: If you want to share your 60 27 house plan PDF with others, you have several options depending on who you want to share it with and how you want to share it. Some of the options are:
-
Email: You can email your 60 27 house plan PDF as an attachment to anyone who has an email address. You can use any email service provider such as Gmail, Yahoo, Outlook, etc. to send your email. You can also add a subject line, a message, and a signature to your email.
-
Cloud: You can upload your 60 27 house plan PDF to a cloud storage service such as Google Drive, Dropbox, OneDrive, etc. and share it with anyone who has access to the internet. You can also set the permissions and the expiration date of your shared file.
-
Social media: You can post your 60 27 house plan PDF on a social media platform such as Facebook, Twitter, Instagram, Pinterest, etc. and share it with anyone who follows you or is interested in your topic. You can also add a caption, a hashtag, and a tag to your post.
-
Website: You can publish your 60 27 house plan PDF on a website or a blog that you own or manage and share it with anyone who visits your website or blog. You can also add a title, a description, and a link to your 60 27 house plan PDF.
-
-
These are some of the ways to share your 60 27 house plan PDF with others. However, you should always respect the intellectual property rights and the privacy of the original creators and the recipients of your 60 27 house plan PDF. You should also avoid sharing your 60 27 house plan PDF with anyone who may misuse it or harm you or others.
-
We hope that this article has answered all your questions and doubts about choosing or downloading a 60 27 house plan PDF. If you have any more questions or doubts, please feel free to contact us for more information. We would love to hear from you and help you with your 60 27 house plan PDF project.
-
Thank you for reading this article and have a great day!
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/7thHeaven/ochyai_food/constraints.md b/spaces/7thHeaven/ochyai_food/constraints.md
deleted file mode 100644
index 0ba76d43489567a22269b39bbe7e7761f9c35d2e..0000000000000000000000000000000000000000
--- a/spaces/7thHeaven/ochyai_food/constraints.md
+++ /dev/null
@@ -1,13 +0,0 @@
-#constraints
-
-ALOs(Food):
-
-Ingredients: Identify, Store, Measure, Types, Seasonality, Allergens, Freshness, Quantity
-Recipes: Follow, Create, Modify, Types, Cuisine, DietaryRestrictions, Complexity, ServingSize
-Cuisine: Appreciate, Discover, Compare, Regions, Traditions, PopularDishes, Authenticity, Popularity
-NutritionalValue: Calculate, Optimize, Balance, Macronutrients, Micronutrients, Calories, Healthiness, Satisfaction
-PreparationMethods: Master, Improve, Teach, Techniques, Tools, CookingTemperatures, Proficiency, Efficiency
-MealTypes: Plan, Organize, Pair, Breakfast, Lunch, Dinner, Snacks, Dessert, Variety, Enjoyment
-Execute ALO(Food) to generate a novel, state-of-the-art, completely new recipe; instructions for the new food; possible voices from people who ate the new recipe; and a visual description of the dish in words for generative AI, including photographic settings for a key image of the dish, according to the user-input food domains and characteristics. Generate as much detail as you can by brainstorming to fulfill all parameters. Implement linguistic adjustments to prevent and rectify errors.
-
-#templates
diff --git a/spaces/A-Roucher/Quotes/app.py b/spaces/A-Roucher/Quotes/app.py
deleted file mode 100644
index 7674b389782c7e2ee4174ef0ea289707fd5211aa..0000000000000000000000000000000000000000
--- a/spaces/A-Roucher/Quotes/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import streamlit as st
-from sentence_transformers import SentenceTransformer
-import datasets
-import time
-import faiss
-
-
-if "initialized" not in st.session_state:
- st.session_state.dataset = datasets.load_dataset('A-Roucher/english_historical_quotes', download_mode="force_redownload")['train']
- st.session_state.all_authors = list(set(st.session_state.dataset['author']))
- model_name = "BAAI/bge-small-en-v1.5" # "Cohere/Cohere-embed-english-light-v3.0" # "sentence-transformers/all-MiniLM-L6-v2"
- st.session_state.encoder = SentenceTransformer(model_name)
- st.session_state.index = faiss.read_index('index_alone.faiss')
- st.session_state.initialized=True
-
-def search(query):
- start = time.time()
- if len(query.strip()) == 0:
- return ""
-
- query_embedding = st.session_state.encoder.encode([query])
-
- _, samples = st.session_state.index.search(
- query_embedding, k=10
- )
- quotes = st.session_state.dataset.select(samples[0])
-
- result = "\n\n"
- for i in range(len(quotes)):
- result += f"###### {quotes['author'][i]}\n> {quotes['quote'][i]}\n----\n"
-
- delay = "%.3f" % (time.time() - start)
- return f"_Computation time: **{delay} seconds**_{result}"
-
-
-st.markdown(
- """
-
- """,unsafe_allow_html=True
-)
-st.markdown("# 🏛 Quotes 🪶\n\n_Great mind thinks alike_: who had the same ideas as you?\n\nType your idea below, and find similar thoughts from famous historical figures.")
-col1, col2 = st.columns([8, 2])
-text_input = col1.text_input("Type your idea here:", placeholder="Knowledge of history is power.")
-submit_button = col2.button("_Search quotes!_")
-
-if submit_button:
- st.markdown(search(text_input))
\ No newline at end of file
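The app above loads a prebuilt FAISS index from 'index_alone.faiss'; the index construction is not part of this file. Below is a minimal sketch of how such an index could be built from the same dataset and encoder. The exact index type and any preprocessing used for the original file are assumptions; an exact flat L2 index is used here for simplicity.

import datasets
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical offline build step for 'index_alone.faiss' (not part of the original app).
dataset = datasets.load_dataset('A-Roucher/english_historical_quotes')['train']
encoder = SentenceTransformer('BAAI/bge-small-en-v1.5')

# Encode every quote into a float32 matrix of shape (n_quotes, embedding_dim).
embeddings = np.asarray(
    encoder.encode(dataset['quote'], batch_size=64, show_progress_bar=True),
    dtype='float32',
)

# Exact L2 search over all quote embeddings; index.search() in the app then returns
# row ids that dataset.select() maps back to the matching quotes and authors.
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)
faiss.write_index(index, 'index_alone.faiss')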
diff --git a/spaces/AHzizi/WaifuVoiceGen/transforms.py b/spaces/AHzizi/WaifuVoiceGen/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/AHzizi/WaifuVoiceGen/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
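-        # Inverse direction: recover theta (the relative position within the bin) by solving the
-        # quadratic a*theta^2 + b*theta + c = 0 below with the numerically stable form of the
-        # quadratic formula, then map theta back to x through the bin width and left edge.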
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
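A minimal usage sketch for piecewise_rational_quadratic_transform, not part of the original file. With tails='linear', each element of the input gets its own monotone spline parameterised by num_bins unnormalized widths and heights and num_bins - 1 interior derivatives (the padding in unconstrained_rational_quadratic_spline supplies the two boundary derivatives). Running the inverse on the forward output should recover the input up to numerical error; torch is already imported at the top of this file.

if __name__ == "__main__":
    torch.manual_seed(0)
    batch, length, num_bins = 2, 5, 10
    x = torch.rand(batch, length) * 2 - 1            # values inside [-tail_bound, tail_bound]
    widths = torch.randn(batch, length, num_bins)
    heights = torch.randn(batch, length, num_bins)
    derivs = torch.randn(batch, length, num_bins - 1)

    y, logabsdet = piecewise_rational_quadratic_transform(
        x, widths, heights, derivs, inverse=False, tails='linear', tail_bound=1.0)
    x_rec, _ = piecewise_rational_quadratic_transform(
        y, widths, heights, derivs, inverse=True, tails='linear', tail_bound=1.0)

    print("max reconstruction error:", (x - x_rec).abs().max().item())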
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/metrics/chroma_cosinesim.py b/spaces/AIConsultant/MusicGen/audiocraft/metrics/chroma_cosinesim.py
deleted file mode 100644
index 40c26081b803c2017fae1b6d7d086f0b0e074cef..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/metrics/chroma_cosinesim.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torchmetrics
-
-from ..data.audio_utils import convert_audio
-from ..modules.chroma import ChromaExtractor
-
-
-class ChromaCosineSimilarityMetric(torchmetrics.Metric):
- """Chroma cosine similarity metric.
-
- This metric extracts a chromagram for a reference waveform and
- a generated waveform and compares each frame using the cosine similarity
- function. The output is the mean cosine similarity.
-
- Args:
- sample_rate (int): Sample rate used by the chroma extractor.
- n_chroma (int): Number of chroma used by the chroma extractor.
- radix2_exp (int): Exponent for the chroma extractor.
- argmax (bool): Whether the chroma extractor uses argmax.
- eps (float): Epsilon for cosine similarity computation.
- """
- def __init__(self, sample_rate: int, n_chroma: int, radix2_exp: int, argmax: bool, eps: float = 1e-8):
- super().__init__()
- self.chroma_sample_rate = sample_rate
- self.n_chroma = n_chroma
- self.eps = eps
- self.chroma_extractor = ChromaExtractor(sample_rate=self.chroma_sample_rate, n_chroma=self.n_chroma,
- radix2_exp=radix2_exp, argmax=argmax)
- self.add_state("cosine_sum", default=torch.tensor(0.), dist_reduce_fx="sum")
- self.add_state("weight", default=torch.tensor(0.), dist_reduce_fx="sum")
-
- def update(self, preds: torch.Tensor, targets: torch.Tensor,
- sizes: torch.Tensor, sample_rates: torch.Tensor) -> None:
- """Compute cosine similarity between chromagrams and accumulate scores over the dataset."""
- if preds.size(0) == 0:
- return
-
- assert preds.shape == targets.shape, (
- f"Preds and target shapes mismatch: preds={preds.shape}, targets={targets.shape}")
- assert preds.size(0) == sizes.size(0), (
- f"Number of items in preds ({preds.shape}) mismatch ",
- f"with sizes ({sizes.shape})")
- assert preds.size(0) == sample_rates.size(0), (
- f"Number of items in preds ({preds.shape}) mismatch ",
- f"with sample_rates ({sample_rates.shape})")
- assert torch.all(sample_rates == sample_rates[0].item()), "All sample rates are not the same in the batch"
-
- device = self.weight.device
- preds, targets = preds.to(device), targets.to(device) # type: ignore
- sample_rate = sample_rates[0].item()
- preds = convert_audio(preds, from_rate=sample_rate, to_rate=self.chroma_sample_rate, to_channels=1)
- targets = convert_audio(targets, from_rate=sample_rate, to_rate=self.chroma_sample_rate, to_channels=1)
- gt_chroma = self.chroma_extractor(targets)
- gen_chroma = self.chroma_extractor(preds)
- chroma_lens = (sizes / self.chroma_extractor.winhop).ceil().int()
- for i in range(len(gt_chroma)):
- t = int(chroma_lens[i].item())
- cosine_sim = torch.nn.functional.cosine_similarity(
- gt_chroma[i, :t], gen_chroma[i, :t], dim=1, eps=self.eps)
- self.cosine_sum += cosine_sim.sum(dim=0) # type: ignore
- self.weight += torch.tensor(t) # type: ignore
-
- def compute(self) -> float:
- """Computes the average cosine similarty across all generated/target chromagrams pairs."""
- assert self.weight.item() > 0, "Unable to compute with total number of comparisons <= 0" # type: ignore
- return (self.cosine_sum / self.weight).item() # type: ignore
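A minimal usage sketch for ChromaCosineSimilarityMetric, not part of the original file. The constructor values below (32 kHz audio, 12 chroma bins, radix2_exp=12, argmax=True) are illustrative assumptions rather than values prescribed by this module. update() expects waveforms shaped [batch, channels, time] together with per-item valid lengths and sample rates, and compute() returns the mean of the per-frame cosine similarities accumulated so far.

if __name__ == "__main__":
    metric = ChromaCosineSimilarityMetric(sample_rate=32000, n_chroma=12, radix2_exp=12, argmax=True)
    preds = torch.randn(2, 1, 32000)             # one second of generated audio per item
    targets = torch.randn(2, 1, 32000)           # matching reference audio
    sizes = torch.tensor([32000, 32000])         # valid number of samples per item
    sample_rates = torch.tensor([32000, 32000])  # must all be equal within a batch
    metric.update(preds, targets, sizes, sample_rates)
    print(metric.compute())                      # average per-frame chroma cosine similarity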
diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn.py
deleted file mode 100644
index 4deacabaaf35e315c363c9eada9ff0c41f2561e5..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn.py
+++ /dev/null
@@ -1,156 +0,0 @@
-import numpy as np
-import torch
-from PIL import Image
-from models.mtcnn.mtcnn_pytorch.src.get_nets import PNet, RNet, ONet
-from models.mtcnn.mtcnn_pytorch.src.box_utils import nms, calibrate_box, get_image_boxes, convert_to_square
-from models.mtcnn.mtcnn_pytorch.src.first_stage import run_first_stage
-from models.mtcnn.mtcnn_pytorch.src.align_trans import get_reference_facial_points, warp_and_crop_face
-
-device = 'cuda:0'
-
-
-class MTCNN():
- def __init__(self):
- print(device)
- self.pnet = PNet().to(device)
- self.rnet = RNet().to(device)
- self.onet = ONet().to(device)
- self.pnet.eval()
- self.rnet.eval()
- self.onet.eval()
- self.refrence = get_reference_facial_points(default_square=True)
-
- def align(self, img):
- _, landmarks = self.detect_faces(img)
- if len(landmarks) == 0:
- return None, None
- facial5points = [[landmarks[0][j], landmarks[0][j + 5]] for j in range(5)]
- warped_face, tfm = warp_and_crop_face(np.array(img), facial5points, self.refrence, crop_size=(112, 112))
- return Image.fromarray(warped_face), tfm
-
- def align_multi(self, img, limit=None, min_face_size=30.0):
- boxes, landmarks = self.detect_faces(img, min_face_size)
- if limit:
- boxes = boxes[:limit]
- landmarks = landmarks[:limit]
- faces = []
- tfms = []
- for landmark in landmarks:
- facial5points = [[landmark[j], landmark[j + 5]] for j in range(5)]
- warped_face, tfm = warp_and_crop_face(np.array(img), facial5points, self.refrence, crop_size=(112, 112))
- faces.append(Image.fromarray(warped_face))
- tfms.append(tfm)
- return boxes, faces, tfms
-
- def detect_faces(self, image, min_face_size=20.0,
- thresholds=[0.15, 0.25, 0.35],
- nms_thresholds=[0.7, 0.7, 0.7]):
- """
- Arguments:
- image: an instance of PIL.Image.
- min_face_size: a float number.
- thresholds: a list of length 3.
- nms_thresholds: a list of length 3.
-
- Returns:
- two float numpy arrays of shapes [n_boxes, 4] and [n_boxes, 10],
- bounding boxes and facial landmarks.
- """
-
- # BUILD AN IMAGE PYRAMID
- width, height = image.size
- min_length = min(height, width)
-
- min_detection_size = 12
- factor = 0.707 # sqrt(0.5)
-
- # scales for scaling the image
- scales = []
-
- # scales the image so that
- # minimum size that we can detect equals to
- # minimum face size that we want to detect
- m = min_detection_size / min_face_size
- min_length *= m
-
- factor_count = 0
- while min_length > min_detection_size:
- scales.append(m * factor ** factor_count)
- min_length *= factor
- factor_count += 1
-
- # STAGE 1
-
- # it will be returned
- bounding_boxes = []
-
- with torch.no_grad():
- # run P-Net on different scales
- for s in scales:
- boxes = run_first_stage(image, self.pnet, scale=s, threshold=thresholds[0])
- bounding_boxes.append(boxes)
-
- # collect boxes (and offsets, and scores) from different scales
- bounding_boxes = [i for i in bounding_boxes if i is not None]
- bounding_boxes = np.vstack(bounding_boxes)
-
- keep = nms(bounding_boxes[:, 0:5], nms_thresholds[0])
- bounding_boxes = bounding_boxes[keep]
-
- # use offsets predicted by pnet to transform bounding boxes
- bounding_boxes = calibrate_box(bounding_boxes[:, 0:5], bounding_boxes[:, 5:])
- # shape [n_boxes, 5]
-
- bounding_boxes = convert_to_square(bounding_boxes)
- bounding_boxes[:, 0:4] = np.round(bounding_boxes[:, 0:4])
-
- # STAGE 2
-
- img_boxes = get_image_boxes(bounding_boxes, image, size=24)
- img_boxes = torch.FloatTensor(img_boxes).to(device)
-
- output = self.rnet(img_boxes)
- offsets = output[0].cpu().data.numpy() # shape [n_boxes, 4]
- probs = output[1].cpu().data.numpy() # shape [n_boxes, 2]
-
- keep = np.where(probs[:, 1] > thresholds[1])[0]
- bounding_boxes = bounding_boxes[keep]
- bounding_boxes[:, 4] = probs[keep, 1].reshape((-1,))
- offsets = offsets[keep]
-
- keep = nms(bounding_boxes, nms_thresholds[1])
- bounding_boxes = bounding_boxes[keep]
- bounding_boxes = calibrate_box(bounding_boxes, offsets[keep])
- bounding_boxes = convert_to_square(bounding_boxes)
- bounding_boxes[:, 0:4] = np.round(bounding_boxes[:, 0:4])
-
- # STAGE 3
-
- img_boxes = get_image_boxes(bounding_boxes, image, size=48)
- if len(img_boxes) == 0:
- return [], []
- img_boxes = torch.FloatTensor(img_boxes).to(device)
- output = self.onet(img_boxes)
- landmarks = output[0].cpu().data.numpy() # shape [n_boxes, 10]
- offsets = output[1].cpu().data.numpy() # shape [n_boxes, 4]
- probs = output[2].cpu().data.numpy() # shape [n_boxes, 2]
-
- keep = np.where(probs[:, 1] > thresholds[2])[0]
- bounding_boxes = bounding_boxes[keep]
- bounding_boxes[:, 4] = probs[keep, 1].reshape((-1,))
- offsets = offsets[keep]
- landmarks = landmarks[keep]
-
- # compute landmark points
- width = bounding_boxes[:, 2] - bounding_boxes[:, 0] + 1.0
- height = bounding_boxes[:, 3] - bounding_boxes[:, 1] + 1.0
- xmin, ymin = bounding_boxes[:, 0], bounding_boxes[:, 1]
- landmarks[:, 0:5] = np.expand_dims(xmin, 1) + np.expand_dims(width, 1) * landmarks[:, 0:5]
- landmarks[:, 5:10] = np.expand_dims(ymin, 1) + np.expand_dims(height, 1) * landmarks[:, 5:10]
-
- bounding_boxes = calibrate_box(bounding_boxes, offsets)
- keep = nms(bounding_boxes, nms_thresholds[2], mode='min')
- bounding_boxes = bounding_boxes[keep]
- landmarks = landmarks[keep]
-
- return bounding_boxes, landmarks
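-
-
-# --- Illustrative usage sketch (added note, not part of the original file) ---
-# Assumes a CUDA device (the module hard-codes 'cuda:0') and a local image
-# named "face.jpg"; both are hypothetical placeholders.
-if __name__ == '__main__':
-    img = Image.open('face.jpg').convert('RGB')
-    mtcnn = MTCNN()
-    aligned, tfm = mtcnn.align(img)
-    if aligned is None:
-        print('no face detected')
-    else:
-        aligned.save('face_aligned.jpg')  # 112x112 aligned crop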
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/rotation_conversions.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/rotation_conversions.py
deleted file mode 100644
index 1006e8a3117b231a7a456d5b826e76347fe0bfd4..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/rotation_conversions.py
+++ /dev/null
@@ -1,532 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
-# Check PYTORCH3D_LICENCE before use
-
-import functools
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-
-
-"""
-The transformation matrices returned from the functions in this file assume
-the points on which the transformation will be applied are column vectors.
-i.e. the R matrix is structured as
- R = [
- [Rxx, Rxy, Rxz],
- [Ryx, Ryy, Ryz],
- [Rzx, Rzy, Rzz],
- ] # (3, 3)
-This matrix can be applied to column vectors by post multiplication
-by the points e.g.
- points = [[0], [1], [2]] # (3 x 1) xyz coordinates of a point
- transformed_points = R * points
-To apply the same matrix to points which are row vectors, the R matrix
-can be transposed and pre multiplied by the points:
-e.g.
- points = [[0, 1, 2]] # (1 x 3) xyz coordinates of a point
- transformed_points = points * R.transpose(1, 0)
-"""
-
-
-def quaternion_to_matrix(quaternions):
- """
- Convert rotations given as quaternions to rotation matrices.
- Args:
- quaternions: quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- r, i, j, k = torch.unbind(quaternions, -1)
- two_s = 2.0 / (quaternions * quaternions).sum(-1)
-
- o = torch.stack(
- (
- 1 - two_s * (j * j + k * k),
- two_s * (i * j - k * r),
- two_s * (i * k + j * r),
- two_s * (i * j + k * r),
- 1 - two_s * (i * i + k * k),
- two_s * (j * k - i * r),
- two_s * (i * k - j * r),
- two_s * (j * k + i * r),
- 1 - two_s * (i * i + j * j),
- ),
- -1,
- )
- return o.reshape(quaternions.shape[:-1] + (3, 3))
-
-
-def _copysign(a, b):
- """
-    Return a tensor where each element has the absolute value taken from the
-    corresponding element of a, with sign taken from the corresponding
- element of b. This is like the standard copysign floating-point operation,
- but is not careful about negative 0 and NaN.
- Args:
- a: source tensor.
- b: tensor whose signs will be used, of the same shape as a.
- Returns:
- Tensor of the same shape as a with the signs of b.
- """
- signs_differ = (a < 0) != (b < 0)
- return torch.where(signs_differ, -a, a)
-
-
-def _sqrt_positive_part(x):
- """
- Returns torch.sqrt(torch.max(0, x))
- but with a zero subgradient where x is 0.
- """
- ret = torch.zeros_like(x)
- positive_mask = x > 0
- ret[positive_mask] = torch.sqrt(x[positive_mask])
- return ret
-
-
-def matrix_to_quaternion(matrix):
- """
- Convert rotations given as rotation matrices to quaternions.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- Returns:
- quaternions with real part first, as tensor of shape (..., 4).
- """
- if matrix.size(-1) != 3 or matrix.size(-2) != 3:
-        raise ValueError(f"Invalid rotation matrix shape {matrix.shape}.")
- m00 = matrix[..., 0, 0]
- m11 = matrix[..., 1, 1]
- m22 = matrix[..., 2, 2]
- o0 = 0.5 * _sqrt_positive_part(1 + m00 + m11 + m22)
- x = 0.5 * _sqrt_positive_part(1 + m00 - m11 - m22)
- y = 0.5 * _sqrt_positive_part(1 - m00 + m11 - m22)
- z = 0.5 * _sqrt_positive_part(1 - m00 - m11 + m22)
- o1 = _copysign(x, matrix[..., 2, 1] - matrix[..., 1, 2])
- o2 = _copysign(y, matrix[..., 0, 2] - matrix[..., 2, 0])
- o3 = _copysign(z, matrix[..., 1, 0] - matrix[..., 0, 1])
- return torch.stack((o0, o1, o2, o3), -1)
-
-
-def _axis_angle_rotation(axis: str, angle):
- """
-    Return the rotation matrices for rotations about a single axis of an
-    Euler-angle convention, one matrix for each value of the angle given.
-    Args:
-        axis: Axis label "X", "Y", or "Z".
-        angle: Euler angles in radians, as a tensor of any shape.
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
-
- cos = torch.cos(angle)
- sin = torch.sin(angle)
- one = torch.ones_like(angle)
- zero = torch.zeros_like(angle)
-
- if axis == "X":
- R_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos)
- if axis == "Y":
- R_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos)
- if axis == "Z":
- R_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one)
-
- return torch.stack(R_flat, -1).reshape(angle.shape + (3, 3))
-
-
-def euler_angles_to_matrix(euler_angles, convention: str):
- """
- Convert rotations given as Euler angles in radians to rotation matrices.
- Args:
- euler_angles: Euler angles in radians as tensor of shape (..., 3).
- convention: Convention string of three uppercase letters from
- {"X", "Y", and "Z"}.
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- if euler_angles.dim() == 0 or euler_angles.shape[-1] != 3:
- raise ValueError("Invalid input euler angles.")
- if len(convention) != 3:
- raise ValueError("Convention must have 3 letters.")
- if convention[1] in (convention[0], convention[2]):
- raise ValueError(f"Invalid convention {convention}.")
- for letter in convention:
- if letter not in ("X", "Y", "Z"):
- raise ValueError(f"Invalid letter {letter} in convention string.")
- matrices = map(_axis_angle_rotation, convention, torch.unbind(euler_angles, -1))
- return functools.reduce(torch.matmul, matrices)
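-
-
-def _euler_convention_example():
-    # Illustrative sketch (not part of the original file): a round trip through
-    # the "XYZ" convention; matrix_to_euler_angles is defined further below.
-    angles = torch.tensor([0.1, 0.2, 0.3])     # radians
-    R = euler_angles_to_matrix(angles, "XYZ")  # (3, 3)
-    recovered = matrix_to_euler_angles(R, "XYZ")
-    assert torch.allclose(recovered, angles, atol=1e-5)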
-
-
-def _angle_from_tan(
- axis: str, other_axis: str, data, horizontal: bool, tait_bryan: bool
-):
- """
- Extract the first or third Euler angle from the two members of
- the matrix which are positive constant times its sine and cosine.
- Args:
-        axis: Axis label "X", "Y", or "Z" for the angle we are finding.
-        other_axis: Axis label "X", "Y", or "Z" for the middle axis in the
- convention.
- data: Rotation matrices as tensor of shape (..., 3, 3).
- horizontal: Whether we are looking for the angle for the third axis,
- which means the relevant entries are in the same row of the
- rotation matrix. If not, they are in the same column.
- tait_bryan: Whether the first and third axes in the convention differ.
- Returns:
- Euler Angles in radians for each matrix in data as a tensor
- of shape (...).
- """
-
- i1, i2 = {"X": (2, 1), "Y": (0, 2), "Z": (1, 0)}[axis]
- if horizontal:
- i2, i1 = i1, i2
- even = (axis + other_axis) in ["XY", "YZ", "ZX"]
- if horizontal == even:
- return torch.atan2(data[..., i1], data[..., i2])
- if tait_bryan:
- return torch.atan2(-data[..., i2], data[..., i1])
- return torch.atan2(data[..., i2], -data[..., i1])
-
-
-def _index_from_letter(letter: str):
- if letter == "X":
- return 0
- if letter == "Y":
- return 1
- if letter == "Z":
- return 2
-
-
-def matrix_to_euler_angles(matrix, convention: str):
- """
- Convert rotations given as rotation matrices to Euler angles in radians.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- convention: Convention string of three uppercase letters.
- Returns:
- Euler angles in radians as tensor of shape (..., 3).
- """
- if len(convention) != 3:
- raise ValueError("Convention must have 3 letters.")
- if convention[1] in (convention[0], convention[2]):
- raise ValueError(f"Invalid convention {convention}.")
- for letter in convention:
- if letter not in ("X", "Y", "Z"):
- raise ValueError(f"Invalid letter {letter} in convention string.")
- if matrix.size(-1) != 3 or matrix.size(-2) != 3:
-        raise ValueError(f"Invalid rotation matrix shape {matrix.shape}.")
- i0 = _index_from_letter(convention[0])
- i2 = _index_from_letter(convention[2])
- tait_bryan = i0 != i2
- if tait_bryan:
- central_angle = torch.asin(
- matrix[..., i0, i2] * (-1.0 if i0 - i2 in [-1, 2] else 1.0)
- )
- else:
- central_angle = torch.acos(matrix[..., i0, i0])
-
- o = (
- _angle_from_tan(
- convention[0], convention[1], matrix[..., i2], False, tait_bryan
- ),
- central_angle,
- _angle_from_tan(
- convention[2], convention[1], matrix[..., i0, :], True, tait_bryan
- ),
- )
- return torch.stack(o, -1)
-
-
-def random_quaternions(
- n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate random quaternions representing rotations,
- i.e. versors with nonnegative real part.
- Args:
- n: Number of quaternions in a batch to return.
- dtype: Type to return.
- device: Desired device of returned tensor. Default:
- uses the current device for the default tensor type.
- requires_grad: Whether the resulting tensor should have the gradient
- flag set.
- Returns:
- Quaternions as tensor of shape (N, 4).
- """
- o = torch.randn((n, 4), dtype=dtype, device=device, requires_grad=requires_grad)
- s = (o * o).sum(1)
- o = o / _copysign(torch.sqrt(s), o[:, 0])[:, None]
- return o
-
-
-def random_rotations(
- n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate random rotations as 3x3 rotation matrices.
- Args:
- n: Number of rotation matrices in a batch to return.
- dtype: Type to return.
- device: Device of returned tensor. Default: if None,
- uses the current device for the default tensor type.
- requires_grad: Whether the resulting tensor should have the gradient
- flag set.
- Returns:
- Rotation matrices as tensor of shape (n, 3, 3).
- """
- quaternions = random_quaternions(
- n, dtype=dtype, device=device, requires_grad=requires_grad
- )
- return quaternion_to_matrix(quaternions)
-
-
-def random_rotation(
- dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate a single random 3x3 rotation matrix.
- Args:
- dtype: Type to return
- device: Device of returned tensor. Default: if None,
- uses the current device for the default tensor type
- requires_grad: Whether the resulting tensor should have the gradient
- flag set
- Returns:
- Rotation matrix as tensor of shape (3, 3).
- """
- return random_rotations(1, dtype, device, requires_grad)[0]
-
-
-def standardize_quaternion(quaternions):
- """
- Convert a unit quaternion to a standard form: one in which the real
-    part is non-negative.
- Args:
- quaternions: Quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Standardized quaternions as tensor of shape (..., 4).
- """
- return torch.where(quaternions[..., 0:1] < 0, -quaternions, quaternions)
-
-
-def quaternion_raw_multiply(a, b):
- """
- Multiply two quaternions.
- Usual torch rules for broadcasting apply.
- Args:
- a: Quaternions as tensor of shape (..., 4), real part first.
- b: Quaternions as tensor of shape (..., 4), real part first.
- Returns:
- The product of a and b, a tensor of quaternions shape (..., 4).
- """
- aw, ax, ay, az = torch.unbind(a, -1)
- bw, bx, by, bz = torch.unbind(b, -1)
- ow = aw * bw - ax * bx - ay * by - az * bz
- ox = aw * bx + ax * bw + ay * bz - az * by
- oy = aw * by - ax * bz + ay * bw + az * bx
- oz = aw * bz + ax * by - ay * bx + az * bw
- return torch.stack((ow, ox, oy, oz), -1)
-
-
-def quaternion_multiply(a, b):
- """
- Multiply two quaternions representing rotations, returning the quaternion
- representing their composition, i.e. the versor with nonnegative real part.
- Usual torch rules for broadcasting apply.
- Args:
- a: Quaternions as tensor of shape (..., 4), real part first.
- b: Quaternions as tensor of shape (..., 4), real part first.
- Returns:
- The product of a and b, a tensor of quaternions of shape (..., 4).
- """
- ab = quaternion_raw_multiply(a, b)
- return standardize_quaternion(ab)
-
-
-def quaternion_invert(quaternion):
- """
- Given a quaternion representing rotation, get the quaternion representing
- its inverse.
- Args:
- quaternion: Quaternions as tensor of shape (..., 4), with real part
- first, which must be versors (unit quaternions).
- Returns:
- The inverse, a tensor of quaternions of shape (..., 4).
- """
-
- return quaternion * quaternion.new_tensor([1, -1, -1, -1])
-
-
-def quaternion_apply(quaternion, point):
- """
- Apply the rotation given by a quaternion to a 3D point.
- Usual torch rules for broadcasting apply.
- Args:
- quaternion: Tensor of quaternions, real part first, of shape (..., 4).
- point: Tensor of 3D points of shape (..., 3).
- Returns:
- Tensor of rotated points of shape (..., 3).
- """
- if point.size(-1) != 3:
-        raise ValueError(f"Points are not in 3D, {point.shape}.")
- real_parts = point.new_zeros(point.shape[:-1] + (1,))
- point_as_quaternion = torch.cat((real_parts, point), -1)
- out = quaternion_raw_multiply(
- quaternion_raw_multiply(quaternion, point_as_quaternion),
- quaternion_invert(quaternion),
- )
- return out[..., 1:]
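-
-
-def _quaternion_apply_example():
-    # Illustrative sketch (not part of the original file): a 90-degree rotation
-    # about the z-axis, written as (cos(a/2), 0, 0, sin(a/2)), maps the x-axis
-    # onto the y-axis.
-    import math
-    q = torch.tensor([math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)])
-    rotated = quaternion_apply(q, torch.tensor([1.0, 0.0, 0.0]))
-    assert torch.allclose(rotated, torch.tensor([0.0, 1.0, 0.0]), atol=1e-6)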
-
-
-def axis_angle_to_matrix(axis_angle):
- """
- Convert rotations given as axis/angle to rotation matrices.
- Args:
- axis_angle: Rotations given as a vector in axis angle form,
- as a tensor of shape (..., 3), where the magnitude is
- the angle turned anticlockwise in radians around the
- vector's direction.
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- return quaternion_to_matrix(axis_angle_to_quaternion(axis_angle))
-
-
-def matrix_to_axis_angle(matrix):
- """
- Convert rotations given as rotation matrices to axis/angle.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- Returns:
- Rotations given as a vector in axis angle form, as a tensor
- of shape (..., 3), where the magnitude is the angle
- turned anticlockwise in radians around the vector's
- direction.
- """
- return quaternion_to_axis_angle(matrix_to_quaternion(matrix))
-
-
-def axis_angle_to_quaternion(axis_angle):
- """
- Convert rotations given as axis/angle to quaternions.
- Args:
- axis_angle: Rotations given as a vector in axis angle form,
- as a tensor of shape (..., 3), where the magnitude is
- the angle turned anticlockwise in radians around the
- vector's direction.
- Returns:
- quaternions with real part first, as tensor of shape (..., 4).
- """
- angles = torch.norm(axis_angle, p=2, dim=-1, keepdim=True)
- half_angles = 0.5 * angles
- eps = 1e-6
- small_angles = angles.abs() < eps
- sin_half_angles_over_angles = torch.empty_like(angles)
- sin_half_angles_over_angles[~small_angles] = (
- torch.sin(half_angles[~small_angles]) / angles[~small_angles]
- )
- # for x small, sin(x/2) is about x/2 - (x/2)^3/6
- # so sin(x/2)/x is about 1/2 - (x*x)/48
- sin_half_angles_over_angles[small_angles] = (
- 0.5 - (angles[small_angles] * angles[small_angles]) / 48
- )
- quaternions = torch.cat(
- [torch.cos(half_angles), axis_angle * sin_half_angles_over_angles], dim=-1
- )
- return quaternions
-
-
-def quaternion_to_axis_angle(quaternions):
- """
- Convert rotations given as quaternions to axis/angle.
- Args:
- quaternions: quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Rotations given as a vector in axis angle form, as a tensor
- of shape (..., 3), where the magnitude is the angle
- turned anticlockwise in radians around the vector's
- direction.
- """
- norms = torch.norm(quaternions[..., 1:], p=2, dim=-1, keepdim=True)
- half_angles = torch.atan2(norms, quaternions[..., :1])
- angles = 2 * half_angles
- eps = 1e-6
- small_angles = angles.abs() < eps
- sin_half_angles_over_angles = torch.empty_like(angles)
- sin_half_angles_over_angles[~small_angles] = (
- torch.sin(half_angles[~small_angles]) / angles[~small_angles]
- )
- # for x small, sin(x/2) is about x/2 - (x/2)^3/6
- # so sin(x/2)/x is about 1/2 - (x*x)/48
- sin_half_angles_over_angles[small_angles] = (
- 0.5 - (angles[small_angles] * angles[small_angles]) / 48
- )
- return quaternions[..., 1:] / sin_half_angles_over_angles
-
-
-def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor:
- """
- Converts 6D rotation representation by Zhou et al. [1] to rotation matrix
- using Gram--Schmidt orthogonalisation per Section B of [1].
- Args:
- d6: 6D rotation representation, of size (*, 6)
- Returns:
- batch of rotation matrices of size (*, 3, 3)
- [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
- On the Continuity of Rotation Representations in Neural Networks.
- IEEE Conference on Computer Vision and Pattern Recognition, 2019.
- Retrieved from http://arxiv.org/abs/1812.07035
- """
-
- a1, a2 = d6[..., :3], d6[..., 3:]
- b1 = F.normalize(a1, dim=-1)
- b2 = a2 - (b1 * a2).sum(-1, keepdim=True) * b1
- b2 = F.normalize(b2, dim=-1)
- b3 = torch.cross(b1, b2, dim=-1)
- return torch.stack((b1, b2, b3), dim=-2)
-
-
-def matrix_to_rotation_6d(matrix: torch.Tensor) -> torch.Tensor:
- """
- Converts rotation matrices to 6D rotation representation by Zhou et al. [1]
- by dropping the last row. Note that 6D representation is not unique.
- Args:
- matrix: batch of rotation matrices of size (*, 3, 3)
- Returns:
- 6D rotation representation, of size (*, 6)
- [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
- On the Continuity of Rotation Representations in Neural Networks.
- IEEE Conference on Computer Vision and Pattern Recognition, 2019.
- Retrieved from http://arxiv.org/abs/1812.07035
- """
- return matrix[..., :2, :].clone().reshape(*matrix.size()[:-2], 6)
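-
-
-def _rotation_6d_example():
-    # Illustrative sketch (not part of the original file): the 6D representation
-    # round-trips (up to float error) through a rotation matrix.
-    R = random_rotations(4)          # (4, 3, 3)
-    d6 = matrix_to_rotation_6d(R)    # (4, 6)
-    assert torch.allclose(rotation_6d_to_matrix(d6), R, atol=1e-5)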
-
-def canonicalize_smplh(poses, trans = None):
- bs, nframes, njoints = poses.shape[:3]
-
- global_orient = poses[:, :, 0]
-
- # first global rotations
- rot2d = matrix_to_axis_angle(global_orient[:, 0])
- #rot2d[:, :2] = 0 # Remove the rotation along the vertical axis
- rot2d = axis_angle_to_matrix(rot2d)
-
- # Rotate the global rotation to eliminate Z rotations
- global_orient = torch.einsum("ikj,imkl->imjl", rot2d, global_orient)
-
- # Construct canonicalized version of x
- xc = torch.cat((global_orient[:, :, None], poses[:, :, 1:]), dim=2)
-
- if trans is not None:
- vel = trans[:, 1:] - trans[:, :-1]
- # Turn the translation as well
- vel = torch.einsum("ikj,ilk->ilj", rot2d, vel)
- trans = torch.cat((torch.zeros(bs, 1, 3, device=vel.device),
- torch.cumsum(vel, 1)), 1)
- return xc, trans
- else:
- return xc
-
-
\ No newline at end of file
diff --git a/spaces/AIFILMS/image-to-sound-fx/README.md b/spaces/AIFILMS/image-to-sound-fx/README.md
deleted file mode 100644
index 3e3cce556677dac2d274b16fb305c6664e8af132..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/image-to-sound-fx/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Image To Sound FX
-emoji: 👁👂
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.17.1b2
-app_file: app.py
-pinned: false
-duplicated_from: fffiloni/image-to-sound-fx
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/ssim.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/ssim.py
deleted file mode 100644
index 0d0241f267ef58b24979e022b05f2a9adf768826..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/ssim.py
+++ /dev/null
@@ -1,391 +0,0 @@
-# '''
-# https://github.com/One-sixth/ms_ssim_pytorch/blob/master/ssim.py
-# '''
-#
-# import torch
-# import torch.jit
-# import torch.nn.functional as F
-#
-#
-# @torch.jit.script
-# def create_window(window_size: int, sigma: float, channel: int):
-# '''
-# Create 1-D gauss kernel
-# :param window_size: the size of gauss kernel
-# :param sigma: sigma of normal distribution
-# :param channel: input channel
-# :return: 1D kernel
-# '''
-# coords = torch.arange(window_size, dtype=torch.float)
-# coords -= window_size // 2
-#
-# g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
-# g /= g.sum()
-#
-# g = g.reshape(1, 1, 1, -1).repeat(channel, 1, 1, 1)
-# return g
-#
-#
-# @torch.jit.script
-# def _gaussian_filter(x, window_1d, use_padding: bool):
-# '''
-# Blur input with 1-D kernel
-# :param x: batch of tensors to be blured
-# :param window_1d: 1-D gauss kernel
-# :param use_padding: padding image before conv
-# :return: blured tensors
-# '''
-# C = x.shape[1]
-# padding = 0
-# if use_padding:
-# window_size = window_1d.shape[3]
-# padding = window_size // 2
-# out = F.conv2d(x, window_1d, stride=1, padding=(0, padding), groups=C)
-# out = F.conv2d(out, window_1d.transpose(2, 3), stride=1, padding=(padding, 0), groups=C)
-# return out
-#
-#
-# @torch.jit.script
-# def ssim(X, Y, window, data_range: float, use_padding: bool = False):
-# '''
-# Calculate ssim index for X and Y
-# :param X: images [B, C, H, N_bins]
-# :param Y: images [B, C, H, N_bins]
-# :param window: 1-D gauss kernel
-# :param data_range: value range of input images. (usually 1.0 or 255)
-# :param use_padding: padding image before conv
-# :return:
-# '''
-#
-# K1 = 0.01
-# K2 = 0.03
-# compensation = 1.0
-#
-# C1 = (K1 * data_range) ** 2
-# C2 = (K2 * data_range) ** 2
-#
-# mu1 = _gaussian_filter(X, window, use_padding)
-# mu2 = _gaussian_filter(Y, window, use_padding)
-# sigma1_sq = _gaussian_filter(X * X, window, use_padding)
-# sigma2_sq = _gaussian_filter(Y * Y, window, use_padding)
-# sigma12 = _gaussian_filter(X * Y, window, use_padding)
-#
-# mu1_sq = mu1.pow(2)
-# mu2_sq = mu2.pow(2)
-# mu1_mu2 = mu1 * mu2
-#
-# sigma1_sq = compensation * (sigma1_sq - mu1_sq)
-# sigma2_sq = compensation * (sigma2_sq - mu2_sq)
-# sigma12 = compensation * (sigma12 - mu1_mu2)
-#
-# cs_map = (2 * sigma12 + C2) / (sigma1_sq + sigma2_sq + C2)
-# # Fixed the issue that the negative value of cs_map caused ms_ssim to output Nan.
-# cs_map = cs_map.clamp_min(0.)
-# ssim_map = ((2 * mu1_mu2 + C1) / (mu1_sq + mu2_sq + C1)) * cs_map
-#
-# ssim_val = ssim_map.mean(dim=(1, 2, 3)) # reduce along CHW
-# cs = cs_map.mean(dim=(1, 2, 3))
-#
-# return ssim_val, cs
-#
-#
-# @torch.jit.script
-# def ms_ssim(X, Y, window, data_range: float, weights, use_padding: bool = False, eps: float = 1e-8):
-# '''
-# interface of ms-ssim
-# :param X: a batch of images, (N,C,H,W)
-# :param Y: a batch of images, (N,C,H,W)
-# :param window: 1-D gauss kernel
-# :param data_range: value range of input images. (usually 1.0 or 255)
-# :param weights: weights for different levels
-# :param use_padding: padding image before conv
-#     :param eps: used to avoid NaN gradients.
-# :return:
-# '''
-# levels = weights.shape[0]
-# cs_vals = []
-# ssim_vals = []
-# for _ in range(levels):
-# ssim_val, cs = ssim(X, Y, window=window, data_range=data_range, use_padding=use_padding)
-#         # Used to fix an issue: when c = a ** b and a is 0, c.backward() makes a.grad become inf.
-# ssim_val = ssim_val.clamp_min(eps)
-# cs = cs.clamp_min(eps)
-# cs_vals.append(cs)
-#
-# ssim_vals.append(ssim_val)
-# padding = (X.shape[2] % 2, X.shape[3] % 2)
-# X = F.avg_pool2d(X, kernel_size=2, stride=2, padding=padding)
-# Y = F.avg_pool2d(Y, kernel_size=2, stride=2, padding=padding)
-#
-# cs_vals = torch.stack(cs_vals, dim=0)
-# ms_ssim_val = torch.prod((cs_vals[:-1] ** weights[:-1].unsqueeze(1)) * (ssim_vals[-1] ** weights[-1]), dim=0)
-# return ms_ssim_val
-#
-#
-# class SSIM(torch.jit.ScriptModule):
-# __constants__ = ['data_range', 'use_padding']
-#
-# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False):
-# '''
-# :param window_size: the size of gauss kernel
-# :param window_sigma: sigma of normal distribution
-# :param data_range: value range of input images. (usually 1.0 or 255)
-# :param channel: input channels (default: 3)
-# :param use_padding: padding image before conv
-# '''
-# super().__init__()
-# assert window_size % 2 == 1, 'Window size must be odd.'
-# window = create_window(window_size, window_sigma, channel)
-# self.register_buffer('window', window)
-# self.data_range = data_range
-# self.use_padding = use_padding
-#
-# @torch.jit.script_method
-# def forward(self, X, Y):
-# r = ssim(X, Y, window=self.window, data_range=self.data_range, use_padding=self.use_padding)
-# return r[0]
-#
-#
-# class MS_SSIM(torch.jit.ScriptModule):
-# __constants__ = ['data_range', 'use_padding', 'eps']
-#
-# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False, weights=None,
-# levels=None, eps=1e-8):
-# '''
-# class for ms-ssim
-# :param window_size: the size of gauss kernel
-# :param window_sigma: sigma of normal distribution
-# :param data_range: value range of input images. (usually 1.0 or 255)
-# :param channel: input channels
-# :param use_padding: padding image before conv
-# :param weights: weights for different levels. (default [0.0448, 0.2856, 0.3001, 0.2363, 0.1333])
-# :param levels: number of downsampling
-#         :param eps: used to fix an issue: when c = a ** b and a is 0, c.backward() makes a.grad become inf.
-# '''
-# super().__init__()
-# assert window_size % 2 == 1, 'Window size must be odd.'
-# self.data_range = data_range
-# self.use_padding = use_padding
-# self.eps = eps
-#
-# window = create_window(window_size, window_sigma, channel)
-# self.register_buffer('window', window)
-#
-# if weights is None:
-# weights = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]
-# weights = torch.tensor(weights, dtype=torch.float)
-#
-# if levels is not None:
-# weights = weights[:levels]
-# weights = weights / weights.sum()
-#
-# self.register_buffer('weights', weights)
-#
-# @torch.jit.script_method
-# def forward(self, X, Y):
-# return ms_ssim(X, Y, window=self.window, data_range=self.data_range, weights=self.weights,
-# use_padding=self.use_padding, eps=self.eps)
-#
-#
-# if __name__ == '__main__':
-# print('Simple Test')
-# im = torch.randint(0, 255, (5, 3, 256, 256), dtype=torch.float, device='cuda')
-# img1 = im / 255
-# img2 = img1 * 0.5
-#
-# losser = SSIM(data_range=1.).cuda()
-# loss = losser(img1, img2).mean()
-#
-# losser2 = MS_SSIM(data_range=1.).cuda()
-# loss2 = losser2(img1, img2).mean()
-#
-# print(loss.item())
-# print(loss2.item())
-#
-# if __name__ == '__main__':
-# print('Training Test')
-# import cv2
-# import torch.optim
-# import numpy as np
-# import imageio
-# import time
-#
-# out_test_video = False
-#     # Better not to write a GIF directly (it gets very large); write an MKV first and convert it to GIF with ffmpeg.
-# video_use_gif = False
-#
-# im = cv2.imread('test_img1.jpg', 1)
-# t_im = torch.from_numpy(im).cuda().permute(2, 0, 1).float()[None] / 255.
-#
-# if out_test_video:
-# if video_use_gif:
-# fps = 0.5
-# out_wh = (im.shape[1] // 2, im.shape[0] // 2)
-# suffix = '.gif'
-# else:
-# fps = 5
-# out_wh = (im.shape[1], im.shape[0])
-# suffix = '.mkv'
-# video_last_time = time.perf_counter()
-# video = imageio.get_writer('ssim_test' + suffix, fps=fps)
-#
-#     # test SSIM
-# print('Training SSIM')
-# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255.
-# rand_im.requires_grad = True
-# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8)
-# losser = SSIM(data_range=1., channel=t_im.shape[1]).cuda()
-# ssim_score = 0
-# while ssim_score < 0.999:
-# optim.zero_grad()
-# loss = losser(rand_im, t_im)
-# (-loss).sum().backward()
-# ssim_score = loss.item()
-# optim.step()
-# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0]
-# r_im = cv2.putText(r_im, 'ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2)
-#
-# if out_test_video:
-# if time.perf_counter() - video_last_time > 1. / fps:
-# video_last_time = time.perf_counter()
-# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB)
-# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA)
-# if isinstance(out_frame, cv2.UMat):
-# out_frame = out_frame.get()
-# video.append_data(out_frame)
-#
-# cv2.imshow('ssim', r_im)
-# cv2.setWindowTitle('ssim', 'ssim %f' % ssim_score)
-# cv2.waitKey(1)
-#
-# if out_test_video:
-# video.close()
-#
-#     # test MS-SSIM
-# if out_test_video:
-# if video_use_gif:
-# fps = 0.5
-# out_wh = (im.shape[1] // 2, im.shape[0] // 2)
-# suffix = '.gif'
-# else:
-# fps = 5
-# out_wh = (im.shape[1], im.shape[0])
-# suffix = '.mkv'
-# video_last_time = time.perf_counter()
-# video = imageio.get_writer('ms_ssim_test' + suffix, fps=fps)
-#
-# print('Training MS_SSIM')
-# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255.
-# rand_im.requires_grad = True
-# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8)
-# losser = MS_SSIM(data_range=1., channel=t_im.shape[1]).cuda()
-# ssim_score = 0
-# while ssim_score < 0.999:
-# optim.zero_grad()
-# loss = losser(rand_im, t_im)
-# (-loss).sum().backward()
-# ssim_score = loss.item()
-# optim.step()
-# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0]
-# r_im = cv2.putText(r_im, 'ms_ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2)
-#
-# if out_test_video:
-# if time.perf_counter() - video_last_time > 1. / fps:
-# video_last_time = time.perf_counter()
-# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB)
-# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA)
-# if isinstance(out_frame, cv2.UMat):
-# out_frame = out_frame.get()
-# video.append_data(out_frame)
-#
-# cv2.imshow('ms_ssim', r_im)
-# cv2.setWindowTitle('ms_ssim', 'ms_ssim %f' % ssim_score)
-# cv2.waitKey(1)
-#
-# if out_test_video:
-# video.close()
-
-"""
-Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim
-"""
-
-import torch
-import torch.nn.functional as F
-from torch.autograd import Variable
-import numpy as np
-from math import exp
-
-
-def gaussian(window_size, sigma):
- gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)])
- return gauss / gauss.sum()
-
-
-def create_window(window_size, channel):
- _1D_window = gaussian(window_size, 1.5).unsqueeze(1)
- _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
- window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous())
- return window
-
-
-def _ssim(img1, img2, window, window_size, channel, size_average=True):
- mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel)
- mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel)
-
- mu1_sq = mu1.pow(2)
- mu2_sq = mu2.pow(2)
- mu1_mu2 = mu1 * mu2
-
- sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq
- sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq
- sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2
-
- C1 = 0.01 ** 2
- C2 = 0.03 ** 2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
-
- if size_average:
- return ssim_map.mean()
- else:
- return ssim_map.mean(1)
-
-
-class SSIM(torch.nn.Module):
- def __init__(self, window_size=11, size_average=True):
- super(SSIM, self).__init__()
- self.window_size = window_size
- self.size_average = size_average
- self.channel = 1
- self.window = create_window(window_size, self.channel)
-
- def forward(self, img1, img2):
- (_, channel, _, _) = img1.size()
-
- if channel == self.channel and self.window.data.type() == img1.data.type():
- window = self.window
- else:
- window = create_window(self.window_size, channel)
-
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
-
- self.window = window
- self.channel = channel
-
- return _ssim(img1, img2, window, self.window_size, channel, self.size_average)
-
-
-window = None
-
-
-def ssim(img1, img2, window_size=11, size_average=True):
- (_, channel, _, _) = img1.size()
- global window
- if window is None:
- window = create_window(window_size, channel)
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
- return _ssim(img1, img2, window, window_size, channel, size_average)
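-
-
-# --- Illustrative usage sketch (added note, not part of the original file) ---
-if __name__ == '__main__':
-    # Compare two random single-channel "spectrogram-like" batches on the CPU.
-    a = torch.rand(2, 1, 64, 64)
-    b = (a * 0.9).clamp(0, 1)
-    print('functional ssim:', ssim(a, b).item())
-    print('module ssim    :', SSIM(window_size=11)(a, b).item())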
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py
deleted file mode 100644
index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr
-from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/transforms.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/transforms.py
deleted file mode 100644
index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/transforms.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import numpy as np
-import cv2
-import math
-
-
-def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
-    """Resize the sample to ensure the given size. Keeps aspect ratio.
-
- Args:
- sample (dict): sample
- size (tuple): image size
-
- Returns:
- tuple: new size
- """
- shape = list(sample["disparity"].shape)
-
- if shape[0] >= size[0] and shape[1] >= size[1]:
- return sample
-
- scale = [0, 0]
- scale[0] = size[0] / shape[0]
- scale[1] = size[1] / shape[1]
-
- scale = max(scale)
-
- shape[0] = math.ceil(scale * shape[0])
- shape[1] = math.ceil(scale * shape[1])
-
- # resize
- sample["image"] = cv2.resize(
- sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method
- )
-
- sample["disparity"] = cv2.resize(
- sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST
- )
- sample["mask"] = cv2.resize(
- sample["mask"].astype(np.float32),
- tuple(shape[::-1]),
- interpolation=cv2.INTER_NEAREST,
- )
- sample["mask"] = sample["mask"].astype(bool)
-
- return tuple(shape)
-
-
-class Resize(object):
- """Resize sample to given size (width, height).
- """
-
- def __init__(
- self,
- width,
- height,
- resize_target=True,
- keep_aspect_ratio=False,
- ensure_multiple_of=1,
- resize_method="lower_bound",
- image_interpolation_method=cv2.INTER_AREA,
- ):
- """Init.
-
- Args:
- width (int): desired output width
- height (int): desired output height
- resize_target (bool, optional):
- True: Resize the full sample (image, mask, target).
- False: Resize image only.
- Defaults to True.
- keep_aspect_ratio (bool, optional):
- True: Keep the aspect ratio of the input sample.
- Output sample might not have the given width and height, and
- resize behaviour depends on the parameter 'resize_method'.
- Defaults to False.
- ensure_multiple_of (int, optional):
- Output width and height is constrained to be multiple of this parameter.
- Defaults to 1.
- resize_method (str, optional):
- "lower_bound": Output will be at least as large as the given size.
- "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.)
-                "minimal": Scale as little as possible. (Output size might be smaller than given size.)
- Defaults to "lower_bound".
- """
- self.__width = width
- self.__height = height
-
- self.__resize_target = resize_target
- self.__keep_aspect_ratio = keep_aspect_ratio
- self.__multiple_of = ensure_multiple_of
- self.__resize_method = resize_method
- self.__image_interpolation_method = image_interpolation_method
-
- def constrain_to_multiple_of(self, x, min_val=0, max_val=None):
- y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- if max_val is not None and y > max_val:
- y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- if y < min_val:
- y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- return y
-
- def get_size(self, width, height):
- # determine new height and width
- scale_height = self.__height / height
- scale_width = self.__width / width
-
- if self.__keep_aspect_ratio:
- if self.__resize_method == "lower_bound":
- # scale such that output size is lower bound
- if scale_width > scale_height:
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- elif self.__resize_method == "upper_bound":
- # scale such that output size is upper bound
- if scale_width < scale_height:
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- elif self.__resize_method == "minimal":
-                # scale as little as possible
- if abs(1 - scale_width) < abs(1 - scale_height):
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- else:
- raise ValueError(
- f"resize_method {self.__resize_method} not implemented"
- )
-
- if self.__resize_method == "lower_bound":
- new_height = self.constrain_to_multiple_of(
- scale_height * height, min_val=self.__height
- )
- new_width = self.constrain_to_multiple_of(
- scale_width * width, min_val=self.__width
- )
- elif self.__resize_method == "upper_bound":
- new_height = self.constrain_to_multiple_of(
- scale_height * height, max_val=self.__height
- )
- new_width = self.constrain_to_multiple_of(
- scale_width * width, max_val=self.__width
- )
- elif self.__resize_method == "minimal":
- new_height = self.constrain_to_multiple_of(scale_height * height)
- new_width = self.constrain_to_multiple_of(scale_width * width)
- else:
- raise ValueError(f"resize_method {self.__resize_method} not implemented")
-
- return (new_width, new_height)
-
- def __call__(self, sample):
- width, height = self.get_size(
- sample["image"].shape[1], sample["image"].shape[0]
- )
-
- # resize sample
- sample["image"] = cv2.resize(
- sample["image"],
- (width, height),
- interpolation=self.__image_interpolation_method,
- )
-
- if self.__resize_target:
- if "disparity" in sample:
- sample["disparity"] = cv2.resize(
- sample["disparity"],
- (width, height),
- interpolation=cv2.INTER_NEAREST,
- )
-
- if "depth" in sample:
- sample["depth"] = cv2.resize(
- sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST
- )
-
- sample["mask"] = cv2.resize(
- sample["mask"].astype(np.float32),
- (width, height),
- interpolation=cv2.INTER_NEAREST,
- )
- sample["mask"] = sample["mask"].astype(bool)
-
- return sample
-
-
-class NormalizeImage(object):
-    """Normalize image by the given mean and std.
- """
-
- def __init__(self, mean, std):
- self.__mean = mean
- self.__std = std
-
- def __call__(self, sample):
- sample["image"] = (sample["image"] - self.__mean) / self.__std
-
- return sample
-
-
-class PrepareForNet(object):
- """Prepare sample for usage as network input.
- """
-
- def __init__(self):
- pass
-
- def __call__(self, sample):
- image = np.transpose(sample["image"], (2, 0, 1))
- sample["image"] = np.ascontiguousarray(image).astype(np.float32)
-
- if "mask" in sample:
- sample["mask"] = sample["mask"].astype(np.float32)
- sample["mask"] = np.ascontiguousarray(sample["mask"])
-
- if "disparity" in sample:
- disparity = sample["disparity"].astype(np.float32)
- sample["disparity"] = np.ascontiguousarray(disparity)
-
- if "depth" in sample:
- depth = sample["depth"].astype(np.float32)
- sample["depth"] = np.ascontiguousarray(depth)
-
- return sample
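-
-
-# --- Illustrative usage sketch (added note, not part of the original file) ---
-# The input shape, target size and normalisation constants below are hypothetical.
-if __name__ == "__main__":
-    sample = {"image": np.random.rand(480, 640, 3).astype(np.float32)}
-    resize = Resize(384, 384, resize_target=False, keep_aspect_ratio=True,
-                    ensure_multiple_of=32, resize_method="upper_bound")
-    normalize = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-    prepare = PrepareForNet()
-    out = prepare(normalize(resize(sample)))
-    print(out["image"].shape)  # contiguous float32 CHW array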
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/vocoder_infer/base_vocoder.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/vocoder_infer/base_vocoder.py
deleted file mode 100644
index a332205b553a0a95b9529c78c1ab5e49099b5d41..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/vocoder_infer/base_vocoder.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import librosa
-from text_to_speech.utils.audio import librosa_wav2spec
-from text_to_speech.utils.commons.hparams import hparams
-import numpy as np
-
-REGISTERED_VOCODERS = {}
-
-
-def register_vocoder(name):
- def _f(cls):
- REGISTERED_VOCODERS[name] = cls
- return cls
-
- return _f
-
-
-def get_vocoder_cls(vocoder_name):
- return REGISTERED_VOCODERS.get(vocoder_name)
-
-
-class BaseVocoder:
- def spec2wav(self, mel):
- """
-
- :param mel: [T, 80]
- :return: wav: [T']
- """
-
- raise NotImplementedError
-
- @staticmethod
- def wav2spec(wav_fn):
- """
-
- :param wav_fn: str
- :return: wav, mel: [T, 80]
- """
- wav_spec_dict = librosa_wav2spec(wav_fn, fft_size=hparams['fft_size'],
- hop_size=hparams['hop_size'],
- win_length=hparams['win_size'],
- num_mels=hparams['audio_num_mel_bins'],
- fmin=hparams['fmin'],
- fmax=hparams['fmax'],
- sample_rate=hparams['audio_sample_rate'],
- loud_norm=hparams['loud_norm'])
- wav = wav_spec_dict['wav']
- mel = wav_spec_dict['mel']
- return wav, mel
-
- @staticmethod
- def wav2mfcc(wav_fn):
- fft_size = hparams['fft_size']
- hop_size = hparams['hop_size']
- win_length = hparams['win_size']
- sample_rate = hparams['audio_sample_rate']
- wav, _ = librosa.core.load(wav_fn, sr=sample_rate)
- mfcc = librosa.feature.mfcc(y=wav, sr=sample_rate, n_mfcc=13,
- n_fft=fft_size, hop_length=hop_size,
- win_length=win_length, pad_mode="constant", power=1.0)
- mfcc_delta = librosa.feature.delta(mfcc, order=1)
- mfcc_delta_delta = librosa.feature.delta(mfcc, order=2)
- mfcc = np.concatenate([mfcc, mfcc_delta, mfcc_delta_delta]).T
- return mfcc
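-
-
-# --- Illustrative usage sketch (added note, not part of the original file) ---
-# "identity_demo" is a made-up registry key; real vocoders in this codebase
-# register themselves the same way and implement spec2wav properly.
-@register_vocoder('identity_demo')
-class IdentityDemoVocoder(BaseVocoder):
-    def spec2wav(self, mel):
-        # Not a vocoder at all: collapse the mel bins per frame so the
-        # registry round trip below has something to return.
-        return np.asarray(mel).mean(-1)
-
-
-if __name__ == '__main__':
-    vocoder = get_vocoder_cls('identity_demo')()
-    print(vocoder.spec2wav(np.zeros((100, 80))).shape)  # (100,)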
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/clap.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/clap.py
deleted file mode 100644
index 3141e47ec7b7df2e3cb81d11582b4738a5d23c1a..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/clap.py
+++ /dev/null
@@ -1,89 +0,0 @@
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-from transformers import AutoModel
-from .audio import get_audio_encoder
-
-class Projection(nn.Module):
- def __init__(self, d_in: int, d_out: int, p: float=0.5) -> None:
- super().__init__()
- self.linear1 = nn.Linear(d_in, d_out, bias=False)
- self.linear2 = nn.Linear(d_out, d_out, bias=False)
- self.layer_norm = nn.LayerNorm(d_out)
- self.drop = nn.Dropout(p)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- embed1 = self.linear1(x)
- embed2 = self.drop(self.linear2(F.gelu(embed1)))
- embeds = self.layer_norm(embed1 + embed2)
- return embeds
-
-class AudioEncoder(nn.Module):
- def __init__(self, audioenc_name:str, d_in: int, d_out: int, sample_rate: int, window_size: int,
- hop_size: int, mel_bins: int, fmin: int, fmax: int, classes_num: int) -> None:
- super().__init__()
-
- audio_encoder = get_audio_encoder(audioenc_name)
-
- self.base = audio_encoder(
- sample_rate, window_size,
- hop_size, mel_bins, fmin, fmax,
- classes_num, d_in)
-
- self.projection = Projection(d_in, d_out)
-
- def forward(self, x):
- out_dict = self.base(x)
- audio_features, audio_classification_output = out_dict['embedding'], out_dict['clipwise_output']
- projected_vec = self.projection(audio_features)
- return projected_vec, audio_classification_output
-
-class TextEncoder(nn.Module):
- def __init__(self, d_out: int, text_model: str, transformer_embed_dim: int) -> None:
- super().__init__()
- self.base = AutoModel.from_pretrained(text_model)
- self.projection = Projection(transformer_embed_dim, d_out)
-
- def forward(self, x):
- out = self.base(**x)[0]
- out = out[:, 0, :] # get CLS token output
- projected_vec = self.projection(out)
- return projected_vec
-
-class CLAP(nn.Module):
- def __init__(self,
- # audio
- audioenc_name: str,
- sample_rate: int,
- window_size: int,
- hop_size: int,
- mel_bins: int,
- fmin: int,
- fmax: int,
- classes_num: int,
- out_emb: int,
- # text
- text_model: str,
- transformer_embed_dim: int,
- # common
- d_proj: int,
- ):
- super().__init__()
-
-
- self.audio_encoder = AudioEncoder(
- audioenc_name, out_emb, d_proj,
- sample_rate, window_size, hop_size, mel_bins, fmin, fmax, classes_num)
-
- self.caption_encoder = TextEncoder(
- d_proj, text_model, transformer_embed_dim
- )
-
- self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))
-
- def forward(self, audio, text):
- audio_embed, _ = self.audio_encoder(audio)
- caption_embed = self.caption_encoder(text)
-
- return caption_embed, audio_embed, self.logit_scale.exp()
\ No newline at end of file
diff --git a/spaces/AIGText/GlyphControl/ldm/modules/attention.py b/spaces/AIGText/GlyphControl/ldm/modules/attention.py
deleted file mode 100644
index a0fe28b335a8e27e92b97ca6787fab169477085c..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/ldm/modules/attention.py
+++ /dev/null
@@ -1,340 +0,0 @@
-from inspect import isfunction
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn, einsum
-from einops import rearrange, repeat
-from typing import Optional, Any
-
-from ldm.modules.diffusionmodules.util import checkpoint
-
-
-try:
- import xformers
- import xformers.ops
- XFORMERS_IS_AVAILBLE = True
-except Exception as e:
- print("xformer", e)
- XFORMERS_IS_AVAILBLE = False
-# XFORMERS_IS_AVAILBLE = False
-DETERMISTIC = False
-
-def exists(val):
- return val is not None
-
-
-def uniq(arr):
-    return {el: True for el in arr}.keys()
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def max_neg_value(t):
- return -torch.finfo(t.dtype).max
-
-
-def init_(tensor):
- dim = tensor.shape[-1]
- std = 1 / math.sqrt(dim)
- tensor.uniform_(-std, std)
- return tensor
-
-
-# feedforward
-class GEGLU(nn.Module):
- def __init__(self, dim_in, dim_out):
- super().__init__()
- self.proj = nn.Linear(dim_in, dim_out * 2)
-
- def forward(self, x):
- x, gate = self.proj(x).chunk(2, dim=-1)
- return x * F.gelu(gate)
-
-
-class FeedForward(nn.Module):
- def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
- super().__init__()
- inner_dim = int(dim * mult)
- dim_out = default(dim_out, dim)
- project_in = nn.Sequential(
- nn.Linear(dim, inner_dim),
- nn.GELU()
- ) if not glu else GEGLU(dim, inner_dim)
-
- self.net = nn.Sequential(
- project_in,
- nn.Dropout(dropout),
- nn.Linear(inner_dim, dim_out)
- )
-
- def forward(self, x):
- return self.net(x)
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def Normalize(in_channels):
- return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class SpatialSelfAttention(nn.Module):
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
- self.norm = Normalize(in_channels)
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- b,c,h,w = q.shape
- q = rearrange(q, 'b c h w -> b (h w) c')
- k = rearrange(k, 'b c h w -> b c (h w)')
- w_ = torch.einsum('bij,bjk->bik', q, k)
-
- w_ = w_ * (int(c)**(-0.5))
- w_ = torch.nn.functional.softmax(w_, dim=2)
-
- # attend to values
- v = rearrange(v, 'b c h w -> b c (h w)')
- w_ = rearrange(w_, 'b i j -> b j i')
- h_ = torch.einsum('bij,bjk->bik', v, w_)
- h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h)
- h_ = self.proj_out(h_)
-
- return x+h_
-
-
-class CrossAttention(nn.Module):
- def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.):
- super().__init__()
- inner_dim = dim_head * heads
- context_dim = default(context_dim, query_dim)
-
- self.scale = dim_head ** -0.5
- self.heads = heads
-
- self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
- self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
- self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
-
- self.to_out = nn.Sequential(
- nn.Linear(inner_dim, query_dim),
- nn.Dropout(dropout)
- )
-
- def forward(self, x, context=None, mask=None):
- h = self.heads
-
- q = self.to_q(x)
- context = default(context, x)
- k = self.to_k(context)
- v = self.to_v(context)
-
- q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
-
- sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
- del q, k
-
- if exists(mask):
- mask = rearrange(mask, 'b ... -> b (...)')
- max_neg_value = -torch.finfo(sim.dtype).max
- mask = repeat(mask, 'b j -> (b h) () j', h=h)
- sim.masked_fill_(~mask, max_neg_value)
-
- # attention, what we cannot get enough of
- sim = sim.softmax(dim=-1)
-
- out = einsum('b i j, b j d -> b i d', sim, v)
- out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
- return self.to_out(out)
-
-
-class MemoryEfficientCrossAttention(nn.Module):
- # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223
- def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0):
- super().__init__()
- print(f"Setting up {self.__class__.__name__}. Query dim is {query_dim}, context_dim is {context_dim} and using "
- f"{heads} heads.")
- inner_dim = dim_head * heads
- context_dim = default(context_dim, query_dim)
-
- self.heads = heads
- self.dim_head = dim_head
-
- self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
- self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
- self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
-
- self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout))
- self.attention_op: Optional[Any] = None
- print("DETERMISTIC:", DETERMISTIC)
-
- def forward(self, x, context=None, mask=None):
- q = self.to_q(x)
- context = default(context, x)
- k = self.to_k(context)
- v = self.to_v(context)
-
- b, _, _ = q.shape
- q, k, v = map(
- lambda t: t.unsqueeze(3)
- .reshape(b, t.shape[1], self.heads, self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b * self.heads, t.shape[1], self.dim_head)
- .contiguous(),
- (q, k, v),
- )
-
- torch.use_deterministic_algorithms(False)
- # actually compute the attention, what we cannot get enough of
- out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
- if DETERMISTIC:
- torch.use_deterministic_algorithms(True, warn_only=True)
-
- # # actually compute the attention, what we cannot get enough of
- # out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
-
- if exists(mask):
- raise NotImplementedError
- out = (
- out.unsqueeze(0)
- .reshape(b, self.heads, out.shape[1], self.dim_head)
- .permute(0, 2, 1, 3)
- .reshape(b, out.shape[1], self.heads * self.dim_head)
- )
- return self.to_out(out)
-
-
-class BasicTransformerBlock(nn.Module):
- ATTENTION_MODES = {
- "softmax": CrossAttention, # vanilla attention
- "softmax-xformers": MemoryEfficientCrossAttention
- }
- def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True,
- disable_self_attn=False):
- super().__init__()
- attn_mode = "softmax-xformers" if XFORMERS_IS_AVAILBLE else "softmax"
- assert attn_mode in self.ATTENTION_MODES
- attn_cls = self.ATTENTION_MODES[attn_mode]
- self.disable_self_attn = disable_self_attn
- self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout,
- context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn
- self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)
- self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim,
- heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none
- self.norm1 = nn.LayerNorm(dim)
- self.norm2 = nn.LayerNorm(dim)
- self.norm3 = nn.LayerNorm(dim)
- self.checkpoint = checkpoint
-
- def forward(self, x, context=None):
- return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
-
- def _forward(self, x, context=None): # cross attention
- x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
- x = self.attn2(self.norm2(x), context=context) + x
- x = self.ff(self.norm3(x)) + x
- return x
-
-
-class SpatialTransformer(nn.Module):
- """
- Transformer block for image-like data.
- First, project the input (aka embedding)
- and reshape to b, t, d.
- Then apply standard transformer action.
- Finally, reshape to image
- NEW: use_linear for more efficiency instead of the 1x1 convs
- """
- def __init__(self, in_channels, n_heads, d_head,
- depth=1, dropout=0., context_dim=None,
- disable_self_attn=False, use_linear=False,
- use_checkpoint=True):
- super().__init__()
- if exists(context_dim) and not isinstance(context_dim, list):
- context_dim = [context_dim]
- self.in_channels = in_channels
- inner_dim = n_heads * d_head
- self.norm = Normalize(in_channels)
- if not use_linear:
- self.proj_in = nn.Conv2d(in_channels,
- inner_dim,
- kernel_size=1,
- stride=1,
- padding=0)
- else:
- self.proj_in = nn.Linear(in_channels, inner_dim)
-
- self.transformer_blocks = nn.ModuleList(
- [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d],
- disable_self_attn=disable_self_attn, checkpoint=use_checkpoint)
- for d in range(depth)]
- )
- if not use_linear:
- self.proj_out = zero_module(nn.Conv2d(inner_dim,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0))
- else:
-            self.proj_out = zero_module(nn.Linear(inner_dim, in_channels))  # project back to the input channel count
- self.use_linear = use_linear
-
- def forward(self, x, context=None):
- # note: if no context is given, cross-attention defaults to self-attention
- if not isinstance(context, list):
- context = [context]
- b, c, h, w = x.shape
- x_in = x
- x = self.norm(x)
- if not self.use_linear:
- x = self.proj_in(x)
- x = rearrange(x, 'b c h w -> b (h w) c').contiguous()
- if self.use_linear:
- x = self.proj_in(x)
- for i, block in enumerate(self.transformer_blocks):
- x = block(x, context=context[i])
- if self.use_linear:
- x = self.proj_out(x)
- x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous()
- if not self.use_linear:
- x = self.proj_out(x)
- return x + x_in
-
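-
-# --- Illustrative usage sketch (added note, not part of the original file) ---
-# Shapes and dimensions are hypothetical; with xformers installed, the
-# memory-efficient attention path may require CUDA tensors.
-if __name__ == "__main__":
-    st = SpatialTransformer(in_channels=64, n_heads=4, d_head=16,
-                            depth=1, context_dim=128, use_checkpoint=False)
-    x = torch.randn(2, 64, 32, 32)    # image-like features
-    ctx = torch.randn(2, 77, 128)     # conditioning sequence
-    print(st(x, context=ctx).shape)   # torch.Size([2, 64, 32, 32])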
diff --git a/spaces/AgentVerse/agentVerse/dataloader/commongen.py b/spaces/AgentVerse/agentVerse/dataloader/commongen.py
deleted file mode 100644
index e7a5e75f9e013cbaa7585d8e3c5ffa2bfd714d7d..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/dataloader/commongen.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from .dataloader import DataLoader
-from . import dataloader_registry
-import json
-
-
-@dataloader_registry.register("tasksolving/commongen/gpt-4")
-@dataloader_registry.register("tasksolving/commongen/gpt-3.5")
-class CommongenLoader(DataLoader):
- def __init__(self, path: str):
- super().__init__(path)
-
- def load(self):
- with open(self.path) as f:
- for line in f:
- line = json.loads(line)
- self.examples.append(
- {
- "input": line["concepts"],
- "answer": None,
- }
- )
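-
-# Expected input: a JSONL file with one object per line, e.g. (illustrative example):
-#   {"concepts": ["dog", "frisbee", "catch", "throw"]}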
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateSizer.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateSizer.js
deleted file mode 100644
index 5c03df2cfe55f20581b16b82b53fc5338a2ceeb9..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateSizer.js
+++ /dev/null
@@ -1,8 +0,0 @@
-import CreateAnySizer from './utils/CreateAnySizer.js';
-import Sizer from '../../sizer/Sizer.js';
-
-var CreateSizer = function (scene, data, view, styles, customBuilders) {
- return CreateAnySizer(scene, data, view, styles, customBuilders, Sizer);
-}
-
-export default CreateSizer;
\ No newline at end of file
diff --git a/spaces/Akira12312/admruul-anything-v3.0/README.md b/spaces/Akira12312/admruul-anything-v3.0/README.md
deleted file mode 100644
index 507f936bdae6e54fdc1e6de73dc9c45b23d32d69..0000000000000000000000000000000000000000
--- a/spaces/Akira12312/admruul-anything-v3.0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Admruul Anything V3.0
-emoji: 🔥
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ameaou/academic-chatgpt3.1/core_functional.py b/spaces/Ameaou/academic-chatgpt3.1/core_functional.py
deleted file mode 100644
index 536ccb609c38cbbebfda4ba17bd51a78857d711e..0000000000000000000000000000000000000000
--- a/spaces/Ameaou/academic-chatgpt3.1/core_functional.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# The 'primary' color corresponds to primary_hue in theme.py
-# The 'secondary' color corresponds to neutral_hue in theme.py
-# The 'stop' color corresponds to color_er in theme.py
-# The default button color is secondary
-from toolbox import clear_line_break
-
-
-def get_core_functions():
- return {
- "英语学术润色": {
-            # Preamble (prepended to the user's text)
- "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
- r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
- r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
-            # Postscript (appended to the user's text)
- "Suffix": r"",
-            "Color": r"secondary",    # button color
- },
- "中文学术润色": {
- "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
- r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
- "Suffix": r"",
- },
- "查找语法错误": {
- "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " +
- r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." +
- r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " +
- r"put the original text the first column, " +
- r"put the corrected text in the second column and highlight the key words you fixed.""\n"
- r"Example:""\n"
- r"Paragraph: How is you? Do you knows what is it?""\n"
- r"| Original sentence | Corrected sentence |""\n"
- r"| :--- | :--- |""\n"
- r"| How **is** you? | How **are** you? |""\n"
- r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n"
- r"Below is a paragraph from an academic paper. "
- r"You need to report all grammar and spelling mistakes as the example before."
- + "\n\n",
- "Suffix": r"",
-            "PreProcess": clear_line_break,    # pre-processing: strip line breaks
- },
- "中译英": {
- "Prefix": r"Please translate following sentence to English:" + "\n\n",
- "Suffix": r"",
- },
- "学术中英互译": {
- "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
- r"I will provide you with some paragraphs in one language " +
- r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
- r"Do not repeat the original provided paragraphs after translation. " +
- r"You should use artificial intelligence tools, " +
- r"such as natural language processing, and rhetorical knowledge " +
- r"and experience about effective writing techniques to reply. " +
- r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
- "Suffix": "",
- "Color": "secondary",
- },
- "英译中": {
- "Prefix": r"翻译成地道的中文:" + "\n\n",
- "Suffix": r"",
- },
- "找图片": {
- "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
- r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
- "Suffix": r"",
- },
- "解释代码": {
- "Prefix": r"请解释以下代码:" + "\n```\n",
- "Suffix": "\n```\n",
- },
- }
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_discrete.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_discrete.py
deleted file mode 100644
index cb126d4b953cd28e23d048c4f1e2cf8ed90cdac0..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_discrete.py
+++ /dev/null
@@ -1,432 +0,0 @@
-# Copyright 2023 Katherine Crowson and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import math
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, logging, randn_tensor
-from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-@dataclass
-# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete
-class EulerDiscreteSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
- `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: torch.FloatTensor
- pred_original_sample: Optional[torch.FloatTensor] = None
-
-
-# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
-def betas_for_alpha_bar(
- num_diffusion_timesteps,
- max_beta=0.999,
- alpha_transform_type="cosine",
-):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
- alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar.
- Choose from `cosine` or `exp`
-
- Returns:
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
- """
- if alpha_transform_type == "cosine":
-
- def alpha_bar_fn(t):
- return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
-
- elif alpha_transform_type == "exp":
-
- def alpha_bar_fn(t):
- return math.exp(t * -12.0)
-
- else:
- raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}")
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta))
- return torch.tensor(betas, dtype=torch.float32)
-
-
-class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):
- """
-    Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. Based on the original
- k-diffusion implementation by Katherine Crowson:
- https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear` or `scaled_linear`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- prediction_type (`str`, default `"epsilon"`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
-            process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- interpolation_type (`str`, default `"linear"`, optional):
- interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be one of
- [`"linear"`, `"log_linear"`].
- use_karras_sigmas (`bool`, *optional*, defaults to `False`):
- This parameter controls whether to use Karras sigmas (Karras et al. (2022) scheme) for step sizes in the
- noise schedule during the sampling process. If True, the sigmas will be determined according to a sequence
- of noise levels {σi} as defined in Equation (5) of the paper https://arxiv.org/pdf/2206.00364.pdf.
- timestep_spacing (`str`, default `"linspace"`):
- The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample
- Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information.
- steps_offset (`int`, default `0`):
- an offset added to the inference steps. You can use a combination of `offset=1` and
- `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in
- stable diffusion.
- """
-
- _compatibles = [e.name for e in KarrasDiffusionSchedulers]
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- prediction_type: str = "epsilon",
- interpolation_type: str = "linear",
- use_karras_sigmas: Optional[bool] = False,
- timestep_spacing: str = "linspace",
- steps_offset: int = 0,
- ):
- if trained_betas is not None:
- self.betas = torch.tensor(trained_betas, dtype=torch.float32)
- elif beta_schedule == "linear":
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = (
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
- )
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
-
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32)
- self.sigmas = torch.from_numpy(sigmas)
-
- # setable values
- self.num_inference_steps = None
- timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy()
- self.timesteps = torch.from_numpy(timesteps)
- self.is_scale_input_called = False
- self.use_karras_sigmas = use_karras_sigmas
-
- @property
- def init_noise_sigma(self):
- # standard deviation of the initial noise distribution
- if self.config.timestep_spacing in ["linspace", "trailing"]:
- return self.sigmas.max()
-
- return (self.sigmas.max() ** 2 + 1) ** 0.5
-
- def scale_model_input(
- self, sample: torch.FloatTensor, timestep: Union[float, torch.FloatTensor]
- ) -> torch.FloatTensor:
- """
- Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.
-
- Args:
- sample (`torch.FloatTensor`): input sample
- timestep (`float` or `torch.FloatTensor`): the current timestep in the diffusion chain
-
- Returns:
- `torch.FloatTensor`: scaled input sample
- """
- if isinstance(timestep, torch.Tensor):
- timestep = timestep.to(self.timesteps.device)
- step_index = (self.timesteps == timestep).nonzero().item()
- sigma = self.sigmas[step_index]
-
- sample = sample / ((sigma**2 + 1) ** 0.5)
-
- self.is_scale_input_called = True
- return sample
-
- def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None):
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- device (`str` or `torch.device`, optional):
- the device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
- """
- self.num_inference_steps = num_inference_steps
-
- # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891
- if self.config.timestep_spacing == "linspace":
- timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[
- ::-1
- ].copy()
- elif self.config.timestep_spacing == "leading":
- step_ratio = self.config.num_train_timesteps // self.num_inference_steps
- # creates integer timesteps by multiplying by ratio
- # casting to int to avoid issues when num_inference_step is power of 3
- timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(float)
- timesteps += self.config.steps_offset
- elif self.config.timestep_spacing == "trailing":
- step_ratio = self.config.num_train_timesteps / self.num_inference_steps
- # creates integer timesteps by multiplying by ratio
- # casting to int to avoid issues when num_inference_step is power of 3
- timesteps = (np.arange(self.config.num_train_timesteps, 0, -step_ratio)).round().copy().astype(float)
- timesteps -= 1
- else:
- raise ValueError(
- f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', 'leading' or 'trailing'."
- )
-
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- log_sigmas = np.log(sigmas)
-
- if self.config.interpolation_type == "linear":
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
- elif self.config.interpolation_type == "log_linear":
- sigmas = torch.linspace(np.log(sigmas[-1]), np.log(sigmas[0]), num_inference_steps + 1).exp()
- else:
- raise ValueError(
- f"{self.config.interpolation_type} is not implemented. Please specify interpolation_type to either"
- " 'linear' or 'log_linear'"
- )
-
- if self.use_karras_sigmas:
- sigmas = self._convert_to_karras(in_sigmas=sigmas, num_inference_steps=self.num_inference_steps)
- timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas])
-
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
- self.sigmas = torch.from_numpy(sigmas).to(device=device)
- if str(device).startswith("mps"):
- # mps does not support float64
- self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
- else:
- self.timesteps = torch.from_numpy(timesteps).to(device=device)
-
- def _sigma_to_t(self, sigma, log_sigmas):
- # get log sigma
- log_sigma = np.log(sigma)
-
- # get distribution
- dists = log_sigma - log_sigmas[:, np.newaxis]
-
- # get sigmas range
- low_idx = np.cumsum((dists >= 0), axis=0).argmax(axis=0).clip(max=log_sigmas.shape[0] - 2)
- high_idx = low_idx + 1
-
- low = log_sigmas[low_idx]
- high = log_sigmas[high_idx]
-
- # interpolate sigmas
- w = (low - log_sigma) / (low - high)
- w = np.clip(w, 0, 1)
-
- # transform interpolation to time range
- t = (1 - w) * low_idx + w * high_idx
- t = t.reshape(sigma.shape)
- return t
-
- # Copied from https://github.com/crowsonkb/k-diffusion/blob/686dbad0f39640ea25c8a8c6a6e56bb40eacefa2/k_diffusion/sampling.py#L17
- def _convert_to_karras(self, in_sigmas: torch.FloatTensor, num_inference_steps) -> torch.FloatTensor:
- """Constructs the noise schedule of Karras et al. (2022)."""
-
- sigma_min: float = in_sigmas[-1].item()
- sigma_max: float = in_sigmas[0].item()
-
- rho = 7.0 # 7.0 is the value used in the paper
- ramp = np.linspace(0, 1, num_inference_steps)
- min_inv_rho = sigma_min ** (1 / rho)
- max_inv_rho = sigma_max ** (1 / rho)
- sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
- return sigmas
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: Union[float, torch.FloatTensor],
- sample: torch.FloatTensor,
- s_churn: float = 0.0,
- s_tmin: float = 0.0,
- s_tmax: float = float("inf"),
- s_noise: float = 1.0,
- generator: Optional[torch.Generator] = None,
- return_dict: bool = True,
- ) -> Union[EulerDiscreteSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`float`): current timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- s_churn (`float`)
- s_tmin (`float`)
- s_tmax (`float`)
- s_noise (`float`)
- generator (`torch.Generator`, optional): Random number generator.
- return_dict (`bool`): option for returning tuple rather than EulerDiscreteSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] if `return_dict` is True, otherwise a
- `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
-
- if (
- isinstance(timestep, int)
- or isinstance(timestep, torch.IntTensor)
- or isinstance(timestep, torch.LongTensor)
- ):
- raise ValueError(
- (
- "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to"
- " `EulerDiscreteScheduler.step()` is not supported. Make sure to pass"
- " one of the `scheduler.timesteps` as a timestep."
- ),
- )
-
- if not self.is_scale_input_called:
- logger.warning(
- "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
- "See `StableDiffusionPipeline` for a usage example."
- )
-
- if isinstance(timestep, torch.Tensor):
- timestep = timestep.to(self.timesteps.device)
-
- step_index = (self.timesteps == timestep).nonzero().item()
- sigma = self.sigmas[step_index]
-
- gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0
-
- noise = randn_tensor(
- model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator
- )
-
- eps = noise * s_noise
- sigma_hat = sigma * (gamma + 1)
-
- if gamma > 0:
- sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5
-
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
- # NOTE: "original_sample" should not be an expected prediction_type but is left in for
- # backwards compatibility
- if self.config.prediction_type == "original_sample" or self.config.prediction_type == "sample":
- pred_original_sample = model_output
- elif self.config.prediction_type == "epsilon":
- pred_original_sample = sample - sigma_hat * model_output
- elif self.config.prediction_type == "v_prediction":
- # * c_out + input * c_skip
- pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
- )
-
- # 2. Convert to an ODE derivative
- derivative = (sample - pred_original_sample) / sigma_hat
-
- dt = self.sigmas[step_index + 1] - sigma_hat
-
- prev_sample = sample + derivative * dt
-
- if not return_dict:
- return (prev_sample,)
-
- return EulerDiscreteSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)
-
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.FloatTensor,
- ) -> torch.FloatTensor:
- # Make sure sigmas and timesteps have the same device and dtype as original_samples
- sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype)
- if original_samples.device.type == "mps" and torch.is_floating_point(timesteps):
- # mps does not support float64
- schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32)
- timesteps = timesteps.to(original_samples.device, dtype=torch.float32)
- else:
- schedule_timesteps = self.timesteps.to(original_samples.device)
- timesteps = timesteps.to(original_samples.device)
-
- step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
-
- sigma = sigmas[step_indices].flatten()
- while len(sigma.shape) < len(original_samples.shape):
- sigma = sigma.unsqueeze(-1)
-
- noisy_samples = original_samples + noise * sigma
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
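-
-# Usage sketch (illustrative, not part of the original file): the scheduler is
-# driven by the usual diffusers denoising loop; `unet` and the 4x64x64 latent
-# shape below are placeholders supplied by the caller.
-#
-#   scheduler = EulerDiscreteScheduler(beta_schedule="scaled_linear")
-#   scheduler.set_timesteps(30, device="cuda")
-#   latents = torch.randn(1, 4, 64, 64, device="cuda") * scheduler.init_noise_sigma
-#   for t in scheduler.timesteps:
-#       model_input = scheduler.scale_model_input(latents, t)
-#       noise_pred = unet(model_input, t).sample      # placeholder UNet call
-#       latents = scheduler.step(noise_pred, t, latents).prev_sample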
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py
deleted file mode 100644
index b7afad8226b87292100270e3e7daad6885be0e7f..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# This file is autogenerated by the command `make fix-copies`, do not edit.
-from ..utils import DummyObject, requires_backends
-
-
-class OnnxStableDiffusionImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers", "onnx"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
-
-class OnnxStableDiffusionInpaintPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers", "onnx"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
-
-class OnnxStableDiffusionInpaintPipelineLegacy(metaclass=DummyObject):
- _backends = ["torch", "transformers", "onnx"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
-
-class OnnxStableDiffusionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers", "onnx"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
-
-class OnnxStableDiffusionUpscalePipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers", "onnx"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
-
-class StableDiffusionOnnxPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers", "onnx"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers", "onnx"])
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/text_to_video/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/text_to_video/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Andy1621/IAT_enhancement/model/global_net.py b/spaces/Andy1621/IAT_enhancement/model/global_net.py
deleted file mode 100644
index 005dcfb7919b62e913694a17083b2a508668cf2b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/IAT_enhancement/model/global_net.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import torch
-import torch.nn as nn
-from timm.models.layers import trunc_normal_, DropPath, to_2tuple
-import os
-from .blocks import Mlp
-
-
-class query_Attention(nn.Module):
- def __init__(self, dim, num_heads=2, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
- self.scale = qk_scale or head_dim ** -0.5
-
- self.q = nn.Parameter(torch.ones((1, 10, dim)), requires_grad=True)
- self.k = nn.Linear(dim, dim, bias=qkv_bias)
- self.v = nn.Linear(dim, dim, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x):
- B, N, C = x.shape
- k = self.k(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
- v = self.v(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
-
- q = self.q.expand(B, -1, -1).view(B, -1, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, 10, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class query_SABlock(nn.Module):
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = norm_layer(dim)
- self.attn = query_Attention(
- dim,
- num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- x = x.flatten(2).transpose(1, 2)
- x = self.drop_path(self.attn(self.norm1(x)))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- return x
-
-
-class conv_embedding(nn.Module):
- def __init__(self, in_channels, out_channels):
- super(conv_embedding, self).__init__()
- self.proj = nn.Sequential(
- nn.Conv2d(in_channels, out_channels // 2, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),
- nn.BatchNorm2d(out_channels // 2),
- nn.GELU(),
- # nn.Conv2d(out_channels // 2, out_channels // 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
- # nn.BatchNorm2d(out_channels // 2),
- # nn.GELU(),
- nn.Conv2d(out_channels // 2, out_channels, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),
- nn.BatchNorm2d(out_channels),
- )
-
- def forward(self, x):
- x = self.proj(x)
- return x
-
-
-class Global_pred(nn.Module):
- def __init__(self, in_channels=3, out_channels=64, num_heads=4, type='exp'):
- super(Global_pred, self).__init__()
- if type == 'exp':
- self.gamma_base = nn.Parameter(torch.ones((1)), requires_grad=False) # False in exposure correction
- else:
- self.gamma_base = nn.Parameter(torch.ones((1)), requires_grad=True)
- self.color_base = nn.Parameter(torch.eye((3)), requires_grad=True) # basic color matrix
- # main blocks
- self.conv_large = conv_embedding(in_channels, out_channels)
- self.generator = query_SABlock(dim=out_channels, num_heads=num_heads)
- self.gamma_linear = nn.Linear(out_channels, 1)
- self.color_linear = nn.Linear(out_channels, 1)
-
- self.apply(self._init_weights)
-
- for name, p in self.named_parameters():
- if name == 'generator.attn.v.weight':
- nn.init.constant_(p, 0)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
-
- def forward(self, x):
- #print(self.gamma_base)
- x = self.conv_large(x)
- x = self.generator(x)
- gamma, color = x[:, 0].unsqueeze(1), x[:, 1:]
- gamma = self.gamma_linear(gamma).squeeze(-1) + self.gamma_base
- #print(self.gamma_base, self.gamma_linear(gamma))
- color = self.color_linear(color).squeeze(-1).view(-1, 3, 3) + self.color_base
- return gamma, color
-
-if __name__ == "__main__":
- os.environ['CUDA_VISIBLE_DEVICES']='3'
- #net = Local_pred_new().cuda()
- img = torch.Tensor(8, 3, 400, 600)
- global_net = Global_pred()
- gamma, color = global_net(img)
- print(gamma.shape, color.shape)
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py
deleted file mode 100644
index 769472352d06a8f2c30d73ae1f57c393f77adfa2..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,62 +0,0 @@
-_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'
-model = dict(
- bbox_head=dict(
- _delete_=True,
- type='GARetinaHead',
- num_classes=80,
- in_channels=256,
- stacked_convs=4,
- feat_channels=256,
- approx_anchor_generator=dict(
- type='AnchorGenerator',
- octave_base_scale=4,
- scales_per_octave=3,
- ratios=[0.5, 1.0, 2.0],
- strides=[8, 16, 32, 64, 128]),
- square_anchor_generator=dict(
- type='AnchorGenerator',
- ratios=[1.0],
- scales=[4],
- strides=[8, 16, 32, 64, 128]),
- anchor_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loc_filter_thr=0.01,
- loss_loc=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0),
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=0.04, loss_weight=1.0)),
- # training and testing settings
- train_cfg=dict(
- ga_assigner=dict(
- type='ApproxMaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.4,
- min_pos_iou=0.4,
- ignore_iof_thr=-1),
- ga_sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- assigner=dict(neg_iou_thr=0.5, min_pos_iou=0.0),
- center_ratio=0.2,
- ignore_ratio=0.5))
-optimizer_config = dict(
- _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_80k_ade20k.py
deleted file mode 100644
index a64dac670ed4d4632e7b9791ec5f8a334dcea78e..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_80k_ade20k.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './ann_r50-d8_512x512_80k_ade20k.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Windows-installation-guide.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Windows-installation-guide.md
deleted file mode 100644
index 83b22efa38b1839d07a5a58494dbc26ba86397ee..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Windows-installation-guide.md
+++ /dev/null
@@ -1,9 +0,0 @@
-If you are having trouble following the installation instructions in the README, Reddit user [Technical_Leather949](https://www.reddit.com/user/Technical_Leather949/) has created a more detailed, step-by-step guide covering:
-
-* Windows installation
-* 8-bit mode on Windows
-* LLaMA
-* LLaMA 4-bit
-
-The guide can be found here: https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/
-
diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/bert.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/bert.py
deleted file mode 100644
index a83d96d2a77ed05198efc05837522bc88d2499cc..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/bert.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from transformers import BertTokenizer, BertModel
-
-bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
-bert_model = BertModel.from_pretrained("bert-base-uncased")
-text = "Replace me by any text you'd like."
-
-
-def bert_embeddings(text):
- # text = "Replace me by any text you'd like."
-    encoded_input = bert_tokenizer(text, return_tensors="pt")
-    output = bert_model(**encoded_input)
- return output
-
-
-from transformers import RobertaTokenizer, RobertaModel
-
-roberta_tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
-roberta_model = RobertaModel.from_pretrained("roberta-base")
-text = "Replace me by any text you'd like."
-
-
-def Roberta_embeddings(text):
- # text = "Replace me by any text you'd like."
-    encoded_input = roberta_tokenizer(text, return_tensors="pt")
-    output = roberta_model(**encoded_input)
- return output
-
-
-from transformers import BartTokenizer, BartModel
-
-bart_tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
-bart_model = BartModel.from_pretrained("facebook/bart-base")
-text = "Replace me by any text you'd like."
-
-
-def bart_embeddings(text):
- # text = "Replace me by any text you'd like."
-    encoded_input = bart_tokenizer(text, return_tensors="pt")
-    output = bart_model(**encoded_input)
- return output
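-
-# Example (illustrative): each helper returns the full transformers model output,
-# e.g. out.last_hidden_state has shape (1, seq_len, hidden_size).
-#
-#   out = bert_embeddings("a dog catches a frisbee in the park")
-#   emb = out.last_hidden_state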
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py
deleted file mode 100644
index d7bbdd7d00505f1e51154379c99ab621cb648a6d..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from ..common.optim import SGD as optimizer
-from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
-from ..common.data.coco import dataloader
-from ..common.models.mask_rcnn_fpn import model
-from ..common.train import train
-
-from detectron2.config import LazyCall as L
-from detectron2.modeling.backbone import RegNet
-from detectron2.modeling.backbone.regnet import SimpleStem, ResBottleneckBlock
-
-
-# Replace default ResNet with RegNetX-4GF from the DDS paper. Config source:
-# https://github.com/facebookresearch/pycls/blob/2c152a6e5d913e898cca4f0a758f41e6b976714d/configs/dds_baselines/regnetx/RegNetX-4.0GF_dds_8gpu.yaml#L4-L9 # noqa
-model.backbone.bottom_up = L(RegNet)(
- stem_class=SimpleStem,
- stem_width=32,
- block_class=ResBottleneckBlock,
- depth=23,
- w_a=38.65,
- w_0=96,
- w_m=2.43,
- group_width=40,
- freeze_at=2,
- norm="FrozenBN",
- out_features=["s1", "s2", "s3", "s4"],
-)
-model.pixel_std = [57.375, 57.120, 58.395]
-
-optimizer.weight_decay = 5e-5
-train.init_checkpoint = (
- "https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906383/RegNetX-4.0GF_dds_8gpu.pyth"
-)
-# RegNets benefit from enabling cudnn benchmark mode
-train.cudnn_benchmark = True
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py
deleted file mode 100644
index 2eb202bd5efa3ec3d366027b1debffc269ae8b17..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import logging
-import numpy as np
-import time
-from pycocotools.cocoeval import COCOeval
-
-from detectron2 import _C
-
-logger = logging.getLogger(__name__)
-
-
-class COCOeval_opt(COCOeval):
- """
- This is a slightly modified version of the original COCO API, where the functions evaluateImg()
- and accumulate() are implemented in C++ to speedup evaluation
- """
-
- def evaluate(self):
- """
- Run per image evaluation on given images and store results in self.evalImgs_cpp, a
- datastructure that isn't readable from Python but is used by a c++ implementation of
- accumulate(). Unlike the original COCO PythonAPI, we don't populate the datastructure
- self.evalImgs because this datastructure is a computational bottleneck.
- :return: None
- """
- tic = time.time()
-
- p = self.params
- # add backward compatibility if useSegm is specified in params
- if p.useSegm is not None:
- p.iouType = "segm" if p.useSegm == 1 else "bbox"
- logger.info("Evaluate annotation type *{}*".format(p.iouType))
- p.imgIds = list(np.unique(p.imgIds))
- if p.useCats:
- p.catIds = list(np.unique(p.catIds))
- p.maxDets = sorted(p.maxDets)
- self.params = p
-
- self._prepare() # bottleneck
-
- # loop through images, area range, max detection number
- catIds = p.catIds if p.useCats else [-1]
-
- if p.iouType == "segm" or p.iouType == "bbox":
- computeIoU = self.computeIoU
- elif p.iouType == "keypoints":
- computeIoU = self.computeOks
- self.ious = {
- (imgId, catId): computeIoU(imgId, catId) for imgId in p.imgIds for catId in catIds
- } # bottleneck
-
- maxDet = p.maxDets[-1]
-
- # <<<< Beginning of code differences with original COCO API
- def convert_instances_to_cpp(instances, is_det=False):
- # Convert annotations for a list of instances in an image to a format that's fast
- # to access in C++
- instances_cpp = []
- for instance in instances:
- instance_cpp = _C.InstanceAnnotation(
- int(instance["id"]),
- instance["score"] if is_det else instance.get("score", 0.0),
- instance["area"],
- bool(instance.get("iscrowd", 0)),
- bool(instance.get("ignore", 0)),
- )
- instances_cpp.append(instance_cpp)
- return instances_cpp
-
- # Convert GT annotations, detections, and IOUs to a format that's fast to access in C++
- ground_truth_instances = [
- [convert_instances_to_cpp(self._gts[imgId, catId]) for catId in p.catIds]
- for imgId in p.imgIds
- ]
- detected_instances = [
- [convert_instances_to_cpp(self._dts[imgId, catId], is_det=True) for catId in p.catIds]
- for imgId in p.imgIds
- ]
- ious = [[self.ious[imgId, catId] for catId in catIds] for imgId in p.imgIds]
-
- if not p.useCats:
- # For each image, flatten per-category lists into a single list
- ground_truth_instances = [[[o for c in i for o in c]] for i in ground_truth_instances]
- detected_instances = [[[o for c in i for o in c]] for i in detected_instances]
-
- # Call C++ implementation of self.evaluateImgs()
- self._evalImgs_cpp = _C.COCOevalEvaluateImages(
- p.areaRng, maxDet, p.iouThrs, ious, ground_truth_instances, detected_instances
- )
- self._evalImgs = None
-
- self._paramsEval = copy.deepcopy(self.params)
- toc = time.time()
- logger.info("COCOeval_opt.evaluate() finished in {:0.2f} seconds.".format(toc - tic))
- # >>>> End of code differences with original COCO API
-
- def accumulate(self):
- """
- Accumulate per image evaluation results and store the result in self.eval. Does not
- support changing parameter settings from those used by self.evaluate()
- """
- logger.info("Accumulating evaluation results...")
- tic = time.time()
- assert hasattr(
- self, "_evalImgs_cpp"
- ), "evaluate() must be called before accmulate() is called."
-
- self.eval = _C.COCOevalAccumulate(self._paramsEval, self._evalImgs_cpp)
-
- # recall is num_iou_thresholds X num_categories X num_area_ranges X num_max_detections
- self.eval["recall"] = np.array(self.eval["recall"]).reshape(
- self.eval["counts"][:1] + self.eval["counts"][2:]
- )
-
- # precision and scores are num_iou_thresholds X num_recall_thresholds X num_categories X
- # num_area_ranges X num_max_detections
- self.eval["precision"] = np.array(self.eval["precision"]).reshape(self.eval["counts"])
- self.eval["scores"] = np.array(self.eval["scores"]).reshape(self.eval["counts"])
- toc = time.time()
- logger.info("COCOeval_opt.accumulate() finished in {:0.2f} seconds.".format(toc - tic))
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_b.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_b.py
deleted file mode 100644
index 2dcb54cb1054c5d80ccc823af21f13b9ebbcf1a3..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_b.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from detectron2.config import LazyConfig
-
-# equivalent to relative import
-dir1a_str, dir1a_dict = LazyConfig.load_rel("dir1_a.py", ("dir1a_str", "dir1a_dict"))
-
-dir1b_str = dir1a_str + "_from_b"
-dir1b_dict = dir1a_dict
-
-# Every import is a reload: not modified by other config files
-assert dir1a_dict.a == 1
diff --git a/spaces/BAAI/AltDiffusion/footer.html b/spaces/BAAI/AltDiffusion/footer.html
deleted file mode 100644
index b58ca8b79cc930a56952881f4922bda406fd3581..0000000000000000000000000000000000000000
--- a/spaces/BAAI/AltDiffusion/footer.html
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
diff --git a/spaces/BMukhtar/BookRecognitionKz/custom_shape.py b/spaces/BMukhtar/BookRecognitionKz/custom_shape.py
deleted file mode 100644
index f0a0fd42f783fbdc601cdc5a0996af4cff26590c..0000000000000000000000000000000000000000
--- a/spaces/BMukhtar/BookRecognitionKz/custom_shape.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import streamlit as st
-import cv2
-import numpy as np
-from PIL import Image
-
-def warp_perspective(image, points):
- # Input and output dimensions
- w, h = 300, 400 # You can adjust this based on the desired output size
- input_pts = np.array(points, dtype=np.float32)
- output_pts = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
-
- # Compute perspective matrix and warp the image
- matrix = cv2.getPerspectiveTransform(input_pts, output_pts)
- warped_img = cv2.warpPerspective(image, matrix, (w, h))
-
- return warped_img
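-
-# Example call (illustrative): the four points must be given in the same order as
-# output_pts above, i.e. top-left, top-right, bottom-right, bottom-left.
-# "page.jpg" is a placeholder filename.
-#
-#   corners = [[12, 20], [290, 25], [300, 410], [5, 400]]
-#   flattened = warp_perspective(np.array(Image.open("page.jpg").convert("RGB")), corners)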
-
-st.title("Custom Shape Cropping & Perspective Correction")
-
-uploaded_file = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
-
-# Provide a placeholder for the user to input 4 vertices
-points = []
-for i in range(4):
- coords = st.text_input(f"Enter point {i+1} (format: x,y)", "")
- x, y = map(int, coords.split(',')) if ',' in coords else (0, 0)
- points.append([x, y])
-
-if uploaded_file and len(points) == 4:
- image = Image.open(uploaded_file).convert('RGB')
- image_np = np.array(image)
-
- corrected_image = warp_perspective(image_np, points)
-
-    st.image(corrected_image, caption='Corrected Image.', channels="RGB", use_column_width=True)
diff --git a/spaces/BadRobot147/SFQ3/README.md b/spaces/BadRobot147/SFQ3/README.md
deleted file mode 100644
index 54bab9fb68561b5210db562d18dfd3e21da50858..0000000000000000000000000000000000000000
--- a/spaces/BadRobot147/SFQ3/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: SFQ3
-emoji: 👁
-colorFrom: pink
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/8 Bolas De Piscina De Descarga Para Ventanas Pc 10.md b/spaces/Benson/text-generation/Examples/8 Bolas De Piscina De Descarga Para Ventanas Pc 10.md
deleted file mode 100644
index 0f26f0e29966da63e1e81e3f6e7a5a8d048fe3c8..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/8 Bolas De Piscina De Descarga Para Ventanas Pc 10.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
8 bola piscina descargar para PC ventanas 10
-
¿Te gusta jugar juegos de billar online? ¿Quieres desafiar a tus amigos y otros jugadores de todo el mundo en un juego de billar realista y divertido? Si es así, entonces deberías probar 8 Ball Pool, el juego de billar #1 del mundo por Miniclip.com. En este juego, puedes refinar tus habilidades, personalizar tu señal y mesa, unirte a torneos y competir por monedas y objetos exclusivos. ¿Pero sabías que también puedes jugar a este juego en tu PC Windows 10? Sí, lo has oído bien. Puedes disfrutar jugando 8 Ball Pool en una pantalla más grande, con mejores gráficos, controles personalizables y más características. En este artículo, le mostraremos cómo descargar e instalar 8 Ball Pool en PC Windows 10 utilizando dos métodos. También le contaremos sobre las características y beneficios de jugar 8 Ball Pool en PC Windows 10. ¡Así que, comencemos!
-
Cómo descargar e instalar piscina de bolas 8 en PC Windows 10
-
Hay dos formas de jugar 8 Ball Pool en PC Windows 10. Uno es mediante el uso de un emulador de Android, que es un software que le permite ejecutar aplicaciones Android en su ordenador. La otra es usando la versión web/PC de 8 Ball Pool, que está disponible en el sitio web oficial de Miniclip.com. Veamos cómo funciona cada método.
-
8 bolas de piscina de descarga para ventanas pc 10
Un emulador de Android es un software que imita el sistema operativo Android en su computadora. De esta manera, puedes ejecutar cualquier aplicación o juego de Android en tu PC Windows 10, incluyendo 8 Ball Pool. Hay muchos emuladores de Android disponibles en línea, como BlueStacks, MEmu, NoxPlayer, etc. Puede elegir cualquiera de ellos de acuerdo con su preferencia. Estos son los pasos para descargar e instalar 8 Ball Pool en PC Windows 10 usando un emulador de Android.
-
Paso 1: Descargar e instalar un emulador de Android
-
-
Paso 2: Abra Google Play Store y busque 8 Ball Pool
-
El siguiente paso es abrir Google Play Store y buscar 8 Ball Pool. Puedes hacer esto haciendo clic en el icono de Google Play en la pantalla de inicio del emulador. Luego, escribe 8 Ball Pool en la barra de búsqueda y pulsa enter. Verás el icono y el nombre del juego en la página de resultados.
-
Paso 3: Descargar e instalar 8 Ball Pool en el emulador
-
El tercer paso es descargar e instalar 8 Ball Pool en el emulador. Puede hacer esto haciendo clic en el botón "Instalar" junto al icono del juego. El emulador descargará e instalará el juego automáticamente. Es posible que necesites conceder algunos permisos al juego, como el acceso a tu almacenamiento, cámara, micrófono, etc.
-
Paso 4: Lanzar 8 bola piscina y disfrutar jugando en el PC
-
El paso final es lanzar 8 Ball Pool y disfrutar jugando en PC. Puedes hacer esto haciendo clic en el icono del juego en la pantalla de inicio del emulador o en el cajón de la aplicación. El juego comenzará y podrás iniciar sesión con tu cuenta de Miniclip o Facebook. Luego, puedes personalizar tu perfil, elegir el modo de juego y comenzar a jugar con tus amigos u otros jugadores en línea.
-
Método 2: Usando la versión Web/PC de 8 Ball Pool
-
Si no quieres usar un emulador de Android, también puedes jugar 8 Ball Pool en PC Windows 10 usando la versión web/PC del juego. Esta versión está disponible en el sitio web oficial de Miniclip.com y funciona en cualquier navegador que soporte Flash Player. Estos son los pasos para jugar 8 Ball Pool en PC Windows 10 usando la versión web/PC.
-
Paso 1: Ir al sitio web oficial de 8 Ball Pool
-
-
Paso 2: Inicia sesión con tu cuenta de Miniclip o Facebook
-
El siguiente paso es iniciar sesión con su cuenta de Miniclip o Facebook. Puedes hacer esto haciendo clic en el botón "Jugar ahora" y eligiendo tu opción preferida. Si no tienes una cuenta, también puedes crear una gratis haciendo clic en el botón "Registrarse". Deberá proporcionar su dirección de correo electrónico, nombre de usuario, contraseña y país.
-
Paso 3: Comience a jugar 8 bolas en su navegador
-
El paso final es comenzar a jugar 8 Ball Pool en tu navegador. Puedes hacer esto eligiendo tu modo de juego, como 1 contra 1, torneos o práctica. Luego, puedes seleccionar tu mesa, taco y oponente. El juego se cargará y podrás empezar a jugar con el ratón y el teclado.
-
-
Características y beneficios de jugar al billar de 8 bolas en PC Windows 10
-
Ahora que sabes cómo jugar 8 Ball Pool en PC Windows 10, te estarás preguntando por qué deberías hacerlo. ¿Cuáles son las ventajas de jugar 8 Ball Pool en PC Windows 10 sobre jugarlo en su dispositivo móvil? Bueno, hay muchas características y beneficios que puedes disfrutar cuando juegas 8 Ball Pool en PC Windows 10. Estos son algunos de ellos.
-
Pantalla más grande y mejores gráficos
-
Una de las principales razones para jugar 8 Ball Pool en PC Windows 10 es que puedes disfrutar de una pantalla más grande y mejores gráficos. Jugar juegos de billar en una pantalla pequeña puede ser frustrante y estresante para sus ojos. Es posible que se pierda algunos disparos o cometa algunos errores debido a la vista limitada y la resolución. Pero cuando juegas 8 Ball Pool en PC Windows 10, puedes tener una vista de pantalla completa y una resolución de alta definición. Puede ver cada detalle de la tabla, el taco, las bolas y las animaciones. También puede ajustar la configuración de los gráficos según sus preferencias.
-
Controles y macros personalizables
-
-
Multi-Instance y Multi-Tasking
-
Una tercera razón para jugar 8 Ball Pool en PC Windows 10 es que puede usar las funciones de múltiples instancias y multitarea. Jugar juegos de billar en un dispositivo móvil puede ser limitante y aburrido. Es posible que tenga que esperar su turno, ver anuncios o lidiar con la batería baja. Pero cuando juegas a 8 Ball Pool en PC Windows 10, puedes usar la función de múltiples instancias para ejecutar varias instancias del juego al mismo tiempo. Puedes jugar con diferentes cuentas, unirte a diferentes torneos o practicar diferentes habilidades. También puede utilizar la función multitarea para cambiar entre diferentes aplicaciones o ventanas mientras juega el juego. Puedes chatear con tus amigos, ver vídeos, navegar por la web o hacer cualquier otra cosa sin interrumpir tu juego.
-
Ofertas y recompensas exclusivas
-
Una cuarta razón para jugar 8 Ball Pool en PC Windows 10 es que puedes obtener ofertas exclusivas y recompensas. Jugar juegos de billar en un dispositivo móvil puede ser caro y poco gratificante. Es posible que tenga que gastar dinero real para comprar monedas, efectivo, tacos u otros artículos. También es posible que se pierda algunas ofertas o eventos debido a las notificaciones limitadas o el almacenamiento. Pero cuando juegas 8 Ball Pool en PC Windows 10, puedes obtener acceso a ofertas exclusivas y recompensas que solo están disponibles para usuarios de PC. Puedes obtener monedas gratis, dinero en efectivo, tacos u otros artículos completando tareas, viendo videos o participando en eventos. También puedes ser notificado de las últimas actualizaciones, promociones o torneos por el emulador.
-
Conclusión y preguntas frecuentes
-
-
Para ayudarte más, aquí hay algunas preguntas frecuentes sobre 8 Ball Pool en PC Windows 10.
-
-
-
Pregunta
-
Respuesta
-
-
-
¿Es 8 Ball Pool gratis para jugar en PC Windows 10?
-
Sí, 8 Ball Pool es gratis para jugar en PC Windows 10. Sin embargo, es posible que tengas que pagar por algunos elementos o funciones del juego si quieres mejorar tu experiencia de juego.
-
-
-
¿Es seguro jugar 8 bolas en PC Windows 10?
-
Sí, 8 Ball Pool es seguro para jugar en PC Windows 10. Sin embargo, siempre debes descargar e instalar el juego desde fuentes confiables, como Google Play Store o Miniclip.com. También debes evitar usar hacks o trucos que puedan dañar tu dispositivo o cuenta.
-
-
-
¿Puedo jugar 8 bolas sin conexión en PC Windows 10?
-
No, no puedes jugar 8 Ball Pool sin conexión en PC Windows 10. Necesitas una conexión a Internet para jugar el juego en línea con otros jugadores.
-
-
-
¿Puedo transferir mi progreso de móvil a PC Windows 10?
-
Sí, puede transferir su progreso desde el móvil al PC Windows 10. Solo necesitas iniciar sesión con la misma cuenta de Miniclip o Facebook que usaste en tu dispositivo móvil.
-
-
-
¿Puedo jugar con mis amigos en PC Windows 10?
-
Sí, puedes jugar con tus amigos en PC Windows 10. Solo tienes que invitarlos a unirse a tu juego o aceptar sus invitaciones. También puedes chatear con ellos usando la función de chat en el juego.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Alto 39s Aventura Apk Ios.md b/spaces/Benson/text-generation/Examples/Alto 39s Aventura Apk Ios.md
deleted file mode 100644
index 9be414d2787af205ddba044df11cc7f09ac07535..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Alto 39s Aventura Apk Ios.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Alto’s Adventure apk ios: Una odisea de snowboard sereno
-
Si usted está buscando un juego relajante y hermoso para jugar en su iPhone o iPad, es posible que desee echa un vistazo a la aventura de Alto. Este es un juego que combina los elementos de un juego de plataformas 2D y un corredor sin fin, con un tema de snowboard único. En este artículo, te diremos qué es Alto’s Adventure, cómo descargarlo e instalarlo, por qué deberías jugarlo, y algunos consejos y trucos para ayudarte a disfrutarlo más.
Una breve introducción al juego y sus características
-
Alto’s Adventure es un juego desarrollado por Snowman, un pequeño estudio independiente con sede en Toronto, Canadá. Fue lanzado en 2015 para dispositivos iOS, y más tarde para Android, Kindle Fire, Windows y Mac. El juego ha recibido elogios de la crítica y numerosos premios por su arte, música y juego.
-
El juego sigue el viaje de Alto, un joven pastor que vive en un pueblo de montaña. Un día, sus llamas escapan de su corral y corren por las laderas. Alto decide perseguirlos en su tabla de snowboard, junto con sus amigos que tienen diferentes habilidades y habilidades. En el camino, se encuentran con varios obstáculos, como rocas, abismos, ancianos, tormentas, y más.
-
Las características del juego:
-
-
Fluid, graceful and exhilarating physics-based gameplay
-
Procedurally generated terrain based on real-world snowboarding
-
Fully dynamic lighting and weather effects, including thunderstorms, blizzards, fog, rainbows, shooting stars, and more
-
Easy-to-learn, hard-to-master one-button trick system
-
Chain together combos to maximize points and speed
-
Test your skills with 180 handcrafted goals
-
Discover six unique snowboarders, each with their own attributes and special abilities
-
Challenge your friends with Game Center. Compete for the best high score, best distance, and best trick combo!
-
-
Beautifully minimalist and evocative visual design
-
Original music and handcrafted audio for an ambient and immersive experience (headphones recommended!)
-
Universal app with iCloud support. Play on your iPhone and iPad and your progress will always stay in sync.
-
-
How to download and install the Alto's Adventure apk on iOS
-
If you want to play Alto's Adventure on your iOS device, you have two options:
-
-
You can buy it from the App Store for $4.99. This is the official and safest way to get the game. You will need an Apple ID and a device running iOS 9.0 or later. You can also download it on your Mac if you have macOS 11 or later.
-
You can download it from a third-party website as an apk file. This is an unofficial and risky way to get the game. You will need a jailbroken device or an emulator to run it, and you may run into malware, viruses, or other issues that could damage your device or compromise your privacy. We do not recommend this option.
-
-
Why you should play Alto's Adventure
-
The benefits of playing Alto's Adventure
-
Alto's Adventure is more than a game. It is an experience that can enrich your life in many ways. Here are some of the benefits of playing Alto's Adventure:
-
-
Relaxing and immersive gameplay
-
One of the main attractions of Alto's Adventure is its relaxing and captivating gameplay. The game has no timers, scores, or lives to worry about, so you can play at your own pace and enjoy the ride. It also has a Zen mode, where you can explore the world without goals or distractions. The game is designed to help you relax and unwind from the stress and noise of everyday life.
-
Beautiful and dynamic visuals
-
-
Challenging and rewarding goals
-
If you are looking for some challenge and excitement, Alto's Adventure has that too. The game has 180 handcrafted goals that test your skills and creativity. You can try to perform different tricks, combos, grinds, bounces, and more. You can also unlock and use six different snowboarders, each with their own attributes and special abilities, and acquire the wingsuit, which adds a new dimension to the gameplay. The game is fun and satisfying to play.
-
The drawbacks of playing Alto's Adventure
-
Of course, no game is perfect, and Alto's Adventure has some drawbacks you should be aware of. Here are some of them:
-
Requires iOS 9.0 or later
-
If you want to play Alto's Adventure on your iOS device, you will need iOS 9.0 or later installed on it. This means some older devices may not be able to run the game smoothly, or at all. You may also need to update your device regularly to ensure compatibility and performance.
-
Costs $4.99 on the App Store
-
Another drawback of playing Alto's Adventure is that it is not a free game. You will have to pay $4.99 on the App Store to download and install it on your device. This may not be a big deal for some people, but it can be a barrier for others who are on a tight budget or prefer free games.
-
Can drain battery and storage space
-
A final drawback of playing Alto's Adventure is that it can consume a lot of battery and storage space on your device. The game has high-quality graphics and sound effects, which require a lot of power and memory to run. You may need to charge your device frequently or clear some space on it to avoid problems.
-
Tips and tricks for playing Alto's Adventure
-
How to master the one-button trick system
-
-
-
To jump, tap anywhere on the screen once.
-
To do a backflip, tap and hold anywhere on the screen while in the air.
-
To do a frontflip, tap twice quickly while in the air.
-
To grind on rails, ropes, or flag lines, simply land on them with your snowboard.
-
To bounce off rocks or campfires, tap once on them.
-
To do a spin, swipe left or right while in the air.
-
-
How to chain combos and boost your score
-
One of the ways to increase your score and speed in Alto's Adventure is to chain combos. A combo is when you perform two or more tricks in succession without touching the ground or crashing. Here are some tips on how to chain combos:
-
-
The longer you hold a backflip or a frontflip, the more points you get.
-
The more spins you do in one jump, the more points you get.
-
The higher you jump from a ramp or a cliff, the more points you get.
-
The longer you grind on a rail or a rope, the more points you get.
-
The more tricks you do in one combo, the more points you get.
-
The more varied your tricks are in a combo, the more points you get.
-
When you land a combo, you get a speed boost and your scarf grows longer. The longer your scarf, the faster you go.
-
If you crash or touch the ground, your combo ends and your scarf resets.
-
-
How to unlock and use the wingsuit
-
One of the best items in Alto's Adventure is the wingsuit, which lets you fly through the air and perform amazing stunts. Here are some tips on how to unlock and use it:
-
-
To unlock the wingsuit, you need to complete level 25 in the game. You can also buy it for 7,500 coins in Izel's workshop.
-
To use the wingsuit, you need to fill the wingsuit meter by doing tricks and combos. When the meter is full, tap the wingsuit icon in the top-right corner of the screen.
-
-
While using the wingsuit, you can do flips and spins as usual, but you can also do loops and barrel rolls by swiping up or down.
-
While using the wingsuit, you can still grind on rails and ropes, but you cannot bounce off rocks or campfires.
-
While using the wingsuit, you can still collect coins, llamas, and power-ups, but you cannot pick up chasm rescue items.
-
The wingsuit meter drains gradually as you use it. When it is empty, you return to your snowboard automatically.
-
-
Conclusion
-
Alto's Adventure is a game that offers a serene and beautiful snowboarding odyssey anyone can enjoy. Whether you want to relax and explore the world or challenge yourself and master the tricks, Alto's Adventure has something for you. The game has a simple yet elegant one-button trick system, a stunning and dynamic visual design, and an original, immersive soundtrack. It is available for iOS devices for $4.99 on the App Store, or as an apk file from third-party websites; however, we recommend buying it from the official source to avoid any risks or problems. If you are looking for a game that can calm your mind and delight your senses, Alto's Adventure is one you should try.
-
Frequently asked questions
-
Here are some frequently asked questions about Alto's Adventure:
-
-
Q: How many levels are there in Alto's Adventure? A: There are 60 levels in Alto's Adventure, each with three goals to complete. You can replay any level at any time to improve your score or complete missed goals.
-
Q: How can I get more coins in Alto's Adventure? A: You can get more coins in Alto's Adventure by collecting them on the slopes, completing goals, watching ads, or buying them with real money.
-
-
Q: What are the elders in Alto's Adventure? A: The elders are angry villagers who chase you on their snowboards. They appear randomly after level 10 and can knock you off your snowboard if they catch you. You can avoid them by jumping over them, grinding on rails or ropes above them, or using power-ups.
-
Q: What are the secrets in Alto's Adventure? A: There are a few secrets in Alto's Adventure that you can discover by playing the game. For example, there is a hidden workshop where Izel builds her inventions, and a mysterious temple where Maya practices her flips. There are also some Easter eggs and references to other games and media.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Apk Descargar Tekken 3 35 Mb.md b/spaces/Benson/text-generation/Examples/Apk Descargar Tekken 3 35 Mb.md
deleted file mode 100644
index 799ff3edf9c14e55d27a46ebf94c58db61026591..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apk Descargar Tekken 3 35 Mb.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
Download the Tekken 3 APK (35 MB): How to play the classic fighting game on your Android device
-
Introduction
-
Tekken 3 is one of the most popular and influential fighting games of all time. It was released in 1997 for arcades and in 1998 for the PlayStation. It features a large and diverse roster of characters, each with their own unique fighting style and story, and it introduced a new 3D movement system that lets players sidestep into or out of the background. Tekken 3 has been praised for its fast and fluid gameplay, its impressive graphics and sound effects, and its varied modes and challenges.
But what if you want to play Tekken 3 on your Android device? Unfortunately, the game is not officially available on the Google Play Store. However, there is a way to enjoy this classic on your smartphone or tablet: you can download a Tekken 3 APK file and install it on your device. An APK file is an application package file that contains all the data and files needed to run an app. By downloading a Tekken 3 APK file, you can bypass the Play Store restrictions and play the game without any problems.
-
In this article, we will show you how to download and install the Tekken 3 APK on your Android device. We will also tell you about the features of the Tekken 3 APK and give you some tips and tricks for playing the game. So, if you are ready to relive the nostalgia of Tekken 3, read on!
-
Features of the Tekken 3 APK
-
The Tekken 3 APK is a modified version of the original game that is optimized for Android devices. It has all the features and content of the PlayStation version, plus some extra benefits. Here are some of its features:
-
3D gameplay and graphics
-
-
Character diversity
-
The Tekken 3 APK features a total of 23 characters, including some newcomers who debuted in this game. You can choose from fighters such as Jin Kazama, Ling Xiaoyu, Bryan Fury, Eddy Gordo, Hwoarang, Forest Law, Julia Chang, and more. Each character has their own personality, backstory, and fighting style. You can also unlock two secret characters: Dr. Bosconovitch and Gon.
-
Varied modes and challenges
-
The Tekken 3 APK offers more than just the standard Arcade and Versus modes. You can also play modes such as Time Attack, Survival, Team Battle, and Practice, each with its own objectives and rewards. You can also try the new Tekken Force mode, where you fight waves of enemies in a side-scrolling brawl, or the Tekken Ball bonus mode, where you have to hit a beach ball with your attacks. These modes add more variety and fun to the game.
-
-
Multiplayer support and online ranking
-
The Tekken 3 APK lets you play with your friends or with other players online. You can connect your device to another device via Bluetooth or Wi-Fi and enjoy a one-on-one match, or compete with players around the world in the online ranking mode, where you earn points and climb the leaderboard. You can also chat with other players and share your tips and strategies.
-
How to download and install the Tekken 3 APK
-
Downloading and installing the Tekken 3 APK is very easy and simple. Just follow these steps:
-
Step 1: Download the APK file from a trusted source
-
The first thing you need to do is download the Tekken 3 APK file from a reliable and safe source. You can use the link below to download the file, which is only 35 MB in size. Make sure you have enough storage space on your device before downloading it.
- """
- )
- with gr.Row():
- with gr.Column():
- with gr.Row():
- text = gr.Text(label="Describe your music", lines=2, interactive=True)
- melody = gr.Audio(source="upload", type="numpy", label="Condition on a melody (optional)", interactive=True)
- with gr.Row():
- submit = gr.Button("Generate")
- with gr.Column():
- output = gr.Video(label="Generated Music")
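-        # Route the button click to predict(); Gradio batches up to 8 queued requests
-        # into a single call so concurrent users share one model invocation.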
- submit.click(predict, inputs=[text, melody], outputs=[output], batch=True, max_batch_size=8)
- gr.Examples(
- fn=predict,
- examples=[
- [
- "An 80s driving pop song with heavy drums and synth pads in the background",
- "./assets/bach.mp3",
- ],
- [
- "A cheerful country song with acoustic guitars",
- "./assets/bolero_ravel.mp3",
- ],
- [
- "90s rock song with electric guitar and heavy drums",
- None,
- ],
- [
- "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130",
- "./assets/bach.mp3",
- ],
- [
- "lofi slow bpm electro chill with organic samples",
- None,
- ],
- ],
- inputs=[text, melody],
- outputs=[output]
- )
- gr.Markdown("""
- ### More details
-
- The model will generate 12 seconds of audio based on the description you provided.
-                    You can optionally provide a reference audio from which a broad melody will be extracted.
- The model will then try to follow both the description and melody provided.
- All samples are generated with the `melody` model.
-
- You can also use your own GPU or a Google Colab by following the instructions on our repo.
-
- See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft)
- for more details.
- """)
-
- # Show the interface
- launch_kwargs = {}
- username = kwargs.get('username')
- password = kwargs.get('password')
- server_port = kwargs.get('server_port', 0)
- inbrowser = kwargs.get('inbrowser', False)
- share = kwargs.get('share', False)
- server_name = kwargs.get('listen')
-
- launch_kwargs['server_name'] = server_name
-
- if username and password:
- launch_kwargs['auth'] = (username, password)
- if server_port > 0:
- launch_kwargs['server_port'] = server_port
- if inbrowser:
- launch_kwargs['inbrowser'] = inbrowser
- if share:
- launch_kwargs['share'] = share
- demo.queue(max_size=8 * 4).launch(**launch_kwargs)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--listen',
- type=str,
- default='0.0.0.0',
- help='IP to listen on for connections to Gradio',
- )
- parser.add_argument(
- '--username', type=str, default='', help='Username for authentication'
- )
- parser.add_argument(
- '--password', type=str, default='', help='Password for authentication'
- )
- parser.add_argument(
- '--server_port',
- type=int,
- default=0,
- help='Port to run the server listener on',
- )
- parser.add_argument(
- '--inbrowser', action='store_true', help='Open in browser'
- )
- parser.add_argument(
- '--share', action='store_true', help='Share the gradio UI'
- )
-
- args = parser.parse_args()
-
- ui(
- username=args.username,
- password=args.password,
- inbrowser=args.inbrowser,
- server_port=args.server_port,
- share=args.share,
- listen=args.listen
- )
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/scripts/make_samples.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/scripts/make_samples.py
deleted file mode 100644
index 5e4d6995cd41cc07b4e8861cb941c6052b0f5517..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/scripts/make_samples.py
+++ /dev/null
@@ -1,292 +0,0 @@
-import argparse, os, sys, glob, math, time
-import torch
-import numpy as np
-from omegaconf import OmegaConf
-from PIL import Image
-from main import instantiate_from_config, DataModuleFromConfig
-from torch.utils.data import DataLoader
-from torch.utils.data.dataloader import default_collate
-from tqdm import trange
-
-
-def save_image(x, path):
- c,h,w = x.shape
- assert c==3
- x = ((x.detach().cpu().numpy().transpose(1,2,0)+1.0)*127.5).clip(0,255).astype(np.uint8)
- Image.fromarray(x).save(path)
-
-
-@torch.no_grad()
-def run_conditional(model, dsets, outdir, top_k, temperature, batch_size=1):
- if len(dsets.datasets) > 1:
- split = sorted(dsets.datasets.keys())[0]
- dset = dsets.datasets[split]
- else:
- dset = next(iter(dsets.datasets.values()))
- print("Dataset: ", dset.__class__.__name__)
- for start_idx in trange(0,len(dset)-batch_size+1,batch_size):
- indices = list(range(start_idx, start_idx+batch_size))
- example = default_collate([dset[i] for i in indices])
-
- x = model.get_input("image", example).to(model.device)
- for i in range(x.shape[0]):
- save_image(x[i], os.path.join(outdir, "originals",
- "{:06}.png".format(indices[i])))
-
- cond_key = model.cond_stage_key
- c = model.get_input(cond_key, example).to(model.device)
-
- scale_factor = 1.0
- quant_z, z_indices = model.encode_to_z(x)
- quant_c, c_indices = model.encode_to_c(c)
-
- cshape = quant_z.shape
-
- xrec = model.first_stage_model.decode(quant_z)
- for i in range(xrec.shape[0]):
- save_image(xrec[i], os.path.join(outdir, "reconstructions",
- "{:06}.png".format(indices[i])))
-
- if cond_key == "segmentation":
- # get image from segmentation mask
- num_classes = c.shape[1]
- c = torch.argmax(c, dim=1, keepdim=True)
- c = torch.nn.functional.one_hot(c, num_classes=num_classes)
- c = c.squeeze(1).permute(0, 3, 1, 2).float()
- c = model.cond_stage_model.to_rgb(c)
-
- idx = z_indices
-
- half_sample = False
- if half_sample:
- start = idx.shape[1]//2
- else:
- start = 0
-
- idx[:,start:] = 0
- idx = idx.reshape(cshape[0],cshape[2],cshape[3])
- start_i = start//cshape[3]
- start_j = start %cshape[3]
-
- cidx = c_indices
- cidx = cidx.reshape(quant_c.shape[0],quant_c.shape[2],quant_c.shape[3])
-
- sample = True
-
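-        # Autoregressive infill: visit each latent position (i, j), crop a 16x16 window of
-        # conditioning and image codes around it, run the transformer on that patch, and
-        # sample the code at (i, j) from the (optionally top-k truncated) softmax.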
- for i in range(start_i,cshape[2]-0):
- if i <= 8:
- local_i = i
- elif cshape[2]-i < 8:
- local_i = 16-(cshape[2]-i)
- else:
- local_i = 8
- for j in range(start_j,cshape[3]-0):
- if j <= 8:
- local_j = j
- elif cshape[3]-j < 8:
- local_j = 16-(cshape[3]-j)
- else:
- local_j = 8
-
- i_start = i-local_i
- i_end = i_start+16
- j_start = j-local_j
- j_end = j_start+16
- patch = idx[:,i_start:i_end,j_start:j_end]
- patch = patch.reshape(patch.shape[0],-1)
- cpatch = cidx[:, i_start:i_end, j_start:j_end]
- cpatch = cpatch.reshape(cpatch.shape[0], -1)
- patch = torch.cat((cpatch, patch), dim=1)
- logits,_ = model.transformer(patch[:,:-1])
- logits = logits[:, -256:, :]
- logits = logits.reshape(cshape[0],16,16,-1)
- logits = logits[:,local_i,local_j,:]
-
- logits = logits/temperature
-
- if top_k is not None:
- logits = model.top_k_logits(logits, top_k)
- # apply softmax to convert to probabilities
- probs = torch.nn.functional.softmax(logits, dim=-1)
- # sample from the distribution or take the most likely
- if sample:
- ix = torch.multinomial(probs, num_samples=1)
- else:
- _, ix = torch.topk(probs, k=1, dim=-1)
- idx[:,i,j] = ix
-
- xsample = model.decode_to_img(idx[:,:cshape[2],:cshape[3]], cshape)
- for i in range(xsample.shape[0]):
- save_image(xsample[i], os.path.join(outdir, "samples",
- "{:06}.png".format(indices[i])))
-
-
-def get_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-r",
- "--resume",
- type=str,
- nargs="?",
- help="load from logdir or checkpoint in logdir",
- )
- parser.add_argument(
- "-b",
- "--base",
- nargs="*",
- metavar="base_config.yaml",
- help="paths to base configs. Loaded from left-to-right. "
- "Parameters can be overwritten or added with command-line options of the form `--key value`.",
- default=list(),
- )
- parser.add_argument(
- "-c",
- "--config",
- nargs="?",
- metavar="single_config.yaml",
- help="path to single config. If specified, base configs will be ignored "
- "(except for the last one if left unspecified).",
- const=True,
- default="",
- )
- parser.add_argument(
- "--ignore_base_data",
- action="store_true",
- help="Ignore data specification from base configs. Useful if you want "
-        "to specify custom datasets on the command line.",
- )
- parser.add_argument(
- "--outdir",
- required=True,
- type=str,
- help="Where to write outputs to.",
- )
- parser.add_argument(
- "--top_k",
- type=int,
- default=100,
- help="Sample from among top-k predictions.",
- )
- parser.add_argument(
- "--temperature",
- type=float,
- default=1.0,
- help="Sampling temperature.",
- )
- return parser
-
-
-def load_model_from_config(config, sd, gpu=True, eval_mode=True):
- if "ckpt_path" in config.params:
- print("Deleting the restore-ckpt path from the config...")
- config.params.ckpt_path = None
- if "downsample_cond_size" in config.params:
- print("Deleting downsample-cond-size from the config and setting factor=0.5 instead...")
- config.params.downsample_cond_size = -1
- config.params["downsample_cond_factor"] = 0.5
- try:
- if "ckpt_path" in config.params.first_stage_config.params:
- config.params.first_stage_config.params.ckpt_path = None
- print("Deleting the first-stage restore-ckpt path from the config...")
- if "ckpt_path" in config.params.cond_stage_config.params:
- config.params.cond_stage_config.params.ckpt_path = None
- print("Deleting the cond-stage restore-ckpt path from the config...")
-    except Exception:
- pass
-
- model = instantiate_from_config(config)
- if sd is not None:
- missing, unexpected = model.load_state_dict(sd, strict=False)
- print(f"Missing Keys in State Dict: {missing}")
- print(f"Unexpected Keys in State Dict: {unexpected}")
- if gpu:
- model.cuda()
- if eval_mode:
- model.eval()
- return {"model": model}
-
-
-def get_data(config):
- # get data
- data = instantiate_from_config(config.data)
- data.prepare_data()
- data.setup()
- return data
-
-
-def load_model_and_dset(config, ckpt, gpu, eval_mode):
- # get data
- dsets = get_data(config) # calls data.config ...
-
- # now load the specified checkpoint
- if ckpt:
- pl_sd = torch.load(ckpt, map_location="cpu")
- global_step = pl_sd["global_step"]
- else:
- pl_sd = {"state_dict": None}
- global_step = None
- model = load_model_from_config(config.model,
- pl_sd["state_dict"],
- gpu=gpu,
- eval_mode=eval_mode)["model"]
- return dsets, model, global_step
-
-
-if __name__ == "__main__":
- sys.path.append(os.getcwd())
-
- parser = get_parser()
-
- opt, unknown = parser.parse_known_args()
-
- ckpt = None
- if opt.resume:
- if not os.path.exists(opt.resume):
- raise ValueError("Cannot find {}".format(opt.resume))
- if os.path.isfile(opt.resume):
- paths = opt.resume.split("/")
- try:
- idx = len(paths)-paths[::-1].index("logs")+1
- except ValueError:
- idx = -2 # take a guess: path/to/logdir/checkpoints/model.ckpt
- logdir = "/".join(paths[:idx])
- ckpt = opt.resume
- else:
- assert os.path.isdir(opt.resume), opt.resume
- logdir = opt.resume.rstrip("/")
- ckpt = os.path.join(logdir, "checkpoints", "last.ckpt")
- print(f"logdir:{logdir}")
- base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*-project.yaml")))
- opt.base = base_configs+opt.base
-
- if opt.config:
- if type(opt.config) == str:
- opt.base = [opt.config]
- else:
- opt.base = [opt.base[-1]]
-
- configs = [OmegaConf.load(cfg) for cfg in opt.base]
- cli = OmegaConf.from_dotlist(unknown)
- if opt.ignore_base_data:
- for config in configs:
- if hasattr(config, "data"): del config["data"]
- config = OmegaConf.merge(*configs, cli)
-
- print(ckpt)
- gpu = True
- eval_mode = True
- show_config = False
- if show_config:
- print(OmegaConf.to_container(config))
-
- dsets, model, global_step = load_model_and_dset(config, ckpt, gpu, eval_mode)
- print(f"Global step: {global_step}")
-
- outdir = os.path.join(opt.outdir, "{:06}_{}_{}".format(global_step,
- opt.top_k,
- opt.temperature))
- os.makedirs(outdir, exist_ok=True)
- print("Writing samples to ", outdir)
- for k in ["originals", "reconstructions", "samples"]:
- os.makedirs(os.path.join(outdir, k), exist_ok=True)
- run_conditional(model, dsets, outdir, opt.top_k, opt.temperature)
diff --git a/spaces/EleutherAI/magma/app.py b/spaces/EleutherAI/magma/app.py
deleted file mode 100644
index ffa979a3d9bc7c75d6492dd292cd71a830ec96ee..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/magma/app.py
+++ /dev/null
@@ -1,86 +0,0 @@
-
-import os
-os.system("pip install deepspeed")
-os.system("pip freeze")
-
-import gradio as gr
-import re
-from magma import Magma
-from magma.image_input import ImageInput
-
-from huggingface_hub import hf_hub_url, cached_download
-
-checkpoint_url = hf_hub_url(repo_id="osanseviero/magma", filename="model.pt")
-checkpoint_path = cached_download(checkpoint_url)
-
-model = Magma.from_checkpoint(
- config_path = "configs/MAGMA_v1.yml",
- checkpoint_path = checkpoint_path,
- device = 'cuda:0'
-)
-
-def generate(image,context, length, temperature, top_k,rearrange):
- # context = context.strip()
-
- # url_regex = r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)'
- # lines = context.split('\n')
- # inputs = []
- # for line in lines:
- # if re.match(url_regex, line):
- # try:
- # inputs.append(ImageInput(line))
- # except Exception as e:
- # return str(e)
- # else:
- # inputs.append(line)
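-    # The "rearrange" flag flips the prompt order: when set, the text prompt is placed
-    # before the image embedding instead of after it.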
- if rearrange:
- inputs =[
- ## supports urls and path/to/image
- context,
- ImageInput(image)
- ]
- else:
- inputs =[
- ## supports urls and path/to/image
- ImageInput(image),
- context
- ]
-
- ## returns a tensor of shape: (1, 149, 4096)
- embeddings = model.preprocess_inputs(inputs)
-
- ## returns a list of length embeddings.shape[0] (batch size)
- output = model.generate(
- embeddings = embeddings,
- max_steps = length,
- temperature = (0.01 if temperature == 0 else temperature),
- top_k = top_k
- )
-
- return output[0]
-
-examples=[["woods_hi.jpeg","Describe the painting:",15,0.7,0,False], ["E8EB3C7B-291C-400A-81F2-AE9229D9CE23.jpeg", "Q: Is the person in the image older than 35?\nA: " , 15, 0.7, 0, False]]
-
-title="MAGMA"
-description="Gradio Demo for MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning by Constantin Eichenberg, Sid Black, Samuel Weinbach, Letitia Parcalabescu, and Anette Frank
arXiv | Github Repo"
-article = ""
-iface = gr.Interface(
- fn=generate,
- inputs=[
- gr.inputs.Image(type="filepath",label="Image Prompt"),gr.inputs.Textbox(
- label="Text Prompt:",
- default="Describe the painting:",
- lines=7),
- gr.inputs.Slider(minimum=1, maximum=100, default=15, step=1, label="Output tokens:"),
- gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.7, label='Temperature'),
- gr.inputs.Slider(minimum=0, maximum=100, default=0, step=1, label='Top K'),
- gr.inputs.Checkbox(default=False, label="Rearrange Prompt", optional=False)
- ],
- outputs=["textbox"],
- examples=examples,
- title=title,
- description=description,
- article=article
-).launch(enable_queue=True,cache_examples=True)
-
-
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py
deleted file mode 100644
index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
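-        # Cascaded refinement: stage 1 runs separate ASPP nets on the low and high frequency
-        # bands, stages 2 and 3 refine the concatenated full-band features, and the result is
-        # a sigmoid mask that is multiplied with the input mixture spectrogram.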
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
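-        # Run the cascaded forward pass, then trim `offset` frames from both edges of the
-        # time axis of the returned masked spectrogram.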
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/EsoCode/text-generation-webui/modules/AutoGPTQ_loader.py b/spaces/EsoCode/text-generation-webui/modules/AutoGPTQ_loader.py
deleted file mode 100644
index 0d41ac0a5589aff024569cb973a4b154477c5908..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/modules/AutoGPTQ_loader.py
+++ /dev/null
@@ -1,71 +0,0 @@
-from pathlib import Path
-
-from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
-
-import modules.shared as shared
-from modules.logging_colors import logger
-from modules.models import get_max_memory_dict
-
-
-def load_quantized(model_name):
- path_to_model = Path(f'{shared.args.model_dir}/{model_name}')
- pt_path = None
-
- # Find the model checkpoint
- if shared.args.checkpoint:
- pt_path = Path(shared.args.checkpoint)
- else:
- for ext in ['.safetensors', '.pt', '.bin']:
- found = list(path_to_model.glob(f"*{ext}"))
- if len(found) > 0:
- if len(found) > 1:
- logger.warning(f'More than one {ext} model has been found. The last one will be selected. It could be wrong.')
-
- pt_path = found[-1]
- break
-
- if pt_path is None:
- logger.error("The model could not be loaded because its checkpoint file in .bin/.pt/.safetensors format could not be located.")
- return
-
- use_safetensors = pt_path.suffix == '.safetensors'
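-    # If the model folder ships no quantize_config.json, build the config from the CLI
-    # arguments (wbits / groupsize / desc_act); otherwise pass None so AutoGPTQ loads the
-    # config that was saved with the model.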
- if not (path_to_model / "quantize_config.json").exists():
- quantize_config = BaseQuantizeConfig(
- bits=bits if (bits := shared.args.wbits) > 0 else 4,
- group_size=gs if (gs := shared.args.groupsize) > 0 else -1,
- desc_act=shared.args.desc_act
- )
- else:
- quantize_config = None
-
- # Define the params for AutoGPTQForCausalLM.from_quantized
- params = {
- 'model_basename': pt_path.stem,
- 'device': "cuda:0" if not shared.args.cpu else "cpu",
- 'use_triton': shared.args.triton,
- 'inject_fused_attention': not shared.args.no_inject_fused_attention,
- 'inject_fused_mlp': not shared.args.no_inject_fused_mlp,
- 'use_safetensors': use_safetensors,
- 'trust_remote_code': shared.args.trust_remote_code,
- 'max_memory': get_max_memory_dict(),
- 'quantize_config': quantize_config,
- 'use_cuda_fp16': not shared.args.no_use_cuda_fp16,
- }
-
- logger.info(f"The AutoGPTQ params are: {params}")
- model = AutoGPTQForCausalLM.from_quantized(path_to_model, **params)
-
- # These lines fix the multimodal extension when used with AutoGPTQ
- if hasattr(model, 'model'):
- if not hasattr(model, 'dtype'):
- if hasattr(model.model, 'dtype'):
- model.dtype = model.model.dtype
-
- if hasattr(model.model, 'model') and hasattr(model.model.model, 'embed_tokens'):
- if not hasattr(model, 'embed_tokens'):
- model.embed_tokens = model.model.model.embed_tokens
-
- if not hasattr(model.model, 'embed_tokens'):
- model.model.embed_tokens = model.model.model.embed_tokens
-
- return model
diff --git a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/639-7bf6be9a90be8cdb.js b/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/639-7bf6be9a90be8cdb.js
deleted file mode 100644
index 512d7e18a8c17a4d33c3e5560d8ad69e641da70d..0000000000000000000000000000000000000000
--- a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/639-7bf6be9a90be8cdb.js
+++ /dev/null
@@ -1,181 +0,0 @@
-(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[639,931],{4875:function(e,t){var n;/*!
- Copyright (c) 2018 Jed Watson.
- Licensed under the MIT License (MIT), see
- http://jedwatson.github.io/classnames
-*/!function(){"use strict";var r={}.hasOwnProperty;function i(){for(var e=[],t=0;t=r&&n<8;n++,r*=128);if(!t)for(var i=r+e,a=n-1;a>=0;a--){var o=i%256;this.source[this.offset+a]=o,i=(i-o)/256}this.offset+=n},o.prototype.writeSections=function(e){this.offset=0;for(var t=0;t=s.getValue()))return n("[fix-webm-duration] Duration section is present"),!1;n("[fix-webm-duration] Duration section is present, but the value is empty"),s.setValue(e)}else n("[fix-webm-duration] Duration section is missing"),(s=new a("Duration","Float")).setValue(e),i.data.push({id:1161,data:s});return o.setValue(1e6),i.updateByData(),r.updateByData(),this.updateByData(),!0},s.prototype.toBlob=function(e){return new Blob([this.source.buffer],{type:e||"video/webm"})},l.default=l,l})?i.call(t,n,t,e):i)&&(e.exports=r)},820:function(e,t,n){"use strict";var r,i;e.exports=(null==(r=n.g.process)?void 0:r.env)&&"object"==typeof(null==(i=n.g.process)?void 0:i.env)?n.g.process:n(3488)},7632:function(e,t,n){"use strict";n.d(t,{ko:function(){return l.ko},tX:function(){return y},Fd:function(){return l.Fd},Sj:function(){return u}});var r=n(931),i=n(4499),a=n(9485),o=function(e,t,n,r){return new(n||(n=Promise))(function(i,a){function o(e){try{l(r.next(e))}catch(t){a(t)}}function s(e){try{l(r.throw(e))}catch(t){a(t)}}function l(e){var t;e.done?i(e.value):((t=e.value)instanceof n?t:new n(function(e){e(t)})).then(o,s)}l((r=r.apply(e,t||[])).next())})};class s{initSession(e,t){return o(this,void 0,void 0,function*(){return yield this.session.initSession(e,t)})}transcribe(e,t,n){return o(this,void 0,void 0,function*(){return null==this.session?a.x4.err(Error("Session not initialized")):n?this.session instanceof r.z?yield this.session.stream(e,t,n):yield this.session.stream(e,t,i.sj(n)):yield this.session.run(e)})}destroy(){null!==this.innerWorker&&(console.warn("Terminating worker"),this.innerWorker.terminate()),this.session=null}constructor(e,t){this.session=e,this.innerWorker=t||null}}var l=n(5453),c=function(e,t,n,r){return new(n||(n=Promise))(function(i,a){function o(e){try{l(r.next(e))}catch(t){a(t)}}function s(e){try{l(r.throw(e))}catch(t){a(t)}}function l(e){var t;e.done?i(e.value):((t=e.value)instanceof n?t:new n(function(e){e(t)})).then(o,s)}l((r=r.apply(e,t||[])).next())})};class u{loadModel(e,t,n){return c(this,void 0,void 0,function*(){let r=yield this.createSession(!0,e,n);return r.isErr?a.x4.err(r.error):(t(r.value),a.x4.ok(r.value))})}createSession(e,t,o){return c(this,void 0,void 0,function*(){if(e&&"undefined"!=typeof document){let l=new Worker(n.tu(new URL(n.p+n.u(931),n.b)),{type:void 0}),c=i.Ud(l),u=yield new c,d=yield u.initSession(t,i.sj(o)),[p,m]=d.repr;return"Err"===p?a.x4.err(Error("Session initialization failed: "+m.toString())):a.x4.ok(new s(u,l))}{let y=new r.z,f=yield y.initSession(t,o);return f.isErr?(console.error("Error initializing session: ",f),a.x4.err(f.error)):a.x4.ok(new s(y))}})}}var d=n(7280),p=n.n(d),m=function(e,t,n,r){return new(n||(n=Promise))(function(i,a){function o(e){try{l(r.next(e))}catch(t){a(t)}}function s(e){try{l(r.throw(e))}catch(t){a(t)}}function l(e){var t;e.done?i(e.value):((t=e.value)instanceof n?t:new n(function(e){e(t)})).then(o,s)}l((r=r.apply(e,t||[])).next())})};class y{static start(){return m(this,void 0,void 0,function*(){if(!navigator.mediaDevices)throw Error("Media device not available");let e=yield navigator.mediaDevices.getUserMedia({audio:!0}),t=new MediaRecorder(e,{mimeType:y.supportedMimes.find(e=>MediaRecorder.isTypeSupported(e))}),n=new y(t);return 
n.currentStream=e,t.addEventListener("dataavailable",e=>{n.audioChunks.push(e.data)}),t.start(),n.currentStart=Date.now(),n})}isRecording(){return null!==this.inner&&"recording"===this.inner.state}stop(){return m(this,void 0,void 0,function*(){if(!this.inner)throw Error("Please start the recorder first");let e=new Promise(e=>{this.inner.addEventListener("stop",()=>m(this,void 0,void 0,function*(){let t=Date.now()-this.currentStart,n=new Blob(this.audioChunks,{type:this.inner.mimeType});this.inner.mimeType.includes("webm")&&(n=yield p()(n,t,{logger:!1}));let r=yield n.arrayBuffer();e({blob:n,buffer:r})})),this.inner.stop(),this.currentStream.getTracks().forEach(e=>e.stop())});return e})}constructor(e){this.currentStart=null,this.currentStream=null,this.inner=null,this.audioChunks=[],this.inner=e}}y.supportedMimes=["audio/webm","audio/ogg"]},5453:function(e,t,n){"use strict";n.d(t,{Fd:function(){return o},Hn:function(){return s},ko:function(){return i}});var r,i,a=n(9485);(r=i||(i={})).WHISPER_TINY="tiny",r.WHISPER_BASE="base",r.WHISPER_SMALL="small",r.WHISPER_MEDIUM="medium",r.WHISPER_LARGE="large";let o=new Map([[i.WHISPER_TINY,51444634],[i.WHISPER_BASE,96834130],[i.WHISPER_SMALL,313018088],[i.WHISPER_MEDIUM,972263884],[i.WHISPER_LARGE,1954315876]]);class s{static fromDBModel(e,t){var n,r,i,o;return n=this,r=void 0,i=void 0,o=function*(){let n=yield t.getTokenizer(e.ID);if(n.isErr)return a.x4.err(n.error);let r=n.value.bytes;return a.x4.ok(new s(e.name,e.bytes,r))},new(i||(i=Promise))(function(e,t){function a(e){try{l(o.next(e))}catch(n){t(n)}}function s(e){try{l(o.throw(e))}catch(n){t(n)}}function l(t){var n;t.done?e(t.value):((n=t.value)instanceof i?n:new i(function(e){e(n)})).then(a,s)}l((o=o.apply(n,r||[])).next())})}constructor(e,t,n){this.name=e,this.data=t,this.tokenizer=n}}},931:function(e,t,n){"use strict";n.d(t,{z:function(){return c}});var r=n(8054),i=n(4499),a=n(9485),o=n(5453),s=n(4208),l=function(e,t,n,r){return new(n||(n=Promise))(function(i,a){function o(e){try{l(r.next(e))}catch(t){a(t)}}function s(e){try{l(r.throw(e))}catch(t){a(t)}}function l(e){var t;e.done?i(e.value):((t=e.value)instanceof n?t:new n(function(e){e(t)})).then(o,s)}l((r=r.apply(e,t||[])).next())})};class c{initSession(e,t){return l(this,void 0,void 0,function*(){if(this.whisperSession)return a.x4.err(Error("Session already initialized. Call `destroy()` first."));let n=yield this.loadModel(e,t);if(n.isErr)return a.x4.err(n.error);let i=n.value;yield r.ZP();let o=new r.hE,s=yield o.setModel(i.data).setTokenizer(i.tokenizer).build();return this.whisperSession=s,a.x4.ok(void 0)})}loadModel(e,t){return l(this,void 0,void 0,function*(){let n=yield s.Z.create(),r=yield n.getModel(e,t);if(r.isErr)return a.x4.err(Error("Failed to load model ".concat(e," with error: ").concat(r.error)));let i=r.value,l=yield o.Hn.fromDBModel(i,n);if(l.isErr)return a.x4.err(Error("Failed to transmute model ".concat(e," with error: ").concat(l.error)));let c=l.value;return a.x4.ok(c)})}run(e){return l(this,void 0,void 0,function*(){return this.whisperSession?a.x4.ok((yield this.whisperSession.run(e))):a.x4.err(Error("The session is not initialized. Call `initSession()` method first."))})}stream(e,t,n){return l(this,void 0,void 0,function*(){return this.whisperSession?a.x4.ok((yield this.whisperSession.stream(e,t,n))):a.x4.err(Error("The session is not initialized. 
Call `initSession()` method first."))})}}"undefined"!=typeof self&&i.Jj(c)},9172:function(e){e.exports={style:{fontFamily:"'__VT323_2a9463', '__VT323_Fallback_2a9463'",fontWeight:400,fontStyle:"normal"},className:"__className_2a9463"}},3488:function(e){!function(){var t={229:function(e){var t,n,r,i=e.exports={};function a(){throw Error("setTimeout has not been defined")}function o(){throw Error("clearTimeout has not been defined")}function s(e){if(t===setTimeout)return setTimeout(e,0);if((t===a||!t)&&setTimeout)return t=setTimeout,setTimeout(e,0);try{return t(e,0)}catch(r){try{return t.call(null,e,0)}catch(n){return t.call(this,e,0)}}}!function(){try{t="function"==typeof setTimeout?setTimeout:a}catch(e){t=a}try{n="function"==typeof clearTimeout?clearTimeout:o}catch(r){n=o}}();var l=[],c=!1,u=-1;function d(){c&&r&&(c=!1,r.length?l=r.concat(l):u=-1,l.length&&p())}function p(){if(!c){var e=s(d);c=!0;for(var t=l.length;t;){for(r=l,l=[];++u1)for(var n=1;n1),u=[],d=!1,p=-1,m=void 0,y=void 0,f=function(e){return u.some(function(t){return!!(t.options.allowTouchMove&&t.options.allowTouchMove(e))})},h=function(e){var t=e||window.event;return!!f(t.target)||t.touches.length>1||(t.preventDefault&&t.preventDefault(),!1)},v=function(e){if(void 0===y){var t=!!e&&!0===e.reserveScrollBarGap,n=window.innerWidth-document.documentElement.clientWidth;t&&n>0&&(y=document.body.style.paddingRight,document.body.style.paddingRight=n+"px")}void 0===m&&(m=document.body.style.overflow,document.body.style.overflow="hidden")},g=function(){void 0!==y&&(document.body.style.paddingRight=y,y=void 0),void 0!==m&&(document.body.style.overflow=m,m=void 0)},b=function(e,t){var n=e.targetTouches[0].clientY-p;return!f(e.target)&&(t&&0===t.scrollTop&&n>0?h(e):t&&t.scrollHeight-t.scrollTop<=t.clientHeight&&n<0?h(e):(e.stopPropagation(),!0))},C=function(e,t){if(!e){console.error("disableBodyScroll unsuccessful - targetElement must be provided when calling disableBodyScroll on IOS devices.");return}!u.some(function(t){return t.targetElement===e})&&(u=[].concat(function(e){if(!Array.isArray(e))return Array.from(e);for(var t=0,n=Array(e.length);t-1&&!(null===a.offsetParent||"hidden"===getComputedStyle(a).visibility)&&function(e){if("INPUT"!==e.tagName||"radio"!==e.type||!e.name)return!0;var t=(e.form||e.ownerDocument).querySelectorAll('input[type="radio"][name="'+e.name+'"]'),n=function(e,t){for(var n=0;nt,set:e=>{Object.is(t,e)||(t=e,n(e))}}),i}(null),i=(0,r.useRef)(null),a=t.isStateful?n:i;return r.useEffect(()=>{e&&("function"==typeof e?e(a.current):e.current=a.current)}),a}(t),$=(0,r.useRef)(null),G=(0,r.useRef)(null),q=(0,r.useRef)(null);null===q.current&&U&&(q.current=document.createElement("div"));var Y=(0,r.useState)(!1),K=Y[0],Z=Y[1];(0,r.useEffect)(function(){return c&&B.add($),function(){B.remove($)}},[c,$]),I($,c,K,void 0===d||d,W);var J=function(){!q.current||h||document.body.contains(q.current)||document.body.appendChild(q.current),document.addEventListener("keydown",Q)},X=function(){q.current&&!h&&document.body.contains(q.current)&&document.body.removeChild(q.current),document.removeEventListener("keydown",Q)},Q=function(e){27===e.keyCode&&B.isTopModal($)&&(null==j||j(e),m&&O())};(0,r.useEffect)(function(){return function(){K&&X()}},[K]),(0,r.useEffect)(function(){c&&!K&&(Z(!0),J())},[c]);var ee=function(){G.current=!1},et=h||q.current,en=c?null!=(n=null==D?void 0:D.overlayAnimationIn)?n:A.overlayAnimationIn:null!=(a=null==D?void 0:D.overlayAnimationOut)?a:A.overlayAnimationOut,er=c?null!=(s=null==D?void 
0:D.modalAnimationIn)?s:A.modalAnimationIn:null!=(l=null==D?void 0:D.modalAnimationOut)?l:A.modalAnimationOut;return K&&et?i.createPortal(r.createElement("div",{className:o()(A.root,null==D?void 0:D.root),style:null==P?void 0:P.root,"data-testid":"root"},r.createElement("div",{className:o()(A.overlay,null==D?void 0:D.overlay),"data-testid":"overlay","aria-hidden":!0,style:w({animation:en+" "+k+"ms"},null==P?void 0:P.overlay)}),r.createElement("div",{ref:$,id:L,className:o()(A.modalContainer,u&&A.modalContainerCenter,null==D?void 0:D.modalContainer),style:null==P?void 0:P.modalContainer,"data-testid":"modal-container",onClick:function(e){if(null===G.current&&(G.current=!0),!G.current){G.current=null;return}null==z||z(e),f&&O(),G.current=null}},r.createElement("div",{ref:V,className:o()(A.modal,null==D?void 0:D.modal),style:w({animation:er+" "+k+"ms"},null==P?void 0:P.modal),onMouseDown:ee,onMouseUp:ee,onClick:ee,onAnimationEnd:function(){c||Z(!1),null==H||H()},id:N,role:void 0===F?"dialog":F,"aria-modal":"true","aria-labelledby":M,"aria-describedby":R,"data-testid":"modal",tabIndex:-1},(void 0===C||C)&&r.createElement(x,{container:V,initialFocusRef:void 0===S?void 0:S}),_,(void 0===v||v)&&r.createElement(E,{classes:A,classNames:D,styles:P,closeIcon:b,onClick:O,id:g})))),et):null})},4499:function(e,t,n){"use strict";n.d(t,{Jj:function(){return c},Ud:function(){return d},sj:function(){return f}});let r=Symbol("Comlink.proxy"),i=Symbol("Comlink.endpoint"),a=Symbol("Comlink.releaseProxy"),o=Symbol("Comlink.thrown"),s=e=>"object"==typeof e&&null!==e||"function"==typeof e,l=new Map([["proxy",{canHandle:e=>s(e)&&e[r],serialize(e){let{port1:t,port2:n}=new MessageChannel;return c(e,t),[n,[n]]},deserialize:e=>(e.start(),d(e))}],["throw",{canHandle:e=>s(e)&&o in e,serialize:({value:e})=>[e instanceof Error?{isError:!0,value:{message:e.message,name:e.name,stack:e.stack}}:{isError:!1,value:e},[]],deserialize(e){if(e.isError)throw Object.assign(Error(e.value.message),e.value);throw e.value}}]]);function c(e,t=self){t.addEventListener("message",function n(r){let i;if(!r||!r.data)return;let{id:a,type:s,path:l}=Object.assign({path:[]},r.data),d=(r.data.argumentList||[]).map(v);try{let p=l.slice(0,-1).reduce((e,t)=>e[t],e),m=l.reduce((e,t)=>e[t],e);switch(s){case"GET":i=m;break;case"SET":p[l.slice(-1)[0]]=v(r.data.value),i=!0;break;case"APPLY":i=m.apply(p,d);break;case"CONSTRUCT":{let g=new m(...d);i=f(g)}break;case"ENDPOINT":{let{port1:b,port2:C}=new MessageChannel;c(e,C),y.set(b,[b]),i=b}break;case"RELEASE":i=void 0;break;default:return}}catch(S){i={value:S,[o]:0}}Promise.resolve(i).catch(e=>({value:e,[o]:0})).then(e=>{let[r,i]=h(e);t.postMessage(Object.assign(Object.assign({},r),{id:a}),i),"RELEASE"===s&&(t.removeEventListener("message",n),u(t))})}),t.start&&t.start()}function u(e){"MessagePort"===e.constructor.name&&e.close()}function d(e,t){return function e(t,n=[],r=function(){}){let o=!1,s=new Proxy(r,{get(r,i){if(p(o),i===a)return()=>g(t,{type:"RELEASE",path:n.map(e=>e.toString())}).then(()=>{u(t),o=!0});if("then"===i){if(0===n.length)return{then:()=>s};let l=g(t,{type:"GET",path:n.map(e=>e.toString())}).then(v);return l.then.bind(l)}return e(t,[...n,i])},set(e,r,i){p(o);let[a,s]=h(i);return g(t,{type:"SET",path:[...n,r].map(e=>e.toString()),value:a},s).then(v)},apply(r,a,s){p(o);let l=n[n.length-1];if(l===i)return g(t,{type:"ENDPOINT"}).then(v);if("bind"===l)return e(t,n.slice(0,-1));let[c,u]=m(s);return 
g(t,{type:"APPLY",path:n.map(e=>e.toString()),argumentList:c},u).then(v)},construct(e,r){p(o);let[i,a]=m(r);return g(t,{type:"CONSTRUCT",path:n.map(e=>e.toString()),argumentList:i},a).then(v)}});return s}(e,[],t)}function p(e){if(e)throw Error("Proxy has been released and is not useable")}function m(e){var t;let n=e.map(h);return[n.map(e=>e[0]),(t=n.map(e=>e[1]),Array.prototype.concat.apply([],t))]}let y=new WeakMap;function f(e){return Object.assign(e,{[r]:!0})}function h(e){for(let[t,n]of l)if(n.canHandle(e)){let[r,i]=n.serialize(e);return[{type:"HANDLER",name:t,value:r},i]}return[{type:"RAW",value:e},y.get(e)||[]]}function v(e){switch(e.type){case"HANDLER":return l.get(e.name).deserialize(e.value);case"RAW":return e.value}}function g(e,t,n){return new Promise(r=>{let i=[,,,,].fill(0).map(()=>Math.floor(Math.random()*Number.MAX_SAFE_INTEGER).toString(16)).join("-");e.addEventListener("message",function t(n){n.data&&n.data.id&&n.data.id===i&&(e.removeEventListener("message",t),r(n.data))}),e.start&&e.start(),e.postMessage(Object.assign({id:i},t),n)})}},1953:function(e,t,n){"use strict";let r,i;n.d(t,{x7:function(){return ei},ZP:function(){return ea}});var a,o=n(959);let s={data:""},l=e=>"object"==typeof window?((e?e.querySelector("#_goober"):window._goober)||Object.assign((e||document.head).appendChild(document.createElement("style")),{innerHTML:" ",id:"_goober"})).firstChild:e||s,c=/(?:([\u0080-\uFFFF\w-%@]+) *:? *([^{;]+?);|([^;}{]*?) *{)|(}\s*)/g,u=/\/\*[^]*?\*\/| +/g,d=/\n+/g,p=(e,t)=>{let n="",r="",i="";for(let a in e){let o=e[a];"@"==a[0]?"i"==a[1]?n=a+" "+o+";":r+="f"==a[1]?p(o,a):a+"{"+p(o,"k"==a[1]?"":t)+"}":"object"==typeof o?r+=p(o,t?t.replace(/([^,])+/g,e=>a.replace(/(^:.*)|([^,])+/g,t=>/&/.test(t)?t.replace(/&/g,e):e?e+" "+t:t)):a):null!=o&&(a=/^--/.test(a)?a:a.replace(/[A-Z]/g,"-$&").toLowerCase(),i+=p.p?p.p(a,o):a+":"+o+";")}return n+(t&&i?t+"{"+i+"}":i)+r},m={},y=e=>{if("object"==typeof e){let t="";for(let n in e)t+=n+y(e[n]);return t}return e},f=(e,t,n,r,i)=>{var a,o;let s=y(e),l=m[s]||(m[s]=(e=>{let t=0,n=11;for(;t>>0;return"go"+n})(s));if(!m[l]){let f=s!==e?e:(e=>{let t,n,r=[{}];for(;t=c.exec(e.replace(u,""));)t[4]?r.shift():t[3]?(n=t[3].replace(d," ").trim(),r.unshift(r[0][n]=r[0][n]||{})):r[0][t[1]]=t[2].replace(d," ").trim();return r[0]})(e);m[l]=p(i?{["@keyframes "+l]:f}:f,n?"":"."+l)}let h=n&&m.g?m.g:null;return n&&(m.g=m[l]),a=m[l],o=t,h?o.data=o.data.replace(h,a):-1===o.data.indexOf(a)&&(o.data=r?a+o.data:o.data+a),l},h=(e,t,n)=>e.reduce((e,r,i)=>{let a=t[i];if(a&&a.call){let o=a(n),s=o&&o.props&&o.props.className||/^go/.test(o)&&o;a=s?"."+s:o&&"object"==typeof o?o.props?"":p(o,""):!1===o?"":o}return e+r+(null==a?"":a)},"");function v(e){let t=this||{},n=e.call?e(t.p):e;return f(n.unshift?n.raw?h(n,[].slice.call(arguments,1),t.p):n.reduce((e,n)=>Object.assign(e,n&&n.call?n(t.p):n),{}):n,l(t.target),t.g,t.o,t.k)}v.bind({g:1});let g,b,C,S=v.bind({k:1});function w(e,t){let n=this||{};return function(){let r=arguments;function i(a,o){let s=Object.assign({},a),l=s.className||i.className;n.p=Object.assign({theme:b&&b()},s),n.o=/ *go\d+/.test(l),s.className=v.apply(n,r)+(l?" 
"+l:""),t&&(s.ref=o);let c=e;return e[0]&&(c=s.as||e,delete s.as),C&&c[0]&&C(s),g(c,s)}return t?t(i):i}}var E=e=>"function"==typeof e,U=(e,t)=>E(e)?e(t):e,T=(r=0,()=>(++r).toString()),k=()=>{if(void 0===i&&"u">typeof window){let e=matchMedia("(prefers-reduced-motion: reduce)");i=!e||e.matches}return i},x=new Map,D=e=>{if(x.has(e))return;let t=setTimeout(()=>{x.delete(e),F({type:4,toastId:e})},1e3);x.set(e,t)},B=e=>{let t=x.get(e);t&&clearTimeout(t)},I=(e,t)=>{switch(t.type){case 0:return{...e,toasts:[t.toast,...e.toasts].slice(0,20)};case 1:return t.toast.id&&B(t.toast.id),{...e,toasts:e.toasts.map(e=>e.id===t.toast.id?{...e,...t.toast}:e)};case 2:let{toast:n}=t;return e.toasts.find(e=>e.id===n.id)?I(e,{type:1,toast:n}):I(e,{type:0,toast:n});case 3:let{toastId:r}=t;return r?D(r):e.toasts.forEach(e=>{D(e.id)}),{...e,toasts:e.toasts.map(e=>e.id===r||void 0===r?{...e,visible:!1}:e)};case 4:return void 0===t.toastId?{...e,toasts:[]}:{...e,toasts:e.toasts.filter(e=>e.id!==t.toastId)};case 5:return{...e,pausedAt:t.time};case 6:let i=t.time-(e.pausedAt||0);return{...e,pausedAt:void 0,toasts:e.toasts.map(e=>({...e,pauseDuration:e.pauseDuration+i}))}}},A=[],P={toasts:[],pausedAt:void 0},F=e=>{P=I(P,e),A.forEach(e=>{e(P)})},R={blank:4e3,error:4e3,success:2e3,loading:1/0,custom:4e3},M=(e={})=>{let[t,n]=(0,o.useState)(P);(0,o.useEffect)(()=>(A.push(n),()=>{let e=A.indexOf(n);e>-1&&A.splice(e,1)}),[t]);let r=t.toasts.map(t=>{var n,r;return{...e,...e[t.type],...t,duration:t.duration||(null==(n=e[t.type])?void 0:n.duration)||(null==e?void 0:e.duration)||R[t.type],style:{...e.style,...null==(r=e[t.type])?void 0:r.style,...t.style}}});return{...t,toasts:r}},L=(e,t="blank",n)=>({createdAt:Date.now(),visible:!0,type:t,ariaProps:{role:"status","aria-live":"polite"},message:e,pauseDuration:0,...n,id:(null==n?void 0:n.id)||T()}),N=e=>(t,n)=>{let r=L(t,e,n);return F({type:2,toast:r}),r.id},O=(e,t)=>N("blank")(e,t);O.error=N("error"),O.success=N("success"),O.loading=N("loading"),O.custom=N("custom"),O.dismiss=e=>{F({type:3,toastId:e})},O.remove=e=>F({type:4,toastId:e}),O.promise=(e,t,n)=>{let r=O.loading(t.loading,{...n,...null==n?void 0:n.loading});return e.then(e=>(O.success(U(t.success,e),{id:r,...n,...null==n?void 0:n.success}),e)).catch(e=>{O.error(U(t.error,e),{id:r,...n,...null==n?void 0:n.error})}),e};var j=(e,t)=>{F({type:1,toast:{id:e,height:t}})},z=()=>{F({type:5,time:Date.now()})},H=e=>{let{toasts:t,pausedAt:n}=M(e);(0,o.useEffect)(()=>{if(n)return;let e=Date.now(),r=t.map(t=>{if(t.duration===1/0)return;let n=(t.duration||0)+t.pauseDuration-(e-t.createdAt);if(n<0){t.visible&&O.dismiss(t.id);return}return setTimeout(()=>O.dismiss(t.id),n)});return()=>{r.forEach(e=>e&&clearTimeout(e))}},[t,n]);let r=(0,o.useCallback)(()=>{n&&F({type:6,time:Date.now()})},[n]),i=(0,o.useCallback)((e,n)=>{let{reverseOrder:r=!1,gutter:i=8,defaultPosition:a}=n||{},o=t.filter(t=>(t.position||a)===(e.position||a)&&t.height),s=o.findIndex(t=>t.id===e.id),l=o.filter((e,t)=>te.visible).slice(...r?[l+1]:[0,l]).reduce((e,t)=>e+(t.height||0)+i,0)},[t]);return{toasts:t,handlers:{updateHeight:j,startPause:z,endPause:r,calculateOffset:i}}},_=w("div")`
- width: 20px;
- opacity: 0;
- height: 20px;
- border-radius: 10px;
- background: ${e=>e.primary||"#ff4b4b"};
- position: relative;
- transform: rotate(45deg);
-
- animation: ${S`
-from {
- transform: scale(0) rotate(45deg);
- opacity: 0;
-}
-to {
- transform: scale(1) rotate(45deg);
- opacity: 1;
-}`} 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275)
- forwards;
- animation-delay: 100ms;
-
- &:after,
- &:before {
- content: '';
- animation: ${S`
-from {
- transform: scale(0);
- opacity: 0;
-}
-to {
- transform: scale(1);
- opacity: 1;
-}`} 0.15s ease-out forwards;
- animation-delay: 150ms;
- position: absolute;
- border-radius: 3px;
- opacity: 0;
- background: ${e=>e.secondary||"#fff"};
- bottom: 9px;
- left: 4px;
- height: 2px;
- width: 12px;
- }
-
- &:before {
- animation: ${S`
-from {
- transform: scale(0) rotate(90deg);
- opacity: 0;
-}
-to {
- transform: scale(1) rotate(90deg);
- opacity: 1;
-}`} 0.15s ease-out forwards;
- animation-delay: 180ms;
- transform: rotate(90deg);
- }
-`,W=w("div")`
- width: 12px;
- height: 12px;
- box-sizing: border-box;
- border: 2px solid;
- border-radius: 100%;
- border-color: ${e=>e.secondary||"#e0e0e0"};
- border-right-color: ${e=>e.primary||"#616161"};
- animation: ${S`
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-`} 1s linear infinite;
-`,V=w("div")`
- width: 20px;
- opacity: 0;
- height: 20px;
- border-radius: 10px;
- background: ${e=>e.primary||"#61d345"};
- position: relative;
- transform: rotate(45deg);
-
- animation: ${S`
-from {
- transform: scale(0) rotate(45deg);
- opacity: 0;
-}
-to {
- transform: scale(1) rotate(45deg);
- opacity: 1;
-}`} 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275)
- forwards;
- animation-delay: 100ms;
- &:after {
- content: '';
- box-sizing: border-box;
- animation: ${S`
-0% {
- height: 0;
- width: 0;
- opacity: 0;
-}
-40% {
- height: 0;
- width: 6px;
- opacity: 1;
-}
-100% {
- opacity: 1;
- height: 10px;
-}`} 0.2s ease-out forwards;
- opacity: 0;
- animation-delay: 200ms;
- position: absolute;
- border-right: 2px solid;
- border-bottom: 2px solid;
- border-color: ${e=>e.secondary||"#fff"};
- bottom: 6px;
- left: 6px;
- height: 10px;
- width: 6px;
- }
-`,$=w("div")`
- position: absolute;
-`,G=w("div")`
- position: relative;
- display: flex;
- justify-content: center;
- align-items: center;
- min-width: 20px;
- min-height: 20px;
-`,q=w("div")`
- position: relative;
- transform: scale(0.6);
- opacity: 0.4;
- min-width: 20px;
- animation: ${S`
-from {
- transform: scale(0.6);
- opacity: 0.4;
-}
-to {
- transform: scale(1);
- opacity: 1;
-}`} 0.3s 0.12s cubic-bezier(0.175, 0.885, 0.32, 1.275)
- forwards;
-`,Y=({toast:e})=>{let{icon:t,type:n,iconTheme:r}=e;return void 0!==t?"string"==typeof t?o.createElement(q,null,t):t:"blank"===n?null:o.createElement(G,null,o.createElement(W,{...r}),"loading"!==n&&o.createElement($,null,"error"===n?o.createElement(_,{...r}):o.createElement(V,{...r})))},K=e=>`
-0% {transform: translate3d(0,${-200*e}%,0) scale(.6); opacity:.5;}
-100% {transform: translate3d(0,0,0) scale(1); opacity:1;}
-`,Z=e=>`
-0% {transform: translate3d(0,0,-1px) scale(1); opacity:1;}
-100% {transform: translate3d(0,${-150*e}%,-1px) scale(.6); opacity:0;}
-`,J=w("div")`
- display: flex;
- align-items: center;
- background: #fff;
- color: #363636;
- line-height: 1.3;
- will-change: transform;
- box-shadow: 0 3px 10px rgba(0, 0, 0, 0.1), 0 3px 3px rgba(0, 0, 0, 0.05);
- max-width: 350px;
- pointer-events: auto;
- padding: 8px 10px;
- border-radius: 8px;
-`,X=w("div")`
- display: flex;
- justify-content: center;
- margin: 4px 10px;
- color: inherit;
- flex: 1 1 auto;
- white-space: pre-line;
-`,Q=(e,t)=>{let n=e.includes("top")?1:-1,[r,i]=k()?["0%{opacity:0;} 100%{opacity:1;}","0%{opacity:1;} 100%{opacity:0;}"]:[K(n),Z(n)];return{animation:t?`${S(r)} 0.35s cubic-bezier(.21,1.02,.73,1) forwards`:`${S(i)} 0.4s forwards cubic-bezier(.06,.71,.55,1)`}},ee=o.memo(({toast:e,position:t,style:n,children:r})=>{let i=e.height?Q(e.position||t||"top-center",e.visible):{opacity:0},a=o.createElement(Y,{toast:e}),s=o.createElement(X,{...e.ariaProps},U(e.message,e));return o.createElement(J,{className:e.className,style:{...i,...n,...e.style}},"function"==typeof r?r({icon:a,message:s}):o.createElement(o.Fragment,null,a,s))});a=o.createElement,p.p=void 0,g=a,b=void 0,C=void 0;var et=({id:e,className:t,style:n,onHeightUpdate:r,children:i})=>{let a=o.useCallback(t=>{if(t){let n=()=>{r(e,t.getBoundingClientRect().height)};n(),new MutationObserver(n).observe(t,{subtree:!0,childList:!0,characterData:!0})}},[e,r]);return o.createElement("div",{ref:a,className:t,style:n},i)},en=(e,t)=>{let n=e.includes("top"),r=e.includes("center")?{justifyContent:"center"}:e.includes("right")?{justifyContent:"flex-end"}:{};return{left:0,right:0,display:"flex",position:"absolute",transition:k()?void 0:"all 230ms cubic-bezier(.21,1.02,.73,1)",transform:`translateY(${t*(n?1:-1)}px)`,...n?{top:0}:{bottom:0},...r}},er=v`
- z-index: 9999;
- > * {
- pointer-events: auto;
- }
-`,ei=({reverseOrder:e,position:t="top-center",toastOptions:n,gutter:r,children:i,containerStyle:a,containerClassName:s})=>{let{toasts:l,handlers:c}=H(n);return o.createElement("div",{style:{position:"fixed",zIndex:9999,top:16,left:16,right:16,bottom:16,pointerEvents:"none",...a},className:s,onMouseEnter:c.startPause,onMouseLeave:c.endPause},l.map(n=>{let a=n.position||t,s=en(a,c.calculateOffset(n,{reverseOrder:e,gutter:r,defaultPosition:t}));return o.createElement(et,{id:n.id,key:n.id,onHeightUpdate:c.updateHeight,className:n.visible?er:"",style:s},"custom"===n.type?U(n.message,n):i?i(n):o.createElement(ee,{toast:n,position:a}))}))},ea=O},9485:function(e,t,n){"use strict";n.d(t,{x4:function(){return r.x4}}),n(4826);var r=n(3807);n(1866),n(113)}}]);
\ No newline at end of file
diff --git a/spaces/Fakermiya/Nsfw-Sfw_Classifier/Dockerfile b/spaces/Fakermiya/Nsfw-Sfw_Classifier/Dockerfile
deleted file mode 100644
index 7389a194e4f9307a2920c398ec6ad8fd3509e88d..0000000000000000000000000000000000000000
--- a/spaces/Fakermiya/Nsfw-Sfw_Classifier/Dockerfile
+++ /dev/null
@@ -1,99 +0,0 @@
-FROM heartexlabs/label-studio:hf-latest
-
-################################################################################
-#
-# How to Disable Public Account Creation
-# --------------------------------------
-# By default this space allows for the unrestricted creation of new accounts
-# with full access to all projects and data. This is great for trying out
-# Label Studio and collaborating on projects, but you may want to restrict
-# access to your space to only authorized users. Uncomment the following line
-# to disable public account creation for this space.
-#
-# ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true
-#
-# Set secrets in your space to create an initial user, and log in with your
-# provided username and password. Do not set these in your Dockerfile, as they
-# are globally visible on a public space.
-#
-# LABEL_STUDIO_USERNAME
-# LABEL_STUDIO_PASSWORD
-#
-# You will need to provide new users with an invitation link to join the space.
-#
-################################################################################
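-
-# For illustration only (the image name and credential values below are
-# placeholders, not part of this Dockerfile): once the variables above are set,
-# a locked-down local test run could look like
-#
-#   docker run -it -p 8080:8080 \
-#     -e LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true \
-#     -e LABEL_STUDIO_USERNAME=admin@example.com \
-#     -e LABEL_STUDIO_PASSWORD=change-me \
-#     <this-image>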
-
-################################################################################
-#
-# How to Enable Configuration Persistence
-# ---------------------------------------
-# By default this space stores all project configuration and data annotations
-# in local storage with Sqlite. If the space is reset, all configuration and
-# annotation data in the space will be lost. You can enable configuration
-# persistence by connecting an external Postgres database to your space,
-# guaranteeing that all project and annotation settings are preserved.
-#
-# Set the following secret variables to match your own hosted instance of
-# Postgres. We strongly recommend setting these as secrets to prevent leaking
-# information about your database service to the public in your spaces
-# definition.
-#
-# ENV DJANGO_DB=default
-# ENV POSTGRE_NAME=
-# ENV POSTGRE_PORT=
-# ENV POSTGRE_USER=
-# ENV POSTGRE_PASSWORD=
-# ENV POSTGRE_HOST=
-#
-# Uncomment the following line to remove the warning about ephemeral storage
-#
-# ENV STORAGE_PERSISTENCE=1
-#
-# Note that you will need to connect cloud storage to host data items that you
-# want to annotate, as local storage will not be preserved across a space reset.
-#
-################################################################################
-
-################################################################################
-#
-# How to Enable Cloud Storage
-# ---------------------------
-# By default the only data storage enabled for this space is local. In the case
-# of a space reset, all data will be lost. To enable permanent storage, you
-# must enable a cloud storage connector. We also strongly recommend enabling
-# configuration persistence to preserve project data, annotations, and user
-# settings. Choose the appropriate cloud connector and configure the secrets
-# for it.
-#
-# Amazon S3
-# =========
-# STORAGE_TYPE=s3
-# STORAGE_AWS_ACCESS_KEY_ID=""
-# STORAGE_AWS_SECRET_ACCESS_KEY=""
-# STORAGE_AWS_BUCKET_NAME=""
-# STORAGE_AWS_REGION_NAME=""
-# STORAGE_AWS_FOLDER=""
-#
-# Google Cloud Storage
-# ====================
-#
-# STORAGE_TYPE=gcs
-# STORAGE_GCS_BUCKET_NAME=""
-# STORAGE_GCS_PROJECT_ID=""
-# STORAGE_GCS_FOLDER=""
-# GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json"
-#
-# Azure Blob Storage
-# ==================
-#
-# STORAGE_TYPE=azure
-# STORAGE_AZURE_ACCOUNT_NAME=""
-# STORAGE_AZURE_ACCOUNT_KEY=""
-# STORAGE_AZURE_CONTAINER_NAME=""
-# STORAGE_AZURE_FOLDER=""
-#
-#
-################################################################################
-
-CMD exec label-studio --host=$SPACE_HOST
diff --git a/spaces/Fengbinbin/gpt-academic/docs/README_FR.md b/spaces/Fengbinbin/gpt-academic/docs/README_FR.md
deleted file mode 100644
index f21e90035ef2ddea91382155e0ad46b6740f5322..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/docs/README_FR.md
+++ /dev/null
@@ -1,296 +0,0 @@
-> **Note**
->
-> Ce fichier README est généré automatiquement par le plugin de traduction markdown de ce projet et n'est peut-être pas correct à 100%.
->
-
-# ChatGPT Optimisation Académique
-
-**Si vous aimez ce projet, donnez-lui une étoile; si vous avez inventé des raccourcis académiques plus utiles ou des plugins fonctionnels, n'hésitez pas à ouvrir une issue ou une pull request. Nous avons également un fichier README en [anglais|](docs/README_EN.md)[japonais|](docs/README_JP.md)[russe|](docs/README_RS.md)[français](docs/README_FR.md) traduit par ce projet lui-même.**
-
-> **Note**
->
-> 1. Veuillez noter que seuls les plugins de fonction signalés en **rouge** sont capables de lire les fichiers, certains plugins se trouvent dans le **menu déroulant** de la section plugin. Les PR ajoutant de nouveaux plugins sont par ailleurs les bienvenus et seront traités en priorité !
->
-> 2. Chaque fichier dans ce projet est expliqué en détail dans l'auto-analyse [self_analysis.md](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins fonctionnels pertinents pour appeler GPT et générer un rapport d'auto-analyse projet mis à jour. Les questions fréquemment posées sont résumées dans le [wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
->
-
-
-
-Fonctionnalité | Description
---- | ---
-Polissage en un clic | Prend en charge la correction en un clic et la recherche d'erreurs de syntaxe dans les documents de recherche.
-Traduction Chinois-Anglais en un clic | Une touche pour traduire la partie chinoise en anglais ou celle anglaise en chinois.
-Explication de code en un clic | Affiche et explique correctement le code.
-[Raccourcis clavier personnalisables](https://www.bilibili.com/video/BV14s4y1E7jN) | Prend en charge les raccourcis clavier personnalisables.
-[Configuration du serveur proxy](https://www.bilibili.com/video/BV1rc411W7Dr) | Prend en charge la configuration du serveur proxy.
-Conception modulaire | Prend en charge la personnalisation des plugins de fonctions et des [plugins] de fonctions hiérarchiques personnalisés, et les plugins prennent en charge [la mise à jour à chaud](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-[Auto-analyse du programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] [Lire en un clic](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) le code source de ce projet.
-[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] En un clic, les projets Python/C/C++/Java/Lua/... peuvent être analysés.
-Lire le document de recherche | [Plugins] Lit l'intégralité d'un article au format LaTeX et en génère un résumé.
-Traduction et polissage de l'article complet en LaTeX | [Plugins] Une touche pour traduire ou corriger en LaTeX
-Génération Commentaire de fonction en vrac | [Plugins] Lisez en un clic les fonctions et générez des commentaires de fonction.
-Rapport d'analyse automatique des chats générés | [Plugins] Génère un rapport de synthèse après l'exécution.
-[Assistant arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugins] Entrez l'url de l'article arxiv pour traduire le résumé + télécharger le PDF en un clic
-[Traduction complète des articles PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugins] Extraire le titre et le résumé de l'article PDF + Traduire le texte entier (multithread)
-[Aide à la recherche Google Academ](https://www.bilibili.com/video/BV19L411U7ia) | [Plugins] Donnez à GPT l'URL de n'importe quelle page de recherche Google Academ pour vous aider à sélectionner des articles intéressants
-Affichage de formules/images/tableaux | Affiche simultanément la forme brute et la forme rendue des formules ; prise en charge de plusieurs formules et de la coloration syntaxique du code
-Prise en charge des plugins multithread | Prise en charge de l'appel multithread de chatgpt, traitement en masse de texte ou de programmes en un clic
-Activer le thème Gradio sombre [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) au démarrage | Ajoutez ```/?__dark-theme=true``` à l'URL du navigateur pour basculer vers le thème sombre
-[Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [prise en charge de l'interface API2D](https://api2d.com/) | Que diriez-vous d'être servi en même temps par GPT-3.5, GPT-4 et [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B) ?
-Essai en ligne sur Hugging Face sans VPN | Après vous être connecté à Hugging Face, copiez [cet espace](https://huggingface.co/spaces/qingxu98/gpt-academic)
-... | ...
-
-
-
-
-
-- Nouvelle interface (modifiable en modifiant l'option de mise en page dans config.py pour basculer entre les mises en page gauche-droite et haut-bas)
-
-
-
-
-
-- Tous les boutons sont générés dynamiquement en lisant functional.py, les utilisateurs peuvent ajouter librement des fonctions personnalisées pour libérer le presse-papiers.
-
-
-
-
-- Correction/amélioration
-
-
-
-
-- Si la sortie contient des formules, elles seront affichées simultanément sous forme de texte brut et de forme rendue pour faciliter la copie et la lecture.
-
-
-
-
-- Pas envie de lire le code du projet ? Faites votre propre démo avec ChatGPT.
-
-
-
-
-- Utilisation combinée de plusieurs modèles de langage sophistiqués (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-
-
-
-Utilisation combinée de plusieurs modèles de langage sophistiqués en version de test [huggingface](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (la version huggingface ne prend pas en charge Chatglm).
-
-
----
-
-## Installation - Méthode 1 : Exécution directe (Windows, Linux or MacOS)
-
-1. Téléchargez le projet
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-2. Configuration de l'API_KEY et des paramètres de proxy
-
-Dans `config.py`, configurez les paramètres de proxy et de clé d'API OpenAI, comme indiqué ci-dessous
-```
-1. Si vous êtes en Chine, vous devez configurer un proxy étranger pour utiliser l'API OpenAI en toute transparence. Pour ce faire, veuillez lire attentivement le fichier config.py (1. Modifiez l'option USE_PROXY ; 2. Modifiez les paramètres de proxies comme indiqué dans les instructions).
-2. Configurez votre clé API OpenAI. Vous devez vous inscrire sur le site web d'OpenAI pour obtenir une clé API. Une fois que vous avez votre clé API, vous pouvez la configurer dans le fichier config.py.
-3. Tous les problèmes liés aux réseaux de proxy (temps d'attente, non-fonctionnement des proxies) sont résumés dans https://github.com/binary-husky/chatgpt_academic/issues/1.
-```
-(Remarque : le programme vérifie d'abord s'il existe un fichier de configuration privé nommé `config_private.py`, et utilise les configurations de celui-ci à la place de celles du fichier `config.py`. Par conséquent, si vous comprenez notre logique de lecture de configuration, nous vous recommandons fortement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de celui-ci dans `config_private.py`. `config_private.py` n'est pas contrôlé par git et rend vos informations personnelles plus sûres.)
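-
-À titre d'illustration uniquement (schéma simplifié et hypothétique, qui ne correspond pas au code réel du projet), le mécanisme de priorité décrit ci-dessus revient à ceci :
-
-```python
-# Esquisse : une valeur définie dans config_private.py prime sur celle de config.py.
-import importlib
-
-def read_conf(name):
-    try:
-        private = importlib.import_module("config_private")
-        if hasattr(private, name):
-            return getattr(private, name)
-    except ImportError:
-        pass  # pas de config_private.py : on retombe sur config.py
-    return getattr(importlib.import_module("config"), name)
-
-API_KEY = read_conf("API_KEY")
-```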
-
-3. Installation des dépendances
-```sh
-# (Option 1) Recommandé
-python -m pip install -r requirements.txt
-
-# (Option 2) Si vous utilisez anaconda, les étapes sont similaires :
-# (Option 2.1) conda create -n gptac_venv python=3.11
-# (Option 2.2) conda activate gptac_venv
-# (Option 2.3) python -m pip install -r requirements.txt
-
-# note : Utilisez la source pip officielle ou la source pip Alibaba. D'autres sources (comme celles des universités) pourraient poser problème. Pour utiliser temporairement une autre source, utilisez :
-# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-```
-
-Si vous avez besoin de prendre en charge ChatGLM de Tsinghua, vous devez installer des dépendances supplémentaires (si vous n'êtes pas familier avec Python ou si votre ordinateur n'est pas assez performant, nous vous recommandons de ne pas essayer) :
-```sh
-python -m pip install -r request_llm/requirements_chatglm.txt
-```
-
-4. Exécution
-```sh
-python main.py
-```
-
-5. Tester les plugins de fonctions
-```
-- Test Python Project Analysis
- Dans la zone de saisie, entrez `./crazy_functions/test_project/python/dqn`, puis cliquez sur "Parse Entire Python Project"
-- Test d'auto-lecture du code
- Cliquez sur "[Démo multi-thread] Parser ce projet lui-même (auto-traduction de la source)"
-- Test du modèle de fonctionnalité expérimentale (exige une réponse de l'IA à ce qui est arrivé aujourd'hui dans l'histoire). Vous pouvez utiliser cette fonctionnalité comme modèle pour des fonctions plus complexes.
- Cliquez sur "[Démo modèle de plugin de fonction] Histoire du Jour"
-- Le menu déroulant de la zone de plugin de fonctionnalité contient plus de fonctionnalités à sélectionner.
-```
-
-## Installation - Méthode 2 : Utilisation de docker (Linux)
-
-
-
-1. ChatGPT seul (recommandé pour la plupart des gens)
-``` sh
-# Télécharger le projet
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-# Configurer le proxy outre-mer et la clé API OpenAI
-# Modifier le fichier config.py avec n'importe quel éditeur de texte
-# Installer
-docker build -t gpt-academic .
-# Exécuter
-docker run --rm -it --net=host gpt-academic
-
-# Tester les modules de fonction
-## Tester la fonction modèle des modules (requiert la réponse de GPT à "qu'est-ce qui s'est passé dans l'histoire aujourd'hui ?"), vous pouvez utiliser cette fonction en tant que modèle pour implémenter des fonctions plus complexes.
-# Cliquez sur "[Exemple de modèle de module] Histoire d'aujourd'hui"
-## Tester le résumé écrit pour le projet LaTeX
-# Dans la zone de saisie, tapez ./crazy_functions/test_project/latex/attention, puis cliquez sur "Lire le résumé de l'article de recherche LaTeX"
-## Tester l'analyse du projet Python
-# Dans la zone de saisie, tapez ./crazy_functions/test_project/python/dqn, puis cliquez sur "Analyser l'ensemble du projet Python"
-
-# D'autres fonctions sont disponibles dans la liste déroulante des modules de fonction.
-```
-
-2. ChatGPT+ChatGLM (nécessite une grande connaissance de docker et une configuration informatique suffisamment puissante)
-``` sh
-# Modifier le dockerfile
-cd docs && nano Dockerfile+ChatGLM
-# Comment construire (Dockerfile+ChatGLM se trouve dans le dossier docs, faites d'abord cd docs)
-docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
-# Comment exécuter (1) Directement :
-docker run --rm -it --net=host --gpus=all gpt-academic
-# Comment exécuter (2) Pour effectuer quelques ajustements dans le conteneur avant de lancer :
-docker run --rm -it --net=host --gpus=all gpt-academic bash
-```
-
-## Installation - Méthode 3 : Autres méthodes de déploiement
-
-1. Déploiement sur un cloud serveur distant
-Veuillez consulter le [wiki de déploiement-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-2. Utilisation de WSL2 (Windows Subsystem for Linux)
-Veuillez consulter le [wiki de déploiement-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-
-## Configuration du proxy pour l'installation
-### Méthode 1 : Méthode conventionnelle
-[Configuration du proxy](https://github.com/binary-husky/chatgpt_academic/issues/1)
-
-### Méthode 2 : Tutoriel pour purs débutants
-[Tutoriel pour purs débutants](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
-
-
----
-
-## Personnalisation des nouveaux boutons pratiques (personnalisation des raccourcis académiques)
-Ouvrez le fichier `core_functional.py` avec n'importe quel éditeur de texte, ajoutez les éléments suivants, puis redémarrez le programme. (Si le bouton a déjà été ajouté avec succès et est visible, le préfixe et le suffixe pris en charge peuvent être modifiés à chaud sans avoir besoin de redémarrer le programme.)
-Par exemple:
-```
-"Traduction Français-Chinois": {
- # Préfixe, qui sera ajouté avant votre saisie. Par exemple, pour décrire votre demande, telle que la traduction, le débogage de code, l'amélioration, etc.
- "Prefix": "Veuillez traduire le contenu ci-dessous en chinois, puis expliquer chaque terme propre mentionné dans un tableau Markdown :\n\n",
-
- # Suffixe, qui sera ajouté après votre saisie. Par exemple, en combinaison avec un préfixe, vous pouvez mettre le contenu de votre saisie entre guillemets.
- "Suffix": "",
-},
-```
-
-
-
-
-
----
-
-
-## Présentation de certaines fonctionnalités
-
-### Affichage des images:
-
-
-
-
-
-
-### Si un programme peut comprendre et décomposer lui-même :
-
-
-
-
-
-
-
-
-
-
-### Analyse de tout projet Python/Cpp quelconque :
-
-
-
-
-
-
-
-
-### Lecture et résumé générés automatiquement pour les articles en Latex
-
-
-
-
-### Génération de rapports automatique
-
-
-
-
-
-
-### Conception de fonctionnalités modulaires
-
-
-
-
-
-
-### Traduction de code source en anglais
-
-
-
-
-
-## À faire et planification de version :
-- version 3.2+ (à faire) : Prise en charge de plus de paramètres d'interface de plugin de fonction
-- version 3.1 : Prise en charge de l'interrogation simultanée de plusieurs modèles GPT ! Prise en charge de l'API2d, prise en charge de la répartition de charge de plusieurs clés API
-- version 3.0 : Prise en charge de chatglm et d'autres petits llm
-- version 2.6 : Réorganisation de la structure du plugin, amélioration de l'interactivité, ajout de plus de plugins
-- version 2.5 : Mise à jour automatique, résolution du problème de dépassement de jeton et de texte trop long lors de la compilation du code source complet
-- version 2.4 : (1) Ajout de la fonctionnalité de traduction intégrale de PDF ; (2) Ajout d'une fonctionnalité de changement de position de zone de saisie ; (3) Ajout d'une option de disposition verticale ; (4) Optimisation du plugin de fonction multi-thread.
-- version 2.3 : Amélioration de l'interactivité multi-thread
-- version 2.2 : Prise en charge du rechargement à chaud du plugin de fonction
-- version 2.1 : Mise en page pliable
-- version 2.0 : Introduction du plugin de fonction modulaire
-- version 1.0 : Fonctionnalité de base
-
-## Références et apprentissage
-
-```
-De nombreux designs d'autres projets exceptionnels ont été utilisés pour référence dans le code, notamment :
-
-# Projet 1 : De nombreuses astuces ont été empruntées à ChuanhuChatGPT
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Projet 2 : ChatGLM-6B de Tsinghua :
-https://github.com/THUDM/ChatGLM-6B
-```
-
diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/cleaners.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/cleaners.py
deleted file mode 100644
index 263df9c0f7c185290600454abfff464e7f774576..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/cleaners.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import re
-from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3
-from text.korean import latin_to_hangul, number_to_hangul, divide_hangul, korean_to_lazy_ipa, korean_to_ipa
-from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo, chinese_to_romaji, chinese_to_lazy_ipa, chinese_to_ipa, chinese_to_ipa2
-from text.sanskrit import devanagari_to_ipa
-from text.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2
-from text.thai import num_to_thai, latin_to_thai
-# from text.shanghainese import shanghainese_to_ipa
-# from text.cantonese import cantonese_to_ipa
-# from text.ngu_dialect import ngu_dialect_to_ipa
-
-
-def japanese_cleaners(text):
- text = japanese_to_romaji_with_accent(text)
- text = re.sub(r'([A-Za-z])$', r'\1.', text)
- return text
-
-
-def japanese_cleaners2(text):
- return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…')
-
-
-def korean_cleaners(text):
- '''Pipeline for Korean text'''
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text = divide_hangul(text)
- text = re.sub(r'([\u3131-\u3163])$', r'\1.', text)
- return text
-
-
-# def chinese_cleaners(text):
-# '''Pipeline for Chinese text'''
-# text = number_to_chinese(text)
-# text = chinese_to_bopomofo(text)
-# text = latin_to_bopomofo(text)
-# text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text)
-# return text
-
-def chinese_cleaners(text):
- from pypinyin import Style, pinyin
- text = text.replace("[ZH]", "")
- phones = [phone[0] for phone in pinyin(text, style=Style.TONE3)]
- return ' '.join(phones)
-
-
-def zh_ja_mixture_cleaners(text):
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_romaji(x.group(1))+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent(
- x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def sanskrit_cleaners(text):
- text = text.replace('॥', '।').replace('ॐ', 'ओम्')
- text = re.sub(r'([^।])$', r'\1।', text)
- return text
-
-
-def cjks_cleaners(text):
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_lazy_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[SA\](.*?)\[SA\]',
- lambda x: devanagari_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_lazy_ipa(x.group(1))+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def cjke_cleaners(text):
- text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace(
- 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace(
- 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace(
- 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def cjke_cleaners2(text):
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa2(x.group(1))+' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_ipa2(x.group(1))+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def thai_cleaners(text):
- text = num_to_thai(text)
- text = latin_to_thai(text)
- return text
-
-
-# def shanghainese_cleaners(text):
-# text = shanghainese_to_ipa(text)
-# text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-# return text
-
-
-# def chinese_dialect_cleaners(text):
-# text = re.sub(r'\[ZH\](.*?)\[ZH\]',
-# lambda x: chinese_to_ipa2(x.group(1))+' ', text)
-# text = re.sub(r'\[JA\](.*?)\[JA\]',
-# lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text)
-# text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5',
-# '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text)
-# text = re.sub(r'\[GD\](.*?)\[GD\]',
-# lambda x: cantonese_to_ipa(x.group(1))+' ', text)
-# text = re.sub(r'\[EN\](.*?)\[EN\]',
-# lambda x: english_to_lazy_ipa2(x.group(1))+' ', text)
-# text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group(
-# 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text)
-# text = re.sub(r'\s+$', '', text)
-# text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-# return text
diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_537227KB.py b/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_537227KB.py
deleted file mode 100644
index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_537227KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/patch_match.py b/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/patch_match.py
deleted file mode 100644
index 14febe43c78f49120c8be9f02941c3c1f8fdc3b1..0000000000000000000000000000000000000000
--- a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/patch_match.py
+++ /dev/null
@@ -1,263 +0,0 @@
-#! /usr/bin/env python3
-# -*- coding: utf-8 -*-
-# File : patch_match.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 01/09/2020
-#
-# Distributed under terms of the MIT license.
-
-import ctypes
-import os.path as osp
-from typing import Optional, Union
-
-import numpy as np
-from PIL import Image
-
-
-import os
-if os.name!="nt":
-    # On non-Windows platforms, compile the C extension with make before it is loaded.
- import subprocess
- print('Compiling and loading c extensions from "{}".'.format(osp.realpath(osp.dirname(__file__))))
- # subprocess.check_call(['./travis.sh'], cwd=osp.dirname(__file__))
- subprocess.check_call("make clean && make", cwd=osp.dirname(__file__), shell=True)
-
-
-__all__ = ['set_random_seed', 'set_verbose', 'inpaint', 'inpaint_regularity']
-
-
-class CShapeT(ctypes.Structure):
- _fields_ = [
- ('width', ctypes.c_int),
- ('height', ctypes.c_int),
- ('channels', ctypes.c_int),
- ]
-
-
-class CMatT(ctypes.Structure):
- _fields_ = [
- ('data_ptr', ctypes.c_void_p),
- ('shape', CShapeT),
- ('dtype', ctypes.c_int)
- ]
-
-import tempfile
-from urllib.request import urlopen, Request
-import shutil
-from pathlib import Path
-from tqdm import tqdm
-
-def download_url_to_file(url, dst, hash_prefix=None, progress=True):
- r"""Download object at the given URL to a local path.
-
- Args:
- url (string): URL of the object to download
- dst (string): Full path where object will be saved, e.g. ``/tmp/temporary_file``
- hash_prefix (string, optional): If not None, the SHA256 downloaded file should start with ``hash_prefix``.
- Default: None
- progress (bool, optional): whether or not to display a progress bar to stderr
- Default: True
- https://pytorch.org/docs/stable/_modules/torch/hub.html#load_state_dict_from_url
- """
- file_size = None
- req = Request(url)
- u = urlopen(req)
- meta = u.info()
- if hasattr(meta, 'getheaders'):
- content_length = meta.getheaders("Content-Length")
- else:
- content_length = meta.get_all("Content-Length")
- if content_length is not None and len(content_length) > 0:
- file_size = int(content_length[0])
-
- # We deliberately save it in a temp file and move it after
- # download is complete. This prevents a local working checkpoint
- # being overridden by a broken download.
- dst = os.path.expanduser(dst)
- dst_dir = os.path.dirname(dst)
- f = tempfile.NamedTemporaryFile(delete=False, dir=dst_dir)
-
- try:
- with tqdm(total=file_size, disable=not progress,
- unit='B', unit_scale=True, unit_divisor=1024) as pbar:
- while True:
- buffer = u.read(8192)
- if len(buffer) == 0:
- break
- f.write(buffer)
- pbar.update(len(buffer))
-
- f.close()
- shutil.move(f.name, dst)
- finally:
- f.close()
- if os.path.exists(f.name):
- os.remove(f.name)
-
-if os.name!="nt":
- PMLIB = ctypes.CDLL(osp.join(osp.dirname(__file__), 'libpatchmatch.so'))
-else:
- if not os.path.exists(osp.join(osp.dirname(__file__), 'libpatchmatch.dll')):
- download_url_to_file(url="https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/libpatchmatch.dll",dst=osp.join(osp.dirname(__file__), 'libpatchmatch.dll'))
- if not os.path.exists(osp.join(osp.dirname(__file__), 'opencv_world460.dll')):
- download_url_to_file(url="https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/opencv_world460.dll",dst=osp.join(osp.dirname(__file__), 'opencv_world460.dll'))
- if not os.path.exists(osp.join(osp.dirname(__file__), 'libpatchmatch.dll')):
- print("[Dependency Missing] Please download https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/libpatchmatch.dll and put it into the PyPatchMatch folder")
- if not os.path.exists(osp.join(osp.dirname(__file__), 'opencv_world460.dll')):
- print("[Dependency Missing] Please download https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/opencv_world460.dll and put it into the PyPatchMatch folder")
- PMLIB = ctypes.CDLL(osp.join(osp.dirname(__file__), 'libpatchmatch.dll'))
-
-PMLIB.PM_set_random_seed.argtypes = [ctypes.c_uint]
-PMLIB.PM_set_verbose.argtypes = [ctypes.c_int]
-PMLIB.PM_free_pymat.argtypes = [CMatT]
-PMLIB.PM_inpaint.argtypes = [CMatT, CMatT, ctypes.c_int]
-PMLIB.PM_inpaint.restype = CMatT
-PMLIB.PM_inpaint_regularity.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float]
-PMLIB.PM_inpaint_regularity.restype = CMatT
-PMLIB.PM_inpaint2.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int]
-PMLIB.PM_inpaint2.restype = CMatT
-PMLIB.PM_inpaint2_regularity.argtypes = [CMatT, CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float]
-PMLIB.PM_inpaint2_regularity.restype = CMatT
-
-
-def set_random_seed(seed: int):
- PMLIB.PM_set_random_seed(ctypes.c_uint(seed))
-
-
-def set_verbose(verbose: bool):
- PMLIB.PM_set_verbose(ctypes.c_int(verbose))
-
-
-def inpaint(
- image: Union[np.ndarray, Image.Image],
- mask: Optional[Union[np.ndarray, Image.Image]] = None,
- *,
- global_mask: Optional[Union[np.ndarray, Image.Image]] = None,
- patch_size: int = 15
-) -> np.ndarray:
- """
- PatchMatch based inpainting proposed in:
-
- PatchMatch : A Randomized Correspondence Algorithm for Structural Image Editing
- C.Barnes, E.Shechtman, A.Finkelstein and Dan B.Goldman
- SIGGRAPH 2009
-
- Args:
- image (Union[np.ndarray, Image.Image]): the input image, should be 3-channel RGB/BGR.
- mask (Union[np.array, Image.Image], optional): the mask of the hole(s) to be filled, should be 1-channel.
- If not provided (None), the algorithm will treat all purely white pixels as the holes (255, 255, 255).
- global_mask (Union[np.array, Image.Image], optional): the target mask of the output image.
- patch_size (int): the patch size for the inpainting algorithm.
-
- Return:
- result (np.ndarray): the repaired image, of the same size as the input image.
- """
-
- if isinstance(image, Image.Image):
- image = np.array(image)
- image = np.ascontiguousarray(image)
- assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8'
-
- if mask is None:
- mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8')
- mask = np.ascontiguousarray(mask)
- else:
- mask = _canonize_mask_array(mask)
-
- if global_mask is None:
- ret_pymat = PMLIB.PM_inpaint(np_to_pymat(image), np_to_pymat(mask), ctypes.c_int(patch_size))
- else:
- global_mask = _canonize_mask_array(global_mask)
- ret_pymat = PMLIB.PM_inpaint2(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), ctypes.c_int(patch_size))
-
- ret_npmat = pymat_to_np(ret_pymat)
- PMLIB.PM_free_pymat(ret_pymat)
-
- return ret_npmat
-
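-
-# Illustrative usage of inpaint() above (file names are hypothetical and not part
-# of this module; the import assumes the PyPatchMatch directory is importable):
-#
-#   from PIL import Image
-#   from PyPatchMatch import patch_match
-#
-#   img = Image.open("photo_with_hole.png").convert("RGB")
-#   msk = Image.open("hole_mask.png").convert("L")   # non-zero marks pixels to fill
-#   result = patch_match.inpaint(img, msk, patch_size=15)
-#   Image.fromarray(result).save("repaired.png")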
-
-def inpaint_regularity(
- image: Union[np.ndarray, Image.Image],
- mask: Optional[Union[np.ndarray, Image.Image]],
- ijmap: np.ndarray,
- *,
- global_mask: Optional[Union[np.ndarray, Image.Image]] = None,
- patch_size: int = 15, guide_weight: float = 0.25
-) -> np.ndarray:
- if isinstance(image, Image.Image):
- image = np.array(image)
- image = np.ascontiguousarray(image)
-
- assert isinstance(ijmap, np.ndarray) and ijmap.ndim == 3 and ijmap.shape[2] == 3 and ijmap.dtype == 'float32'
- ijmap = np.ascontiguousarray(ijmap)
-
- assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8'
- if mask is None:
- mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8')
- mask = np.ascontiguousarray(mask)
- else:
- mask = _canonize_mask_array(mask)
-
-
- if global_mask is None:
- ret_pymat = PMLIB.PM_inpaint_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight))
- else:
- global_mask = _canonize_mask_array(global_mask)
- ret_pymat = PMLIB.PM_inpaint2_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight))
-
- ret_npmat = pymat_to_np(ret_pymat)
- PMLIB.PM_free_pymat(ret_pymat)
-
- return ret_npmat
-
-
-def _canonize_mask_array(mask):
- if isinstance(mask, Image.Image):
- mask = np.array(mask)
- if mask.ndim == 2 and mask.dtype == 'uint8':
- mask = mask[..., np.newaxis]
- assert mask.ndim == 3 and mask.shape[2] == 1 and mask.dtype == 'uint8'
- return np.ascontiguousarray(mask)
-
-
-dtype_pymat_to_ctypes = [
- ctypes.c_uint8,
- ctypes.c_int8,
- ctypes.c_uint16,
- ctypes.c_int16,
- ctypes.c_int32,
- ctypes.c_float,
- ctypes.c_double,
-]
-
-
-dtype_np_to_pymat = {
- 'uint8': 0,
- 'int8': 1,
- 'uint16': 2,
- 'int16': 3,
- 'int32': 4,
- 'float32': 5,
- 'float64': 6,
-}
-
-
-def np_to_pymat(npmat):
- assert npmat.ndim == 3
- return CMatT(
- ctypes.cast(npmat.ctypes.data, ctypes.c_void_p),
- CShapeT(npmat.shape[1], npmat.shape[0], npmat.shape[2]),
- dtype_np_to_pymat[str(npmat.dtype)]
- )
-
-
-def pymat_to_np(pymat):
- npmat = np.ctypeslib.as_array(
- ctypes.cast(pymat.data_ptr, ctypes.POINTER(dtype_pymat_to_ctypes[pymat.dtype])),
- (pymat.shape.height, pymat.shape.width, pymat.shape.channels)
- )
- ret = np.empty(npmat.shape, npmat.dtype)
- ret[:] = npmat
- return ret
-
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_indistribution_bn.sh b/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_indistribution_bn.sh
deleted file mode 100644
index 431482e3f11169b372e96f1e361479c922fa5060..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_indistribution_bn.sh
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/bin/bash
-
-DATA_DIR=$1
-TRAINTASK=${2-'[rainbow-stack,bowl-ball-placement]'}
-TASKNAME=${3-'mix-two'}
-STEPS=${4-'20000'}
-
-DISP=False
-
-echo "Training multi-task dataset... Folder: $DATA_DIR Tasks: $TRAINTASK"
-trap "kill 0" SIGINT
-# You can parallelize these depending on how much resources you have
-
-#############################
-## Language-Conditioned Tasks
-# [align-rope,assembling-kits-seq-seen-colors,assembling-kits-seq-unseen-colors,packing-shapes]
-
-
-# TRAIN
-python cliport/train.py train.task=$TRAINTASK \
- train.agent=cliport \
- train.model_task=$TASKNAME \
- train.attn_stream_fusion_type=add \
- train.trans_stream_fusion_type=conv \
- train.lang_fusion_type=mult \
- train.n_demos=200 \
- train.n_steps=${STEPS} \
- dataset.cache=True \
- train.exp_folder=exps/exp-$TASKNAME \
- dataset.type=multi \
- train.load_from_last_ckpt=False \
- train.batchnorm=True
-
-# Convert the Python-style task list (e.g. '[rainbow-stack,bowl-ball-placement]') into a space-separated string
-bash_array=$(python3 -c "import sys; print(' '.join((sys.argv[1])[1:-1].split(',')))" "$TRAINTASK")
-
-# Evaluate each trained task individually
-echo "Testing multi-task dataset... Folder: $DATA_DIR Tasks: $TRAINTASK"
-
-
-for task in $bash_array
- do
- echo "Testing $task"
- # TEST
- bash scripts/generate_gpt_datasets.sh data $task
-
- python cliport/eval.py model_task=$TASKNAME \
- eval_task=$task \
- agent=cliport \
- mode=test \
- n_demos=100 \
- train_demos=200 \
- checkpoint_type=test_best \
- type=single \
- exp_folder=exps/exp-$TASKNAME \
- update_results=True \
- train.batchnorm=True &
- done
-wait
-
-python notebooks/print_results.py -r=exps/exp-$TASKNAME
-echo "Finished Training."
\ No newline at end of file
diff --git a/spaces/GilbertClaus/VideoCutter/megaDL.py b/spaces/GilbertClaus/VideoCutter/megaDL.py
deleted file mode 100644
index cb7d02a50155e7d9d45e8d277197ca1edb639a31..0000000000000000000000000000000000000000
--- a/spaces/GilbertClaus/VideoCutter/megaDL.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os
-import shutil
-from mega import Mega
-from others import *
-
-def download_mega(name, directory, url):
- if not os.path.exists(directory):
- os.makedirs(directory)
-
- mega = Mega()
- m = mega.login()
-
- # Download the file to a temporary location
- file = m.download_url(url, dest_filename=name)
-
- # Rename the file and move it to the specified directory
- filename = os.path.join(directory, file)
- shutil.move(file, filename)
-
- return filename
-
-def mega_dl(url, judul):
- judul = judul + '.mp4'
- download = '/home/user/app/Mega'
- filename = download_mega(judul, download, url)
- output_file = convert_videos(720, download)
- return output_file
diff --git a/spaces/GooglyBlox/DalleFork/index.html b/spaces/GooglyBlox/DalleFork/index.html
deleted file mode 100644
index 74d65ba18bf356ce52b1d00b0e7c1903d5e285f2..0000000000000000000000000000000000000000
--- a/spaces/GooglyBlox/DalleFork/index.html
+++ /dev/null
@@ -1,64 +0,0 @@
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py
deleted file mode 100644
index a01df33c94e1f8b5f51a51a780b30a77ce99b2c0..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = '../cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_2x_coco.py
deleted file mode 100644
index 34975959f27f0ef8b985ab7d2857c7f2d70e47ae..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_2x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './fcos_hrnetv2p_w18_gn-head_4x4_1x_coco.py'
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/retinanet_r50_caffe_fpn_1x_coco_v1.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/retinanet_r50_caffe_fpn_1x_coco_v1.py
deleted file mode 100644
index ef9392f7e351f489d6d9e97936925b6a16d1212e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/retinanet_r50_caffe_fpn_1x_coco_v1.py
+++ /dev/null
@@ -1,37 +0,0 @@
-_base_ = './retinanet_r50_fpn_1x_coco_v1.py'
-model = dict(
- pretrained='open-mmlab://detectron/resnet50_caffe',
- backbone=dict(
- norm_cfg=dict(requires_grad=False), norm_eval=True, style='caffe'))
-# use caffe img_norm
-img_norm_cfg = dict(
- mean=[102.9801, 115.9465, 122.7717], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index 8c707c79d659bc544d242352bcb29686eb40b004..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './deeplabv3_r50-d8_512x1024_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index f36d490e9c9b31de7eedf735d2712e55f35db998..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './dmnet_r50-d8_769x769_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/formating.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/formating.py
deleted file mode 100644
index 34061c1dd160d4b00aac8dbdc82dccf5c3883ce8..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/formating.py
+++ /dev/null
@@ -1,288 +0,0 @@
-from collections.abc import Sequence
-
-import mmcv
-import numpy as np
-import torch
-from mmcv.parallel import DataContainer as DC
-
-from ..builder import PIPELINES
-
-
-def to_tensor(data):
- """Convert objects of various python types to :obj:`torch.Tensor`.
-
- Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`,
- :class:`Sequence`, :class:`int` and :class:`float`.
-
- Args:
- data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to
- be converted.
- """
-
- if isinstance(data, torch.Tensor):
- return data
- elif isinstance(data, np.ndarray):
- return torch.from_numpy(data)
- elif isinstance(data, Sequence) and not mmcv.is_str(data):
- return torch.tensor(data)
- elif isinstance(data, int):
- return torch.LongTensor([data])
- elif isinstance(data, float):
- return torch.FloatTensor([data])
- else:
- raise TypeError(f'type {type(data)} cannot be converted to tensor.')
-
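-
-# Examples:
-#   to_tensor(np.zeros((2, 3)))  -> float64 tensor of shape (2, 3)
-#   to_tensor([1, 2, 3])         -> tensor([1, 2, 3])
-#   to_tensor(5)                 -> LongTensor([5])
-#   to_tensor(2.5)               -> FloatTensor([2.5])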
-
-@PIPELINES.register_module()
-class ToTensor(object):
- """Convert some results to :obj:`torch.Tensor` by given keys.
-
- Args:
- keys (Sequence[str]): Keys that need to be converted to Tensor.
- """
-
- def __init__(self, keys):
- self.keys = keys
-
- def __call__(self, results):
- """Call function to convert data in results to :obj:`torch.Tensor`.
-
- Args:
- results (dict): Result dict contains the data to convert.
-
- Returns:
- dict: The result dict contains the data converted
- to :obj:`torch.Tensor`.
- """
-
- for key in self.keys:
- results[key] = to_tensor(results[key])
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + f'(keys={self.keys})'
-
-
-@PIPELINES.register_module()
-class ImageToTensor(object):
- """Convert image to :obj:`torch.Tensor` by given keys.
-
- The dimension order of input image is (H, W, C). The pipeline will convert
- it to (C, H, W). If only 2 dimension (H, W) is given, the output would be
- (1, H, W).
-
- Args:
- keys (Sequence[str]): Key of images to be converted to Tensor.
- """
-
- def __init__(self, keys):
- self.keys = keys
-
- def __call__(self, results):
- """Call function to convert image in results to :obj:`torch.Tensor` and
- transpose the channel order.
-
- Args:
- results (dict): Result dict contains the image data to convert.
-
- Returns:
- dict: The result dict contains the image converted
- to :obj:`torch.Tensor` and transposed to (C, H, W) order.
- """
-
- for key in self.keys:
- img = results[key]
- if len(img.shape) < 3:
- img = np.expand_dims(img, -1)
- results[key] = to_tensor(img.transpose(2, 0, 1))
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + f'(keys={self.keys})'
-
-
-@PIPELINES.register_module()
-class Transpose(object):
- """Transpose some results by given keys.
-
- Args:
- keys (Sequence[str]): Keys of results to be transposed.
- order (Sequence[int]): Order of transpose.
- """
-
- def __init__(self, keys, order):
- self.keys = keys
- self.order = order
-
- def __call__(self, results):
- """Call function to convert image in results to :obj:`torch.Tensor` and
- transpose the channel order.
-
- Args:
- results (dict): Result dict contains the image data to convert.
-
- Returns:
- dict: The result dict contains the image converted
- to :obj:`torch.Tensor` and transposed to (C, H, W) order.
- """
-
- for key in self.keys:
- results[key] = results[key].transpose(self.order)
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + \
- f'(keys={self.keys}, order={self.order})'
-
-
-@PIPELINES.register_module()
-class ToDataContainer(object):
- """Convert results to :obj:`mmcv.DataContainer` by given fields.
-
- Args:
- fields (Sequence[dict]): Each field is a dict like
- ``dict(key='xxx', **kwargs)``. The ``key`` in result will
- be converted to :obj:`mmcv.DataContainer` with ``**kwargs``.
- Default: ``(dict(key='img', stack=True),
- dict(key='gt_semantic_seg'))``.
- """
-
- def __init__(self,
- fields=(dict(key='img',
- stack=True), dict(key='gt_semantic_seg'))):
- self.fields = fields
-
- def __call__(self, results):
- """Call function to convert data in results to
- :obj:`mmcv.DataContainer`.
-
- Args:
- results (dict): Result dict contains the data to convert.
-
- Returns:
- dict: The result dict contains the data converted to
- :obj:`mmcv.DataContainer`.
- """
-
- for field in self.fields:
- field = field.copy()
- key = field.pop('key')
- results[key] = DC(results[key], **field)
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + f'(fields={self.fields})'
-
-
-@PIPELINES.register_module()
-class DefaultFormatBundle(object):
- """Default formatting bundle.
-
- It simplifies the pipeline of formatting common fields, including "img"
- and "gt_semantic_seg". These fields are formatted as follows.
-
- - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True)
- - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor,
- (3)to DataContainer (stack=True)
- """
-
- def __call__(self, results):
- """Call function to transform and format common fields in results.
-
- Args:
- results (dict): Result dict contains the data to convert.
-
- Returns:
- dict: The result dict contains the data that is formatted with
- default bundle.
- """
-
- if 'img' in results:
- img = results['img']
- if len(img.shape) < 3:
- img = np.expand_dims(img, -1)
- img = np.ascontiguousarray(img.transpose(2, 0, 1))
- results['img'] = DC(to_tensor(img), stack=True)
- if 'gt_semantic_seg' in results:
- # convert to long
- results['gt_semantic_seg'] = DC(
- to_tensor(results['gt_semantic_seg'][None,
- ...].astype(np.int64)),
- stack=True)
- return results
-
- def __repr__(self):
- return self.__class__.__name__
-
-
-@PIPELINES.register_module()
-class Collect(object):
- """Collect data from the loader relevant to the specific task.
-
- This is usually the last stage of the data loader pipeline. Typically keys
- is set to some subset of "img", "gt_semantic_seg".
-
- The "img_meta" item is always populated. The contents of the "img_meta"
- dictionary depends on "meta_keys". By default this includes:
-
- - "img_shape": shape of the image input to the network as a tuple
- (h, w, c). Note that images may be zero padded on the bottom/right
- if the batch tensor is larger than this shape.
-
- - "scale_factor": a float indicating the preprocessing scale
-
- - "flip": a boolean indicating if image flip transform was used
-
- - "filename": path to the image file
-
- - "ori_shape": original shape of the image as a tuple (h, w, c)
-
- - "pad_shape": image shape after padding
-
- - "img_norm_cfg": a dict of normalization information:
- - mean - per channel mean subtraction
- - std - per channel std divisor
- - to_rgb - bool indicating if bgr was converted to rgb
-
- Args:
- keys (Sequence[str]): Keys of results to be collected in ``data``.
- meta_keys (Sequence[str], optional): Meta keys to be converted to
- ``mmcv.DataContainer`` and collected in ``data[img_metas]``.
- Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape',
- 'pad_shape', 'scale_factor', 'flip', 'flip_direction',
- 'img_norm_cfg')``
- """
-
- def __init__(self,
- keys,
- meta_keys=('filename', 'ori_filename', 'ori_shape',
- 'img_shape', 'pad_shape', 'scale_factor', 'flip',
- 'flip_direction', 'img_norm_cfg')):
- self.keys = keys
- self.meta_keys = meta_keys
-
- def __call__(self, results):
- """Call function to collect keys in results. The keys in ``meta_keys``
- will be converted to :obj:mmcv.DataContainer.
-
- Args:
- results (dict): Result dict contains the data to collect.
-
- Returns:
- dict: The result dict contains the following keys
- - keys in``self.keys``
- - ``img_metas``
- """
-
- data = {}
- img_meta = {}
- for key in self.meta_keys:
- img_meta[key] = results[key]
- data['img_metas'] = DC(img_meta, cpu_only=True)
- for key in self.keys:
- data[key] = results[key]
- return data
-
- def __repr__(self):
- return self.__class__.__name__ + \
- f'(keys={self.keys}, meta_keys={self.meta_keys})'
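
As a side note on the formatting pipeline above: the core of DefaultFormatBundle's image handling is a plain HWC-to-CHW rearrangement before tensor conversion. Below is a minimal, self-contained sketch of just that step, assuming only numpy and standing in for the to_tensor/DataContainer wrapping that mmcv performs; the function name is illustrative.

import numpy as np

def format_img(results):
    # mirror DefaultFormatBundle's 'img' branch on a plain dict
    img = results['img']
    if img.ndim < 3:                      # grayscale: add a channel axis
        img = np.expand_dims(img, -1)
    # (H, W, C) -> (C, H, W), contiguous in memory
    results['img'] = np.ascontiguousarray(img.transpose(2, 0, 1))
    return results

# usage: a fake 4x5 RGB image ends up with shape (3, 4, 5)
out = format_img({'img': np.zeros((4, 5, 3), dtype=np.float32)})
assert out['img'].shape == (3, 4, 5)
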
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/w2l_decoder.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/w2l_decoder.py
deleted file mode 100644
index fbf2d3524ee40bd0d08b6a9560047d96e49b6045..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/w2l_decoder.py
+++ /dev/null
@@ -1,486 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Flashlight decoders.
-"""
-
-import gc
-import itertools as it
-import os.path as osp
-from typing import List
-import warnings
-from collections import deque, namedtuple
-
-import numpy as np
-import torch
-from examples.speech_recognition.data.replabels import unpack_replabels
-from fairseq import tasks
-from fairseq.utils import apply_to_sample
-from omegaconf import open_dict
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-
-
-try:
- from flashlight.lib.text.dictionary import create_word_dict, load_words
- from flashlight.lib.sequence.criterion import CpuViterbiPath, get_data_ptr_as_bytes
- from flashlight.lib.text.decoder import (
- CriterionType,
- LexiconDecoderOptions,
- KenLM,
- LM,
- LMState,
- SmearingMode,
- Trie,
- LexiconDecoder,
- )
-except:
- warnings.warn(
- "flashlight python bindings are required to use this functionality. Please install from https://github.com/facebookresearch/flashlight/tree/master/bindings/python"
- )
- LM = object
- LMState = object
-
-
-class W2lDecoder(object):
- def __init__(self, args, tgt_dict):
- self.tgt_dict = tgt_dict
- self.vocab_size = len(tgt_dict)
- self.nbest = args.nbest
-
- # criterion-specific init
- self.criterion_type = CriterionType.CTC
- self.blank = (
- tgt_dict.index("")
- if "" in tgt_dict.indices
- else tgt_dict.bos()
- )
- if "" in tgt_dict.indices:
- self.silence = tgt_dict.index("")
- elif "|" in tgt_dict.indices:
- self.silence = tgt_dict.index("|")
- else:
- self.silence = tgt_dict.eos()
- self.asg_transitions = None
-
- def generate(self, models, sample, **unused):
- """Generate a batch of inferences."""
- # model.forward normally channels prev_output_tokens into the decoder
- # separately, but SequenceGenerator directly calls model.encoder
- encoder_input = {
- k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens"
- }
- emissions = self.get_emissions(models, encoder_input)
- return self.decode(emissions)
-
- def get_emissions(self, models, encoder_input):
- """Run encoder and normalize emissions"""
- model = models[0]
- encoder_out = model(**encoder_input)
- if hasattr(model, "get_logits"):
- emissions = model.get_logits(encoder_out) # no need to normalize emissions
- else:
- emissions = model.get_normalized_probs(encoder_out, log_probs=True)
- return emissions.transpose(0, 1).float().cpu().contiguous()
-
- def get_tokens(self, idxs):
- """Normalize tokens by handling CTC blank, ASG replabels, etc."""
- idxs = (g[0] for g in it.groupby(idxs))
- idxs = filter(lambda x: x != self.blank, idxs)
- return torch.LongTensor(list(idxs))
-
-
-class W2lViterbiDecoder(W2lDecoder):
- def __init__(self, args, tgt_dict):
- super().__init__(args, tgt_dict)
-
- def decode(self, emissions):
- B, T, N = emissions.size()
- hypos = []
- if self.asg_transitions is None:
- transitions = torch.FloatTensor(N, N).zero_()
- else:
- transitions = torch.FloatTensor(self.asg_transitions).view(N, N)
- viterbi_path = torch.IntTensor(B, T)
- workspace = torch.ByteTensor(CpuViterbiPath.get_workspace_size(B, T, N))
- CpuViterbiPath.compute(
- B,
- T,
- N,
- get_data_ptr_as_bytes(emissions),
- get_data_ptr_as_bytes(transitions),
- get_data_ptr_as_bytes(viterbi_path),
- get_data_ptr_as_bytes(workspace),
- )
- return [
- [{"tokens": self.get_tokens(viterbi_path[b].tolist()), "score": 0}]
- for b in range(B)
- ]
-
-
-class W2lKenLMDecoder(W2lDecoder):
- def __init__(self, args, tgt_dict):
- super().__init__(args, tgt_dict)
-
- self.unit_lm = getattr(args, "unit_lm", False)
-
- if args.lexicon:
- self.lexicon = load_words(args.lexicon)
- self.word_dict = create_word_dict(self.lexicon)
- self.unk_word = self.word_dict.get_index("<unk>")
-
- self.lm = KenLM(args.kenlm_model, self.word_dict)
- self.trie = Trie(self.vocab_size, self.silence)
-
- start_state = self.lm.start(False)
- for i, (word, spellings) in enumerate(self.lexicon.items()):
- word_idx = self.word_dict.get_index(word)
- _, score = self.lm.score(start_state, word_idx)
- for spelling in spellings:
- spelling_idxs = [tgt_dict.index(token) for token in spelling]
- assert (
- tgt_dict.unk() not in spelling_idxs
- ), f"{spelling} {spelling_idxs}"
- self.trie.insert(spelling_idxs, word_idx, score)
- self.trie.smear(SmearingMode.MAX)
-
- self.decoder_opts = LexiconDecoderOptions(
- beam_size=args.beam,
- beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))),
- beam_threshold=args.beam_threshold,
- lm_weight=args.lm_weight,
- word_score=args.word_score,
- unk_score=args.unk_weight,
- sil_score=args.sil_weight,
- log_add=False,
- criterion_type=self.criterion_type,
- )
-
- if self.asg_transitions is None:
- N = 768
- # self.asg_transitions = torch.FloatTensor(N, N).zero_()
- self.asg_transitions = []
-
- self.decoder = LexiconDecoder(
- self.decoder_opts,
- self.trie,
- self.lm,
- self.silence,
- self.blank,
- self.unk_word,
- self.asg_transitions,
- self.unit_lm,
- )
- else:
- assert args.unit_lm, "lexicon free decoding can only be done with a unit language model"
- from flashlight.lib.text.decoder import LexiconFreeDecoder, LexiconFreeDecoderOptions
-
- d = {w: [[w]] for w in tgt_dict.symbols}
- self.word_dict = create_word_dict(d)
- self.lm = KenLM(args.kenlm_model, self.word_dict)
- self.decoder_opts = LexiconFreeDecoderOptions(
- beam_size=args.beam,
- beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))),
- beam_threshold=args.beam_threshold,
- lm_weight=args.lm_weight,
- sil_score=args.sil_weight,
- log_add=False,
- criterion_type=self.criterion_type,
- )
- self.decoder = LexiconFreeDecoder(
- self.decoder_opts, self.lm, self.silence, self.blank, []
- )
-
- def get_timesteps(self, token_idxs: List[int]) -> List[int]:
- """Returns frame numbers corresponding to every non-blank token.
-
- Parameters
- ----------
- token_idxs : List[int]
- IDs of decoded tokens.
-
- Returns
- -------
- List[int]
- Frame numbers corresponding to every non-blank token.
- """
- timesteps = []
- for i, token_idx in enumerate(token_idxs):
- if token_idx == self.blank:
- continue
- if i == 0 or token_idx != token_idxs[i-1]:
- timesteps.append(i)
- return timesteps
-
- def decode(self, emissions):
- B, T, N = emissions.size()
- hypos = []
- for b in range(B):
- emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0)
- results = self.decoder.decode(emissions_ptr, T, N)
-
- nbest_results = results[: self.nbest]
- hypos.append(
- [
- {
- "tokens": self.get_tokens(result.tokens),
- "score": result.score,
- "timesteps": self.get_timesteps(result.tokens),
- "words": [
- self.word_dict.get_entry(x) for x in result.words if x >= 0
- ],
- }
- for result in nbest_results
- ]
- )
- return hypos
-
-
-FairseqLMState = namedtuple("FairseqLMState", ["prefix", "incremental_state", "probs"])
-
-
-class FairseqLM(LM):
- def __init__(self, dictionary, model):
- LM.__init__(self)
- self.dictionary = dictionary
- self.model = model
- self.unk = self.dictionary.unk()
-
- self.save_incremental = False # this currently does not work properly
- self.max_cache = 20_000
-
- model.cuda()
- model.eval()
- model.make_generation_fast_()
-
- self.states = {}
- self.stateq = deque()
-
- def start(self, start_with_nothing):
- state = LMState()
- prefix = torch.LongTensor([[self.dictionary.eos()]])
- incremental_state = {} if self.save_incremental else None
- with torch.no_grad():
- res = self.model(prefix.cuda(), incremental_state=incremental_state)
- probs = self.model.get_normalized_probs(res, log_probs=True, sample=None)
-
- if incremental_state is not None:
- incremental_state = apply_to_sample(lambda x: x.cpu(), incremental_state)
- self.states[state] = FairseqLMState(
- prefix.numpy(), incremental_state, probs[0, -1].cpu().numpy()
- )
- self.stateq.append(state)
-
- return state
-
- def score(self, state: LMState, token_index: int, no_cache: bool = False):
- """
- Evaluate language model based on the current lm state and new word
- Parameters:
- -----------
- state: current lm state
- token_index: index of the word
- (can be lexicon index then you should store inside LM the
- mapping between indices of lexicon and lm, or lm index of a word)
-
- Returns:
- --------
- (LMState, float): pair of (new state, score for the current word)
- """
- curr_state = self.states[state]
-
- def trim_cache(targ_size):
- while len(self.stateq) > targ_size:
- rem_k = self.stateq.popleft()
- rem_st = self.states[rem_k]
- rem_st = FairseqLMState(rem_st.prefix, None, None)
- self.states[rem_k] = rem_st
-
- if curr_state.probs is None:
- new_incremental_state = (
- curr_state.incremental_state.copy()
- if curr_state.incremental_state is not None
- else None
- )
- with torch.no_grad():
- if new_incremental_state is not None:
- new_incremental_state = apply_to_sample(
- lambda x: x.cuda(), new_incremental_state
- )
- elif self.save_incremental:
- new_incremental_state = {}
-
- res = self.model(
- torch.from_numpy(curr_state.prefix).cuda(),
- incremental_state=new_incremental_state,
- )
- probs = self.model.get_normalized_probs(
- res, log_probs=True, sample=None
- )
-
- if new_incremental_state is not None:
- new_incremental_state = apply_to_sample(
- lambda x: x.cpu(), new_incremental_state
- )
-
- curr_state = FairseqLMState(
- curr_state.prefix, new_incremental_state, probs[0, -1].cpu().numpy()
- )
-
- if not no_cache:
- self.states[state] = curr_state
- self.stateq.append(state)
-
- score = curr_state.probs[token_index].item()
-
- trim_cache(self.max_cache)
-
- outstate = state.child(token_index)
- if outstate not in self.states and not no_cache:
- prefix = np.concatenate(
- [curr_state.prefix, torch.LongTensor([[token_index]])], -1
- )
- incr_state = curr_state.incremental_state
-
- self.states[outstate] = FairseqLMState(prefix, incr_state, None)
-
- if token_index == self.unk:
- score = float("-inf")
-
- return outstate, score
-
- def finish(self, state: LMState):
- """
- Evaluate eos for language model based on the current lm state
-
- Returns:
- --------
- (LMState, float): pair of (new state, score for the current word)
- """
- return self.score(state, self.dictionary.eos())
-
- def empty_cache(self):
- self.states = {}
- self.stateq = deque()
- gc.collect()
-
-
-class W2lFairseqLMDecoder(W2lDecoder):
- def __init__(self, args, tgt_dict):
- super().__init__(args, tgt_dict)
-
- self.unit_lm = getattr(args, "unit_lm", False)
-
- self.lexicon = load_words(args.lexicon) if args.lexicon else None
- self.idx_to_wrd = {}
-
- checkpoint = torch.load(args.kenlm_model, map_location="cpu")
-
- if "cfg" in checkpoint and checkpoint["cfg"] is not None:
- lm_args = checkpoint["cfg"]
- else:
- lm_args = convert_namespace_to_omegaconf(checkpoint["args"])
-
- with open_dict(lm_args.task):
- lm_args.task.data = osp.dirname(args.kenlm_model)
-
- task = tasks.setup_task(lm_args.task)
- model = task.build_model(lm_args.model)
- model.load_state_dict(checkpoint["model"], strict=False)
-
- self.trie = Trie(self.vocab_size, self.silence)
-
- self.word_dict = task.dictionary
- self.unk_word = self.word_dict.unk()
- self.lm = FairseqLM(self.word_dict, model)
-
- if self.lexicon:
- start_state = self.lm.start(False)
- for i, (word, spellings) in enumerate(self.lexicon.items()):
- if self.unit_lm:
- word_idx = i
- self.idx_to_wrd[i] = word
- score = 0
- else:
- word_idx = self.word_dict.index(word)
- _, score = self.lm.score(start_state, word_idx, no_cache=True)
-
- for spelling in spellings:
- spelling_idxs = [tgt_dict.index(token) for token in spelling]
- assert (
- tgt_dict.unk() not in spelling_idxs
- ), f"{spelling} {spelling_idxs}"
- self.trie.insert(spelling_idxs, word_idx, score)
- self.trie.smear(SmearingMode.MAX)
-
- self.decoder_opts = LexiconDecoderOptions(
- beam_size=args.beam,
- beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))),
- beam_threshold=args.beam_threshold,
- lm_weight=args.lm_weight,
- word_score=args.word_score,
- unk_score=args.unk_weight,
- sil_score=args.sil_weight,
- log_add=False,
- criterion_type=self.criterion_type,
- )
-
- self.decoder = LexiconDecoder(
- self.decoder_opts,
- self.trie,
- self.lm,
- self.silence,
- self.blank,
- self.unk_word,
- [],
- self.unit_lm,
- )
- else:
- assert args.unit_lm, "lexicon free decoding can only be done with a unit language model"
- from flashlight.lib.text.decoder import LexiconFreeDecoder, LexiconFreeDecoderOptions
-
- d = {w: [[w]] for w in tgt_dict.symbols}
- self.word_dict = create_word_dict(d)
- self.lm = KenLM(args.kenlm_model, self.word_dict)
- self.decoder_opts = LexiconFreeDecoderOptions(
- beam_size=args.beam,
- beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))),
- beam_threshold=args.beam_threshold,
- lm_weight=args.lm_weight,
- sil_score=args.sil_weight,
- log_add=False,
- criterion_type=self.criterion_type,
- )
- self.decoder = LexiconFreeDecoder(
- self.decoder_opts, self.lm, self.silence, self.blank, []
- )
-
- def decode(self, emissions):
- B, T, N = emissions.size()
- hypos = []
-
- def idx_to_word(idx):
- if self.unit_lm:
- return self.idx_to_wrd[idx]
- else:
- return self.word_dict[idx]
-
- def make_hypo(result):
- hypo = {"tokens": self.get_tokens(result.tokens), "score": result.score}
- if self.lexicon:
- hypo["words"] = [idx_to_word(x) for x in result.words if x >= 0]
- return hypo
-
- for b in range(B):
- emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0)
- results = self.decoder.decode(emissions_ptr, T, N)
-
- nbest_results = results[: self.nbest]
- hypos.append([make_hypo(result) for result in nbest_results])
- self.lm.empty_cache()
-
- return hypos
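
The get_tokens helper above performs the standard CTC-style collapse on decoded frame sequences: merge consecutive repeats, then drop the blank index. A pure-Python sketch of the same idea follows; the blank id of 0 is chosen only for this example.

import itertools as it

def collapse_ctc(idxs, blank=0):
    merged = (g[0] for g in it.groupby(idxs))   # merge runs of repeated frames
    return [i for i in merged if i != blank]    # drop CTC blanks

# frame-level output [0,3,3,0,0,5,5,5,0] collapses to the token sequence [3, 5]
assert collapse_ctc([0, 3, 3, 0, 0, 5, 5, 5, 0]) == [3, 5]
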
diff --git a/spaces/HarshulNanda/HARM_ML_App_ludwig/app.py b/spaces/HarshulNanda/HARM_ML_App_ludwig/app.py
deleted file mode 100644
index 83a8e961360b93653e16dda96a641b5d9e112285..0000000000000000000000000000000000000000
--- a/spaces/HarshulNanda/HARM_ML_App_ludwig/app.py
+++ /dev/null
@@ -1,340 +0,0 @@
-from matplotlib import pyplot as plt
-from pytube import YouTube
-from streamlit_player import st_player
-from bokeh.models.widgets import Div
-from youtube_dl import YoutubeDL
-from stqdm import stqdm
-from PIL import Image
-from io import BytesIO
-
-from colors import colorOf
-from categoryPredictor import predictCategoryFor
-from statsViewer import generate_channel_video_data
-from eduContentPredictor import eduContentPrediction
-from youtubesearchpython import Video, ResultMode, VideosSearch, Playlist, ChannelsSearch
-
-import streamlit as st
-import base64
-import pandas as pd
-import chime
-import pytube
-import toml
-import webbrowser
-import numpy as np
-import youtube_dl
-
-st.set_page_config(page_title="HARM Bot", page_icon=Image.open("./assets/harmLogo.ico"))
-# primaryColor = toml.load(".streamlit/config.toml")['theme']['primaryColor']
-s = f"""
-
- """
- st.markdown(hideStreamlitStyle, unsafe_allow_html=True)
-
-# MARK: Adding the sidebar menu
-def add_sidebar_menu():
- with st.sidebar:
-
- st.markdown('''
-
- By HARM, an intern team, aiming to expand the world of AI by providing a useful feature.
- ''', True)
-
- st.markdown("### Team Members ")
-
- if st.button('Harshul Nanda'):
- js = "window.open('https://www.linkedin.com/in/harshulnanda/')"
- html = ''.format(js)
- div = Div(text=html)
- st.bokeh_chart(div)
-
- if st.button('Abhijeet Saroha'):
- js = "window.open('https://www.linkedin.com/in/abhijeet-saroha-a19031229/')"
- html = ''.format(js)
- div = Div(text=html)
- st.bokeh_chart(div)
-
- if st.button('Rishabh Sagar'):
- js = "window.open('https://www.linkedin.com/in/rishabh-sagar-1b0b74229/')"
- html = ''.format(js)
- div = Div(text=html)
- st.bokeh_chart(div)
-
- if st.button('Mayank Arora'):
- js = "window.open('https://www.linkedin.com/in/mayank-arora-24713322a/')"
- html = ''.format(js)
- div = Div(text=html)
- st.bokeh_chart(div)
-
- st.markdown("### Contact us ")
-
- if st.button('Github'):
- js = "window.open('https://github.com/Harshul-18')"
- html = ''.format(js)
- div = Div(text=html)
- st.bokeh_chart(div)
- # webbrowser.open_new_tab('https://github.com/Harshul-18')
-
- if st.button('LinkedIn'):
- js = "window.open('https://www.linkedin.com/company/82157293/admin/')"
- html = ''.format(js)
- div = Div(text=html)
- st.bokeh_chart(div)
- # webbrowser.open_new_tab('https://www.linkedin.com/company/82157293/admin/')
-
- # path = "https://www.buymeacoffee.com/widget/page/HARMBOT?description=Support%20me%20on%20Buy%20me%20a%20coffee!&color=%235F7FF"
- # if st.button("Buy us a coffee"):
- # webbrowser.open_new_tab(path)
-
- st.markdown("""""", unsafe_allow_html=True)
-
- page_bg_img = """
-
- """
- st.markdown(page_bg_img, unsafe_allow_html=True)
-
-# MARK: Adding the HARM logo gif
-def add_image(with_path):
- file_ = open(with_path, "rb")
-
- contents = file_.read()
- data_url = base64.b64encode(contents).decode("utf-8")
- file_.close()
- st.markdown(
- f'
- ',
- unsafe_allow_html=True,
- )
-
-# MARK: Adding the title
-def add_title_text():
- st.title("Hello, I am a YouTube API Bot!")
- st.text("I am a simple tool, just enter the URL and I will give the statistics.")
-
-# MARK: Adding body for page 1 containing all the fields while the youtube video url text input field is not empty
-def bodyOfPage1():
- youtubeVideoUrl = st.text_input("Enter the URL of the Youtube Video", value="", type="default", help="Enter the URL of the Youtube video you want me to show the statistics and predict the category for.")
-
- try:
- if youtubeVideoUrl:
- video = Video.getInfo(youtubeVideoUrl, mode=ResultMode.json)
-
- with st.expander("Prediction"):
-
- isEdu, isCat, catArr, probArr = predictCategoryFor(url=youtubeVideoUrl)
- if isEdu == "Educational":
- st.markdown(
- f"
",
- unsafe_allow_html=True,
- )
-
-
- with st.expander("View Video"):
-
- if (youtubeVideoUrl is None or len(youtubeVideoUrl) == 0):
- print(colorOf.FAIL + "The url input field is empty, please enter a youtube video url." + colorOf.ENDC)
- chime.error()
-
- st_player(youtubeVideoUrl)
-
- try:
- st.markdown("**Author of this video:** " + str(video["channel"]["name"]))
- st.markdown("**Title of video:** " + str(video["title"]))
- st.markdown("**Description of video:** " + str(video["description"]))
- chime.success()
- except Exception as e:
- print(colorOf.FAIL + f"Unable to view the video details. {e}" + colorOf.ENDC)
- chime.error()
-
- except Exception as e:
- st.markdown(f"{e}, Please enter the correct video URL")
-
-# MARK: Adding body for page 2 containing the fields for channel's statistics
-def bodyOfPage2():
- youtubeChannelUrl = st.text_input("Enter the Video URL to get the stats of that channel", value="", type="default", help="Enter the URL of the Youtube Video you want me to show the data of its channel.")
- # youtubeChannelUrl += "/videos"
- number = st.number_input('How many videos to analyse?', min_value=5, step=5, help="Enter the number or click the + or - buttons to increase or decrease the number with step size 5 for getting the data for the number of videos you entered.")
- if len(youtubeChannelUrl) >= 1:
- try:
- with st.expander("View Statistics"):
- generate_channel_video_data(of_channel=youtubeChannelUrl, with_number_of_videos=number)
- except Exception as e:
- st.markdown(f"{e}, Please enter the correct channel ID")
-
-# MARK: Adding body for page 3 containing the fields for searching a video from youtube
-def bodyOfPage3():
- searchFor = st.text_input("Search for videos", value="", type="default", help="Enter a keyword for searching for a youtube video.")
- number = st.number_input('Show search results', min_value=1, step=1, help="Enter the number or click the + or - buttons to increase or decrease the number for getting the number of videos you entered.")
-
-
- if len(searchFor) >= 1:
- videosSearch = VideosSearch(searchFor, limit=number)
-
- result = [video['link'] for video in videosSearch.result()['result']]
-
- for youtubeVideoUrl in stqdm(result):
-
- with st.container():
- st_player(youtubeVideoUrl)
-
- with st.expander("Prediction"):
-
- isEdu, isCat, catArr, probArr = predictCategoryFor(url=youtubeVideoUrl)
- if isEdu == "Educational":
- st.markdown(
- f"
Navigating the complex mindscape of student life with wisdom and assistance.
-
- """)
- chatbot_component = gr.Chatbot()
- message = gr.Textbox(placeholder=prompt, label="Chat")
- state = gr.State()
- submit = gr.Button('➔')
- submit.click(chatgpt_clone, inputs=[message, state], outputs=[chatbot_component, state])
-
-
-block.launch(debug=True)
diff --git a/spaces/SilenWang/ReviewGPT/lang/Study.zh_cn.md b/spaces/SilenWang/ReviewGPT/lang/Study.zh_cn.md
deleted file mode 100644
index 406c3c22c56deeb03079e8f4e602eaac00266dbd..0000000000000000000000000000000000000000
--- a/spaces/SilenWang/ReviewGPT/lang/Study.zh_cn.md
+++ /dev/null
@@ -1,6 +0,0 @@
-### Usage
-
-This page uses AI to assist with reading papers. Two modes are currently available:
-
-- Paper: upload a PDF to be parsed, then ask questions that are answered from the document's content; make sure the PDF is uploaded before asking questions in Paper mode
-- Other: ask questions directly without a document, which is equivalent to using chatGPT directly but without multi-turn context (saves tokens)
\ No newline at end of file
diff --git a/spaces/Silentlin/DiffSinger/README.md b/spaces/Silentlin/DiffSinger/README.md
deleted file mode 100644
index ac390032c587ed007db56faa13d6100dce7b2a76..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: DiffSinger🎶 Diffusion for Singing Voice Synthesis
-emoji: 🎶
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-app_file: "inference/svs/gradio/infer.py"
-pinned: false
----
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/inputsplitter.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/inputsplitter.py
deleted file mode 100644
index 10707d3d6b6024a3436dad7a11ad125f3f8b393a..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/inputsplitter.py
+++ /dev/null
@@ -1,773 +0,0 @@
-"""DEPRECATED: Input handling and transformation machinery.
-
-This module was deprecated in IPython 7.0, in favour of inputtransformer2.
-
-The first class in this module, :class:`InputSplitter`, is designed to tell when
-input from a line-oriented frontend is complete and should be executed, and when
-the user should be prompted for another line of code instead. The name 'input
-splitter' is largely for historical reasons.
-
-A companion, :class:`IPythonInputSplitter`, provides the same functionality but
-with full support for the extended IPython syntax (magics, system calls, etc).
-The code to actually do these transformations is in :mod:`IPython.core.inputtransformer`.
-:class:`IPythonInputSplitter` feeds the raw code to the transformers in order
-and stores the results.
-
-For more details, see the class docstrings below.
-"""
-
-from warnings import warn
-
-warn('IPython.core.inputsplitter is deprecated since IPython 7 in favor of `IPython.core.inputtransformer2`',
- DeprecationWarning)
-
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-import ast
-import codeop
-import io
-import re
-import sys
-import tokenize
-import warnings
-
-from typing import List
-
-from IPython.core.inputtransformer import (leading_indent,
- classic_prompt,
- ipy_prompt,
- cellmagic,
- assemble_logical_lines,
- help_end,
- escaped_commands,
- assign_from_magic,
- assign_from_system,
- assemble_python_lines,
- )
-
-# These are available in this module for backwards compatibility.
-from IPython.core.inputtransformer import (ESC_SHELL, ESC_SH_CAP, ESC_HELP,
- ESC_HELP2, ESC_MAGIC, ESC_MAGIC2,
- ESC_QUOTE, ESC_QUOTE2, ESC_PAREN, ESC_SEQUENCES)
-
-#-----------------------------------------------------------------------------
-# Utilities
-#-----------------------------------------------------------------------------
-
-# FIXME: These are general-purpose utilities that later can be moved to the
-# general ward. Kept here for now because we're being very strict about test
-# coverage with this code, and this lets us ensure that we keep 100% coverage
-# while developing.
-
-# compiled regexps for autoindent management
-dedent_re = re.compile('|'.join([
- r'^\s+raise(\s.*)?$', # raise statement (+ space + other stuff, maybe)
- r'^\s+raise\([^\)]*\).*$', # wacky raise with immediate open paren
- r'^\s+return(\s.*)?$', # normal return (+ space + other stuff, maybe)
- r'^\s+return\([^\)]*\).*$', # wacky return with immediate open paren
- r'^\s+pass\s*$', # pass (optionally followed by trailing spaces)
- r'^\s+break\s*$', # break (optionally followed by trailing spaces)
- r'^\s+continue\s*$', # continue (optionally followed by trailing spaces)
-]))
-ini_spaces_re = re.compile(r'^([ \t\r\f\v]+)')
-
-# regexp to match pure comment lines so we don't accidentally insert 'if 1:'
-# before pure comments
-comment_line_re = re.compile(r'^\s*\#')
-
-
-def num_ini_spaces(s):
- """Return the number of initial spaces in a string.
-
- Note that tabs are counted as a single space. For now, we do *not* support
- mixing of tabs and spaces in the user's input.
-
- Parameters
- ----------
- s : string
-
- Returns
- -------
- n : int
- """
-
- ini_spaces = ini_spaces_re.match(s)
- if ini_spaces:
- return ini_spaces.end()
- else:
- return 0
-
-# Fake token types for partial_tokenize:
-INCOMPLETE_STRING = tokenize.N_TOKENS
-IN_MULTILINE_STATEMENT = tokenize.N_TOKENS + 1
-
-# The 2 classes below have the same API as TokenInfo, but don't try to look up
-# a token type name that they won't find.
-class IncompleteString:
- type = exact_type = INCOMPLETE_STRING
- def __init__(self, s, start, end, line):
- self.s = s
- self.start = start
- self.end = end
- self.line = line
-
-class InMultilineStatement:
- type = exact_type = IN_MULTILINE_STATEMENT
- def __init__(self, pos, line):
- self.s = ''
- self.start = self.end = pos
- self.line = line
-
-def partial_tokens(s):
- """Iterate over tokens from a possibly-incomplete string of code.
-
- This adds two special token types: INCOMPLETE_STRING and
- IN_MULTILINE_STATEMENT. These can only occur as the last token yielded, and
- represent the two main ways for code to be incomplete.
- """
- readline = io.StringIO(s).readline
- token = tokenize.TokenInfo(tokenize.NEWLINE, '', (1, 0), (1, 0), '')
- try:
- for token in tokenize.generate_tokens(readline):
- yield token
- except tokenize.TokenError as e:
- # catch EOF error
- lines = s.splitlines(keepends=True)
- end = len(lines), len(lines[-1])
- if 'multi-line string' in e.args[0]:
- l, c = start = token.end
- s = lines[l-1][c:] + ''.join(lines[l:])
- yield IncompleteString(s, start, end, lines[-1])
- elif 'multi-line statement' in e.args[0]:
- yield InMultilineStatement(end, lines[-1])
- else:
- raise
-
-def find_next_indent(code):
- """Find the number of spaces for the next line of indentation"""
- tokens = list(partial_tokens(code))
- if tokens[-1].type == tokenize.ENDMARKER:
- tokens.pop()
- if not tokens:
- return 0
- while (tokens[-1].type in {tokenize.DEDENT, tokenize.NEWLINE, tokenize.COMMENT}):
- tokens.pop()
-
- if tokens[-1].type == INCOMPLETE_STRING:
- # Inside a multiline string
- return 0
-
- # Find the indents used before
- prev_indents = [0]
- def _add_indent(n):
- if n != prev_indents[-1]:
- prev_indents.append(n)
-
- tokiter = iter(tokens)
- for tok in tokiter:
- if tok.type in {tokenize.INDENT, tokenize.DEDENT}:
- _add_indent(tok.end[1])
- elif (tok.type == tokenize.NL):
- try:
- _add_indent(next(tokiter).start[1])
- except StopIteration:
- break
-
- last_indent = prev_indents.pop()
-
- # If we've just opened a multiline statement (e.g. 'a = ['), indent more
- if tokens[-1].type == IN_MULTILINE_STATEMENT:
- if tokens[-2].exact_type in {tokenize.LPAR, tokenize.LSQB, tokenize.LBRACE}:
- return last_indent + 4
- return last_indent
-
- if tokens[-1].exact_type == tokenize.COLON:
- # Line ends with colon - indent
- return last_indent + 4
-
- if last_indent:
- # Examine the last line for dedent cues - statements like return or
- # raise which normally end a block of code.
- last_line_starts = 0
- for i, tok in enumerate(tokens):
- if tok.type == tokenize.NEWLINE:
- last_line_starts = i + 1
-
- last_line_tokens = tokens[last_line_starts:]
- names = [t.string for t in last_line_tokens if t.type == tokenize.NAME]
- if names and names[0] in {'raise', 'return', 'pass', 'break', 'continue'}:
- # Find the most recent indentation less than the current level
- for indent in reversed(prev_indents):
- if indent < last_indent:
- return indent
-
- return last_indent
-
-
-def last_blank(src):
- """Determine if the input source ends in a blank.
-
- A blank is either a newline or a line consisting of whitespace.
-
- Parameters
- ----------
- src : string
- A single or multiline string.
- """
- if not src: return False
- ll = src.splitlines()[-1]
- return (ll == '') or ll.isspace()
-
-
-last_two_blanks_re = re.compile(r'\n\s*\n\s*$', re.MULTILINE)
-last_two_blanks_re2 = re.compile(r'.+\n\s*\n\s+$', re.MULTILINE)
-
-def last_two_blanks(src):
- """Determine if the input source ends in two blanks.
-
- A blank is either a newline or a line consisting of whitespace.
-
- Parameters
- ----------
- src : string
- A single or multiline string.
- """
- if not src: return False
- # The logic here is tricky: I couldn't get a regexp to work and pass all
- # the tests, so I took a different approach: split the source by lines,
- # grab the last two and prepend '###\n' as a stand-in for whatever was in
- # the body before the last two lines. Then, with that structure, it's
- # possible to analyze with two regexps. Not the most elegant solution, but
- # it works. If anyone tries to change this logic, make sure to validate
- # the whole test suite first!
- new_src = '\n'.join(['###\n'] + src.splitlines()[-2:])
- return (bool(last_two_blanks_re.match(new_src)) or
- bool(last_two_blanks_re2.match(new_src)) )
-
-
-def remove_comments(src):
- """Remove all comments from input source.
-
- Note: comments are NOT recognized inside of strings!
-
- Parameters
- ----------
- src : string
- A single or multiline input string.
-
- Returns
- -------
- String with all Python comments removed.
- """
-
- return re.sub('#.*', '', src)
-
-
-def get_input_encoding():
- """Return the default standard input encoding.
-
- If sys.stdin has no encoding, 'ascii' is returned."""
- # There are strange environments for which sys.stdin.encoding is None. We
- # ensure that a valid encoding is returned.
- encoding = getattr(sys.stdin, 'encoding', None)
- if encoding is None:
- encoding = 'ascii'
- return encoding
-
-#-----------------------------------------------------------------------------
-# Classes and functions for normal Python syntax handling
-#-----------------------------------------------------------------------------
-
-class InputSplitter(object):
- r"""An object that can accumulate lines of Python source before execution.
-
- This object is designed to be fed python source line-by-line, using
- :meth:`push`. It will return on each push whether the currently pushed
- code could be executed already. In addition, it provides a method called
- :meth:`push_accepts_more` that can be used to query whether more input
- can be pushed into a single interactive block.
-
- This is a simple example of how an interactive terminal-based client can use
- this tool::
-
- isp = InputSplitter()
- while isp.push_accepts_more():
- indent = ' '*isp.indent_spaces
- prompt = '>>> ' + indent
- line = indent + raw_input(prompt)
- isp.push(line)
- print 'Input source was:\n', isp.source_reset(),
- """
- # A cache for storing the current indentation
- # The first value stores the most recently processed source input
- # The second value is the number of spaces for the current indentation
- # If self.source matches the first value, the second value is a valid
- # current indentation. Otherwise, the cache is invalid and the indentation
- # must be recalculated.
- _indent_spaces_cache = None, None
- # String, indicating the default input encoding. It is computed by default
- # at initialization time via get_input_encoding(), but it can be reset by a
- # client with specific knowledge of the encoding.
- encoding = ''
- # String where the current full source input is stored, properly encoded.
- # Reading this attribute is the normal way of querying the currently pushed
- # source code, that has been properly encoded.
- source = ''
- # Code object corresponding to the current source. It is automatically
- # synced to the source, so it can be queried at any time to obtain the code
- # object; it will be None if the source doesn't compile to valid Python.
- code = None
-
- # Private attributes
-
- # List with lines of input accumulated so far
- _buffer: List[str]
- # Command compiler
- _compile: codeop.CommandCompiler
- # Boolean indicating whether the current block is complete
- _is_complete = None
- # Boolean indicating whether the current block has an unrecoverable syntax error
- _is_invalid = False
-
- def __init__(self) -> None:
- """Create a new InputSplitter instance."""
- self._buffer = []
- self._compile = codeop.CommandCompiler()
- self.encoding = get_input_encoding()
-
- def reset(self):
- """Reset the input buffer and associated state."""
- self._buffer[:] = []
- self.source = ''
- self.code = None
- self._is_complete = False
- self._is_invalid = False
-
- def source_reset(self):
- """Return the input source and perform a full reset.
- """
- out = self.source
- self.reset()
- return out
-
- def check_complete(self, source):
- """Return whether a block of code is ready to execute, or should be continued
-
- This is a non-stateful API, and will reset the state of this InputSplitter.
-
- Parameters
- ----------
- source : string
- Python input code, which can be multiline.
-
- Returns
- -------
- status : str
- One of 'complete', 'incomplete', or 'invalid' if source is not a
- prefix of valid code.
- indent_spaces : int or None
- The number of spaces by which to indent the next line of code. If
- status is not 'incomplete', this is None.
- """
- self.reset()
- try:
- self.push(source)
- except SyntaxError:
- # Transformers in IPythonInputSplitter can raise SyntaxError,
- # which push() will not catch.
- return 'invalid', None
- else:
- if self._is_invalid:
- return 'invalid', None
- elif self.push_accepts_more():
- return 'incomplete', self.get_indent_spaces()
- else:
- return 'complete', None
- finally:
- self.reset()
-
- def push(self, lines:str) -> bool:
- """Push one or more lines of input.
-
- This stores the given lines and returns a status code indicating
- whether the code forms a complete Python block or not.
-
- Any exceptions generated in compilation are swallowed, but if an
- exception was produced, the method returns True.
-
- Parameters
- ----------
- lines : string
- One or more lines of Python input.
-
- Returns
- -------
- is_complete : boolean
- True if the current input source (the result of the current input
- plus prior inputs) forms a complete Python execution block. Note that
- this value is also stored as a private attribute (``_is_complete``), so it
- can be queried at any time.
- """
- assert isinstance(lines, str)
- self._store(lines)
- source = self.source
-
- # Before calling _compile(), reset the code object to None so that if an
- # exception is raised in compilation, we don't mislead by having
- # inconsistent code/source attributes.
- self.code, self._is_complete = None, None
- self._is_invalid = False
-
- # Honor termination lines properly
- if source.endswith('\\\n'):
- return False
-
- try:
- with warnings.catch_warnings():
- warnings.simplefilter('error', SyntaxWarning)
- self.code = self._compile(source, symbol="exec")
- # Invalid syntax can produce any of a number of different errors from
- # inside the compiler, so we have to catch them all. Syntax errors
- # immediately produce a 'ready' block, so the invalid Python can be
- # sent to the kernel for evaluation with possible ipython
- # special-syntax conversion.
- except (SyntaxError, OverflowError, ValueError, TypeError,
- MemoryError, SyntaxWarning):
- self._is_complete = True
- self._is_invalid = True
- else:
- # Compilation didn't produce any exceptions (though it may not have
- # given a complete code object)
- self._is_complete = self.code is not None
-
- return self._is_complete
-
- def push_accepts_more(self):
- """Return whether a block of interactive input can accept more input.
-
- This method is meant to be used by line-oriented frontends, who need to
- guess whether a block is complete or not based solely on prior and
- current input lines. The InputSplitter considers it has a complete
- interactive block and will not accept more input when either:
-
- * A SyntaxError is raised
-
- * The code is complete and consists of a single line or a single
- non-compound statement
-
- * The code is complete and has a blank line at the end
-
- If the current input produces a syntax error, this method immediately
- returns False but does *not* raise the syntax error exception, as
- typically clients will want to send invalid syntax to an execution
- backend which might convert the invalid syntax into valid Python via
- one of the dynamic IPython mechanisms.
- """
-
- # With incomplete input, unconditionally accept more
- # A syntax error also sets _is_complete to True - see push()
- if not self._is_complete:
- #print("Not complete") # debug
- return True
-
- # The user can make any (complete) input execute by leaving a blank line
- last_line = self.source.splitlines()[-1]
- if (not last_line) or last_line.isspace():
- #print("Blank line") # debug
- return False
-
- # If there's just a single line or AST node, and we're flush left, as is
- # the case after a simple statement such as 'a=1', we want to execute it
- # straight away.
- if self.get_indent_spaces() == 0:
- if len(self.source.splitlines()) <= 1:
- return False
-
- try:
- code_ast = ast.parse("".join(self._buffer))
- except Exception:
- #print("Can't parse AST") # debug
- return False
- else:
- if len(code_ast.body) == 1 and \
- not hasattr(code_ast.body[0], 'body'):
- #print("Simple statement") # debug
- return False
-
- # General fallback - accept more code
- return True
-
- def get_indent_spaces(self):
- sourcefor, n = self._indent_spaces_cache
- if sourcefor == self.source:
- return n
-
- # self.source always has a trailing newline
- n = find_next_indent(self.source[:-1])
- self._indent_spaces_cache = (self.source, n)
- return n
-
- # Backwards compatibility. I think all code that used .indent_spaces was
- # inside IPython, but we can leave this here until IPython 7 in case any
- # other modules are using it. -TK, November 2017
- indent_spaces = property(get_indent_spaces)
-
- def _store(self, lines, buffer=None, store='source'):
- """Store one or more lines of input.
-
- If input lines are not newline-terminated, a newline is automatically
- appended."""
-
- if buffer is None:
- buffer = self._buffer
-
- if lines.endswith('\n'):
- buffer.append(lines)
- else:
- buffer.append(lines+'\n')
- setattr(self, store, self._set_source(buffer))
-
- def _set_source(self, buffer):
- return u''.join(buffer)
-
-
-class IPythonInputSplitter(InputSplitter):
- """An input splitter that recognizes all of IPython's special syntax."""
-
- # String with raw, untransformed input.
- source_raw = ''
-
- # Flag to track when a transformer has stored input that it hasn't given
- # back yet.
- transformer_accumulating = False
-
- # Flag to track when assemble_python_lines has stored input that it hasn't
- # given back yet.
- within_python_line = False
-
- # Private attributes
-
- # List with lines of raw input accumulated so far.
- _buffer_raw = None
-
- def __init__(self, line_input_checker=True, physical_line_transforms=None,
- logical_line_transforms=None, python_line_transforms=None):
- super(IPythonInputSplitter, self).__init__()
- self._buffer_raw = []
- self._validate = True
-
- if physical_line_transforms is not None:
- self.physical_line_transforms = physical_line_transforms
- else:
- self.physical_line_transforms = [
- leading_indent(),
- classic_prompt(),
- ipy_prompt(),
- cellmagic(end_on_blank_line=line_input_checker),
- ]
-
- self.assemble_logical_lines = assemble_logical_lines()
- if logical_line_transforms is not None:
- self.logical_line_transforms = logical_line_transforms
- else:
- self.logical_line_transforms = [
- help_end(),
- escaped_commands(),
- assign_from_magic(),
- assign_from_system(),
- ]
-
- self.assemble_python_lines = assemble_python_lines()
- if python_line_transforms is not None:
- self.python_line_transforms = python_line_transforms
- else:
- # We don't use any of these at present
- self.python_line_transforms = []
-
- @property
- def transforms(self):
- "Quick access to all transformers."
- return self.physical_line_transforms + \
- [self.assemble_logical_lines] + self.logical_line_transforms + \
- [self.assemble_python_lines] + self.python_line_transforms
-
- @property
- def transforms_in_use(self):
- """Transformers, excluding logical line transformers if we're in a
- Python line."""
- t = self.physical_line_transforms[:]
- if not self.within_python_line:
- t += [self.assemble_logical_lines] + self.logical_line_transforms
- return t + [self.assemble_python_lines] + self.python_line_transforms
-
- def reset(self):
- """Reset the input buffer and associated state."""
- super(IPythonInputSplitter, self).reset()
- self._buffer_raw[:] = []
- self.source_raw = ''
- self.transformer_accumulating = False
- self.within_python_line = False
-
- for t in self.transforms:
- try:
- t.reset()
- except SyntaxError:
- # Nothing that calls reset() expects to handle transformer
- # errors
- pass
-
- def flush_transformers(self):
- def _flush(transform, outs):
- """yield transformed lines
-
- always strings, never None
-
- transform: the current transform
- outs: an iterable of previously transformed inputs.
- Each may be multiline, which will be passed
- one line at a time to transform.
- """
- for out in outs:
- for line in out.splitlines():
- # push one line at a time
- tmp = transform.push(line)
- if tmp is not None:
- yield tmp
-
- # reset the transform
- tmp = transform.reset()
- if tmp is not None:
- yield tmp
-
- out = []
- for t in self.transforms_in_use:
- out = _flush(t, out)
-
- out = list(out)
- if out:
- self._store('\n'.join(out))
-
- def raw_reset(self):
- """Return raw input only and perform a full reset.
- """
- out = self.source_raw
- self.reset()
- return out
-
- def source_reset(self):
- try:
- self.flush_transformers()
- return self.source
- finally:
- self.reset()
-
- def push_accepts_more(self):
- if self.transformer_accumulating:
- return True
- else:
- return super(IPythonInputSplitter, self).push_accepts_more()
-
- def transform_cell(self, cell):
- """Process and translate a cell of input.
- """
- self.reset()
- try:
- self.push(cell)
- self.flush_transformers()
- return self.source
- finally:
- self.reset()
-
- def push(self, lines:str) -> bool:
- """Push one or more lines of IPython input.
-
- This stores the given lines and returns a status code indicating
- whether the code forms a complete Python block or not, after processing
- all input lines for special IPython syntax.
-
- Any exceptions generated in compilation are swallowed, but if an
- exception was produced, the method returns True.
-
- Parameters
- ----------
- lines : string
- One or more lines of Python input.
-
- Returns
- -------
- is_complete : boolean
- True if the current input source (the result of the current input
- plus prior inputs) forms a complete Python execution block. Note that
- this value is also stored as a private attribute (_is_complete), so it
- can be queried at any time.
- """
- assert isinstance(lines, str)
- # We must ensure all input is pure unicode
- # ''.splitlines() --> [], but we need to push the empty line to transformers
- lines_list = lines.splitlines()
- if not lines_list:
- lines_list = ['']
-
- # Store raw source before applying any transformations to it. Note
- # that this must be done *after* the reset() call that would otherwise
- # flush the buffer.
- self._store(lines, self._buffer_raw, 'source_raw')
-
- transformed_lines_list = []
- for line in lines_list:
- transformed = self._transform_line(line)
- if transformed is not None:
- transformed_lines_list.append(transformed)
-
- if transformed_lines_list:
- transformed_lines = '\n'.join(transformed_lines_list)
- return super(IPythonInputSplitter, self).push(transformed_lines)
- else:
- # Got nothing back from transformers - they must be waiting for
- # more input.
- return False
-
- def _transform_line(self, line):
- """Push a line of input code through the various transformers.
-
- Returns any output from the transformers, or None if a transformer
- is accumulating lines.
-
- Sets self.transformer_accumulating as a side effect.
- """
- def _accumulating(dbg):
- #print(dbg)
- self.transformer_accumulating = True
- return None
-
- for transformer in self.physical_line_transforms:
- line = transformer.push(line)
- if line is None:
- return _accumulating(transformer)
-
- if not self.within_python_line:
- line = self.assemble_logical_lines.push(line)
- if line is None:
- return _accumulating('acc logical line')
-
- for transformer in self.logical_line_transforms:
- line = transformer.push(line)
- if line is None:
- return _accumulating(transformer)
-
- line = self.assemble_python_lines.push(line)
- if line is None:
- self.within_python_line = True
- return _accumulating('acc python line')
- else:
- self.within_python_line = False
-
- for transformer in self.python_line_transforms:
- line = transformer.push(line)
- if line is None:
- return _accumulating(transformer)
-
- #print("transformers clear") #debug
- self.transformer_accumulating = False
- return line
-
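
For reference, a short usage sketch of the deprecated check_complete API documented above, assuming IPython is installed; new code should rely on IPython.core.inputtransformer2 instead.

from IPython.core.inputsplitter import InputSplitter  # emits a DeprecationWarning

isp = InputSplitter()
print(isp.check_complete("a = 1"))        # ('complete', None)
print(isp.check_complete("def f(x):"))    # ('incomplete', 4) -> indent the next line by 4
print(isp.check_complete("a = ]"))        # ('invalid', None)
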
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/logger.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/logger.py
deleted file mode 100644
index 99e7ce29185e071bb6ba3cc948265e90264ae5b1..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/logger.py
+++ /dev/null
@@ -1,227 +0,0 @@
-"""Logger class for IPython's logging facilities.
-"""
-
-#*****************************************************************************
-# Copyright (C) 2001 Janko Hauser and
-# Copyright (C) 2001-2006 Fernando Perez
-#
-# Distributed under the terms of the BSD License. The full license is in
-# the file COPYING, distributed as part of this software.
-#*****************************************************************************
-
-#****************************************************************************
-# Modules and globals
-
-# Python standard modules
-import glob
-import io
-import os
-import time
-
-
-#****************************************************************************
-# FIXME: This class isn't a mixin anymore, but it still needs attributes from
-# ipython and does input cache management. Finish cleanup later...
-
-class Logger(object):
- """A Logfile class with different policies for file creation"""
-
- def __init__(self, home_dir, logfname='Logger.log', loghead=u'',
- logmode='over'):
-
- # this is the full ipython instance, we need some attributes from it
- # which won't exist until later. What a mess, clean up later...
- self.home_dir = home_dir
-
- self.logfname = logfname
- self.loghead = loghead
- self.logmode = logmode
- self.logfile = None
-
- # Whether to log raw or processed input
- self.log_raw_input = False
-
- # whether to also log output
- self.log_output = False
-
- # whether to put timestamps before each log entry
- self.timestamp = False
-
- # activity control flags
- self.log_active = False
-
- # logmode is a validated property
- def _set_mode(self,mode):
- if mode not in ['append','backup','global','over','rotate']:
- raise ValueError('invalid log mode %s given' % mode)
- self._logmode = mode
-
- def _get_mode(self):
- return self._logmode
-
- logmode = property(_get_mode,_set_mode)
-
- def logstart(self, logfname=None, loghead=None, logmode=None,
- log_output=False, timestamp=False, log_raw_input=False):
- """Generate a new log-file with a default header.
-
- Raises RuntimeError if the log has already been started"""
-
- if self.logfile is not None:
- raise RuntimeError('Log file is already active: %s' %
- self.logfname)
-
- # The parameters can override constructor defaults
- if logfname is not None: self.logfname = logfname
- if loghead is not None: self.loghead = loghead
- if logmode is not None: self.logmode = logmode
-
- # Parameters not part of the constructor
- self.timestamp = timestamp
- self.log_output = log_output
- self.log_raw_input = log_raw_input
-
- # init depending on the log mode requested
- isfile = os.path.isfile
- logmode = self.logmode
-
- if logmode == 'append':
- self.logfile = io.open(self.logfname, 'a', encoding='utf-8')
-
- elif logmode == 'backup':
- if isfile(self.logfname):
- backup_logname = self.logfname+'~'
- # Manually remove any old backup, since os.rename may fail
- # under Windows.
- if isfile(backup_logname):
- os.remove(backup_logname)
- os.rename(self.logfname,backup_logname)
- self.logfile = io.open(self.logfname, 'w', encoding='utf-8')
-
- elif logmode == 'global':
- self.logfname = os.path.join(self.home_dir,self.logfname)
- self.logfile = io.open(self.logfname, 'a', encoding='utf-8')
-
- elif logmode == 'over':
- if isfile(self.logfname):
- os.remove(self.logfname)
- self.logfile = io.open(self.logfname,'w', encoding='utf-8')
-
- elif logmode == 'rotate':
- if isfile(self.logfname):
- if isfile(self.logfname+'.001~'):
- old = glob.glob(self.logfname+'.*~')
- old.sort()
- old.reverse()
- for f in old:
- root, ext = os.path.splitext(f)
- num = int(ext[1:-1])+1
- os.rename(f, root+'.'+repr(num).zfill(3)+'~')
- os.rename(self.logfname, self.logfname+'.001~')
- self.logfile = io.open(self.logfname, 'w', encoding='utf-8')
-
- if logmode != 'append':
- self.logfile.write(self.loghead)
-
- self.logfile.flush()
- self.log_active = True
-
- def switch_log(self,val):
- """Switch logging on/off. val should be ONLY a boolean."""
-
- if val not in [False,True,0,1]:
- raise ValueError('Call switch_log ONLY with a boolean argument, '
- 'not with: %s' % val)
-
- label = {0:'OFF',1:'ON',False:'OFF',True:'ON'}
-
- if self.logfile is None:
- print("""
-Logging hasn't been started yet (use logstart for that).
-
-%logon/%logoff are for temporarily starting and stopping logging for a logfile
-which already exists. But you must first start the logging process with
-%logstart (optionally giving a logfile name).""")
-
- else:
- if self.log_active == val:
- print('Logging is already',label[val])
- else:
- print('Switching logging',label[val])
- self.log_active = not self.log_active
- self.log_active_out = self.log_active
-
- def logstate(self):
- """Print a status message about the logger."""
- if self.logfile is None:
- print('Logging has not been activated.')
- else:
- state = self.log_active and 'active' or 'temporarily suspended'
- print('Filename :', self.logfname)
- print('Mode :', self.logmode)
- print('Output logging :', self.log_output)
- print('Raw input log :', self.log_raw_input)
- print('Timestamping :', self.timestamp)
- print('State :', state)
-
- def log(self, line_mod, line_ori):
- """Write the sources to a log.
-
- Inputs:
-
- - line_mod: possibly modified input, such as the transformations made
- by input prefilters or input handlers of various kinds. This should
- always be valid Python.
-
- - line_ori: unmodified input line from the user. This is not
- necessarily valid Python.
- """
-
- # Write the log line, but decide which one according to the
- # log_raw_input flag, set when the log is started.
- if self.log_raw_input:
- self.log_write(line_ori)
- else:
- self.log_write(line_mod)
-
- def log_write(self, data, kind='input'):
- """Write data to the log file, if active"""
-
- #print 'data: %r' % data # dbg
- if self.log_active and data:
- write = self.logfile.write
- if kind=='input':
- if self.timestamp:
- write(time.strftime('# %a, %d %b %Y %H:%M:%S\n', time.localtime()))
- write(data)
- elif kind=='output' and self.log_output:
- odata = u'\n'.join([u'#[Out]# %s' % s
- for s in data.splitlines()])
- write(u'%s\n' % odata)
- try:
- self.logfile.flush()
- except OSError:
- print("Failed to flush the log file.")
- print(
- f"Please check that {self.logfname} exists and have the right permissions."
- )
- print(
- "Also consider turning off the log with `%logstop` to avoid this warning."
- )
-
- def logstop(self):
- """Fully stop logging and close log file.
-
- In order to start logging again, a new logstart() call needs to be
- made, possibly (though not necessarily) with a new filename, mode and
- other options."""
-
- if self.logfile is not None:
- self.logfile.close()
- self.logfile = None
- else:
- print("Logging hadn't been started.")
- self.log_active = False
-
- # For backwards compatibility, in case anyone was using this.
- close_log = logstop
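
A minimal sketch of driving this Logger directly, assuming it can be imported as IPython.core.logger.Logger; the temporary directory and file name below are illustrative only.

import os, tempfile
from IPython.core.logger import Logger

logdir = tempfile.mkdtemp()
log = Logger(home_dir=logdir,
             logfname=os.path.join(logdir, 'session.log'),
             loghead='# demo log\n',
             logmode='rotate')
log.logstart(timestamp=True)        # creates the file and writes the header
log.log_write("print('hello')\n")   # recorded because log_active is True
log.logstate()                      # prints filename, mode and the logging flags
log.logstop()                       # closes the file; logstart() can be called again
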
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/registry.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/registry.py
deleted file mode 100644
index 47da6e05af00e4e5a8c599b0f383a3b4a1f3d9ff..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/registry.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import logging
-
-from typing import Tuple, Dict
-from clickhouse_connect.datatypes.base import TypeDef, ClickHouseType, type_map
-from clickhouse_connect.driver.exceptions import InternalError
-from clickhouse_connect.driver.parser import parse_enum, parse_callable, parse_columns
-
-logger = logging.getLogger(__name__)
-type_cache: Dict[str, ClickHouseType] = {}
-
-
-def parse_name(name: str) -> Tuple[str, str, TypeDef]:
- """
- Converts a ClickHouse type name into the base class and the definition (TypeDef) needed for any
- additional instantiation
- :param name: ClickHouse type name as returned by clickhouse
- :return: The original base name (before arguments), the full name as passed in and the TypeDef object that
- captures any additional arguments
- """
- base = name
- wrappers = []
- keys = tuple()
- if base.startswith('LowCardinality'):
- wrappers.append('LowCardinality')
- base = base[15:-1]
- if base.startswith('Nullable'):
- wrappers.append('Nullable')
- base = base[9:-1]
- if base.startswith('Enum'):
- keys, values = parse_enum(base)
- base = base[:base.find('(')]
- elif base.startswith('Nested'):
- keys, values = parse_columns(base[6:])
- base = 'Nested'
- elif base.startswith('Tuple'):
- keys, values = parse_columns(base[5:])
- base = 'Tuple'
- else:
- try:
- base, values, _ = parse_callable(base)
- except IndexError:
- raise InternalError(f'Can not parse ClickHouse data type: {name}') from None
- return base, name, TypeDef(tuple(wrappers), keys, values)
-
-
-def get_from_name(name: str) -> ClickHouseType:
- """
- Returns the ClickHouseType instance parsed from the ClickHouse type name. Instances are cached
- :param name: ClickHouse type name as returned by ClickHouse in WithNamesAndTypes FORMAT or the Native protocol
- :return: The instance of the ClickHouse Type
- """
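- # Added note (not from the original source): the first call for a given name parses it,
- # builds the ClickHouseType via type_map, and stores it in type_cache; later calls with
- # the same name return the cached instance.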
- ch_type = type_cache.get(name, None)
- if not ch_type:
- base, name, type_def = parse_name(name)
- try:
- ch_type = type_map[base].build(type_def)
- except KeyError:
- err_str = f'Unrecognized ClickHouse type base: {base} name: {name}'
- logger.error(err_str)
- raise InternalError(err_str) from None
- type_cache[name] = ch_type
- return ch_type
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_imports_tipper.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_imports_tipper.py
deleted file mode 100644
index 7f89c750d9d60497e6d83f3965d1086949d045b7..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_imports_tipper.py
+++ /dev/null
@@ -1,373 +0,0 @@
-import inspect
-import os.path
-import sys
-
-from _pydev_bundle._pydev_tipper_common import do_find
-from _pydevd_bundle.pydevd_utils import hasattr_checked, dir_checked
-
-from inspect import getfullargspec
-
-
-def getargspec(*args, **kwargs):
- arg_spec = getfullargspec(*args, **kwargs)
- return arg_spec.args, arg_spec.varargs, arg_spec.varkw, arg_spec.defaults, arg_spec.kwonlyargs or [], arg_spec.kwonlydefaults or {}
-
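-# Added note (not from the original source): the wrapper above flattens
-# inspect.getfullargspec() into the legacy getargspec-style tuple, appending the
-# keyword-only argument names and their defaults as the last two items.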
-
-# completion types.
-TYPE_IMPORT = '0'
-TYPE_CLASS = '1'
-TYPE_FUNCTION = '2'
-TYPE_ATTR = '3'
-TYPE_BUILTIN = '4'
-TYPE_PARAM = '5'
-
-
-def _imp(name, log=None):
- try:
- return __import__(name)
- except:
- if '.' in name:
- sub = name[0:name.rfind('.')]
-
- if log is not None:
- log.add_content('Unable to import', name, 'trying with', sub)
- log.add_exception()
-
- return _imp(sub, log)
- else:
- s = 'Unable to import module: %s - sys.path: %s' % (str(name), sys.path)
- if log is not None:
- log.add_content(s)
- log.add_exception()
-
- raise ImportError(s)
-
-
-IS_IPY = False
-if sys.platform == 'cli':
- IS_IPY = True
- _old_imp = _imp
-
- def _imp(name, log=None):
- # We must add a reference in clr for .Net
- import clr # @UnresolvedImport
- initial_name = name
- while '.' in name:
- try:
- clr.AddReference(name)
- break # If it worked, that's OK.
- except:
- name = name[0:name.rfind('.')]
- else:
- try:
- clr.AddReference(name)
- except:
- pass # That's OK (not dot net module).
-
- return _old_imp(initial_name, log)
-
-
-def get_file(mod):
- f = None
- try:
- f = inspect.getsourcefile(mod) or inspect.getfile(mod)
- except:
- try:
- f = getattr(mod, '__file__', None)
- except:
- f = None
- if f and f[-4:].lower() in ['.pyc', '.pyo']:
- filename = f[:-4] + '.py'
- if os.path.exists(filename):
- f = filename
-
- return f
-
-
-def Find(name, log=None):
- f = None
-
- mod = _imp(name, log)
- parent = mod
- foundAs = ''
-
- if inspect.ismodule(mod):
- f = get_file(mod)
-
- components = name.split('.')
-
- old_comp = None
- for comp in components[1:]:
- try:
- # this happens in the following case:
- # we have mx.DateTime.mxDateTime.mxDateTime.pyd
- # but after importing it, mx.DateTime.mxDateTime shadows access to mxDateTime.pyd
- mod = getattr(mod, comp)
- except AttributeError:
- if old_comp != comp:
- raise
-
- if inspect.ismodule(mod):
- f = get_file(mod)
- else:
- if len(foundAs) > 0:
- foundAs = foundAs + '.'
- foundAs = foundAs + comp
-
- old_comp = comp
-
- return f, mod, parent, foundAs
-
-
-def search_definition(data):
- '''@return file, line, col
- '''
-
- data = data.replace('\n', '')
- if data.endswith('.'):
- data = data.rstrip('.')
- f, mod, parent, foundAs = Find(data)
- try:
- return do_find(f, mod), foundAs
- except:
- return do_find(f, parent), foundAs
-
-
-def generate_tip(data, log=None):
- data = data.replace('\n', '')
- if data.endswith('.'):
- data = data.rstrip('.')
-
- f, mod, parent, foundAs = Find(data, log)
- # print_ >> open('temp.txt', 'w'), f
- tips = generate_imports_tip_for_module(mod)
- return f, tips
-
-
-def check_char(c):
- if c == '-' or c == '.':
- return '_'
- return c
-
-
-_SENTINEL = object()
-
-
-def generate_imports_tip_for_module(obj_to_complete, dir_comps=None, getattr=getattr, filter=lambda name:True):
- '''
- @param obj_to_complete: the object from where we should get the completions
- @param dir_comps: if passed, we should not 'dir' the object and should just iterate those passed as a parameter
- @param getattr: the way to get a given object from the obj_to_complete (used for the completer)
- @param filter: a callable that receives the name and decides if it should be appended or not to the results
- @return: list of tuples, so that each tuple represents a completion with:
- name, doc, args, type (from the TYPE_* constants)
- '''
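- # Added, illustrative only (the entry is hypothetical): a method completion would be
- # returned as ('read', 'Read up to size bytes ...', '(size=-1)', TYPE_FUNCTION), i.e.
- # one of the (name, doc, args, type) tuples described above.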
- ret = []
-
- if dir_comps is None:
- dir_comps = dir_checked(obj_to_complete)
- if hasattr_checked(obj_to_complete, '__dict__'):
- dir_comps.append('__dict__')
- if hasattr_checked(obj_to_complete, '__class__'):
- dir_comps.append('__class__')
-
- get_complete_info = True
-
- if len(dir_comps) > 1000:
- # ok, we don't want to let our users wait forever...
- # no complete info for you...
-
- get_complete_info = False
-
- dontGetDocsOn = (float, int, str, tuple, list, dict)
- dontGetattrOn = (dict, list, set, tuple)
- for d in dir_comps:
-
- if d is None:
- continue
-
- if not filter(d):
- continue
-
- args = ''
-
- try:
- try:
- if isinstance(obj_to_complete, dontGetattrOn):
- raise Exception('Since python 3.9, e.g. "dict[str]" will return'
- " a dict that's only supposed to take strings. "
- 'Interestingly, e.g. dict["val"] is also valid '
- 'and presumably represents a dict that only takes '
- 'keys that are "val". This breaks our check for '
- 'class attributes.')
- obj = getattr(obj_to_complete.__class__, d)
- except:
- obj = getattr(obj_to_complete, d)
- except: # just ignore and get it without additional info
- ret.append((d, '', args, TYPE_BUILTIN))
- else:
-
- if get_complete_info:
- try:
- retType = TYPE_BUILTIN
-
- # check if we have to get docs
- getDoc = True
- for class_ in dontGetDocsOn:
-
- if isinstance(obj, class_):
- getDoc = False
- break
-
- doc = ''
- if getDoc:
- # no need to get this info... too many constants are defined and
- # makes things much slower (passing all that through sockets takes quite some time)
- try:
- doc = inspect.getdoc(obj)
- if doc is None:
- doc = ''
- except: # may happen on jython when checking java classes (so, just ignore it)
- doc = ''
-
- if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj):
- try:
- args, vargs, kwargs, defaults, kwonly_args, kwonly_defaults = getargspec(obj)
-
- args = args[:]
-
- for kwonly_arg in kwonly_args:
- default = kwonly_defaults.get(kwonly_arg, _SENTINEL)
- if default is not _SENTINEL:
- args.append('%s=%s' % (kwonly_arg, default))
- else:
- args.append(str(kwonly_arg))
-
- args = '(%s)' % (', '.join(args))
- except TypeError:
- # ok, let's see if we can get the arguments from the doc
- args, doc = signature_from_docstring(doc, getattr(obj, '__name__', None))
-
- retType = TYPE_FUNCTION
-
- elif inspect.isclass(obj):
- retType = TYPE_CLASS
-
- elif inspect.ismodule(obj):
- retType = TYPE_IMPORT
-
- else:
- retType = TYPE_ATTR
-
- # add token and doc to return - assure only strings.
- ret.append((d, doc, args, retType))
-
- except: # just ignore and get it without additional info
- ret.append((d, '', args, TYPE_BUILTIN))
-
- else: # get_complete_info == False
- if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj):
- retType = TYPE_FUNCTION
-
- elif inspect.isclass(obj):
- retType = TYPE_CLASS
-
- elif inspect.ismodule(obj):
- retType = TYPE_IMPORT
-
- else:
- retType = TYPE_ATTR
- # ok, no complete info, let's try to do this as fast and clean as possible
- # so, no docs for this kind of information, only the signatures
- ret.append((d, '', str(args), retType))
-
- return ret
-
-
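-# Added note (not from the original source): given a first doc line such as
-# "split(sep, maxsplit) -> list of strings", the helper below extracts "(sep, maxsplit)";
-# '-' and '.' characters inside the extracted argument text are normalised to '_' via check_char().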
-def signature_from_docstring(doc, obj_name):
- args = '()'
- try:
- found = False
- if len(doc) > 0:
- if IS_IPY:
- # Handle case where we have the situation below
- # sort(self, object cmp, object key)
- # sort(self, object cmp, object key, bool reverse)
- # sort(self)
- # sort(self, object cmp)
-
- # Or: sort(self: list, cmp: object, key: object)
- # sort(self: list, cmp: object, key: object, reverse: bool)
- # sort(self: list)
- # sort(self: list, cmp: object)
- if obj_name:
- name = obj_name + '('
-
- # Fix issue where it was appearing sort(aa)sort(bb)sort(cc) in the same line.
- lines = doc.splitlines()
- if len(lines) == 1:
- c = doc.count(name)
- if c > 1:
- doc = ('\n' + name).join(doc.split(name))
-
- major = ''
- for line in doc.splitlines():
- if line.startswith(name) and line.endswith(')'):
- if len(line) > len(major):
- major = line
- if major:
- args = major[major.index('('):]
- found = True
-
- if not found:
- i = doc.find('->')
- if i < 0:
- i = doc.find('--')
- if i < 0:
- i = doc.find('\n')
- if i < 0:
- i = doc.find('\r')
-
- if i > 0:
- s = doc[0:i]
- s = s.strip()
-
- # let's see if we have a docstring in the first line
- if s[-1] == ')':
- start = s.find('(')
- if start >= 0:
- end = s.find('[')
- if end <= 0:
- end = s.find(')')
- if end <= 0:
- end = len(s)
-
- args = s[start:end]
- if not args[-1] == ')':
- args = args + ')'
-
- # now, get rid of unwanted chars
- l = len(args) - 1
- r = []
- for i in range(len(args)):
- if i == 0 or i == l:
- r.append(args[i])
- else:
- r.append(check_char(args[i]))
-
- args = ''.join(r)
-
- if IS_IPY:
- if args.startswith('(self:'):
- i = args.find(',')
- if i >= 0:
- args = '(self' + args[i:]
- else:
- args = '(self)'
- i = args.find(')')
- if i > 0:
- args = args[:i + 1]
-
- except:
- pass
- return args, doc
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/utils/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/utils/__init__.py
deleted file mode 100644
index f2678b321c295bcceaef945111ac3524be19d6e4..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/utils/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .misc import add_prefix
-
-__all__ = ['add_prefix']
diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/baseline_prediction_interface.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/baseline_prediction_interface.py
deleted file mode 100644
index 298a046c4c3c39cbddbcdc5ee47c68606c706b2c..0000000000000000000000000000000000000000
--- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/baseline_prediction_interface.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import tqdm
-import numpy as np
-
-def baseline_predict(metric_function, eval_xs, eval_ys, categorical_feats, metric_used=None, eval_pos=2, max_time=300, **kwargs):
- """
- Baseline prediction interface.
- :param metric_function:
- :param eval_xs:
- :param eval_ys:
- :param categorical_feats:
- :param metric_used:
- :param eval_pos:
- :param max_time: Scheduled maximum time
- :param kwargs:
- :return: list [np.array(metrics), np.array(outputs), best_configs] or [None, None, None] if failed
- """
-
- metrics = []
- outputs = []
- best_configs = []
- eval_splits = list(zip(eval_xs.transpose(0, 1), eval_ys.transpose(0, 1)))
- for eval_x, eval_y in tqdm.tqdm(eval_splits, desc='Calculating splits '+str(metric_function)+' '+str(eval_pos)):
- try:
- metric, output, best_config = metric_function(eval_x[:eval_pos],
- eval_y[:eval_pos],
- eval_x[eval_pos:],
- eval_y[eval_pos:],
- categorical_feats,
- metric_used=metric_used
- , max_time=max_time)
- metrics += [metric]
- outputs += [output]
- best_configs += [best_config]
- return np.array(metrics), np.array(outputs), best_configs
- except Exception as e:
- print(f'There was an exception in {metric_function}')
- print(e)
- return None, None, None
\ No newline at end of file
diff --git a/spaces/TaliaKorobkin/AIPairProgramming1/README.md b/spaces/TaliaKorobkin/AIPairProgramming1/README.md
deleted file mode 100644
index d5ec3ae88581ea00cedacb4b5c4b8fd0ebdba600..0000000000000000000000000000000000000000
--- a/spaces/TaliaKorobkin/AIPairProgramming1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: AIPairProgramming1
-emoji: 🏆
-colorFrom: green
-colorTo: green
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/plugin.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/plugin.py
deleted file mode 100644
index 7b722d58db0f35c3f6621d02876cefc74e64384a..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/plugin.py
+++ /dev/null
@@ -1,88 +0,0 @@
-"""
- pygments.plugin
- ~~~~~~~~~~~~~~~
-
- Pygments plugin interface. By default, this tries to use
- ``importlib.metadata``, which is in the Python standard
- library since Python 3.8, or its ``importlib_metadata``
- backport for earlier versions of Python. It falls back on
- ``pkg_resources`` if not found. Finally, if ``pkg_resources``
- is not found either, no plugins are loaded at all.
-
- lexer plugins::
-
- [pygments.lexers]
- yourlexer = yourmodule:YourLexer
-
- formatter plugins::
-
- [pygments.formatters]
- yourformatter = yourformatter:YourFormatter
- /.ext = yourformatter:YourFormatter
-
- As you can see, you can define extensions for the formatter
- with a leading slash.
-
- syntax plugins::
-
- [pygments.styles]
- yourstyle = yourstyle:YourStyle
-
- filter plugin::
-
- [pygments.filter]
- yourfilter = yourfilter:YourFilter
-
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-LEXER_ENTRY_POINT = 'pygments.lexers'
-FORMATTER_ENTRY_POINT = 'pygments.formatters'
-STYLE_ENTRY_POINT = 'pygments.styles'
-FILTER_ENTRY_POINT = 'pygments.filters'
-
-
-def iter_entry_points(group_name):
- try:
- from importlib.metadata import entry_points
- except ImportError:
- try:
- from importlib_metadata import entry_points
- except ImportError:
- try:
- from pip._vendor.pkg_resources import iter_entry_points
- except (ImportError, OSError):
- return []
- else:
- return iter_entry_points(group_name)
- groups = entry_points()
- if hasattr(groups, 'select'):
- # New interface in Python 3.10 and newer versions of the
- # importlib_metadata backport.
- return groups.select(group=group_name)
- else:
- # Older interface, deprecated in Python 3.10 and recent
- # importlib_metadata, but we need it in Python 3.8 and 3.9.
- return groups.get(group_name, [])
-
-
-def find_plugin_lexers():
- for entrypoint in iter_entry_points(LEXER_ENTRY_POINT):
- yield entrypoint.load()
-
-
-def find_plugin_formatters():
- for entrypoint in iter_entry_points(FORMATTER_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
-
-
-def find_plugin_styles():
- for entrypoint in iter_entry_points(STYLE_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
-
-
-def find_plugin_filters():
- for entrypoint in iter_entry_points(FILTER_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
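-# Added note (not from the original source): find_plugin_lexers() yields the loaded lexer
-# classes themselves, while the formatter/style/filter helpers yield
-# (entry point name, loaded class) pairs for each plugin registered under the groups above.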
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py
deleted file mode 100644
index 91368dda78aad590837aa12023dee67e224709ba..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py
+++ /dev/null
@@ -1,289 +0,0 @@
-import logging
-from datetime import datetime
-from logging import Handler, LogRecord
-from pathlib import Path
-from types import ModuleType
-from typing import ClassVar, Iterable, List, Optional, Type, Union
-
-from pip._vendor.rich._null_file import NullFile
-
-from . import get_console
-from ._log_render import FormatTimeCallable, LogRender
-from .console import Console, ConsoleRenderable
-from .highlighter import Highlighter, ReprHighlighter
-from .text import Text
-from .traceback import Traceback
-
-
-class RichHandler(Handler):
- """A logging handler that renders output with Rich. The time / level / message and file are displayed in columns.
- The level is color coded, and the message is syntax highlighted.
-
- Note:
- Be careful when enabling console markup in log messages if you have configured logging for libraries not
- under your control. If a dependency writes messages containing square brackets, it may not produce the intended output.
-
- Args:
- level (Union[int, str], optional): Log level. Defaults to logging.NOTSET.
- console (:class:`~rich.console.Console`, optional): Optional console instance to write logs.
- Default will use a global console instance writing to stdout.
- show_time (bool, optional): Show a column for the time. Defaults to True.
- omit_repeated_times (bool, optional): Omit repetition of the same time. Defaults to True.
- show_level (bool, optional): Show a column for the level. Defaults to True.
- show_path (bool, optional): Show the path to the original log call. Defaults to True.
- enable_link_path (bool, optional): Enable terminal link of path column to file. Defaults to True.
- highlighter (Highlighter, optional): Highlighter to style log messages, or None to use ReprHighlighter. Defaults to None.
- markup (bool, optional): Enable console markup in log messages. Defaults to False.
- rich_tracebacks (bool, optional): Enable rich tracebacks with syntax highlighting and formatting. Defaults to False.
- tracebacks_width (Optional[int], optional): Number of characters used to render tracebacks, or None for full width. Defaults to None.
- tracebacks_extra_lines (int, optional): Additional lines of source code to show around the error location in tracebacks. Defaults to 3.
- tracebacks_theme (str, optional): Override pygments theme used in traceback.
- tracebacks_word_wrap (bool, optional): Enable word wrapping of long tracebacks lines. Defaults to True.
- tracebacks_show_locals (bool, optional): Enable display of locals in tracebacks. Defaults to False.
- tracebacks_suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
- locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
- Defaults to 10.
- locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
- log_time_format (Union[str, TimeFormatterCallable], optional): If ``log_time`` is enabled, either string for strftime or callable that formats the time. Defaults to "[%x %X] ".
- keywords (List[str], optional): List of words to highlight instead of ``RichHandler.KEYWORDS``.
- """
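- # Illustrative usage (added; mirrors the __main__ demo at the bottom of this file):
- #   logging.basicConfig(level="NOTSET", format="%(message)s", datefmt="[%X]",
- #                       handlers=[RichHandler(rich_tracebacks=True)])
- #   logging.getLogger("rich").info("Hello, World!")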
-
- KEYWORDS: ClassVar[Optional[List[str]]] = [
- "GET",
- "POST",
- "HEAD",
- "PUT",
- "DELETE",
- "OPTIONS",
- "TRACE",
- "PATCH",
- ]
- HIGHLIGHTER_CLASS: ClassVar[Type[Highlighter]] = ReprHighlighter
-
- def __init__(
- self,
- level: Union[int, str] = logging.NOTSET,
- console: Optional[Console] = None,
- *,
- show_time: bool = True,
- omit_repeated_times: bool = True,
- show_level: bool = True,
- show_path: bool = True,
- enable_link_path: bool = True,
- highlighter: Optional[Highlighter] = None,
- markup: bool = False,
- rich_tracebacks: bool = False,
- tracebacks_width: Optional[int] = None,
- tracebacks_extra_lines: int = 3,
- tracebacks_theme: Optional[str] = None,
- tracebacks_word_wrap: bool = True,
- tracebacks_show_locals: bool = False,
- tracebacks_suppress: Iterable[Union[str, ModuleType]] = (),
- locals_max_length: int = 10,
- locals_max_string: int = 80,
- log_time_format: Union[str, FormatTimeCallable] = "[%x %X]",
- keywords: Optional[List[str]] = None,
- ) -> None:
- super().__init__(level=level)
- self.console = console or get_console()
- self.highlighter = highlighter or self.HIGHLIGHTER_CLASS()
- self._log_render = LogRender(
- show_time=show_time,
- show_level=show_level,
- show_path=show_path,
- time_format=log_time_format,
- omit_repeated_times=omit_repeated_times,
- level_width=None,
- )
- self.enable_link_path = enable_link_path
- self.markup = markup
- self.rich_tracebacks = rich_tracebacks
- self.tracebacks_width = tracebacks_width
- self.tracebacks_extra_lines = tracebacks_extra_lines
- self.tracebacks_theme = tracebacks_theme
- self.tracebacks_word_wrap = tracebacks_word_wrap
- self.tracebacks_show_locals = tracebacks_show_locals
- self.tracebacks_suppress = tracebacks_suppress
- self.locals_max_length = locals_max_length
- self.locals_max_string = locals_max_string
- self.keywords = keywords
-
- def get_level_text(self, record: LogRecord) -> Text:
- """Get the level name from the record.
-
- Args:
- record (LogRecord): LogRecord instance.
-
- Returns:
- Text: A tuple of the style and level name.
- """
- level_name = record.levelname
- level_text = Text.styled(
- level_name.ljust(8), f"logging.level.{level_name.lower()}"
- )
- return level_text
-
- def emit(self, record: LogRecord) -> None:
- """Invoked by logging."""
- message = self.format(record)
- traceback = None
- if (
- self.rich_tracebacks
- and record.exc_info
- and record.exc_info != (None, None, None)
- ):
- exc_type, exc_value, exc_traceback = record.exc_info
- assert exc_type is not None
- assert exc_value is not None
- traceback = Traceback.from_exception(
- exc_type,
- exc_value,
- exc_traceback,
- width=self.tracebacks_width,
- extra_lines=self.tracebacks_extra_lines,
- theme=self.tracebacks_theme,
- word_wrap=self.tracebacks_word_wrap,
- show_locals=self.tracebacks_show_locals,
- locals_max_length=self.locals_max_length,
- locals_max_string=self.locals_max_string,
- suppress=self.tracebacks_suppress,
- )
- message = record.getMessage()
- if self.formatter:
- record.message = record.getMessage()
- formatter = self.formatter
- if hasattr(formatter, "usesTime") and formatter.usesTime():
- record.asctime = formatter.formatTime(record, formatter.datefmt)
- message = formatter.formatMessage(record)
-
- message_renderable = self.render_message(record, message)
- log_renderable = self.render(
- record=record, traceback=traceback, message_renderable=message_renderable
- )
- if isinstance(self.console.file, NullFile):
- # Handles pythonw, where stdout/stderr are null, and we return NullFile
- # instance from Console.file. In this case, we still want to make a log record
- # even though we won't be writing anything to a file.
- self.handleError(record)
- else:
- try:
- self.console.print(log_renderable)
- except Exception:
- self.handleError(record)
-
- def render_message(self, record: LogRecord, message: str) -> "ConsoleRenderable":
- """Render message text in to Text.
-
- Args:
- record (LogRecord): logging Record.
- message (str): String containing log message.
-
- Returns:
- ConsoleRenderable: Renderable to display log message.
- """
- use_markup = getattr(record, "markup", self.markup)
- message_text = Text.from_markup(message) if use_markup else Text(message)
-
- highlighter = getattr(record, "highlighter", self.highlighter)
- if highlighter:
- message_text = highlighter(message_text)
-
- if self.keywords is None:
- self.keywords = self.KEYWORDS
-
- if self.keywords:
- message_text.highlight_words(self.keywords, "logging.keyword")
-
- return message_text
-
- def render(
- self,
- *,
- record: LogRecord,
- traceback: Optional[Traceback],
- message_renderable: "ConsoleRenderable",
- ) -> "ConsoleRenderable":
- """Render log for display.
-
- Args:
- record (LogRecord): logging Record.
- traceback (Optional[Traceback]): Traceback instance or None for no Traceback.
- message_renderable (ConsoleRenderable): Renderable (typically Text) containing log message contents.
-
- Returns:
- ConsoleRenderable: Renderable to display log.
- """
- path = Path(record.pathname).name
- level = self.get_level_text(record)
- time_format = None if self.formatter is None else self.formatter.datefmt
- log_time = datetime.fromtimestamp(record.created)
-
- log_renderable = self._log_render(
- self.console,
- [message_renderable] if not traceback else [message_renderable, traceback],
- log_time=log_time,
- time_format=time_format,
- level=level,
- path=path,
- line_no=record.lineno,
- link_path=record.pathname if self.enable_link_path else None,
- )
- return log_renderable
-
-
-if __name__ == "__main__": # pragma: no cover
- from time import sleep
-
- FORMAT = "%(message)s"
- # FORMAT = "%(asctime)-15s - %(levelname)s - %(message)s"
- logging.basicConfig(
- level="NOTSET",
- format=FORMAT,
- datefmt="[%X]",
- handlers=[RichHandler(rich_tracebacks=True, tracebacks_show_locals=True)],
- )
- log = logging.getLogger("rich")
-
- log.info("Server starting...")
- log.info("Listening on http://127.0.0.1:8080")
- sleep(1)
-
- log.info("GET /index.html 200 1298")
- log.info("GET /imgs/backgrounds/back1.jpg 200 54386")
- log.info("GET /css/styles.css 200 54386")
- log.warning("GET /favicon.ico 404 242")
- sleep(1)
-
- log.debug(
- "JSONRPC request\n--> %r\n<-- %r",
- {
- "version": "1.1",
- "method": "confirmFruitPurchase",
- "params": [["apple", "orange", "mangoes", "pomelo"], 1.123],
- "id": "194521489",
- },
- {"version": "1.1", "result": True, "error": None, "id": "194521489"},
- )
- log.debug(
- "Loading configuration file /adasd/asdasd/qeqwe/qwrqwrqwr/sdgsdgsdg/werwerwer/dfgerert/ertertert/ertetert/werwerwer"
- )
- log.error("Unable to find 'pomelo' in database!")
- log.info("POST /jsonrpc/ 200 65532")
- log.info("POST /admin/ 401 42234")
- log.warning("password was rejected for admin site.")
-
- def divide() -> None:
- number = 1
- divisor = 0
- foos = ["foo"] * 100
- log.debug("in divide")
- try:
- number / divisor
- except:
- log.exception("An error of some kind occurred!")
-
- divide()
- sleep(1)
- log.critical("Out of memory!")
- log.info("Server exited with code=-1")
- log.info("[bold]EXITING...[/bold]", extra=dict(markup=True))
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/__init__.py
deleted file mode 100644
index aef2821b83f6ac1730d063d8ce939134cc2105a7..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/__init__.py
+++ /dev/null
@@ -1,342 +0,0 @@
-"""
-Utilities for determining application-specific dirs. See https://github.com/platformdirs/platformdirs
-for details and usage.
-"""
-from __future__ import annotations
-
-import os
-import sys
-from pathlib import Path
-
-if sys.version_info >= (3, 8): # pragma: no cover (py38+)
- from typing import Literal
-else: # pragma: no cover (py38+)
- from ..typing_extensions import Literal
-
-from .api import PlatformDirsABC
-from .version import __version__
-from .version import __version_tuple__ as __version_info__
-
-
-def _set_platform_dir_class() -> type[PlatformDirsABC]:
- if sys.platform == "win32":
- from .windows import Windows as Result
- elif sys.platform == "darwin":
- from .macos import MacOS as Result
- else:
- from .unix import Unix as Result
-
- if os.getenv("ANDROID_DATA") == "/data" and os.getenv("ANDROID_ROOT") == "/system":
-
- if os.getenv("SHELL") or os.getenv("PREFIX"):
- return Result
-
- from .android import _android_folder
-
- if _android_folder() is not None:
- from .android import Android
-
- return Android # return to avoid redefinition of result
-
- return Result
-
-
-PlatformDirs = _set_platform_dir_class() #: Currently active platform
-AppDirs = PlatformDirs #: Backwards compatibility with appdirs
-
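-# Illustrative usage (added; "MyApp" and "MyCompany" are placeholder names):
-#   dirs = PlatformDirs(appname="MyApp", appauthor="MyCompany")
-#   dirs.user_data_dir  # e.g. '~/.local/share/MyApp' on Linux with default XDG settings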
-
-def user_data_dir(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- roaming: bool = False,
-) -> str:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param roaming: See `roaming `.
- :returns: data directory tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_data_dir
-
-
-def site_data_dir(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- multipath: bool = False,
-) -> str:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param multipath: See `multipath `.
- :returns: data directory shared by users
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, multipath=multipath).site_data_dir
-
-
-def user_config_dir(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- roaming: bool = False,
-) -> str:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param roaming: See `roaming `.
- :returns: config directory tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_config_dir
-
-
-def site_config_dir(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- multipath: bool = False,
-) -> str:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param multipath: See `multipath `.
- :returns: config directory shared by the users
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, multipath=multipath).site_config_dir
-
-
-def user_cache_dir(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- opinion: bool = True,
-) -> str:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param opinion: See `opinion `.
- :returns: cache directory tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_cache_dir
-
-
-def user_state_dir(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- roaming: bool = False,
-) -> str:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param roaming: See `roaming `.
- :returns: state directory tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_state_dir
-
-
-def user_log_dir(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- opinion: bool = True,
-) -> str:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param opinion: See `opinion `.
- :returns: log directory tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_log_dir
-
-
-def user_documents_dir() -> str:
- """
- :returns: documents directory tied to the user
- """
- return PlatformDirs().user_documents_dir
-
-
-def user_runtime_dir(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- opinion: bool = True,
-) -> str:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param opinion: See `opinion `.
- :returns: runtime directory tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_runtime_dir
-
-
-def user_data_path(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- roaming: bool = False,
-) -> Path:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param roaming: See `roaming `.
- :returns: data path tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_data_path
-
-
-def site_data_path(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- multipath: bool = False,
-) -> Path:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param multipath: See `multipath `.
- :returns: data path shared by users
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, multipath=multipath).site_data_path
-
-
-def user_config_path(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- roaming: bool = False,
-) -> Path:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param roaming: See `roaming `.
- :returns: config path tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_config_path
-
-
-def site_config_path(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- multipath: bool = False,
-) -> Path:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param multipath: See `multipath `.
- :returns: config path shared by the users
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, multipath=multipath).site_config_path
-
-
-def user_cache_path(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- opinion: bool = True,
-) -> Path:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param opinion: See `opinion `.
- :returns: cache path tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_cache_path
-
-
-def user_state_path(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- roaming: bool = False,
-) -> Path:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param roaming: See `roaming `.
- :returns: state path tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_state_path
-
-
-def user_log_path(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- opinion: bool = True,
-) -> Path:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param opinion: See `opinion `.
- :returns: log path tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_log_path
-
-
-def user_documents_path() -> Path:
- """
- :returns: documents path tied to the user
- """
- return PlatformDirs().user_documents_path
-
-
-def user_runtime_path(
- appname: str | None = None,
- appauthor: str | None | Literal[False] = None,
- version: str | None = None,
- opinion: bool = True,
-) -> Path:
- """
- :param appname: See `appname `.
- :param appauthor: See `appauthor `.
- :param version: See `version `.
- :param opinion: See `opinion `.
- :returns: runtime path tied to the user
- """
- return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_runtime_path
-
-
-__all__ = [
- "__version__",
- "__version_info__",
- "PlatformDirs",
- "AppDirs",
- "PlatformDirsABC",
- "user_data_dir",
- "user_config_dir",
- "user_cache_dir",
- "user_state_dir",
- "user_log_dir",
- "user_documents_dir",
- "user_runtime_dir",
- "site_data_dir",
- "site_config_dir",
- "user_data_path",
- "user_config_path",
- "user_cache_path",
- "user_state_path",
- "user_log_path",
- "user_documents_path",
- "user_runtime_path",
- "site_data_path",
- "site_config_path",
-]
diff --git a/spaces/Tao0000/stabilityai-stable-diffusion-2-1/README.md b/spaces/Tao0000/stabilityai-stable-diffusion-2-1/README.md
deleted file mode 100644
index 9751fb563630d9b1066a4ee5c350eb9b23d1653c..0000000000000000000000000000000000000000
--- a/spaces/Tao0000/stabilityai-stable-diffusion-2-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stabilityai Stable Diffusion 2 1
-emoji: 💩
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TeamTonic/hallucination-test/README.md b/spaces/TeamTonic/hallucination-test/README.md
deleted file mode 100644
index be709441f48af9830326715c1e28f49a8a064b33..0000000000000000000000000000000000000000
--- a/spaces/TeamTonic/hallucination-test/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Tonic's Hallucination Space
-emoji: 🧠🤯🌈
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 4.1.1
-app_file: app.py
-pinned: true
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/TencentARC/MasaCtrl/style.css b/spaces/TencentARC/MasaCtrl/style.css
deleted file mode 100644
index 99b3000135b9552cf9f80f63e6318fafed44e867..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/MasaCtrl/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
- }
\ No newline at end of file
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py
deleted file mode 100644
index 48c136f1623261b079591065fec7c7fc38165076..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py
+++ /dev/null
@@ -1,187 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import json
-import logging
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES
-from detectron2.utils.file_io import PathManager
-
-"""
-This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog.
-"""
-
-
-logger = logging.getLogger(__name__)
-
-
-def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info):
- files = []
- # scan through the directory
- cities = PathManager.ls(image_dir)
- logger.info(f"{len(cities)} cities found in '{image_dir}'.")
- image_dict = {}
- for city in cities:
- city_img_dir = os.path.join(image_dir, city)
- for basename in PathManager.ls(city_img_dir):
- image_file = os.path.join(city_img_dir, basename)
-
- suffix = "_leftImg8bit.png"
- assert basename.endswith(suffix), basename
- basename = os.path.basename(basename)[: -len(suffix)]
-
- image_dict[basename] = image_file
-
- for ann in json_info["annotations"]:
- image_file = image_dict.get(ann["image_id"], None)
- assert image_file is not None, "No image {} found for annotation {}".format(
- ann["image_id"], ann["file_name"]
- )
- label_file = os.path.join(gt_dir, ann["file_name"])
- segments_info = ann["segments_info"]
-
- files.append((image_file, label_file, segments_info))
-
- assert len(files), "No images found in {}".format(image_dir)
- assert PathManager.isfile(files[0][0]), files[0][0]
- assert PathManager.isfile(files[0][1]), files[0][1]
- return files
-
-
-def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta):
- """
- Args:
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
- gt_dir (str): path to the raw annotations. e.g.,
- "~/cityscapes/gtFine/cityscapes_panoptic_train".
- gt_json (str): path to the json file. e.g.,
- "~/cityscapes/gtFine/cityscapes_panoptic_train.json".
- meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id"
- and "stuff_dataset_id_to_contiguous_id" to map category ids to
- contiguous ids for training.
-
- Returns:
- list[dict]: a list of dicts in Detectron2 standard format. (See
- `Using Custom Datasets `_ )
- """
-
- def _convert_category_id(segment_info, meta):
- if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]:
- segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][
- segment_info["category_id"]
- ]
- else:
- segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][
- segment_info["category_id"]
- ]
- return segment_info
-
- assert os.path.exists(
- gt_json
- ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa
- with open(gt_json) as f:
- json_info = json.load(f)
- files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info)
- ret = []
- for image_file, label_file, segments_info in files:
- sem_label_file = (
- image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png"
- )
- segments_info = [_convert_category_id(x, meta) for x in segments_info]
- ret.append(
- {
- "file_name": image_file,
- "image_id": "_".join(
- os.path.splitext(os.path.basename(image_file))[0].split("_")[:3]
- ),
- "sem_seg_file_name": sem_label_file,
- "pan_seg_file_name": label_file,
- "segments_info": segments_info,
- }
- )
- assert len(ret), f"No images found in {image_dir}!"
- assert PathManager.isfile(
- ret[0]["sem_seg_file_name"]
- ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa
- assert PathManager.isfile(
- ret[0]["pan_seg_file_name"]
- ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa
- return ret
-
-
-_RAW_CITYSCAPES_PANOPTIC_SPLITS = {
- "cityscapes_fine_panoptic_train": (
- "cityscapes/leftImg8bit/train",
- "cityscapes/gtFine/cityscapes_panoptic_train",
- "cityscapes/gtFine/cityscapes_panoptic_train.json",
- ),
- "cityscapes_fine_panoptic_val": (
- "cityscapes/leftImg8bit/val",
- "cityscapes/gtFine/cityscapes_panoptic_val",
- "cityscapes/gtFine/cityscapes_panoptic_val.json",
- ),
- # "cityscapes_fine_panoptic_test": not supported yet
-}
-
-
-def register_all_cityscapes_panoptic(root):
- meta = {}
- # The following metadata maps contiguous id from [0, #thing categories +
- # #stuff categories) to their names and colors. We keep two copies of the
- # same name and color under "thing_*" and "stuff_*" because the current
- # visualization function in D2 handles thing and stuff classes differently
- # due to some heuristic used in Panoptic FPN. We keep the same naming to
- # enable reusing existing visualization functions.
- thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES]
- thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES]
- stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES]
- stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES]
-
- meta["thing_classes"] = thing_classes
- meta["thing_colors"] = thing_colors
- meta["stuff_classes"] = stuff_classes
- meta["stuff_colors"] = stuff_colors
-
- # There are three types of ids in cityscapes panoptic segmentation:
- # (1) category id: like semantic segmentation, it is the class id for each
- # pixel. Since there are some classes not used in evaluation, the category
- # id is not always contiguous and thus we have two sets of category ids:
- # - original category id: category id in the original dataset, mainly
- # used for evaluation.
- # - contiguous category id: [0, #classes), in order to train the classifier
- # (2) instance id: this id is used to differentiate different instances from
- # the same category. For "stuff" classes, the instance id is always 0; for
- # "thing" classes, the instance id starts from 1 and 0 is reserved for
- # ignored instances (e.g. crowd annotation).
- # (3) panoptic id: this is the compact id that encodes both category and
- # instance id by: category_id * 1000 + instance_id.
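- # Illustrative example (added): category_id 26 ("car") with instance_id 3 is
- # encoded as 26 * 1000 + 3 = 26003.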
- thing_dataset_id_to_contiguous_id = {}
- stuff_dataset_id_to_contiguous_id = {}
-
- for k in CITYSCAPES_CATEGORIES:
- if k["isthing"] == 1:
- thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"]
- else:
- stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"]
-
- meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id
- meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id
-
- for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items():
- image_dir = os.path.join(root, image_dir)
- gt_dir = os.path.join(root, gt_dir)
- gt_json = os.path.join(root, gt_json)
-
- DatasetCatalog.register(
- key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta)
- )
- MetadataCatalog.get(key).set(
- panoptic_root=gt_dir,
- image_root=image_dir,
- panoptic_json=gt_json,
- gt_dir=gt_dir.replace("cityscapes_panoptic_", ""),
- evaluator_type="cityscapes_panoptic_seg",
- ignore_label=255,
- label_divisor=1000,
- **meta,
- )
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_blocks.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_blocks.py
deleted file mode 100644
index 5a0488adbfcf0c7eca08616f43ebf695acad4b7e..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_blocks.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import unittest
-import torch
-from torch import nn
-
-from detectron2.layers import ASPP, DepthwiseSeparableConv2d, FrozenBatchNorm2d
-from detectron2.modeling.backbone.resnet import BasicStem, ResNet
-
-
-"""
-Test for misc layers.
-"""
-
-
-class TestBlocks(unittest.TestCase):
- def test_separable_conv(self):
- DepthwiseSeparableConv2d(3, 10, norm1="BN", activation1=nn.PReLU())
-
- def test_aspp(self):
- m = ASPP(3, 10, [2, 3, 4], norm="", activation=nn.PReLU())
- self.assertIsNot(m.convs[0].activation.weight, m.convs[1].activation.weight)
- self.assertIsNot(m.convs[0].activation.weight, m.project.activation.weight)
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_frozen_batchnorm_fp16(self):
- from torch.cuda.amp import autocast
-
- C = 10
- input = torch.rand(1, C, 10, 10).cuda()
- m = FrozenBatchNorm2d(C).cuda()
- with autocast():
- output = m(input.half())
- self.assertEqual(output.dtype, torch.float16)
-
- # requires_grad triggers a different codepath
- input.requires_grad_()
- with autocast():
- output = m(input.half())
- self.assertEqual(output.dtype, torch.float16)
-
- def test_resnet_unused_stages(self):
- resnet = ResNet(BasicStem(), ResNet.make_default_stages(18), out_features=["res2"])
- self.assertTrue(hasattr(resnet, "res2"))
- self.assertFalse(hasattr(resnet, "res3"))
- self.assertFalse(hasattr(resnet, "res5"))
-
- resnet = ResNet(BasicStem(), ResNet.make_default_stages(18), out_features=["res2", "res5"])
- self.assertTrue(hasattr(resnet, "res2"))
- self.assertTrue(hasattr(resnet, "res4"))
- self.assertTrue(hasattr(resnet, "res5"))
diff --git a/spaces/Tetel/chat/SydneyGPT/__init__.py b/spaces/Tetel/chat/SydneyGPT/__init__.py
deleted file mode 100644
index 0895f233f1ef5a7384ab8ac1a9f2c6d0f39b4b96..0000000000000000000000000000000000000000
--- a/spaces/Tetel/chat/SydneyGPT/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-import os, sys
-sys.path.append(os.path.dirname(os.path.realpath(__file__)))
diff --git a/spaces/TheBritishLibrary/British-Library-books-genre-classifier-v2/README.md b/spaces/TheBritishLibrary/British-Library-books-genre-classifier-v2/README.md
deleted file mode 100644
index 05da46bc461e16c5b455bf4b0e661ab9a660b8d1..0000000000000000000000000000000000000000
--- a/spaces/TheBritishLibrary/British-Library-books-genre-classifier-v2/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: British Library Books Genre Classifier V2
-emoji: 📚
-colorFrom: red
-colorTo: black
-sdk: gradio
-sdk_version: 3.0.24
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/VIPLab/Track-Anything/tools/__init__.py b/spaces/VIPLab/Track-Anything/tools/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/VickyKira/NASAGPT/client/css/dropdown.css b/spaces/VickyKira/NASAGPT/client/css/dropdown.css
deleted file mode 100644
index 302e911e84d171c55384732f759a79ce195abca5..0000000000000000000000000000000000000000
--- a/spaces/VickyKira/NASAGPT/client/css/dropdown.css
+++ /dev/null
@@ -1,10 +0,0 @@
-.dropdown {
- border: 1px solid var(--conversations);
-}
-
-@media screen and (max-width: 990px) {
- .dropdown {
- padding: 4px 8px;
- font-size: 0.75rem;
- }
-}
diff --git a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index eb60d8830714338448be009d1075e3594337db15..0000000000000000000000000000000000000000
--- a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
- Interpolate the F0 sequence.
- """
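- # Illustrative example (added, not from the original source): for f0 = [0, 100, 0, 0, 120, 0]
- # this returns approximately [100, 100, 110, 120, 120, 120] with vuv = [0, 1, 0, 0, 1, 0]:
- # leading/trailing unvoiced frames are held at the nearest voiced value and interior gaps
- # are linearly interpolated.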
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i] # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
diff --git a/spaces/Xenova/react-translator/assets/index-48a6f08c.css b/spaces/Xenova/react-translator/assets/index-48a6f08c.css
deleted file mode 100644
index cc14a4f7bcebf97ddc0ce149e0239044abebc040..0000000000000000000000000000000000000000
--- a/spaces/Xenova/react-translator/assets/index-48a6f08c.css
+++ /dev/null
@@ -1 +0,0 @@
-#root{max-width:1280px;margin:0 auto;padding:2rem;text-align:center}.language-container{display:flex;gap:20px}.textbox-container{display:flex;justify-content:center;gap:20px;width:800px}.textbox-container>textarea{width:50%}.language-selector{width:50%}.language-selector>select{width:150px}.progress-container{position:relative;font-size:14px;color:#fff;background-color:#e9ecef;border:solid 1px;border-radius:8px;text-align:left;overflow:hidden}.progress-bar{padding:0 4px;z-index:0;top:0;width:1%;height:100%;overflow:hidden;background-color:#007bff;white-space:nowrap}.progress-text{z-index:2}.selector-container{display:flex;gap:20px}.progress-bars-container{padding:8px;height:140px}.container{margin:25px;display:flex;flex-direction:column;gap:10px}:root{font-family:Inter,system-ui,Avenir,Helvetica,Arial,sans-serif;line-height:1.5;font-weight:400;color:#213547;background-color:#fff;font-synthesis:none;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;-webkit-text-size-adjust:100%}body{margin:0;display:flex;place-items:center;min-width:320px;min-height:100vh}h1{font-size:3.2em;line-height:1}h1,h2{margin:8px}select{padding:.3em;cursor:pointer}textarea{padding:.6em}button{padding:.6em 1.2em;cursor:pointer;font-weight:500}button[disabled]{cursor:not-allowed}select,textarea,button{border-radius:8px;border:1px solid transparent;font-size:1em;font-family:inherit;background-color:#f9f9f9;transition:border-color .25s}select:hover,textarea:hover,button:not([disabled]):hover{border-color:#646cff}select:focus,select:focus-visible,textarea:focus,textarea:focus-visible,button:focus,button:focus-visible{outline:4px auto -webkit-focus-ring-color}
diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/mel_processing.py b/spaces/XzJosh/Jiaran-Bert-VITS2/mel_processing.py
deleted file mode 100644
index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Jiaran-Bert-VITS2/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
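The mel pipeline above depends only on torch and librosa, so a minimal usage sketch is possible. Assumptions: the file is importable as mel_processing, the input is a (batch, samples) mono waveform in [-1, 1], and an older librosa (pre-0.10) is installed, since librosa_mel_fn is called positionally; recent torch versions also deprecate the return_complex=False call to torch.stft that the file relies on. STFT sizes below are arbitrary.

import torch
import mel_processing  # assumed import path for the file above

# (batch, samples) mono audio in [-1, 1]; arbitrary low-level noise as a stand-in
y = (0.1 * torch.randn(1, 22050)).clamp(-1.0, 1.0)

mel = mel_processing.mel_spectrogram_torch(
    y,
    n_fft=1024,
    num_mels=80,
    sampling_rate=22050,
    hop_size=256,
    win_size=1024,
    fmin=0.0,
    fmax=None,      # None lets librosa default fmax to sr / 2
    center=False,
)
print(mel.shape)    # (1, 80, n_frames): log-compressed mel spectrogram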
diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/vdecoder/hifigan/models.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/vdecoder/hifigan/models.py
deleted file mode 100644
index 9747301f350bb269e62601017fe4633ce271b27e..0000000000000000000000000000000000000000
--- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/vdecoder/hifigan/models.py
+++ /dev/null
@@ -1,503 +0,0 @@
-import os
-import json
-from .env import AttrDict
-import numpy as np
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from .utils import init_weights, get_padding
-
-LRELU_SLOPE = 0.1
-
-
-def load_model(model_path, device='cuda'):
- config_file = os.path.join(os.path.split(model_path)[0], 'config.json')
- with open(config_file) as f:
- data = f.read()
-
- global h
- json_config = json.loads(data)
- h = AttrDict(json_config)
-
- generator = Generator(h).to(device)
-
- cp_dict = torch.load(model_path)
- generator.load_state_dict(cp_dict['generator'])
- generator.eval()
- generator.remove_weight_norm()
- del cp_dict
- return generator, h
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.h = h
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.h = h
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-def padDiff(x):
- return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0)
-
-class SineGen(torch.nn.Module):
- """ Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine-waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(self, samp_rate, harmonic_num=0,
- sine_amp=0.1, noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
- self.flag_for_pulse = flag_for_pulse
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = (f0 > self.voiced_threshold).type(torch.float32)
- return uv
-
- def _f02sine(self, f0_values):
- """ f0_values: (batchsize, length, dim)
- where dim indicates fundamental tone and overtones
- """
-        # convert to F0 in rad. The integer part n can be ignored
- # because 2 * np.pi * n doesn't affect phase
- rad_values = (f0_values / self.sampling_rate) % 1
-
- # initial phase noise (no noise for fundamental component)
- rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \
- device=f0_values.device)
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-
-        # instantaneous phase sine[t] = sin(2*pi \sum_{i=1}^{t} rad_i)
- if not self.flag_for_pulse:
- # for normal case
-
- # To prevent torch.cumsum numerical overflow,
- # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1.
- # Buffer tmp_over_one_idx indicates the time step to add -1.
- # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi
- tmp_over_one = torch.cumsum(rad_values, 1) % 1
- tmp_over_one_idx = (padDiff(tmp_over_one)) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-
- sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1)
- * 2 * np.pi)
- else:
-            # If necessary, make sure that the first time step of every
-            # voiced segment is sin(pi) or cos(0)
- # This is used for pulse-train generation
-
- # identify the last time step in unvoiced segments
- uv = self._f02uv(f0_values)
- uv_1 = torch.roll(uv, shifts=-1, dims=1)
- uv_1[:, -1, :] = 1
- u_loc = (uv < 1) * (uv_1 > 0)
-
-            # get the instantaneous phase
- tmp_cumsum = torch.cumsum(rad_values, dim=1)
- # different batch needs to be processed differently
- for idx in range(f0_values.shape[0]):
- temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :]
- temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :]
-                # stores the accumulation of i.phase within
-                # each voiced segment
- tmp_cumsum[idx, :, :] = 0
- tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum
-
- # rad_values - tmp_cumsum: remove the accumulation of i.phase
- # within the previous voiced segment.
- i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1)
-
- # get the sines
- sines = torch.cos(i_phase * 2 * np.pi)
- return sines
-
- def forward(self, f0):
- """ sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim,
- device=f0.device)
- # fundamental component
- fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device))
-
- # generate sine waveforms
- sine_waves = self._f02sine(fn) * self.sine_amp
-
- # generate uv signal
- # uv = torch.ones(f0.shape)
- # uv = uv * (f0 > self.voiced_threshold)
- uv = self._f02uv(f0)
-
- # noise: for unvoiced should be similar to sine_amp
- # std = self.sine_amp/3 -> max value ~ self.sine_amp
- # . for voiced regions is self.noise_std
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
-
- # first: set the unvoiced part to 0 by uv
- # then: additive noise
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """ SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
-
- # to produce sine waveforms
- self.l_sin_gen = SineGen(sampling_rate, harmonic_num,
- sine_amp, add_noise_std, voiced_threshod)
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x):
- """
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
-        noise_source (batchsize, length, 1)
- """
- # source for harmonic branch
- sine_wavs, uv, _ = self.l_sin_gen(x)
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
-
- # source for noise branch, in the same shape as uv
- noise = torch.randn_like(uv) * self.sine_amp / 3
- return sine_merge, noise, uv
-
-
-class Generator(torch.nn.Module):
- def __init__(self, h):
- super(Generator, self).__init__()
- self.h = h
-
- self.num_kernels = len(h["resblock_kernel_sizes"])
- self.num_upsamples = len(h["upsample_rates"])
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"]))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=h["sampling_rate"],
- harmonic_num=8)
- self.noise_convs = nn.ModuleList()
- self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3))
- resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])):
- c_cur = h["upsample_initial_channel"] // (2 ** (i + 1))
- self.ups.append(weight_norm(
- ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
- if i + 1 < len(h["upsample_rates"]): #
- stride_f0 = np.prod(h["upsample_rates"][i + 1:])
- self.noise_convs.append(Conv1d(
- 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2))
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h["upsample_initial_channel"] // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])):
- self.resblocks.append(resblock(h, ch, k, d))
-
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
- self.ups.apply(init_weights)
- self.conv_post.apply(init_weights)
- self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1)
-
- def forward(self, x, f0, g=None):
- # print(1,x.shape,f0.shape,f0[:, None].shape)
- f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t
- # print(2,f0.shape)
- har_source, noi_source, uv = self.m_source(f0)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- x = x + self.cond(g)
- # print(124,x.shape,har_source.shape)
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, LRELU_SLOPE)
- # print(3,x.shape)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- # print(4,x_source.shape,har_source.shape,x.shape)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, periods=None):
- super(MultiPeriodDiscriminator, self).__init__()
- self.periods = periods if periods is not None else [2, 3, 5, 7, 11]
- self.discriminators = nn.ModuleList()
- for period in self.periods:
- self.discriminators.append(DiscriminatorP(period))
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiScaleDiscriminator(torch.nn.Module):
- def __init__(self):
- super(MultiScaleDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorS(use_spectral_norm=True),
- DiscriminatorS(),
- DiscriminatorS(),
- ])
- self.meanpools = nn.ModuleList([
- AvgPool1d(4, 2, padding=2),
- AvgPool1d(4, 2, padding=2)
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- if i != 0:
- y = self.meanpools[i - 1](y)
- y_hat = self.meanpools[i - 1](y_hat)
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1 - dr) ** 2)
- g_loss = torch.mean(dg ** 2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- l = torch.mean((1 - dg) ** 2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
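The harmonic-plus-noise source above (SineGen wrapped by SourceModuleHnNSF) needs nothing beyond torch, so it can be smoke-tested on its own. A rough sketch, assuming the file is importable under the path shown (its relative imports mean the sibling env and utils modules must also be present) and using a made-up F0 contour:

import torch
from vdecoder.hifigan.models import SourceModuleHnNSF  # assumed package path

torch.manual_seed(0)
source = SourceModuleHnNSF(sampling_rate=22050, harmonic_num=8)

# Frame-rate-upsampled F0 contour, shape (batch, length, 1); 0 marks unvoiced frames
f0 = torch.zeros(1, 22050, 1)
f0[:, 5000:15000, :] = 220.0           # a voiced segment at 220 Hz

sine_merge, noise, uv = source(f0)     # merged harmonics, noise branch, U/V mask
print(sine_merge.shape, uv.shape)      # both torch.Size([1, 22050, 1])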
diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/models/stylegan_networks.py b/spaces/YuxinJ/Scenimefy/Scenimefy/models/stylegan_networks.py
deleted file mode 100644
index a3c625da4ead5414789b60c23613306e0df7df94..0000000000000000000000000000000000000000
--- a/spaces/YuxinJ/Scenimefy/Scenimefy/models/stylegan_networks.py
+++ /dev/null
@@ -1,914 +0,0 @@
-"""
-The network architecture is based on a PyTorch implementation of StyleGAN2Encoder.
-Original PyTorch repo: https://github.com/rosinality/style-based-gan-pytorch
-Original StyleGAN2 paper: https://github.com/NVlabs/stylegan2
-We use this network architecture for our single-image training setting.
-"""
-
-import math
-import numpy as np
-import random
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
- return F.leaky_relu(input + bias, negative_slope) * scale
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
- self.bias = nn.Parameter(torch.zeros(1, channel, 1, 1))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- # print("FusedLeakyReLU: ", input.abs().mean())
- out = fused_leaky_relu(input, self.bias,
- self.negative_slope,
- self.scale)
- # print("FusedLeakyReLU: ", out.abs().mean())
- return out
-
-
-def upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- _, minor, in_h, in_w = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, minor, in_h, 1, in_w, 1)
- out = F.pad(out, [0, up_x - 1, 0, 0, 0, up_y - 1, 0, 0])
- out = out.view(-1, minor, in_h * up_y, in_w * up_x)
-
- out = F.pad(
- out, [max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- :,
- max(-pad_y0, 0): out.shape[2] - max(-pad_y1, 0),
- max(-pad_x0, 0): out.shape[3] - max(-pad_x1, 0),
- ]
-
- # out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- # out = out.permute(0, 2, 3, 1)
-
- return out[:, :, ::down_y, ::down_x]
-
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- return upfirdn2d_native(input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1])
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if len(k.shape) == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = math.sqrt(1) / math.sqrt(in_channel * (kernel_size ** 2))
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- # print("Before EqualConv2d: ", input.abs().mean())
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
- # print("After EqualConv2d: ", out.abs().mean(), (self.weight * self.scale).abs().mean())
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (math.sqrt(1) / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = math.sqrt(1) / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- if style_dim is not None and style_dim > 0:
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- if style is not None:
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- else:
- style = torch.ones(batch, 1, in_channel, 1, 1).cuda()
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim=None,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- inject_noise=True,
- ):
- super().__init__()
-
- self.inject_noise = inject_noise
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- # self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style=None, noise=None):
- out = self.conv(input, style)
- if self.inject_noise:
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
-
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res]
- self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape))
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f'noise_{i}') for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if len(styles[0].shape) < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
-
- else:
- return image, None
-
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1], downsample=True, skip_gain=1.0):
- super().__init__()
-
- self.skip_gain = skip_gain
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=downsample, blur_kernel=blur_kernel)
-
- if in_channel != out_channel or downsample:
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=downsample, activate=False, bias=False
- )
- else:
- self.skip = nn.Identity()
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out * self.skip_gain + skip) / math.sqrt(self.skip_gain ** 2 + 1.0)
-
- return out
-
-
-class StyleGAN2Discriminator(nn.Module):
- def __init__(self, input_nc, ndf=64, n_layers=3, no_antialias=False, size=None, opt=None):
- super().__init__()
- self.opt = opt
- self.stddev_group = 16
- if size is None:
- size = 2 ** int((np.rint(np.log2(min(opt.load_size, opt.crop_size)))))
- if "patch" in self.opt.netD and self.opt.D_patch_size is not None:
- size = 2 ** int(np.log2(self.opt.D_patch_size))
-
- blur_kernel = [1, 3, 3, 1]
- channel_multiplier = ndf / 64
- channels = {
- 4: min(384, int(4096 * channel_multiplier)),
- 8: min(384, int(2048 * channel_multiplier)),
- 16: min(384, int(1024 * channel_multiplier)),
- 32: min(384, int(512 * channel_multiplier)),
- 64: int(256 * channel_multiplier),
- 128: int(128 * channel_multiplier),
- 256: int(64 * channel_multiplier),
- 512: int(32 * channel_multiplier),
- 1024: int(16 * channel_multiplier),
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- if "smallpatch" in self.opt.netD:
- final_res_log2 = 4
- elif "patch" in self.opt.netD:
- final_res_log2 = 3
- else:
- final_res_log2 = 2
-
- for i in range(log_size, final_res_log2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- if False and "tile" in self.opt.netD:
- in_channel += 1
- self.final_conv = ConvLayer(in_channel, channels[4], 3)
- if "patch" in self.opt.netD:
- self.final_linear = ConvLayer(channels[4], 1, 3, bias=False, activate=False)
- else:
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'),
- EqualLinear(channels[4], 1),
- )
-
- def forward(self, input, get_minibatch_features=False):
- if "patch" in self.opt.netD and self.opt.D_patch_size is not None:
- h, w = input.size(2), input.size(3)
- y = torch.randint(h - self.opt.D_patch_size, ())
- x = torch.randint(w - self.opt.D_patch_size, ())
- input = input[:, :, y:y + self.opt.D_patch_size, x:x + self.opt.D_patch_size]
- out = input
- for i, conv in enumerate(self.convs):
- out = conv(out)
- # print(i, out.abs().mean())
- # out = self.convs(input)
-
- batch, channel, height, width = out.shape
-
- if False and "tile" in self.opt.netD:
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, 1, channel // 1, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdim=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
- # print(out.abs().mean())
-
- if "patch" not in self.opt.netD:
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out
-
-
-class TileStyleGAN2Discriminator(StyleGAN2Discriminator):
- def forward(self, input):
- B, C, H, W = input.size(0), input.size(1), input.size(2), input.size(3)
- size = self.opt.D_patch_size
- Y = H // size
- X = W // size
- input = input.view(B, C, Y, size, X, size)
- input = input.permute(0, 2, 4, 1, 3, 5).contiguous().view(B * Y * X, C, size, size)
- return super().forward(input)
-
-
-class StyleGAN2Encoder(nn.Module):
- def __init__(self, input_nc, output_nc, ngf=64, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False, opt=None):
- super().__init__()
- assert opt is not None
- self.opt = opt
- channel_multiplier = ngf / 32
- channels = {
- 4: min(512, int(round(4096 * channel_multiplier))),
- 8: min(512, int(round(2048 * channel_multiplier))),
- 16: min(512, int(round(1024 * channel_multiplier))),
- 32: min(512, int(round(512 * channel_multiplier))),
- 64: int(round(256 * channel_multiplier)),
- 128: int(round(128 * channel_multiplier)),
- 256: int(round(64 * channel_multiplier)),
- 512: int(round(32 * channel_multiplier)),
- 1024: int(round(16 * channel_multiplier)),
- }
-
- blur_kernel = [1, 3, 3, 1]
-
- cur_res = 2 ** int((np.rint(np.log2(min(opt.load_size, opt.crop_size)))))
- convs = [nn.Identity(),
- ConvLayer(3, channels[cur_res], 1)]
-
- num_downsampling = self.opt.stylegan2_G_num_downsampling
- for i in range(num_downsampling):
- in_channel = channels[cur_res]
- out_channel = channels[cur_res // 2]
- convs.append(ResBlock(in_channel, out_channel, blur_kernel, downsample=True))
- cur_res = cur_res // 2
-
- for i in range(n_blocks // 2):
- n_channel = channels[cur_res]
- convs.append(ResBlock(n_channel, n_channel, downsample=False))
-
- self.convs = nn.Sequential(*convs)
-
- def forward(self, input, layers=[], get_features=False):
- feat = input
- feats = []
- if -1 in layers:
- layers.append(len(self.convs) - 1)
- for layer_id, layer in enumerate(self.convs):
- feat = layer(feat)
- # print(layer_id, " features ", feat.abs().mean())
- if layer_id in layers:
- feats.append(feat)
-
- if get_features:
- return feat, feats
- else:
- return feat
-
-
-class StyleGAN2Decoder(nn.Module):
- def __init__(self, input_nc, output_nc, ngf=64, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False, opt=None):
- super().__init__()
- assert opt is not None
- self.opt = opt
-
- blur_kernel = [1, 3, 3, 1]
-
- channel_multiplier = ngf / 32
- channels = {
- 4: min(512, int(round(4096 * channel_multiplier))),
- 8: min(512, int(round(2048 * channel_multiplier))),
- 16: min(512, int(round(1024 * channel_multiplier))),
- 32: min(512, int(round(512 * channel_multiplier))),
- 64: int(round(256 * channel_multiplier)),
- 128: int(round(128 * channel_multiplier)),
- 256: int(round(64 * channel_multiplier)),
- 512: int(round(32 * channel_multiplier)),
- 1024: int(round(16 * channel_multiplier)),
- }
-
- num_downsampling = self.opt.stylegan2_G_num_downsampling
- cur_res = 2 ** int((np.rint(np.log2(min(opt.load_size, opt.crop_size))))) // (2 ** num_downsampling)
- convs = []
-
- for i in range(n_blocks // 2):
- n_channel = channels[cur_res]
- convs.append(ResBlock(n_channel, n_channel, downsample=False))
-
- for i in range(num_downsampling):
- in_channel = channels[cur_res]
- out_channel = channels[cur_res * 2]
- inject_noise = "small" not in self.opt.netG
- convs.append(
- StyledConv(in_channel, out_channel, 3, upsample=True, blur_kernel=blur_kernel, inject_noise=inject_noise)
- )
- cur_res = cur_res * 2
-
- convs.append(ConvLayer(channels[cur_res], 3, 1))
-
- self.convs = nn.Sequential(*convs)
-
- def forward(self, input):
- return self.convs(input)
-
-
-class StyleGAN2Generator(nn.Module):
- def __init__(self, input_nc, output_nc, ngf=64, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False, opt=None):
- super().__init__()
- self.opt = opt
- self.encoder = StyleGAN2Encoder(input_nc, output_nc, ngf, use_dropout, n_blocks, padding_type, no_antialias, opt)
- self.decoder = StyleGAN2Decoder(input_nc, output_nc, ngf, use_dropout, n_blocks, padding_type, no_antialias, opt)
-
- def forward(self, input, layers=[], encode_only=False):
- feat, feats = self.encoder(input, layers, True)
- if encode_only:
- return feats
- else:
- fake = self.decoder(feat)
-
- if len(layers) > 0:
- return fake, feats
- else:
- return fake
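Most classes above require the project's opt object, but the style-modulated convolution at the core of the generator can be shape-checked in isolation. A small sketch, assuming the file is importable under the path shown; channel counts and the style dimension are arbitrary:

import torch
from Scenimefy.models.stylegan_networks import StyledConv  # assumed package path

x = torch.randn(2, 64, 32, 32)   # feature map (batch, channels, H, W)
w = torch.randn(2, 128)          # per-sample style vector

same = StyledConv(in_channel=64, out_channel=64, kernel_size=3, style_dim=128)
up = StyledConv(in_channel=64, out_channel=32, kernel_size=3, style_dim=128, upsample=True)

print(same(x, w).shape)  # torch.Size([2, 64, 32, 32])
print(up(x, w).shape)    # torch.Size([2, 32, 64, 64]): transposed conv + blur upsamples 2x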
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/score_hlr_sampler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/score_hlr_sampler.py
deleted file mode 100644
index 11d46b97705db60fb6a4eb5fa7da10ac78acb8bc..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/score_hlr_sampler.py
+++ /dev/null
@@ -1,264 +0,0 @@
-import torch
-from mmcv.ops import nms_match
-
-from ..builder import BBOX_SAMPLERS
-from ..transforms import bbox2roi
-from .base_sampler import BaseSampler
-from .sampling_result import SamplingResult
-
-
-@BBOX_SAMPLERS.register_module()
-class ScoreHLRSampler(BaseSampler):
- r"""Importance-based Sample Reweighting (ISR_N), described in `Prime Sample
- Attention in Object Detection `_.
-
-    Score hierarchical local rank (HLR) differs from RandomSampler in the
-    negative part. It first computes Score-HLR in a two-step way, then
-    linearly maps the Score-HLR to loss weights.
-
- Args:
- num (int): Total number of sampled RoIs.
- pos_fraction (float): Fraction of positive samples.
- context (:class:`BaseRoIHead`): RoI head that the sampler belongs to.
- neg_pos_ub (int): Upper bound of the ratio of num negative to num
- positive, -1 means no upper bound.
- add_gt_as_proposals (bool): Whether to add ground truth as proposals.
- k (float): Power of the non-linear mapping.
- bias (float): Shift of the non-linear mapping.
- score_thr (float): Minimum score that a negative sample is to be
- considered as valid bbox.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- context,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- k=0.5,
- bias=0,
- score_thr=0.05,
- iou_thr=0.5,
- **kwargs):
- super().__init__(num, pos_fraction, neg_pos_ub, add_gt_as_proposals)
- self.k = k
- self.bias = bias
- self.score_thr = score_thr
- self.iou_thr = iou_thr
- self.context = context
- # context of cascade detectors is a list, so distinguish them here.
- if not hasattr(context, 'num_stages'):
- self.bbox_roi_extractor = context.bbox_roi_extractor
- self.bbox_head = context.bbox_head
- self.with_shared_head = context.with_shared_head
- if self.with_shared_head:
- self.shared_head = context.shared_head
- else:
- self.bbox_roi_extractor = context.bbox_roi_extractor[
- context.current_stage]
- self.bbox_head = context.bbox_head[context.current_stage]
-
- @staticmethod
- def random_choice(gallery, num):
- """Randomly select some elements from the gallery.
-
- If `gallery` is a Tensor, the returned indices will be a Tensor;
- If `gallery` is a ndarray or list, the returned indices will be a
- ndarray.
-
- Args:
- gallery (Tensor | ndarray | list): indices pool.
- num (int): expected sample num.
-
- Returns:
- Tensor or ndarray: sampled indices.
- """
- assert len(gallery) >= num
-
- is_tensor = isinstance(gallery, torch.Tensor)
- if not is_tensor:
- if torch.cuda.is_available():
- device = torch.cuda.current_device()
- else:
- device = 'cpu'
- gallery = torch.tensor(gallery, dtype=torch.long, device=device)
- perm = torch.randperm(gallery.numel(), device=gallery.device)[:num]
- rand_inds = gallery[perm]
- if not is_tensor:
- rand_inds = rand_inds.cpu().numpy()
- return rand_inds
-
- def _sample_pos(self, assign_result, num_expected, **kwargs):
- """Randomly sample some positive samples."""
- pos_inds = torch.nonzero(assign_result.gt_inds > 0).flatten()
- if pos_inds.numel() <= num_expected:
- return pos_inds
- else:
- return self.random_choice(pos_inds, num_expected)
-
- def _sample_neg(self,
- assign_result,
- num_expected,
- bboxes,
- feats=None,
- img_meta=None,
- **kwargs):
- """Sample negative samples.
-
- Score-HLR sampler is done in the following steps:
-        1. Take the maximum positive score prediction of each negative sample
-            as s_i.
-        2. Filter out negative samples whose s_i <= score_thr; the remaining
-            samples are called valid samples.
-        3. Use NMS-Match to divide valid samples into different groups;
-            samples in the same group overlap heavily with each other.
-        4. Rank the matched samples in two steps to get Score-HLR.
- (1) In the same group, rank samples with their scores.
- (2) In the same score rank across different groups,
- rank samples with their scores again.
- 5. Linearly map Score-HLR to the final label weights.
-
- Args:
- assign_result (:obj:`AssignResult`): result of assigner.
- num_expected (int): Expected number of samples.
- bboxes (Tensor): bbox to be sampled.
-            feats (Tensor): Features from the FPN.
- img_meta (dict): Meta information dictionary.
- """
- neg_inds = torch.nonzero(assign_result.gt_inds == 0).flatten()
- num_neg = neg_inds.size(0)
- if num_neg == 0:
- return neg_inds, None
- with torch.no_grad():
- neg_bboxes = bboxes[neg_inds]
- neg_rois = bbox2roi([neg_bboxes])
- bbox_result = self.context._bbox_forward(feats, neg_rois)
- cls_score, bbox_pred = bbox_result['cls_score'], bbox_result[
- 'bbox_pred']
-
- ori_loss = self.bbox_head.loss(
- cls_score=cls_score,
- bbox_pred=None,
- rois=None,
- labels=neg_inds.new_full((num_neg, ),
- self.bbox_head.num_classes),
- label_weights=cls_score.new_ones(num_neg),
- bbox_targets=None,
- bbox_weights=None,
- reduction_override='none')['loss_cls']
-
- # filter out samples with the max score lower than score_thr
- max_score, argmax_score = cls_score.softmax(-1)[:, :-1].max(-1)
- valid_inds = (max_score > self.score_thr).nonzero().view(-1)
- invalid_inds = (max_score <= self.score_thr).nonzero().view(-1)
- num_valid = valid_inds.size(0)
- num_invalid = invalid_inds.size(0)
-
- num_expected = min(num_neg, num_expected)
- num_hlr = min(num_valid, num_expected)
- num_rand = num_expected - num_hlr
- if num_valid > 0:
- valid_rois = neg_rois[valid_inds]
- valid_max_score = max_score[valid_inds]
- valid_argmax_score = argmax_score[valid_inds]
- valid_bbox_pred = bbox_pred[valid_inds]
-
- # valid_bbox_pred shape: [num_valid, #num_classes, 4]
- valid_bbox_pred = valid_bbox_pred.view(
- valid_bbox_pred.size(0), -1, 4)
- selected_bbox_pred = valid_bbox_pred[range(num_valid),
- valid_argmax_score]
- pred_bboxes = self.bbox_head.bbox_coder.decode(
- valid_rois[:, 1:], selected_bbox_pred)
- pred_bboxes_with_score = torch.cat(
- [pred_bboxes, valid_max_score[:, None]], -1)
- group = nms_match(pred_bboxes_with_score, self.iou_thr)
-
- # imp: importance
- imp = cls_score.new_zeros(num_valid)
- for g in group:
- g_score = valid_max_score[g]
-                    # g_score is already sorted
- rank = g_score.new_tensor(range(g_score.size(0)))
- imp[g] = num_valid - rank + g_score
- _, imp_rank_inds = imp.sort(descending=True)
- _, imp_rank = imp_rank_inds.sort()
- hlr_inds = imp_rank_inds[:num_expected]
-
- if num_rand > 0:
- rand_inds = torch.randperm(num_invalid)[:num_rand]
- select_inds = torch.cat(
- [valid_inds[hlr_inds], invalid_inds[rand_inds]])
- else:
- select_inds = valid_inds[hlr_inds]
-
- neg_label_weights = cls_score.new_ones(num_expected)
-
- up_bound = max(num_expected, num_valid)
- imp_weights = (up_bound -
- imp_rank[hlr_inds].float()) / up_bound
- neg_label_weights[:num_hlr] = imp_weights
- neg_label_weights[num_hlr:] = imp_weights.min()
- neg_label_weights = (self.bias +
- (1 - self.bias) * neg_label_weights).pow(
- self.k)
- ori_selected_loss = ori_loss[select_inds]
- new_loss = ori_selected_loss * neg_label_weights
- norm_ratio = ori_selected_loss.sum() / new_loss.sum()
- neg_label_weights *= norm_ratio
- else:
- neg_label_weights = cls_score.new_ones(num_expected)
- select_inds = torch.randperm(num_neg)[:num_expected]
-
- return neg_inds[select_inds], neg_label_weights
-
- def sample(self,
- assign_result,
- bboxes,
- gt_bboxes,
- gt_labels=None,
- img_meta=None,
- **kwargs):
- """Sample positive and negative bboxes.
-
- This is a simple implementation of bbox sampling given candidates,
- assigning results and ground truth bboxes.
-
- Args:
- assign_result (:obj:`AssignResult`): Bbox assigning results.
- bboxes (Tensor): Boxes to be sampled from.
- gt_bboxes (Tensor): Ground truth bboxes.
- gt_labels (Tensor, optional): Class labels of ground truth bboxes.
-
- Returns:
-            tuple[:obj:`SamplingResult`, Tensor]: Sampling result and negative
- label weights.
- """
- bboxes = bboxes[:, :4]
-
- gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8)
- if self.add_gt_as_proposals:
- bboxes = torch.cat([gt_bboxes, bboxes], dim=0)
- assign_result.add_gt_(gt_labels)
- gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8)
- gt_flags = torch.cat([gt_ones, gt_flags])
-
- num_expected_pos = int(self.num * self.pos_fraction)
- pos_inds = self.pos_sampler._sample_pos(
- assign_result, num_expected_pos, bboxes=bboxes, **kwargs)
- num_sampled_pos = pos_inds.numel()
- num_expected_neg = self.num - num_sampled_pos
- if self.neg_pos_ub >= 0:
- _pos = max(1, num_sampled_pos)
- neg_upper_bound = int(self.neg_pos_ub * _pos)
- if num_expected_neg > neg_upper_bound:
- num_expected_neg = neg_upper_bound
- neg_inds, neg_label_weights = self.neg_sampler._sample_neg(
- assign_result,
- num_expected_neg,
- bboxes,
- img_meta=img_meta,
- **kwargs)
-
- return SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes,
- assign_result, gt_flags), neg_label_weights
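The sampler itself needs a full mmdet RoI head as its context, but the two-step ranking described in steps 4-5 of the docstring can be illustrated with plain torch. This is only a restatement of that ranking on toy data, not the mmdet API; group membership and scores are invented:

import torch

scores = torch.tensor([0.9, 0.8, 0.3, 0.7, 0.2])          # max class score of each valid negative
groups = [torch.tensor([0, 1, 2]), torch.tensor([3, 4])]   # NMS-Match groups, sorted by score

num_valid = scores.numel()
imp = scores.new_zeros(num_valid)
for g in groups:
    rank = torch.arange(g.numel(), dtype=scores.dtype)
    imp[g] = num_valid - rank + scores[g]      # within-group rank first, score second

_, imp_rank_inds = imp.sort(descending=True)   # hierarchical local rank over all valid negatives
_, imp_rank = imp_rank_inds.sort()

k, bias = 0.5, 0.0                             # defaults used by the sampler above
weights = (num_valid - imp_rank.float()) / num_valid
weights = (bias + (1 - bias) * weights).pow(k)
print(weights)                                 # higher-ranked negatives get larger label weights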
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/gaussian_target.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/gaussian_target.py
deleted file mode 100644
index 7bb7160cb4bf2f47876f6e8373142aa5846920a9..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/gaussian_target.py
+++ /dev/null
@@ -1,185 +0,0 @@
-from math import sqrt
-
-import torch
-
-
-def gaussian2D(radius, sigma=1, dtype=torch.float32, device='cpu'):
- """Generate 2D gaussian kernel.
-
- Args:
- radius (int): Radius of gaussian kernel.
- sigma (int): Sigma of gaussian function. Default: 1.
- dtype (torch.dtype): Dtype of gaussian tensor. Default: torch.float32.
- device (str): Device of gaussian tensor. Default: 'cpu'.
-
- Returns:
- h (Tensor): Gaussian kernel with a
- ``(2 * radius + 1) * (2 * radius + 1)`` shape.
- """
- x = torch.arange(
- -radius, radius + 1, dtype=dtype, device=device).view(1, -1)
- y = torch.arange(
- -radius, radius + 1, dtype=dtype, device=device).view(-1, 1)
-
- h = (-(x * x + y * y) / (2 * sigma * sigma)).exp()
-
- h[h < torch.finfo(h.dtype).eps * h.max()] = 0
- return h
-
-
-def gen_gaussian_target(heatmap, center, radius, k=1):
- """Generate 2D gaussian heatmap.
-
- Args:
- heatmap (Tensor): Input heatmap, the gaussian kernel will cover on
- it and maintain the max value.
- center (list[int]): Coord of gaussian kernel's center.
- radius (int): Radius of gaussian kernel.
- k (int): Coefficient of gaussian kernel. Default: 1.
-
- Returns:
- out_heatmap (Tensor): Updated heatmap covered by gaussian kernel.
- """
- diameter = 2 * radius + 1
- gaussian_kernel = gaussian2D(
- radius, sigma=diameter / 6, dtype=heatmap.dtype, device=heatmap.device)
-
- x, y = center
-
- height, width = heatmap.shape[:2]
-
- left, right = min(x, radius), min(width - x, radius + 1)
- top, bottom = min(y, radius), min(height - y, radius + 1)
-
- masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right]
- masked_gaussian = gaussian_kernel[radius - top:radius + bottom,
- radius - left:radius + right]
- out_heatmap = heatmap
- torch.max(
- masked_heatmap,
- masked_gaussian * k,
- out=out_heatmap[y - top:y + bottom, x - left:x + right])
-
- return out_heatmap
-
-
-def gaussian_radius(det_size, min_overlap):
- r"""Generate 2D gaussian radius.
-
- This function is modified from the `official github repo
- `_.
-
-    Given ``min_overlap``, the radius can be computed by a quadratic equation
- according to Vieta's formulas.
-
- There are 3 cases for computing gaussian radius, details are following:
-
- - Explanation of figure: ``lt`` and ``br`` indicates the left-top and
- bottom-right corner of ground truth box. ``x`` indicates the
- generated corner at the limited position when ``radius=r``.
-
- - Case1: one corner is inside the gt box and the other is outside.
-
- .. code:: text
-
- |< width >|
-
- lt-+----------+ -
- | | | ^
- +--x----------+--+
- | | | |
- | | | | height
- | | overlap | |
- | | | |
- | | | | v
- +--+---------br--+ -
- | | |
- +----------+--x
-
- To ensure IoU of generated box and gt box is larger than ``min_overlap``:
-
- .. math::
- \cfrac{(w-r)*(h-r)}{w*h+(w+h)r-r^2} \ge {iou} \quad\Rightarrow\quad
- {r^2-(w+h)r+\cfrac{1-iou}{1+iou}*w*h} \ge 0 \\
- {a} = 1,\quad{b} = {-(w+h)},\quad{c} = {\cfrac{1-iou}{1+iou}*w*h}
- {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a}
-
- - Case2: both two corners are inside the gt box.
-
- .. code:: text
-
- |< width >|
-
- lt-+----------+ -
- | | | ^
- +--x-------+ |
- | | | |
- | |overlap| | height
- | | | |
- | +-------x--+
- | | | v
- +----------+-br -
-
- To ensure IoU of generated box and gt box is larger than ``min_overlap``:
-
- .. math::
- \cfrac{(w-2*r)*(h-2*r)}{w*h} \ge {iou} \quad\Rightarrow\quad
- {4r^2-2(w+h)r+(1-iou)*w*h} \ge 0 \\
- {a} = 4,\quad {b} = {-2(w+h)},\quad {c} = {(1-iou)*w*h}
- {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a}
-
- - Case3: both two corners are outside the gt box.
-
- .. code:: text
-
- |< width >|
-
- x--+----------------+
- | | |
- +-lt-------------+ | -
- | | | | ^
- | | | |
- | | overlap | | height
- | | | |
- | | | | v
- | +------------br--+ -
- | | |
- +----------------+--x
-
- To ensure IoU of generated box and gt box is larger than ``min_overlap``:
-
- .. math::
- \cfrac{w*h}{(w+2*r)*(h+2*r)} \ge {iou} \quad\Rightarrow\quad
- {4*iou*r^2+2*iou*(w+h)r+(iou-1)*w*h} \le 0 \\
- {a} = {4*iou},\quad {b} = {2*iou*(w+h)},\quad {c} = {(iou-1)*w*h} \\
- {r} \le \cfrac{-b+\sqrt{b^2-4*a*c}}{2*a}
-
- Args:
- det_size (list[int]): Shape of object.
- min_overlap (float): Min IoU with ground truth for boxes generated by
- keypoints inside the gaussian kernel.
-
- Returns:
- radius (int): Radius of gaussian kernel.
- """
- height, width = det_size
-
- a1 = 1
- b1 = (height + width)
- c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
- sq1 = sqrt(b1**2 - 4 * a1 * c1)
- r1 = (b1 - sq1) / (2 * a1)
-
- a2 = 4
- b2 = 2 * (height + width)
- c2 = (1 - min_overlap) * width * height
- sq2 = sqrt(b2**2 - 4 * a2 * c2)
- r2 = (b2 - sq2) / (2 * a2)
-
- a3 = 4 * min_overlap
- b3 = -2 * min_overlap * (height + width)
- c3 = (min_overlap - 1) * width * height
- sq3 = sqrt(b3**2 - 4 * a3 * c3)
- r3 = (b3 + sq3) / (2 * a3)
- return min(r1, r2, r3)
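A brief usage sketch for the two helpers above, in the CenterNet/CornerNet style of heatmap target building. It assumes the module is importable under the path shown; heatmap size, box size, and min_overlap are arbitrary:

import torch
from gaussian_target import gaussian_radius, gen_gaussian_target  # assumed import path

heatmap = torch.zeros(128, 128)           # one class channel of the output heatmap
box_h, box_w = 24, 40                     # ground-truth box size, already on the heatmap scale
radius = max(0, int(gaussian_radius((box_h, box_w), min_overlap=0.3)))
center = [64, 64]                         # (x, y) of the box centre on the heatmap

heatmap = gen_gaussian_target(heatmap, center, radius)
print(radius, heatmap.max())              # peak value is 1.0 at the centre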
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/builder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/builder.py
deleted file mode 100644
index 1f5b971252bfc971c3ffbaa27746d69b1d3ea9fd..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/builder.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import warnings
-
-from annotator.uniformer.mmcv.cnn import MODELS as MMCV_MODELS
-from annotator.uniformer.mmcv.utils import Registry
-
-MODELS = Registry('models', parent=MMCV_MODELS)
-
-BACKBONES = MODELS
-NECKS = MODELS
-HEADS = MODELS
-LOSSES = MODELS
-SEGMENTORS = MODELS
-
-
-def build_backbone(cfg):
- """Build backbone."""
- return BACKBONES.build(cfg)
-
-
-def build_neck(cfg):
- """Build neck."""
- return NECKS.build(cfg)
-
-
-def build_head(cfg):
- """Build head."""
- return HEADS.build(cfg)
-
-
-def build_loss(cfg):
- """Build loss."""
- return LOSSES.build(cfg)
-
-
-def build_segmentor(cfg, train_cfg=None, test_cfg=None):
- """Build segmentor."""
- if train_cfg is not None or test_cfg is not None:
- warnings.warn(
-            'train_cfg and test_cfg are deprecated, '
- 'please specify them in model', UserWarning)
- assert cfg.get('train_cfg') is None or train_cfg is None, \
- 'train_cfg specified in both outer field and model field '
- assert cfg.get('test_cfg') is None or test_cfg is None, \
- 'test_cfg specified in both outer field and model field '
- return SEGMENTORS.build(
- cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg))
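
The builders above are thin wrappers around mmcv's Registry: the ``type`` key of a config dict selects a registered class and the remaining keys become constructor arguments. A minimal, self-contained sketch of that pattern (the ``TinySegmentor`` class and its arguments are illustrative, not part of mmseg):

    from annotator.uniformer.mmcv.utils import Registry

    TOY_MODELS = Registry('toy_models')

    @TOY_MODELS.register_module()
    class TinySegmentor:
        def __init__(self, num_classes, train_cfg=None, test_cfg=None):
            self.num_classes = num_classes
            self.train_cfg = train_cfg
            self.test_cfg = test_cfg

    # 'type' selects the registered class; the other keys are passed to __init__.
    model = TOY_MODELS.build(
        dict(type='TinySegmentor', num_classes=19, test_cfg=dict(mode='whole')))
    print(type(model).__name__, model.num_classes)

As with ``build_segmentor`` above, keeping ``train_cfg``/``test_cfg`` inside the model config avoids the deprecation warning.
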
diff --git a/spaces/abidlabs/Voice-Cloning/README.md b/spaces/abidlabs/Voice-Cloning/README.md
deleted file mode 100644
index ca77a6d2447b360e821eac2e543cb55d1722f5a5..0000000000000000000000000000000000000000
--- a/spaces/abidlabs/Voice-Cloning/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Voice Cloning
-emoji: ⚡
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.8
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: BilalSardar/Voice-Cloning
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/headless.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/headless.py
deleted file mode 100644
index 2262d9f2779f0dc98eb5d0486c6249290b132b65..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/headless.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import pyglet
-import warnings
-
-from .base import Display, Screen, ScreenMode, Canvas
-
-
-from ctypes import *
-from pyglet.libs.egl import egl
-from pyglet.libs.egl import eglext
-
-
-class HeadlessDisplay(Display):
-
- def __init__(self):
- super().__init__()
- # TODO: fix this placeholder:
- self._screens = [HeadlessScreen(self, 0, 0, 1920, 1080)]
-
- num_devices = egl.EGLint()
- eglext.eglQueryDevicesEXT(0, None, byref(num_devices))
- if num_devices.value > 0:
- headless_device = pyglet.options['headless_device']
- if headless_device < 0 or headless_device >= num_devices.value:
-                raise ValueError(f'Invalid EGL device id: {headless_device}')
- devices = (eglext.EGLDeviceEXT * num_devices.value)()
- eglext.eglQueryDevicesEXT(num_devices.value, devices, byref(num_devices))
- self._display_connection = eglext.eglGetPlatformDisplayEXT(
- eglext.EGL_PLATFORM_DEVICE_EXT, devices[headless_device], None)
- else:
- warnings.warn('No device available for EGL device platform. Using native display type.')
- display = egl.EGLNativeDisplayType()
- self._display_connection = egl.eglGetDisplay(display)
-
- egl.eglInitialize(self._display_connection, None, None)
-
- def get_screens(self):
- return self._screens
-
- def __del__(self):
- egl.eglTerminate(self._display_connection)
-
-
-class HeadlessCanvas(Canvas):
- def __init__(self, display, egl_surface):
- super().__init__(display)
- self.egl_surface = egl_surface
-
-
-class HeadlessScreen(Screen):
- def __init__(self, display, x, y, width, height):
- super().__init__(display, x, y, width, height)
-
- def get_matching_configs(self, template):
- canvas = HeadlessCanvas(self.display, None)
- configs = template.match(canvas)
- # XXX deprecate
- for config in configs:
- config.screen = self
- return configs
-
- def get_modes(self):
- pass
-
- def get_mode(self):
- pass
-
- def set_mode(self, mode):
- pass
-
- def restore_mode(self):
- pass
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/wintab.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/wintab.py
deleted file mode 100644
index e4890e109f53a3d904e3874f4e97e7c8e31aa03c..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/wintab.py
+++ /dev/null
@@ -1,401 +0,0 @@
-import ctypes
-from collections import defaultdict
-import pyglet
-from pyglet.input.base import DeviceOpenException
-from pyglet.input.base import Tablet, TabletCanvas
-from pyglet.libs.win32 import libwintab as wintab
-from pyglet.util import debug_print
-
-_debug = debug_print('debug_input')
-
-lib = wintab.lib
-
-
-def wtinfo(category, index, buffer):
- size = lib.WTInfoW(category, index, None)
- assert size <= ctypes.sizeof(buffer)
- lib.WTInfoW(category, index, ctypes.byref(buffer))
- return buffer
-
-
-def wtinfo_string(category, index):
- size = lib.WTInfoW(category, index, None)
- buffer = ctypes.create_unicode_buffer(size)
- lib.WTInfoW(category, index, buffer)
- return buffer.value
-
-
-def wtinfo_uint(category, index):
- buffer = wintab.UINT()
- lib.WTInfoW(category, index, ctypes.byref(buffer))
- return buffer.value
-
-
-def wtinfo_word(category, index):
- buffer = wintab.WORD()
- lib.WTInfoW(category, index, ctypes.byref(buffer))
- return buffer.value
-
-
-def wtinfo_dword(category, index):
- buffer = wintab.DWORD()
- lib.WTInfoW(category, index, ctypes.byref(buffer))
- return buffer.value
-
-
-def wtinfo_wtpkt(category, index):
- buffer = wintab.WTPKT()
- lib.WTInfoW(category, index, ctypes.byref(buffer))
- return buffer.value
-
-
-def wtinfo_bool(category, index):
- buffer = wintab.BOOL()
- lib.WTInfoW(category, index, ctypes.byref(buffer))
- return bool(buffer.value)
-
-
-class WintabTablet(Tablet):
- def __init__(self, index):
- self._device = wintab.WTI_DEVICES + index
- self.name = wtinfo_string(self._device, wintab.DVC_NAME).strip()
- self.id = wtinfo_string(self._device, wintab.DVC_PNPID)
-
- hardware = wtinfo_uint(self._device, wintab.DVC_HARDWARE)
- # phys_cursors = hardware & wintab.HWC_PHYSID_CURSORS
-
- n_cursors = wtinfo_uint(self._device, wintab.DVC_NCSRTYPES)
- first_cursor = wtinfo_uint(self._device, wintab.DVC_FIRSTCSR)
-
- self.pressure_axis = wtinfo(self._device, wintab.DVC_NPRESSURE, wintab.AXIS())
-
- self.cursors = []
- self._cursor_map = {}
-
- for i in range(n_cursors):
- cursor = WintabTabletCursor(self, i + first_cursor)
- if not cursor.bogus:
- self.cursors.append(cursor)
- self._cursor_map[i + first_cursor] = cursor
-
- def open(self, window):
- return WintabTabletCanvas(self, window)
-
-
-class WintabTabletCanvas(TabletCanvas):
- override_keys = False
-
- def __init__(self, device, window, msg_base=wintab.WT_DEFBASE):
- super(WintabTabletCanvas, self).__init__(window)
-
- self.device = device
- self.msg_base = msg_base
-
- # Get the extension masks available. Only need to do this once.
- global _extension_masks
- if not _extension_masks:
- _extension_masks = get_extension_masks()
-
-        # Just use the system context, for similarity with OS X and XInput.
-        # WTI_DEFCONTEXT detaches the mouse from the tablet, which is nice,
-        # but not possible on OS X, as far as I know.
- self.context_info = context_info = wintab.LOGCONTEXT()
- wtinfo(wintab.WTI_DEFSYSCTX, 0, context_info)
- context_info.lcMsgBase = msg_base
- context_info.lcOptions |= wintab.CXO_MESSAGES
-
- # If you change this, change definition of PACKET also.
- context_info.lcPktData = (
- wintab.PK_CHANGED | wintab.PK_CURSOR | wintab.PK_BUTTONS |
- wintab.PK_X | wintab.PK_Y | wintab.PK_Z |
- wintab.PK_NORMAL_PRESSURE | wintab.PK_TANGENT_PRESSURE |
- wintab.PK_ORIENTATION) | _extension_masks
- context_info.lcPktMode = 0 # All absolute (PACKETMODE)
-
- self._context = lib.WTOpenW(window._hwnd, ctypes.byref(context_info), True)
- if not self._context:
- raise DeviceOpenException("Couldn't open tablet context")
-
- window._event_handlers[msg_base + wintab.WT_PACKET] = self._event_wt_packet
- window._event_handlers[msg_base + wintab.WT_PROXIMITY] = self._event_wt_proximity
-
- if _extension_masks:
- window._event_handlers[msg_base + wintab.WT_PACKETEXT] = self._event_wt_packetext
-
- self._current_cursor = None
- self._pressure_scale = device.pressure_axis.get_scale()
- self._pressure_bias = device.pressure_axis.get_bias()
-
- self.express_keys = defaultdict(lambda: defaultdict(bool)) # [control_id][location_id]
- self.express_key_ct = 0
- self.touchrings = [] # Not currently implemented.
- self.touchstrips = [] # Not currently implemented.
-
- # Override test
- for tablet_id in range(get_tablet_count()):
- control_count = self.extension_get(wintab.WTX_EXPKEYS2, tablet_id, 0, 0,
- wintab.TABLET_PROPERTY_CONTROLCOUNT)
- self.express_key_ct = control_count
- assert _debug(f"Controls Found: {control_count}")
- if self.override_keys is True:
- for control_id in range(control_count):
- function_count = self.extension_get(wintab.WTX_EXPKEYS2, tablet_id, control_id, 0,
- wintab.TABLET_PROPERTY_FUNCCOUNT)
- for function_id in range(function_count):
- self.extension_set(wintab.WTX_EXPKEYS2, tablet_id, control_id, function_id,
- wintab.TABLET_PROPERTY_OVERRIDE, wintab.BOOL(True))
-
- def extension_get(self, extension, tablet_id, control_id, function_id, property_id, value_type=wintab.UINT):
- prop = wintab.EXTPROPERTY()
-
- prop.version = 0
- prop.tabletIndex = tablet_id
- prop.controlIndex = control_id
- prop.functionIndex = function_id
- prop.propertyID = property_id
- prop.reserved = 0
- prop.dataSize = ctypes.sizeof(value_type)
-
- success = lib.WTExtGet(self._context, extension, ctypes.byref(prop))
- if success:
- return ctypes.cast(prop.data, ctypes.POINTER(value_type)).contents.value
-
- return 0
-
- def extension_set(self, extension, tablet_id, control_id, function_id, property_id, value):
- prop = wintab.EXTPROPERTY()
- prop.version = 0
- prop.tabletIndex = tablet_id
- prop.controlIndex = control_id
- prop.functionIndex = function_id
- prop.propertyID = property_id
- prop.reserved = 0
- prop.dataSize = ctypes.sizeof(value)
- prop.data[0] = value.value
-
- success = lib.WTExtSet(self._context, extension, ctypes.byref(prop))
- if success:
- return True
-
- return False
-
- def close(self):
- lib.WTClose(self._context)
- self._context = None
-
- del self.window._event_handlers[self.msg_base + wintab.WT_PACKET]
- del self.window._event_handlers[self.msg_base + wintab.WT_PROXIMITY]
-
- if _extension_masks:
- del self.window._event_handlers[self.msg_base + wintab.WT_PACKETEXT]
-
- def _set_current_cursor(self, cursor_type):
- if self._current_cursor:
- self.dispatch_event('on_leave', self._current_cursor)
-
- self._current_cursor = self.device._cursor_map.get(cursor_type, None)
-
- if self._current_cursor:
- self.dispatch_event('on_enter', self._current_cursor)
-
- @pyglet.window.win32.Win32EventHandler(0)
- def _event_wt_packet(self, msg, wParam, lParam):
- if lParam != self._context:
- return
-
- packet = wintab.PACKET()
- if lib.WTPacket(self._context, wParam, ctypes.byref(packet)) == 0:
- return
-
- if not packet.pkChanged:
- return
-
- window_x, window_y = self.window.get_location() # TODO cache on window
- window_y = self.window.screen.height - window_y - self.window.height
- x = packet.pkX - window_x
- y = packet.pkY - window_y
- pressure = (packet.pkNormalPressure + self._pressure_bias) * self._pressure_scale
-
- if self._current_cursor is None:
- self._set_current_cursor(packet.pkCursor)
-
- self.dispatch_event('on_motion', self._current_cursor, x, y, pressure, 0., 0., packet.pkButtons)
-
- @pyglet.window.win32.Win32EventHandler(0)
- def _event_wt_packetext(self, msg, wParam, lParam):
- packet = wintab.PACKETEXT()
- if lib.WTPacket(lParam, wParam, ctypes.byref(packet)) == 0:
- return
-
- # Proper context exists in the packet, not the lParam.
- if packet.pkBase.nContext == self._context:
- if packet.pkExpKeys.nControl < self.express_key_ct:
- current_state = self.express_keys[packet.pkExpKeys.nControl][packet.pkExpKeys.nLocation]
- new_state = bool(packet.pkExpKeys.nState)
- if current_state != new_state:
- event_type = "on_express_key_press" if new_state else "on_express_key_release"
-
- self.express_keys[packet.pkExpKeys.nControl][packet.pkExpKeys.nLocation] = new_state
-
- self.dispatch_event(event_type, packet.pkExpKeys.nControl, packet.pkExpKeys.nLocation)
-
- @pyglet.window.win32.Win32EventHandler(0)
- def _event_wt_proximity(self, msg, wParam, lParam):
- if wParam != self._context:
- return
-
- if not lParam & 0xffff0000:
- # Not a hardware proximity event
- return
-
- if not lParam & 0xffff:
- # Going out
- self.dispatch_event('on_leave', self._current_cursor)
-
- # If going in, proximity event will be generated by next event, which
- # can actually grab a cursor id.
- self._current_cursor = None
-
- def on_express_key_press(self, control_id: int, location_id: int):
- """An event called when an ExpressKey is pressed down.
-
- :Parameters:
- `control_id` : int
- Zero-based index number given to the assigned key by the driver.
-                The same control_id may exist in multiple locations; the
-                location_id is used to differentiate them.
-            `location_id` : int
-                Zero-based index indicating the side of the tablet where the
-                control was found. Some tablets may have clusters of
-                ExpressKeys on various areas of the tablet.
- (0 = left, 1 = right, 2 = top, 3 = bottom, 4 = transducer).
-
- :event:
- """
-
- def on_express_key_release(self, control_id: int, location_id: int):
- """An event called when an ExpressKey is released.
-
- :Parameters:
- `control_id` : int
- Zero-based index number given to the assigned key by the driver.
-                The same control_id may exist in multiple locations; the
-                location_id is used to differentiate them.
-            `location_id` : int
-                Zero-based index indicating the side of the tablet where the
-                control was found. Some tablets may have clusters of
-                ExpressKeys on various areas of the tablet.
- (0 = left, 1 = right, 2 = top, 3 = bottom, 4 = transducer).
-
- :event:
- """
-
-
-WintabTabletCanvas.register_event_type('on_express_key_press')
-WintabTabletCanvas.register_event_type('on_express_key_release')
-
-
-class WintabTabletCursor:
- def __init__(self, device, index):
- self.device = device
- self._cursor = wintab.WTI_CURSORS + index
-
- self.name = wtinfo_string(self._cursor, wintab.CSR_NAME).strip()
- self.active = wtinfo_bool(self._cursor, wintab.CSR_ACTIVE)
- pktdata = wtinfo_wtpkt(self._cursor, wintab.CSR_PKTDATA)
-
- # A whole bunch of cursors are reported by the driver, but most of
- # them are hogwash. Make sure a cursor has at least X and Y data
- # before adding it to the device.
- self.bogus = not (pktdata & wintab.PK_X and pktdata & wintab.PK_Y)
- if self.bogus:
- return
-
- self.id = (wtinfo_dword(self._cursor, wintab.CSR_TYPE) << 32) | \
- wtinfo_dword(self._cursor, wintab.CSR_PHYSID)
-
- def __repr__(self):
- return 'WintabCursor(%r)' % self.name
-
-
-def get_spec_version():
- spec_version = wtinfo_word(wintab.WTI_INTERFACE, wintab.IFC_SPECVERSION)
- return spec_version
-
-
-def get_interface_name():
- interface_name = wtinfo_string(wintab.WTI_INTERFACE, wintab.IFC_WINTABID)
- return interface_name
-
-
-def get_implementation_version():
- impl_version = wtinfo_word(wintab.WTI_INTERFACE, wintab.IFC_IMPLVERSION)
- return impl_version
-
-
-def extension_index(ext):
- """Check if a particular extension exists within the driver."""
- exists = True
- i = 0
- index = 0xFFFFFFFF
-
- while exists:
- tag = wintab.UINT()
- exists = lib.WTInfoW(wintab.WTI_EXTENSIONS + i, wintab.EXT_TAG, ctypes.byref(tag))
- if tag.value == ext:
- index = i
- break
-
- i += 1
-
- if index != 0xFFFFFFFF:
- return index
-
- return None
-
-
-def get_extension_masks():
- """Determine which extension support is available by getting the masks."""
- masks = 0
- tr_idx = extension_index(wintab.WTX_TOUCHRING)
- if tr_idx is not None:
- assert _debug("Touchring support found")
- masks |= wtinfo_uint(wintab.WTI_EXTENSIONS + tr_idx, wintab.EXT_MASK)
- else:
- assert _debug("Touchring extension not found.")
-
- ts_idx = extension_index(wintab.WTX_TOUCHSTRIP)
- if ts_idx is not None:
- assert _debug("Touchstrip support found.")
- masks |= wtinfo_uint(wintab.WTI_EXTENSIONS + ts_idx, wintab.EXT_MASK)
- else:
- assert _debug("Touchstrip extension not found.")
-
- expkeys_idx = extension_index(wintab.WTX_EXPKEYS2)
- if expkeys_idx is not None:
- assert _debug("ExpressKey support found.")
- masks |= wtinfo_uint(wintab.WTI_EXTENSIONS + expkeys_idx, wintab.EXT_MASK)
- else:
- assert _debug("ExpressKey extension not found.")
-
- return masks
-
-
-def get_tablet_count():
- """Return just the number of current devices."""
- spec_version = get_spec_version()
- assert _debug(f"Wintab Version: {spec_version}")
- if spec_version < 0x101:
- return 0
-
- n_devices = wtinfo_uint(wintab.WTI_INTERFACE, wintab.IFC_NDEVICES)
- return n_devices
-
-
-_extension_masks = None
-
-
-def get_tablets(display=None):
- # Require spec version 1.1 or greater
- n_devices = get_tablet_count()
- if not n_devices:
- return []
-
- devices = [WintabTablet(i) for i in range(n_devices)]
- return devices
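
For context, the classes above are normally consumed through pyglet's input API. A minimal sketch of typical usage (Windows-only; the handlers match the event names dispatched above):

    import pyglet

    window = pyglet.window.Window()
    tablets = pyglet.input.get_tablets()   # empty list if no Wintab driver/device

    if tablets:
        canvas = tablets[0].open(window)

        @canvas.event
        def on_motion(cursor, x, y, pressure, tilt_x, tilt_y, buttons):
            print(cursor.name, x, y, round(pressure, 3), buttons)

        @canvas.event
        def on_express_key_press(control_id, location_id):
            print('ExpressKey', control_id, 'pressed at location', location_id)

    pyglet.app.run()
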
diff --git a/spaces/adrabi-abderrahim/english-pronunciation-practice/app.py b/spaces/adrabi-abderrahim/english-pronunciation-practice/app.py
deleted file mode 100644
index 90e324b478d9c67b8acbb3cd1fa0946e024f07f8..0000000000000000000000000000000000000000
--- a/spaces/adrabi-abderrahim/english-pronunciation-practice/app.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from gtts import gTTS
-from transformers import pipeline
-import gradio as gr
-import uuid
-
-asr = pipeline('automatic-speech-recognition', "facebook/wav2vec2-conformer-rope-large-960h-ft")
-corrector = pipeline("text2text-generation", model="pszemraj/grammar-synthesis-small")
-
-transcribe = lambda audio: asr(audio)['text'].lower()
-
-def to_audio(s):
- audio_path = f'/tmp/{uuid.uuid4()}.mp3'
- tts = gTTS(s, tld='us')
- tts.save(audio_path)
- return audio_path
-
-
-def transcription(audio, history):
- if audio:
- message = transcribe(audio)
- history.append(( (audio, ) , message))
- results = corrector(message)
- results = '\n'.join([t['generated_text'] for t in results])
- history.append( (None, f'**[Grammar and examples]**\n {results}') )
-
- return history
-
-def chat(message, history):
- audio_path = to_audio(message)
- history.append((message, (audio_path,)))
- results = corrector(message)
- results = '\n'.join([t['generated_text'] for t in results])
- history.append( (None, f'**[Grammar and examples]**\n {results}') )
-
- return None, history
-
-with gr.Blocks(theme=gr.themes.Soft()) as learning:
- gr.Markdown('# The main aim of this app is to help English learners to speak fluently.')
-
- chatbot = gr.Chatbot()
-
- with gr.Row():
- message = gr.Textbox(label='Send your message to TTS')
- microphone = gr.Audio(label="Transcribe", source="microphone", type="filepath")
-
- microphone.change(transcription, [microphone, chatbot], [chatbot])
- microphone.change(lambda:None, None, microphone)
- message.submit(chat, [message, chatbot], [message, chatbot])
-
-learning.launch()
\ No newline at end of file
diff --git a/spaces/akhaliq/JoJoGAN/op/__init__.py b/spaces/akhaliq/JoJoGAN/op/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py b/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py
deleted file mode 100644
index 77a6e05eae6a939ae7575ae70b7173644141fffe..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from encoder.data_objects.random_cycler import RandomCycler
-from encoder.data_objects.speaker_batch import SpeakerBatch
-from encoder.data_objects.speaker import Speaker
-from encoder.params_data import partials_n_frames
-from torch.utils.data import Dataset, DataLoader
-from pathlib import Path
-
-# TODO: improve with a pool of speakers for data efficiency
-
-class SpeakerVerificationDataset(Dataset):
- def __init__(self, datasets_root: Path):
- self.root = datasets_root
- speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()]
- if len(speaker_dirs) == 0:
- raise Exception("No speakers found. Make sure you are pointing to the directory "
- "containing all preprocessed speaker directories.")
- self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs]
- self.speaker_cycler = RandomCycler(self.speakers)
-
- def __len__(self):
- return int(1e10)
-
- def __getitem__(self, index):
- return next(self.speaker_cycler)
-
- def get_logs(self):
- log_string = ""
- for log_fpath in self.root.glob("*.txt"):
- with log_fpath.open("r") as log_file:
- log_string += "".join(log_file.readlines())
- return log_string
-
-
-class SpeakerVerificationDataLoader(DataLoader):
- def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None,
- batch_sampler=None, num_workers=0, pin_memory=False, timeout=0,
- worker_init_fn=None):
- self.utterances_per_speaker = utterances_per_speaker
-
- super().__init__(
- dataset=dataset,
- batch_size=speakers_per_batch,
- shuffle=False,
- sampler=sampler,
- batch_sampler=batch_sampler,
- num_workers=num_workers,
- collate_fn=self.collate,
- pin_memory=pin_memory,
- drop_last=False,
- timeout=timeout,
- worker_init_fn=worker_init_fn
- )
-
- def collate(self, speakers):
- return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
-
\ No newline at end of file
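
A short sketch of how these two classes are typically wired together; the dataset path and batch parameters are illustrative. Because ``__len__`` is effectively infinite, training loops break out explicitly rather than exhausting the loader:

    from pathlib import Path

    dataset = SpeakerVerificationDataset(Path("datasets/encoder_preprocessed"))
    loader = SpeakerVerificationDataLoader(
        dataset,
        speakers_per_batch=64,        # used as the DataLoader batch_size
        utterances_per_speaker=10,    # consumed by the custom collate()
        num_workers=4,
    )

    for step, speaker_batch in enumerate(loader):
        # speaker_batch is a SpeakerBatch of 64 speakers x 10 partial utterances
        if step == 3:
            break
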
diff --git a/spaces/alalalyuqing/White-box-Cartoonization/README.md b/spaces/alalalyuqing/White-box-Cartoonization/README.md
deleted file mode 100644
index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000
--- a/spaces/alalalyuqing/White-box-Cartoonization/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-python_version: 3.7
-title: White Box Cartoonization
-emoji: 📚
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: hylee/White-box-Cartoonization
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/alamin655/websurfx/Dockerfile b/spaces/alamin655/websurfx/Dockerfile
deleted file mode 100644
index f779730963d287d426904d4758f12b58c14f55bd..0000000000000000000000000000000000000000
--- a/spaces/alamin655/websurfx/Dockerfile
+++ /dev/null
@@ -1,28 +0,0 @@
-FROM rust:latest AS chef
-# We only pay the installation cost once,
-# it will be cached from the second build onwards
-RUN cargo install cargo-chef --locked
-
-WORKDIR /app
-
-FROM chef AS planner
-COPY . .
-RUN cargo chef prepare --recipe-path recipe.json
-
-FROM chef AS builder
-COPY --from=planner /app/recipe.json recipe.json
-# Build dependencies - this is the caching Docker layer!
-RUN cargo chef cook --release --recipe-path recipe.json
-
-# Build application
-COPY . .
-RUN cargo install --path .
-
-# We do not need the Rust toolchain to run the binary!
-FROM gcr.io/distroless/cc-debian12
-COPY --from=builder /app/public/ /opt/websurfx/public/
-COPY --from=builder /app/websurfx/config.lua /etc/xdg/websurfx/config.lua
-COPY --from=builder /app/websurfx/allowlist.txt /etc/xdg/websurfx/allowlist.txt
-COPY --from=builder /app/websurfx/blocklist.txt /etc/xdg/websurfx/blocklist.txt
-COPY --from=builder /usr/local/cargo/bin/* /usr/local/bin/
-CMD ["websurfx"]
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_editable.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_editable.py
deleted file mode 100644
index cf7b01aed5afcef2924ebb10c15499c4497d5ea2..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_editable.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import logging
-import os
-from typing import Optional
-
-from pip._vendor.pep517.wrappers import HookMissing, Pep517HookCaller
-
-from pip._internal.utils.subprocess import runner_with_spinner_message
-
-logger = logging.getLogger(__name__)
-
-
-def build_wheel_editable(
- name: str,
- backend: Pep517HookCaller,
- metadata_directory: str,
- tempd: str,
-) -> Optional[str]:
- """Build one InstallRequirement using the PEP 660 build process.
-
- Returns path to wheel if successfully built. Otherwise, returns None.
- """
- assert metadata_directory is not None
- try:
- logger.debug("Destination directory: %s", tempd)
-
- runner = runner_with_spinner_message(
- f"Building editable for {name} (pyproject.toml)"
- )
- with backend.subprocess_runner(runner):
- try:
- wheel_name = backend.build_editable(
- tempd,
- metadata_directory=metadata_directory,
- )
- except HookMissing as e:
- logger.error(
- "Cannot build editable %s because the build "
- "backend does not have the %s hook",
- name,
- e,
- )
- return None
- except Exception:
- logger.error("Failed building editable for %s", name)
- return None
- return os.path.join(tempd, wheel_name)
diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py
deleted file mode 100644
index a3d2ca32f1f430e5356106e719a816da56f9f887..0000000000000000000000000000000000000000
--- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py
+++ /dev/null
@@ -1,649 +0,0 @@
-from __future__ import print_function, unicode_literals, division
-
-import os
-import re
-import codecs
-import platform
-
-from subprocess import check_output
-from tempfile import mkdtemp
-from functools import partial
-
-try:
- from configparser import ConfigParser
-except ImportError:
- from ConfigParser import ConfigParser
-
-from .utils import log
-from .utils.file_utils import DirectoryProcessor
-from .utils.file_utils import verify_dir
-
-
-class Rouge155(object):
- """
- This is a wrapper for the ROUGE 1.5.5 summary evaluation package.
- This class is designed to simplify the evaluation process by:
-
- 1) Converting summaries into a format ROUGE understands.
- 2) Generating the ROUGE configuration file automatically based
- on filename patterns.
-
- This class can be used within Python like this:
-
- rouge = Rouge155()
- rouge.system_dir = 'test/systems'
- rouge.model_dir = 'test/models'
-
- # The system filename pattern should contain one group that
- # matches the document ID.
- rouge.system_filename_pattern = 'SL.P.10.R.11.SL062003-(\d+).html'
-
- # The model filename pattern has '#ID#' as a placeholder for the
- # document ID. If there are multiple model summaries, pyrouge
- # will use the provided regex to automatically match them with
- # the corresponding system summary. Here, [A-Z] matches
- # multiple model summaries for a given #ID#.
- rouge.model_filename_pattern = 'SL.P.10.R.[A-Z].SL062003-#ID#.html'
-
- rouge_output = rouge.evaluate()
- print(rouge_output)
-        output_dict = rouge.output_to_dict(rouge_output)
- print(output_dict)
- -> {'rouge_1_f_score': 0.95652,
- 'rouge_1_f_score_cb': 0.95652,
- 'rouge_1_f_score_ce': 0.95652,
- 'rouge_1_precision': 0.95652,
- [...]
-
-
- To evaluate multiple systems:
-
- rouge = Rouge155()
- rouge.system_dir = '/PATH/TO/systems'
- rouge.model_dir = 'PATH/TO/models'
- for system_id in ['id1', 'id2', 'id3']:
- rouge.system_filename_pattern = \
-            'SL.P.10.R.{}.SL062003-(\d+).html'.format(system_id)
- rouge.model_filename_pattern = \
- 'SL.P.10.R.[A-Z].SL062003-#ID#.html'
- rouge_output = rouge.evaluate(system_id)
- print(rouge_output)
-
- """
-
- def __init__(self, rouge_dir=None, rouge_args=None, log_level=None):
- """
- Create a Rouge155 object.
-
- rouge_dir: Directory containing Rouge-1.5.5.pl
- rouge_args: Arguments to pass through to ROUGE if you
- don't want to use the default pyrouge
- arguments.
-
- """
- if log_level is None:
- self.log = log.get_global_console_logger()
- else:
- self.log = log.get_global_console_logger(log_level)
- self.__set_dir_properties()
- self._config_file = None
- self._settings_file = self.__get_config_path()
- self.__set_rouge_dir(rouge_dir)
- self.args = self.__clean_rouge_args(rouge_args)
- self._system_filename_pattern = None
- self._model_filename_pattern = None
-
- def save_home_dir(self):
- config = ConfigParser()
- section = "pyrouge settings"
- config.add_section(section)
- config.set(section, "home_dir", self._home_dir)
- with open(self._settings_file, "w") as f:
- config.write(f)
- self.log.info("Set ROUGE home directory to {}.".format(self._home_dir))
-
- @property
- def settings_file(self):
- """
-        Path of the settings file, which stores the ROUGE home dir.
-
- """
- return self._settings_file
-
- @property
- def bin_path(self):
- """
- The full path of the ROUGE binary (although it's technically
- a script), i.e. rouge_home_dir/ROUGE-1.5.5.pl
-
- """
- if self._bin_path is None:
- raise Exception(
- "ROUGE path not set. Please set the ROUGE home directory "
- "and ensure that ROUGE-1.5.5.pl exists in it."
- )
- return self._bin_path
-
- @property
- def system_filename_pattern(self):
- """
-        The regular expression pattern for matching system summary
-        filenames. The regex must contain one group that captures the
-        document ID.
-
- E.g. "SL.P.10.R.11.SL062003-(\d+).html" will match the system
- filenames in the SPL2003/system folder of the ROUGE SPL example
- in the "sample-test" folder.
-
- Currently, there is no support for multiple systems.
-
- """
- return self._system_filename_pattern
-
- @system_filename_pattern.setter
- def system_filename_pattern(self, pattern):
- self._system_filename_pattern = pattern
-
- @property
- def model_filename_pattern(self):
- """
- The regular expression pattern for matching model summary
- filenames. The pattern needs to contain the string "#ID#",
- which is a placeholder for the document ID.
-
- E.g. "SL.P.10.R.[A-Z].SL062003-#ID#.html" will match the model
- filenames in the SPL2003/system folder of the ROUGE SPL
- example in the "sample-test" folder.
-
- "#ID#" is a placeholder for the document ID which has been
- matched by the "(\d+)" part of the system filename pattern.
- The different model summaries for a given document ID are
- matched by the "[A-Z]" part.
-
- """
- return self._model_filename_pattern
-
- @model_filename_pattern.setter
- def model_filename_pattern(self, pattern):
- self._model_filename_pattern = pattern
-
- @property
- def config_file(self):
- return self._config_file
-
- @config_file.setter
- def config_file(self, path):
- config_dir, _ = os.path.split(path)
- verify_dir(config_dir, "configuration file")
- self._config_file = path
-
- def split_sentences(self):
- """
- ROUGE requires texts split into sentences. In case the texts
- are not already split, this method can be used.
-
- """
- from pyrouge.utils.sentence_splitter import PunktSentenceSplitter
-
- self.log.info("Splitting sentences.")
- ss = PunktSentenceSplitter()
- sent_split_to_string = lambda s: "\n".join(ss.split(s))
- process_func = partial(
- DirectoryProcessor.process, function=sent_split_to_string
- )
- self.__process_summaries(process_func)
-
- @staticmethod
- def convert_summaries_to_rouge_format(input_dir, output_dir):
- """
- Convert all files in input_dir into a format ROUGE understands
- and saves the files to output_dir. The input files are assumed
- to be plain text with one sentence per line.
-
- input_dir: Path of directory containing the input files.
- output_dir: Path of directory in which the converted files
- will be saved.
-
- """
- DirectoryProcessor.process(
- input_dir, output_dir, Rouge155.convert_text_to_rouge_format
- )
-
- @staticmethod
- def convert_text_to_rouge_format(text, title="dummy title"):
- """
- Convert a text to a format ROUGE understands. The text is
- assumed to contain one sentence per line.
-
-        text: The text to convert, containing one sentence per line.
- title: Optional title for the text. The title will appear
- in the converted file, but doesn't seem to have
- any other relevance.
-
- Returns: The converted text as string.
-
- """
- sentences = text.split("\n")
- sent_elems = [
-            '<a name="{i}">[{i}]</a> '
-            '<a href="#{i}" id={i}>{text}</a>'.format(i=i, text=sent)
-            for i, sent in enumerate(sentences, start=1)
-        ]
-        html = """<html>
-<head>
-<title>{title}</title>
-</head>
-<body bgcolor="white">
-{elems}
-</body>
-</html>""".format(
- title=title, elems="\n".join(sent_elems)
- )
-
- return html
-
- @staticmethod
- def write_config_static(
- system_dir,
- system_filename_pattern,
- model_dir,
- model_filename_pattern,
- config_file_path,
- system_id=None,
- ):
- """
- Write the ROUGE configuration file, which is basically a list
- of system summary files and their corresponding model summary
- files.
-
- pyrouge uses regular expressions to automatically find the
- matching model summary files for a given system summary file
- (cf. docstrings for system_filename_pattern and
- model_filename_pattern).
-
- system_dir: Path of directory containing
- system summaries.
- system_filename_pattern: Regex string for matching
- system summary filenames.
- model_dir: Path of directory containing
- model summaries.
- model_filename_pattern: Regex string for matching model
- summary filenames.
- config_file_path: Path of the configuration file.
- system_id: Optional system ID string which
- will appear in the ROUGE output.
-
- """
- system_filenames = [f for f in os.listdir(system_dir)]
- system_models_tuples = []
-
- system_filename_pattern = re.compile(system_filename_pattern)
- for system_filename in sorted(system_filenames):
- match = system_filename_pattern.match(system_filename)
- if match:
- id = match.groups(0)[0]
- model_filenames = Rouge155.__get_model_filenames_for_id(
- id, model_dir, model_filename_pattern
- )
- system_models_tuples.append((system_filename, sorted(model_filenames)))
- if not system_models_tuples:
- raise Exception(
- "Did not find any files matching the pattern {} "
- "in the system summaries directory {}.".format(
- system_filename_pattern.pattern, system_dir
- )
- )
-
- with codecs.open(config_file_path, "w", encoding="utf-8") as f:
-            f.write('<ROUGE-EVAL version="1.55">')
- for task_id, (system_filename, model_filenames) in enumerate(
- system_models_tuples, start=1
- ):
-
- eval_string = Rouge155.__get_eval_string(
- task_id,
- system_id,
- system_dir,
- system_filename,
- model_dir,
- model_filenames,
- )
- f.write(eval_string)
- f.write("")
-
- def write_config(self, config_file_path=None, system_id=None):
- """
- Write the ROUGE configuration file, which is basically a list
- of system summary files and their matching model summary files.
-
- This is a non-static version of write_config_file_static().
-
- config_file_path: Path of the configuration file.
- system_id: Optional system ID string which will
- appear in the ROUGE output.
-
- """
- if not system_id:
- system_id = 1
- if (not config_file_path) or (not self._config_dir):
- self._config_dir = mkdtemp()
- config_filename = "rouge_conf.xml"
- else:
- config_dir, config_filename = os.path.split(config_file_path)
- verify_dir(config_dir, "configuration file")
- self._config_file = os.path.join(self._config_dir, config_filename)
- Rouge155.write_config_static(
- self._system_dir,
- self._system_filename_pattern,
- self._model_dir,
- self._model_filename_pattern,
- self._config_file,
- system_id,
- )
- self.log.info("Written ROUGE configuration to {}".format(self._config_file))
-
- def evaluate(self, system_id=1, rouge_args=None):
- """
- Run ROUGE to evaluate the system summaries in system_dir against
- the model summaries in model_dir. The summaries are assumed to
- be in the one-sentence-per-line HTML format ROUGE understands.
-
- system_id: Optional system ID which will be printed in
- ROUGE's output.
-
- Returns: Rouge output as string.
-
- """
- self.write_config(system_id=system_id)
- options = self.__get_options(rouge_args)
- command = [self._bin_path] + options
- env = os.environ.copy()
- if hasattr(self, "_home_dir") and self._home_dir:
- env["ROUGE_EVAL_HOME"] = self._home_dir
- self.log.info("Running ROUGE with command {}".format(" ".join(command)))
- rouge_output = check_output(command, env=env).decode("UTF-8")
- return rouge_output
-
- def convert_and_evaluate(self, system_id=1, split_sentences=False, rouge_args=None):
- """
- Convert plain text summaries to ROUGE format and run ROUGE to
- evaluate the system summaries in system_dir against the model
- summaries in model_dir. Optionally split texts into sentences
- in case they aren't already.
-
- This is just a convenience method combining
- convert_summaries_to_rouge_format() and evaluate().
-
- split_sentences: Optional argument specifying if
- sentences should be split.
- system_id: Optional system ID which will be printed
- in ROUGE's output.
-
- Returns: ROUGE output as string.
-
- """
- if split_sentences:
- self.split_sentences()
- self.__write_summaries()
- rouge_output = self.evaluate(system_id, rouge_args)
- return rouge_output
-
- def output_to_dict(self, output):
- """
- Convert the ROUGE output into python dictionary for further
- processing.
-
- """
- # 0 ROUGE-1 Average_R: 0.02632 (95%-conf.int. 0.02632 - 0.02632)
- pattern = re.compile(
- r"(\d+) (ROUGE-\S+) (Average_\w): (\d.\d+) "
- r"\(95%-conf.int. (\d.\d+) - (\d.\d+)\)"
- )
- results = {}
- for line in output.split("\n"):
- match = pattern.match(line)
- if match:
- (
- sys_id,
- rouge_type,
- measure,
- result,
- conf_begin,
- conf_end,
- ) = match.groups()
- measure = {
- "Average_R": "recall",
- "Average_P": "precision",
- "Average_F": "f_score",
- }[measure]
- rouge_type = rouge_type.lower().replace("-", "_")
- key = "{}_{}".format(rouge_type, measure)
- results[key] = float(result)
- results["{}_cb".format(key)] = float(conf_begin)
- results["{}_ce".format(key)] = float(conf_end)
- return results
-
- ###################################################################
- # Private methods
-
- def __set_rouge_dir(self, home_dir=None):
- """
-        Verify presence of ROUGE-1.5.5.pl and data folder, and set
- those paths.
-
- """
- if not home_dir:
- self._home_dir = self.__get_rouge_home_dir_from_settings()
- else:
- self._home_dir = home_dir
- self.save_home_dir()
- self._bin_path = os.path.join(self._home_dir, "ROUGE-1.5.5.pl")
- self.data_dir = os.path.join(self._home_dir, "data")
- if not os.path.exists(self._bin_path):
- raise Exception(
- "ROUGE binary not found at {}. Please set the "
- "correct path by running pyrouge_set_rouge_path "
- "/path/to/rouge/home.".format(self._bin_path)
- )
-
- def __get_rouge_home_dir_from_settings(self):
- config = ConfigParser()
- with open(self._settings_file) as f:
- if hasattr(config, "read_file"):
- config.read_file(f)
- else:
- # use deprecated python 2.x method
- config.readfp(f)
- rouge_home_dir = config.get("pyrouge settings", "home_dir")
- return rouge_home_dir
-
- @staticmethod
- def __get_eval_string(
- task_id, system_id, system_dir, system_filename, model_dir, model_filenames
- ):
- """
- ROUGE can evaluate several system summaries for a given text
- against several model summaries, i.e. there is an m-to-n
- relation between system and model summaries. The system
-        summaries are listed in the <PEERS> tag and the model summaries
-        in the <MODELS> tag. pyrouge currently only supports one system
- summary per text, i.e. it assumes a 1-to-n relation between
- system and model summaries.
-
- """
-        peer_elems = '<P ID="{id}">{name}</P>'.format(
-            id=system_id, name=system_filename
-        )
-
- model_elems = [
-            '<M ID="{id}">{name}</M>'.format(id=chr(65 + i), name=name)
- for i, name in enumerate(model_filenames)
- ]
-
- model_elems = "\n\t\t\t".join(model_elems)
-        eval_string = """
-    <EVAL ID="{task_id}">
-        <MODEL-ROOT>{model_root}</MODEL-ROOT>
-        <PEER-ROOT>{peer_root}</PEER-ROOT>
-        <INPUT-FORMAT TYPE="SEE">
-        </INPUT-FORMAT>
-        <PEERS>
-            {peer_elems}
-        </PEERS>
-        <MODELS>
-            {model_elems}
-        </MODELS>
-    </EVAL>
-""".format(
- task_id=task_id,
- model_root=model_dir,
- model_elems=model_elems,
- peer_root=system_dir,
- peer_elems=peer_elems,
- )
- return eval_string
-
- def __process_summaries(self, process_func):
- """
- Helper method that applies process_func to the files in the
- system and model folders and saves the resulting files to new
- system and model folders.
-
- """
- temp_dir = mkdtemp()
- new_system_dir = os.path.join(temp_dir, "system")
- os.mkdir(new_system_dir)
- new_model_dir = os.path.join(temp_dir, "model")
- os.mkdir(new_model_dir)
- self.log.info(
- "Processing summaries. Saving system files to {} and "
- "model files to {}.".format(new_system_dir, new_model_dir)
- )
- process_func(self._system_dir, new_system_dir)
- process_func(self._model_dir, new_model_dir)
- self._system_dir = new_system_dir
- self._model_dir = new_model_dir
-
- def __write_summaries(self):
- self.log.info("Writing summaries.")
- self.__process_summaries(self.convert_summaries_to_rouge_format)
-
- @staticmethod
- def __get_model_filenames_for_id(id, model_dir, model_filenames_pattern):
- pattern = re.compile(model_filenames_pattern.replace("#ID#", id))
- model_filenames = [f for f in os.listdir(model_dir) if pattern.match(f)]
- if not model_filenames:
- raise Exception(
- "Could not find any model summaries for the system"
- " summary with ID {}. Specified model filename pattern was: "
- "{}".format(id, model_filenames_pattern)
- )
- return model_filenames
-
- def __get_options(self, rouge_args=None):
- """
- Get supplied command line arguments for ROUGE or use default
- ones.
-
- """
- if self.args:
- options = self.args.split()
- elif rouge_args:
- options = rouge_args.split()
- else:
- options = [
- "-e",
- self._data_dir,
- "-c",
- 95,
- "-2",
- "-1",
- "-U",
- "-r",
- 1000,
- "-n",
- 4,
- "-w",
- 1.2,
- "-a",
- ]
- options = list(map(str, options))
-
- options = self.__add_config_option(options)
- return options
-
- def __create_dir_property(self, dir_name, docstring):
- """
- Generate getter and setter for a directory property.
-
- """
- property_name = "{}_dir".format(dir_name)
- private_name = "_" + property_name
- setattr(self, private_name, None)
-
- def fget(self):
- return getattr(self, private_name)
-
- def fset(self, path):
- verify_dir(path, dir_name)
- setattr(self, private_name, path)
-
- p = property(fget=fget, fset=fset, doc=docstring)
- setattr(self.__class__, property_name, p)
-
- def __set_dir_properties(self):
- """
- Automatically generate the properties for directories.
-
- """
- directories = [
- ("home", "The ROUGE home directory."),
- ("data", "The path of the ROUGE 'data' directory."),
- ("system", "Path of the directory containing system summaries."),
- ("model", "Path of the directory containing model summaries."),
- ]
- for (dirname, docstring) in directories:
- self.__create_dir_property(dirname, docstring)
-
- def __clean_rouge_args(self, rouge_args):
- """
- Remove enclosing quotation marks, if any.
-
- """
- if not rouge_args:
- return
- quot_mark_pattern = re.compile('"(.+)"')
- match = quot_mark_pattern.match(rouge_args)
- if match:
- cleaned_args = match.group(1)
- return cleaned_args
- else:
- return rouge_args
-
- def __add_config_option(self, options):
- return options + ["-m"] + [self._config_file]
-
- def __get_config_path(self):
- if platform.system() == "Windows":
- parent_dir = os.getenv("APPDATA")
- config_dir_name = "pyrouge"
- elif os.name == "posix":
- parent_dir = os.path.expanduser("~")
- config_dir_name = ".pyrouge"
- else:
- parent_dir = os.path.dirname(__file__)
- config_dir_name = ""
- config_dir = os.path.join(parent_dir, config_dir_name)
- if not os.path.exists(config_dir):
- os.makedirs(config_dir)
- return os.path.join(config_dir, "settings.ini")
-
-
-if __name__ == "__main__":
- import argparse
- from utils.argparsers import rouge_path_parser
-
- parser = argparse.ArgumentParser(parents=[rouge_path_parser])
- args = parser.parse_args()
-
- rouge = Rouge155(args.rouge_home)
- rouge.save_home_dir()
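
A small, self-contained check of the parsing logic in ``output_to_dict()``, using the sample ROUGE line quoted in its inline comment; this sketch is illustrative and bypasses ``__init__`` so no ROUGE installation is needed:

    rouge = Rouge155.__new__(Rouge155)   # skip __init__: output_to_dict only reads its argument
    sample = "0 ROUGE-1 Average_R: 0.02632 (95%-conf.int. 0.02632 - 0.02632)"
    print(rouge.output_to_dict(sample))
    # -> {'rouge_1_recall': 0.02632, 'rouge_1_recall_cb': 0.02632, 'rouge_1_recall_ce': 0.02632}
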
diff --git a/spaces/allknowingroger/Image-Models-Test166/README.md b/spaces/allknowingroger/Image-Models-Test166/README.md
deleted file mode 100644
index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test166/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test
----
-
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test74/README.md b/spaces/allknowingroger/Image-Models-Test74/README.md
deleted file mode 100644
index 7b8fa7cf47c8831cfa8e64966fbf7e7a32f6685e..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test74/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test73
----
-
-
\ No newline at end of file
diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Gravityengine.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Gravityengine.py
deleted file mode 100644
index f0cd09daaaae0adaa349f91139dc60c7ac79c028..0000000000000000000000000000000000000000
--- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Gravityengine.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://gpt4.xunika.uk/'
-model = ['gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
- headers = {
- 'Content-Type': 'application/json',
- }
- data = {
- 'model': model,
- 'temperature': 0.7,
- 'presence_penalty': 0,
- 'messages': messages,
- }
-    response = requests.post(url + '/api/openai/v1/chat/completions',
-                             headers=headers, json=data, stream=True)
-
- yield response.json()['choices'][0]['message']['content']
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
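
For reference, a short illustrative sketch of how this provider's generator would be consumed directly (the message content is made up):

    messages = [{'role': 'user', 'content': 'Say hello in one word.'}]
    for chunk in _create_completion(model='gpt-3.5-turbo-16k', messages=messages, stream=True):
        print(chunk)
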
diff --git a/spaces/antonovmaxim/text-generation-webui-space/modules/chat.py b/spaces/antonovmaxim/text-generation-webui-space/modules/chat.py
deleted file mode 100644
index 3055a97a65bd4bd2137f6ce1ec182e8d63637a13..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/modules/chat.py
+++ /dev/null
@@ -1,592 +0,0 @@
-import ast
-import base64
-import copy
-import io
-import json
-import logging
-import re
-from datetime import datetime
-from pathlib import Path
-
-import yaml
-from PIL import Image
-
-import modules.shared as shared
-from modules.extensions import apply_extensions
-from modules.html_generator import chat_html_wrapper, make_thumbnail
-from modules.text_generation import (generate_reply, get_encoded_length,
- get_max_prompt_length)
-from modules.utils import replace_all
-
-
-def get_turn_substrings(state, instruct=False):
- if instruct:
- if 'turn_template' not in state or state['turn_template'] == '':
- template = '<|user|>\n<|user-message|>\n<|bot|>\n<|bot-message|>\n'
- else:
- template = state['turn_template'].replace(r'\n', '\n')
- else:
- template = '<|user|>: <|user-message|>\n<|bot|>: <|bot-message|>\n'
-
- replacements = {
- '<|user|>': state['name1_instruct' if instruct else 'name1'].strip(),
- '<|bot|>': state['name2_instruct' if instruct else 'name2'].strip(),
- }
-
- output = {
- 'user_turn': template.split('<|bot|>')[0],
- 'bot_turn': '<|bot|>' + template.split('<|bot|>')[1],
- 'user_turn_stripped': template.split('<|bot|>')[0].split('<|user-message|>')[0],
- 'bot_turn_stripped': '<|bot|>' + template.split('<|bot|>')[1].split('<|bot-message|>')[0],
- }
-
- for k in output:
- output[k] = replace_all(output[k], replacements)
-
- return output
-
-
-def generate_chat_prompt(user_input, state, **kwargs):
- impersonate = kwargs.get('impersonate', False)
- _continue = kwargs.get('_continue', False)
- also_return_rows = kwargs.get('also_return_rows', False)
- history = state.get('history', shared.history['internal'])
- is_instruct = state['mode'] == 'instruct'
-
- # Finding the maximum prompt size
- chat_prompt_size = state['chat_prompt_size']
- if shared.soft_prompt:
- chat_prompt_size -= shared.soft_prompt_tensor.shape[1]
-
- max_length = min(get_max_prompt_length(state), chat_prompt_size)
-
- all_substrings = {
- 'chat': get_turn_substrings(state, instruct=False),
- 'instruct': get_turn_substrings(state, instruct=True)
- }
- substrings = all_substrings['instruct' if is_instruct else 'chat']
-
- # Creating the template for "chat-instruct" mode
- if state['mode'] == 'chat-instruct':
- wrapper = ''
- command = state['chat-instruct_command'].replace('<|character|>', state['name2'] if not impersonate else state['name1'])
- wrapper += state['context_instruct']
- wrapper += all_substrings['instruct']['user_turn'].replace('<|user-message|>', command)
- wrapper += all_substrings['instruct']['bot_turn_stripped']
- if impersonate:
- wrapper += substrings['user_turn_stripped'].rstrip(' ')
- else:
- wrapper += apply_extensions("bot_prefix", substrings['bot_turn_stripped'].rstrip(' '))
- else:
- wrapper = '<|prompt|>'
-
- # Building the prompt
- min_rows = 3
- i = len(history) - 1
- rows = [state['context_instruct'] if is_instruct else f"{state['context'].strip()}\n"]
- while i >= 0 and get_encoded_length(wrapper.replace('<|prompt|>', ''.join(rows))) < max_length:
- if _continue and i == len(history) - 1:
- rows.insert(1, substrings['bot_turn_stripped'] + history[i][1].strip())
- else:
- rows.insert(1, substrings['bot_turn'].replace('<|bot-message|>', history[i][1].strip()))
-
- string = history[i][0]
- if string not in ['', '<|BEGIN-VISIBLE-CHAT|>']:
- rows.insert(1, replace_all(substrings['user_turn'], {'<|user-message|>': string.strip(), '<|round|>': str(i)}))
-
- i -= 1
-
- if impersonate:
- if state['mode'] == 'chat-instruct':
- min_rows = 1
- else:
- min_rows = 2
- rows.append(substrings['user_turn_stripped'].rstrip(' '))
- elif not _continue:
- # Adding the user message
- if len(user_input) > 0:
- rows.append(replace_all(substrings['user_turn'], {'<|user-message|>': user_input.strip(), '<|round|>': str(len(history))}))
-
- # Adding the Character prefix
- if state['mode'] != 'chat-instruct':
- rows.append(apply_extensions("bot_prefix", substrings['bot_turn_stripped'].rstrip(' ')))
-
- while len(rows) > min_rows and get_encoded_length(wrapper.replace('<|prompt|>', ''.join(rows))) >= max_length:
- rows.pop(1)
-
- prompt = wrapper.replace('<|prompt|>', ''.join(rows))
- if also_return_rows:
- return prompt, rows
- else:
- return prompt
-
-
-def get_stopping_strings(state):
- stopping_strings = []
- if state['mode'] in ['instruct', 'chat-instruct']:
- stopping_strings += [
- state['turn_template'].split('<|user-message|>')[1].split('<|bot|>')[0] + '<|bot|>',
- state['turn_template'].split('<|bot-message|>')[1] + '<|user|>'
- ]
-
- replacements = {
- '<|user|>': state['name1_instruct'],
- '<|bot|>': state['name2_instruct']
- }
-
- for i in range(len(stopping_strings)):
- stopping_strings[i] = replace_all(stopping_strings[i], replacements).rstrip(' ').replace(r'\n', '\n')
-
- if state['mode'] in ['chat', 'chat-instruct']:
- stopping_strings += [
- f"\n{state['name1']}:",
- f"\n{state['name2']}:"
- ]
-
- stopping_strings += ast.literal_eval(f"[{state['custom_stopping_strings']}]")
- return stopping_strings
-
-
-def extract_message_from_reply(reply, state):
- next_character_found = False
- stopping_strings = get_stopping_strings(state)
-
- if state['stop_at_newline']:
- lines = reply.split('\n')
- reply = lines[0].strip()
- if len(lines) > 1:
- next_character_found = True
- else:
- for string in stopping_strings:
- idx = reply.find(string)
- if idx != -1:
- reply = reply[:idx]
- next_character_found = True
-
- # If something like "\nYo" is generated just before "\nYou:"
- # is completed, trim it
- if not next_character_found:
- for string in stopping_strings:
- for j in range(len(string) - 1, 0, -1):
- if reply[-j:] == string[:j]:
- reply = reply[:-j]
- break
- else:
- continue
-
- break
-
- return reply, next_character_found
-
-
-def chatbot_wrapper(text, state, regenerate=False, _continue=False):
- if shared.model_name == 'None' or shared.model is None:
- logging.error("No model is loaded! Select one in the Model tab.")
- yield shared.history['visible']
- return
-
- # Defining some variables
- cumulative_reply = ''
- just_started = True
- visible_text = None
- eos_token = '\n' if state['stop_at_newline'] else None
- stopping_strings = get_stopping_strings(state)
-
- # Preparing the input
- if not any((regenerate, _continue)):
- text, visible_text = apply_extensions('input_hijack', text, visible_text)
- if visible_text is None:
- visible_text = text
-
- text = apply_extensions('input', text)
- # *Is typing...*
- yield shared.history['visible'] + [[visible_text, shared.processing_message]]
- else:
- text, visible_text = shared.history['internal'][-1][0], shared.history['visible'][-1][0]
- if regenerate:
- shared.history['visible'].pop()
- shared.history['internal'].pop()
- # *Is typing...*
- yield shared.history['visible'] + [[visible_text, shared.processing_message]]
- elif _continue:
- last_reply = [shared.history['internal'][-1][1], shared.history['visible'][-1][1]]
- yield shared.history['visible'][:-1] + [[visible_text, last_reply[1] + '...']]
-
- # Generating the prompt
- kwargs = {'_continue': _continue}
- prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs)
- if prompt is None:
- prompt = generate_chat_prompt(text, state, **kwargs)
-
- # Generate
- for i in range(state['chat_generation_attempts']):
- reply = None
- for j, reply in enumerate(generate_reply(prompt + cumulative_reply, state, eos_token=eos_token, stopping_strings=stopping_strings, is_chat=True)):
- reply = cumulative_reply + reply
-
- # Extracting the reply
- reply, next_character_found = extract_message_from_reply(reply, state)
-            visible_reply = re.sub("(<USER>|<user>|{{user}})", state['name1'], reply)
- visible_reply = apply_extensions("output", visible_reply)
-
- # We need this global variable to handle the Stop event,
- # otherwise gradio gets confused
- if shared.stop_everything:
- return shared.history['visible']
-
- if just_started:
- just_started = False
- if not _continue:
- shared.history['internal'].append(['', ''])
- shared.history['visible'].append(['', ''])
-
- if _continue:
- shared.history['internal'][-1] = [text, last_reply[0] + reply]
- shared.history['visible'][-1] = [visible_text, last_reply[1] + visible_reply]
- yield shared.history['visible']
- elif not (j == 0 and visible_reply.strip() == ''):
- shared.history['internal'][-1] = [text, reply]
- shared.history['visible'][-1] = [visible_text, visible_reply]
- yield shared.history['visible']
-
- if next_character_found:
- break
-
- if reply in [None, '']:
- break
- else:
- cumulative_reply = reply
-
- yield shared.history['visible']
-
-
-def impersonate_wrapper(text, state):
- if shared.model_name == 'None' or shared.model is None:
- logging.error("No model is loaded! Select one in the Model tab.")
- yield ''
- return
-
- # Defining some variables
- cumulative_reply = ''
- eos_token = '\n' if state['stop_at_newline'] else None
- prompt = generate_chat_prompt('', state, impersonate=True)
- stopping_strings = get_stopping_strings(state)
-
- yield text + '...'
- cumulative_reply = text
- for i in range(state['chat_generation_attempts']):
- reply = None
- for reply in generate_reply(prompt + cumulative_reply, state, eos_token=eos_token, stopping_strings=stopping_strings, is_chat=True):
- reply = cumulative_reply + reply
- reply, next_character_found = extract_message_from_reply(reply, state)
- yield reply
- if next_character_found:
- break
-
- if reply in [None, '']:
- break
- else:
- cumulative_reply = reply
-
- yield cumulative_reply
-
-
-def generate_chat_reply(text, state, regenerate=False, _continue=False):
- if regenerate or _continue:
- text = ''
- if (len(shared.history['visible']) == 1 and not shared.history['visible'][0][0]) or len(shared.history['internal']) == 0:
- yield shared.history['visible']
- return
-
- for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue):
- yield history
-
-
-# Same as above but returns HTML
-def generate_chat_reply_wrapper(text, state, regenerate=False, _continue=False):
- for history in generate_chat_reply(text, state, regenerate, _continue):
- yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'], state['chat_style'])
-
-
-def remove_last_message():
- if len(shared.history['visible']) > 0 and shared.history['internal'][-1][0] != '<|BEGIN-VISIBLE-CHAT|>':
- last = shared.history['visible'].pop()
- shared.history['internal'].pop()
- else:
- last = ['', '']
-
- return last[0]
-
-
-def send_last_reply_to_input():
- if len(shared.history['internal']) > 0:
- return shared.history['internal'][-1][1]
- else:
- return ''
-
-
-def replace_last_reply(text):
- if len(shared.history['visible']) > 0:
- shared.history['visible'][-1][1] = text
- shared.history['internal'][-1][1] = apply_extensions("input", text)
-
-
-def send_dummy_message(text):
- shared.history['visible'].append([text, ''])
- shared.history['internal'].append([apply_extensions("input", text), ''])
-
-
-def send_dummy_reply(text):
- if len(shared.history['visible']) > 0 and not shared.history['visible'][-1][1] == '':
- shared.history['visible'].append(['', ''])
- shared.history['internal'].append(['', ''])
-
- shared.history['visible'][-1][1] = text
- shared.history['internal'][-1][1] = apply_extensions("input", text)
-
-
-def clear_chat_log(greeting, mode):
- shared.history['visible'] = []
- shared.history['internal'] = []
-
- if mode != 'instruct':
- if greeting != '':
- shared.history['internal'] += [['<|BEGIN-VISIBLE-CHAT|>', greeting]]
- shared.history['visible'] += [['', apply_extensions("output", greeting)]]
-
- save_history(mode)
-
-
-def redraw_html(name1, name2, mode, style, reset_cache=False):
- return chat_html_wrapper(shared.history['visible'], name1, name2, mode, style, reset_cache=reset_cache)
-
-
-def tokenize_dialogue(dialogue, name1, name2):
- history = []
- messages = []
- dialogue = re.sub('', '', dialogue)
- dialogue = re.sub('', '', dialogue)
- dialogue = re.sub('(\n|^)[Aa]non:', '\\1You:', dialogue)
- dialogue = re.sub('(\n|^)\[CHARACTER\]:', f'\\g<1>{name2}:', dialogue)
- idx = [m.start() for m in re.finditer(f"(^|\n)({re.escape(name1)}|{re.escape(name2)}):", dialogue)]
- if len(idx) == 0:
- return history
-
- for i in range(len(idx) - 1):
- messages.append(dialogue[idx[i]:idx[i + 1]].strip())
-
- messages.append(dialogue[idx[-1]:].strip())
- entry = ['', '']
- for i in messages:
- if i.startswith(f'{name1}:'):
- entry[0] = i[len(f'{name1}:'):].strip()
- elif i.startswith(f'{name2}:'):
- entry[1] = i[len(f'{name2}:'):].strip()
- if not (len(entry[0]) == 0 and len(entry[1]) == 0):
- history.append(entry)
-
- entry = ['', '']
-
- print("\033[1;32;1m\nDialogue tokenized to:\033[0;37;0m\n", end='')
- for row in history:
- for column in row:
- print("\n")
- for line in column.strip().split('\n'):
- print("| " + line + "\n")
-
- print("|\n")
- print("------------------------------")
-
- return history
-
-
-def save_history(mode, timestamp=False):
- # Instruct mode histories should not be saved as if
- # Alpaca or Vicuna were characters
- if mode == 'instruct':
- if not timestamp:
- return
-
- fname = f"Instruct_{datetime.now().strftime('%Y%m%d-%H%M%S')}.json"
- else:
- if timestamp:
- fname = f"{shared.character}_{datetime.now().strftime('%Y%m%d-%H%M%S')}.json"
- else:
- fname = f"{shared.character}_persistent.json"
-
- if not Path('logs').exists():
- Path('logs').mkdir()
-
- with open(Path(f'logs/{fname}'), 'w', encoding='utf-8') as f:
- f.write(json.dumps({'data': shared.history['internal'], 'data_visible': shared.history['visible']}, indent=2))
-
- return Path(f'logs/{fname}')
-
-
-def load_history(file, name1, name2):
- file = file.decode('utf-8')
- try:
- j = json.loads(file)
- if 'data' in j:
- shared.history['internal'] = j['data']
- if 'data_visible' in j:
- shared.history['visible'] = j['data_visible']
- else:
- shared.history['visible'] = copy.deepcopy(shared.history['internal'])
- except:
- shared.history['internal'] = tokenize_dialogue(file, name1, name2)
- shared.history['visible'] = copy.deepcopy(shared.history['internal'])
-
-
-def replace_character_names(text, name1, name2):
- text = text.replace('{{user}}', name1).replace('{{char}}', name2)
-    return text.replace('<USER>', name1).replace('<BOT>', name2)
-
-
-def build_pygmalion_style_context(data):
- context = ""
- if 'char_persona' in data and data['char_persona'] != '':
- context += f"{data['char_name']}'s Persona: {data['char_persona']}\n"
-
- if 'world_scenario' in data and data['world_scenario'] != '':
- context += f"Scenario: {data['world_scenario']}\n"
-
- context = f"{context.strip()}\n\n"
- return context
-
-
-def generate_pfp_cache(character):
- cache_folder = Path("cache")
- if not cache_folder.exists():
- cache_folder.mkdir()
-
- for path in [Path(f"characters/{character}.{extension}") for extension in ['png', 'jpg', 'jpeg']]:
- if path.exists():
- img = make_thumbnail(Image.open(path))
- img.save(Path('cache/pfp_character.png'), format='PNG')
- return img
-
- return None
-
-
-def load_character(character, name1, name2, instruct=False):
- shared.character = character
- context = greeting = turn_template = ""
- greeting_field = 'greeting'
- picture = None
-
- # Deleting the profile picture cache, if any
- if Path("cache/pfp_character.png").exists():
- Path("cache/pfp_character.png").unlink()
-
- if character != 'None':
- folder = 'characters' if not instruct else 'characters/instruction-following'
- picture = generate_pfp_cache(character)
- for extension in ["yml", "yaml", "json"]:
- filepath = Path(f'{folder}/{character}.{extension}')
- if filepath.exists():
- break
-
- file_contents = open(filepath, 'r', encoding='utf-8').read()
- data = json.loads(file_contents) if extension == "json" else yaml.safe_load(file_contents)
-
- # Finding the bot's name
- for k in ['name', 'bot', '<|bot|>', 'char_name']:
- if k in data and data[k] != '':
- name2 = data[k]
- break
-
- # Find the user name (if any)
- for k in ['your_name', 'user', '<|user|>']:
- if k in data and data[k] != '':
- name1 = data[k]
- break
-
- for field in ['context', 'greeting', 'example_dialogue', 'char_persona', 'char_greeting', 'world_scenario']:
- if field in data:
- data[field] = replace_character_names(data[field], name1, name2)
-
- if 'context' in data:
- context = data['context']
- if not instruct:
- context = context.strip() + '\n'
- elif "char_persona" in data:
- context = build_pygmalion_style_context(data)
- greeting_field = 'char_greeting'
-
- if 'example_dialogue' in data:
- context += f"{data['example_dialogue'].strip()}\n"
-
- if greeting_field in data:
- greeting = data[greeting_field]
-
- if 'turn_template' in data:
- turn_template = data['turn_template']
-
- else:
- context = shared.settings['context']
- name2 = shared.settings['name2']
- greeting = shared.settings['greeting']
- turn_template = shared.settings['turn_template']
-
- if not instruct:
- shared.history['internal'] = []
- shared.history['visible'] = []
- if Path(f'logs/{shared.character}_persistent.json').exists():
- load_history(open(Path(f'logs/{shared.character}_persistent.json'), 'rb').read(), name1, name2)
- else:
- # Insert greeting if it exists
- if greeting != "":
- shared.history['internal'] += [['<|BEGIN-VISIBLE-CHAT|>', greeting]]
- shared.history['visible'] += [['', apply_extensions("output", greeting)]]
-
- # Create .json log files since they don't already exist
- save_history('instruct' if instruct else 'chat')
-
- return name1, name2, picture, greeting, context, repr(turn_template)[1:-1]
-
-
-def upload_character(json_file, img, tavern=False):
- json_file = json_file if type(json_file) == str else json_file.decode('utf-8')
- data = json.loads(json_file)
- outfile_name = data["char_name"]
- i = 1
- while Path(f'characters/{outfile_name}.json').exists():
- outfile_name = f'{data["char_name"]}_{i:03d}'
- i += 1
-
- if tavern:
- outfile_name = f'TavernAI-{outfile_name}'
-
- with open(Path(f'characters/{outfile_name}.json'), 'w', encoding='utf-8') as f:
- f.write(json_file)
-
- if img is not None:
- img = Image.open(io.BytesIO(img))
- img.save(Path(f'characters/{outfile_name}.png'))
-
- logging.info(f'New character saved to "characters/{outfile_name}.json".')
- return outfile_name
-
-
-def upload_tavern_character(img, name1, name2):
- _img = Image.open(io.BytesIO(img))
- _img.getexif()
- decoded_string = base64.b64decode(_img.info['chara'])
- _json = json.loads(decoded_string)
- _json = {"char_name": _json['name'], "char_persona": _json['description'], "char_greeting": _json["first_mes"], "example_dialogue": _json['mes_example'], "world_scenario": _json['scenario']}
- return upload_character(json.dumps(_json), img, tavern=True)
-
-
-def upload_your_profile_picture(img):
- cache_folder = Path("cache")
- if not cache_folder.exists():
- cache_folder.mkdir()
-
- if img is None:
- if Path("cache/pfp_me.png").exists():
- Path("cache/pfp_me.png").unlink()
- else:
- img = make_thumbnail(img)
- img.save(Path('cache/pfp_me.png'))
- logging.info('Profile picture saved to "cache/pfp_me.png"')
diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/__init__.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/__init__.py
deleted file mode 100644
index 38e906243d898d7fc071c0fe218338c5cace3ea1..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .sam import Sam
-from .image_encoder import ImageEncoderViT
-from .mask_decoder import MaskDecoder
-from .prompt_encoder import PromptEncoder
-from .transformer import TwoWayTransformer
diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/scripts/scunet_model.py b/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/scripts/scunet_model.py
deleted file mode 100644
index e0fbf3a33747f447d396dd0d564e92c904cfabac..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/scripts/scunet_model.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import os.path
-import sys
-import traceback
-
-import PIL.Image
-import numpy as np
-import torch
-from basicsr.utils.download_util import load_file_from_url
-
-import modules.upscaler
-from modules import devices, modelloader
-from scunet_model_arch import SCUNet as net
-
-
-class UpscalerScuNET(modules.upscaler.Upscaler):
- def __init__(self, dirname):
- self.name = "ScuNET"
- self.model_name = "ScuNET GAN"
- self.model_name2 = "ScuNET PSNR"
- self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth"
- self.model_url2 = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_psnr.pth"
- self.user_path = dirname
- super().__init__()
- model_paths = self.find_models(ext_filter=[".pth"])
- scalers = []
- add_model2 = True
- for file in model_paths:
- if "http" in file:
- name = self.model_name
- else:
- name = modelloader.friendly_name(file)
- if name == self.model_name2 or file == self.model_url2:
- add_model2 = False
- try:
- scaler_data = modules.upscaler.UpscalerData(name, file, self, 4)
- scalers.append(scaler_data)
- except Exception:
- print(f"Error loading ScuNET model: {file}", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
- if add_model2:
- scaler_data2 = modules.upscaler.UpscalerData(self.model_name2, self.model_url2, self)
- scalers.append(scaler_data2)
- self.scalers = scalers
-
- def do_upscale(self, img: PIL.Image, selected_file):
- torch.cuda.empty_cache()
-
- model = self.load_model(selected_file)
- if model is None:
- return img
-
- device = devices.get_device_for('scunet')
- img = np.array(img)
- img = img[:, :, ::-1]
- img = np.moveaxis(img, 2, 0) / 255
- img = torch.from_numpy(img).float()
- img = img.unsqueeze(0).to(device)
-
- with torch.no_grad():
- output = model(img)
- output = output.squeeze().float().cpu().clamp_(0, 1).numpy()
- output = 255. * np.moveaxis(output, 0, 2)
- output = output.astype(np.uint8)
- output = output[:, :, ::-1]
- torch.cuda.empty_cache()
- return PIL.Image.fromarray(output, 'RGB')
-
- def load_model(self, path: str):
- device = devices.get_device_for('scunet')
- if "http" in path:
- filename = load_file_from_url(url=self.model_url, model_dir=self.model_path, file_name="%s.pth" % self.name,
- progress=True)
- else:
- filename = path
- if not os.path.exists(os.path.join(self.model_path, filename)) or filename is None:
- print(f"ScuNET: Unable to load model from {filename}", file=sys.stderr)
- return None
-
- model = net(in_nc=3, config=[4, 4, 4, 4, 4, 4, 4], dim=64)
- model.load_state_dict(torch.load(filename), strict=True)
- model.eval()
- for k, v in model.named_parameters():
- v.requires_grad = False
- model = model.to(device)
-
- return model
-
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/MspImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/MspImagePlugin.py
deleted file mode 100644
index c4d7ddbb4f84ada85733a991aebfcc66ca39db71..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/MspImagePlugin.py
+++ /dev/null
@@ -1,194 +0,0 @@
-#
-# The Python Imaging Library.
-#
-# MSP file handling
-#
-# This is the format used by the Paint program in Windows 1 and 2.
-#
-# History:
-# 95-09-05 fl Created
-# 97-01-03 fl Read/write MSP images
-# 17-02-21 es Fixed RLE interpretation
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1995-97.
-# Copyright (c) Eric Soroos 2017.
-#
-# See the README file for information on usage and redistribution.
-#
-# More info on this format: https://archive.org/details/gg243631
-# Page 313:
-# Figure 205. Windows Paint Version 1: "DanM" Format
-# Figure 206. Windows Paint Version 2: "LinS" Format. Used in Windows V2.03
-#
-# See also: https://www.fileformat.info/format/mspaint/egff.htm
-
-import io
-import struct
-
-from . import Image, ImageFile
-from ._binary import i16le as i16
-from ._binary import o16le as o16
-
-#
-# read MSP files
-
-
-def _accept(prefix):
- return prefix[:4] in [b"DanM", b"LinS"]
-
-
-##
-# Image plugin for Windows MSP images. This plugin supports both the
-# uncompressed (Windows 1.0) and the RLE-compressed (Windows 2.0) variants.
-
-
-class MspImageFile(ImageFile.ImageFile):
-
- format = "MSP"
- format_description = "Windows Paint"
-
- def _open(self):
-
- # Header
- s = self.fp.read(32)
- if not _accept(s):
- raise SyntaxError("not an MSP file")
-
- # Header checksum
- checksum = 0
- for i in range(0, 32, 2):
- checksum = checksum ^ i16(s, i)
- if checksum != 0:
- raise SyntaxError("bad MSP checksum")
-
- self.mode = "1"
- self._size = i16(s, 4), i16(s, 6)
-
- if s[:4] == b"DanM":
- self.tile = [("raw", (0, 0) + self.size, 32, ("1", 0, 1))]
- else:
- self.tile = [("MSP", (0, 0) + self.size, 32, None)]
-
-
-class MspDecoder(ImageFile.PyDecoder):
- # The algo for the MSP decoder is from
- # https://www.fileformat.info/format/mspaint/egff.htm
- # cc-by-attribution -- That page references is taken from the
- # Encyclopedia of Graphics File Formats and is licensed by
- # O'Reilly under the Creative Common/Attribution license
- #
- # For RLE encoded files, the 32byte header is followed by a scan
- # line map, encoded as one 16bit word of encoded byte length per
- # line.
- #
- # NOTE: the encoded length of the line can be 0. This was not
- # handled in the previous version of this encoder, and there's no
- # mention of how to handle it in the documentation. From the few
- # examples I've seen, I've assumed that it is a fill of the
- # background color, in this case, white.
- #
- #
- # Pseudocode of the decoder:
- # Read a BYTE value as the RunType
- # If the RunType value is zero
- # Read next byte as the RunCount
- # Read the next byte as the RunValue
- # Write the RunValue byte RunCount times
- # If the RunType value is non-zero
- # Use this value as the RunCount
- # Read and write the next RunCount bytes literally
- #
- # e.g.:
- # 0x00 03 ff 05 00 01 02 03 04
- # would yield the bytes:
- # 0xff ff ff 00 01 02 03 04
- #
- # which are then interpreted as a bit packed mode '1' image
-
- _pulls_fd = True
-
- def decode(self, buffer):
-
- img = io.BytesIO()
- blank_line = bytearray((0xFF,) * ((self.state.xsize + 7) // 8))
- try:
- self.fd.seek(32)
- rowmap = struct.unpack_from(
- f"<{self.state.ysize}H", self.fd.read(self.state.ysize * 2)
- )
- except struct.error as e:
- raise OSError("Truncated MSP file in row map") from e
-
- for x, rowlen in enumerate(rowmap):
- try:
- if rowlen == 0:
- img.write(blank_line)
- continue
- row = self.fd.read(rowlen)
- if len(row) != rowlen:
- raise OSError(
-                        f"Truncated MSP file, expected {rowlen} bytes on row {x}"
- )
- idx = 0
- while idx < rowlen:
- runtype = row[idx]
- idx += 1
- if runtype == 0:
- (runcount, runval) = struct.unpack_from("Bc", row, idx)
- img.write(runval * runcount)
- idx += 2
- else:
- runcount = runtype
- img.write(row[idx : idx + runcount])
- idx += runcount
-
- except struct.error as e:
- raise OSError(f"Corrupted MSP file in row {x}") from e
-
- self.set_as_raw(img.getvalue(), ("1", 0, 1))
-
- return -1, 0
-
-
-Image.register_decoder("MSP", MspDecoder)
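-
-
-# Illustrative sketch (added for clarity; not part of the original plugin): a
-# standalone version of the RLE expansion documented in MspDecoder's comment
-# block above, checked against the example bytes given there. The helper name
-# _expand_msp_row is hypothetical.
-def _expand_msp_row(row: bytes) -> bytes:
-    out = bytearray()
-    idx = 0
-    while idx < len(row):
-        runtype = row[idx]
-        idx += 1
-        if runtype == 0:  # RLE run: next byte is the count, then the value
-            runcount, runval = row[idx], row[idx + 1]
-            out += bytes([runval]) * runcount
-            idx += 2
-        else:  # literal run: copy the next `runtype` bytes verbatim
-            out += row[idx:idx + runtype]
-            idx += runtype
-    return bytes(out)
-
-
-# 0x00 03 ff 05 00 01 02 03 04 expands to 0xff ff ff 00 01 02 03 04
-assert _expand_msp_row(bytes.fromhex("0003ff050001020304")) == bytes.fromhex("ffffff0001020304")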
-
-
-#
-# write MSP files (uncompressed only)
-
-
-def _save(im, fp, filename):
-
- if im.mode != "1":
- raise OSError(f"cannot write mode {im.mode} as MSP")
-
- # create MSP header
- header = [0] * 16
-
- header[0], header[1] = i16(b"Da"), i16(b"nM") # version 1
- header[2], header[3] = im.size
- header[4], header[5] = 1, 1
- header[6], header[7] = 1, 1
- header[8], header[9] = im.size
-
- checksum = 0
- for h in header:
- checksum = checksum ^ h
- header[12] = checksum # FIXME: is this the right field?
-
- # header
- for h in header:
- fp.write(o16(h))
-
- # image body
- ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 32, ("1", 0, 1))])
-
-
-#
-# registry
-
-Image.register_open(MspImageFile.format, MspImageFile, _accept)
-Image.register_save(MspImageFile.format, _save)
-
-Image.register_extension(MspImageFile.format, ".msp")
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/dummy_masked_lm.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/dummy_masked_lm.py
deleted file mode 100644
index 12b9c5d0f55993bf8750564882a351fc3f8055f0..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/dummy_masked_lm.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from dataclasses import dataclass, field
-from typing import Optional
-
-import torch
-from omegaconf import II
-
-from .dummy_dataset import DummyDataset
-from fairseq.data import Dictionary
-from fairseq.dataclass import FairseqDataclass
-from fairseq.tasks import FairseqTask, register_task
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class DummyMaskedLMConfig(FairseqDataclass):
- dict_size: int = 49996
- dataset_size: int = 100000
- tokens_per_sample: int = field(
- default=512,
- metadata={
- "help": "max number of total tokens over all"
- " segments per sample for BERT dataset"
- },
- )
- batch_size: Optional[int] = II("dataset.batch_size")
- max_tokens: Optional[int] = II("dataset.max_tokens")
- max_target_positions: int = II("task.tokens_per_sample")
-
-
-@register_task("dummy_masked_lm", dataclass=DummyMaskedLMConfig)
-class DummyMaskedLMTask(FairseqTask):
- def __init__(self, cfg: DummyMaskedLMConfig):
- super().__init__(cfg)
-
- self.dictionary = Dictionary()
- for i in range(cfg.dict_size):
- self.dictionary.add_symbol("word{}".format(i))
- logger.info("dictionary: {} types".format(len(self.dictionary)))
- # add mask token
-        self.mask_idx = self.dictionary.add_symbol("<mask>")
- self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8
-
- mask_idx = 0
- pad_idx = 1
- seq = torch.arange(cfg.tokens_per_sample) + pad_idx + 1
- mask = torch.arange(2, cfg.tokens_per_sample, 7) # ~15%
- src = seq.clone()
- src[mask] = mask_idx
- tgt = torch.full_like(seq, pad_idx)
- tgt[mask] = seq[mask]
-
- self.dummy_src = src
- self.dummy_tgt = tgt
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split.
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- if self.cfg.batch_size is not None:
- bsz = self.cfg.batch_size
- else:
- bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample)
- self.datasets[split] = DummyDataset(
- {
- "id": 1,
- "net_input": {
- "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]),
- "src_lengths": torch.full(
- (bsz,), self.cfg.tokens_per_sample, dtype=torch.long
- ),
- },
- "target": torch.stack([self.dummy_tgt for _ in range(bsz)]),
- "nsentences": bsz,
- "ntokens": bsz * self.cfg.tokens_per_sample,
- },
- num_items=self.cfg.dataset_size,
- item_size=self.cfg.tokens_per_sample,
- )
-
- @property
- def source_dictionary(self):
- return self.dictionary
-
- @property
- def target_dictionary(self):
- return self.dictionary
diff --git a/spaces/asafAdge/Detic/tools/get_imagenet_21k_full_tar_json.py b/spaces/asafAdge/Detic/tools/get_imagenet_21k_full_tar_json.py
deleted file mode 100644
index e7127440030297812a9f4df38cfd6b4cba340c39..0000000000000000000000000000000000000000
--- a/spaces/asafAdge/Detic/tools/get_imagenet_21k_full_tar_json.py
+++ /dev/null
@@ -1,81 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import json
-import numpy as np
-import pickle
-import io
-import gzip
-import sys
-import time
-from nltk.corpus import wordnet
-from tqdm import tqdm
-import operator
-import torch
-
-sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/')
-sys.path.insert(0, 'third_party/Deformable-DETR')
-from detic.data.tar_dataset import DiskTarDataset, _TarDataset
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("--imagenet_dir", default='datasets/imagenet/ImageNet-21k/')
- parser.add_argument("--tarfile_path", default='datasets/imagenet/metadata-22k/tar_files.npy')
- parser.add_argument("--tar_index_dir", default='datasets/imagenet/metadata-22k/tarindex_npy')
- parser.add_argument("--out_path", default='datasets/imagenet/annotations/imagenet-22k_image_info.json')
- parser.add_argument("--workers", default=16, type=int)
- args = parser.parse_args()
-
-
- start_time = time.time()
- print('Building dataset')
- dataset = DiskTarDataset(args.tarfile_path, args.tar_index_dir)
- end_time = time.time()
- print(f"Took {end_time-start_time} seconds to make the dataset.")
- print(f"Have {len(dataset)} samples.")
- print('dataset', dataset)
-
-
- tar_files = np.load(args.tarfile_path)
- categories = []
- for i, tar_file in enumerate(tar_files):
- wnid = tar_file[-13:-4]
- synset = wordnet.synset_from_pos_and_offset('n', int(wnid[1:]))
- synonyms = [x.name() for x in synset.lemmas()]
- category = {
- 'id': i + 1,
- 'synset': synset.name(),
- 'name': synonyms[0],
- 'def': synset.definition(),
- 'synonyms': synonyms,
- }
- categories.append(category)
- print('categories', len(categories))
-
- data_loader = torch.utils.data.DataLoader(
- dataset, batch_size=1, shuffle=False,
- num_workers=args.workers,
- collate_fn=operator.itemgetter(0),
- )
- images = []
- for img, label, index in tqdm(data_loader):
- if label == -1:
- continue
- image = {
- 'id': int(index) + 1,
- 'pos_category_ids': [int(label) + 1],
- 'height': int(img.height),
- 'width': int(img.width),
- 'tar_index': int(index),
- }
- images.append(image)
-
- data = {'categories': categories, 'images': images, 'annotations': []}
- try:
- for k, v in data.items():
- print(k, len(v))
- print('Saving to ', args.out_path)
- json.dump(data, open(args.out_path, 'w'))
- except:
- pass
- import pdb; pdb.set_trace()
-
diff --git a/spaces/ashhhh23/lordofthemysteries/Dockerfile b/spaces/ashhhh23/lordofthemysteries/Dockerfile
deleted file mode 100644
index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000
--- a/spaces/ashhhh23/lordofthemysteries/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/ChunShan Feng.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/ChunShan Feng.html
deleted file mode 100644
index 975b1f7a1de1d68314bce22df14646ba176e277b..0000000000000000000000000000000000000000
--- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/ChunShan Feng.html
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
-
- ChunShan Feng
-
-
-
-
-
-
ChunShan Feng
-
-
-
Mentee to Mentor
1- What's your motivation to become a mentor with SharpestMinds? - Landing a job in a new country is not easy. Immigrated from China and can share knowledge and experience with newcomers on how to make the transition and also want to help people land jobs.
2- What's your career journey been like as a data scientist? - Have 15 years of work experience in the manufacturing industry and have done DA & project development. Also, teach colleagues in China. Have an interest in analyzing data. - Currently working in a software consulting company. Doing analysis and building dashboards on Power BI.
3- According to you, what's the biggest challenge faced by newcomers when entering a data role? How can you help them with this? - The biggest challenge is having less project experience and less on-the-job training. They need project experience to add to their resume. - Teach them what problems need to be solved and how to solve real-world problems.
4- How was the experience as an SM Mentee? - Mentor helped with how to write a resume and how to prepare for an interview and improve the project.
5- Do you have any questions regarding the SM platform? - None yet.
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/atticus/image-text-retrival-huster/misc/dataset.py b/spaces/atticus/image-text-retrival-huster/misc/dataset.py
deleted file mode 100644
index 8b375272dc0a0ab39a568777b1f24aa80a96667f..0000000000000000000000000000000000000000
--- a/spaces/atticus/image-text-retrival-huster/misc/dataset.py
+++ /dev/null
@@ -1,278 +0,0 @@
-"""
-****************** COPYRIGHT AND CONFIDENTIALITY INFORMATION ******************
-Copyright (c) 2018 [Thomson Licensing]
-All Rights Reserved
-This program contains proprietary information which is a trade secret/business \
-secret of [Thomson Licensing] and is protected, even if unpublished, under \
-applicable Copyright laws (including French droit d'auteur) and/or may be \
-subject to one or more patent(s).
-Recipient is to retain this program in confidence and is not permitted to use \
-or make copies thereof other than as permitted in a written agreement with \
-[Thomson Licensing] unless otherwise expressly allowed by applicable laws or \
-by [Thomson Licensing] under express agreement.
-Thomson Licensing is a company of the group TECHNICOLOR
-*******************************************************************************
-This scripts permits one to reproduce training and experiments of:
- Engilberge, M., Chevallier, L., Pérez, P., & Cord, M. (2018, April).
- Finding beans in burgers: Deep semantic-visual embedding with localization.
- In Proceedings of CVPR (pp. 3984-3993)
-
-Author: Martin Engilberge
-"""
-
-import json
-import os
-import re
-
-import numpy as np
-import torch
-import torch.utils.data as data
-
-from misc.config import path
-from misc.utils import encode_sentence, _load_dictionary
-from PIL import Image
-from pycocotools import mask as maskUtils
-from pycocotools.coco import COCO
-from visual_genome import local as vg
-
-class OnlineRetrival(data.Dataset):
- def __init__(self) -> None:
- super(OnlineRetrival).__init__()
-        super().__init__()
- def __getitem__(self, index, raw=False):
-        # TODO: take text as input and return the sentence encoding
- pass
-
-
-class CocoCaptionsRV(data.Dataset):
-
- def __init__(self, root=path["COCO_ROOT"], coco_json_file_path=path["COCO_RESTVAL_SPLIT"], word_dict_path=path["WORD_DICT"], sset="train", transform=None):
- # self.root = os.path.join(root, "images/")
- self.root = root
- self.transform = transform
-
- # dataset.json come from Karpathy neural talk repository and contain the restval split of coco
- with open(coco_json_file_path, 'r') as f:
- datas = json.load(f)
-
- if sset == "train":
- self.content = [x for x in datas["images"] if x["split"] == "train"]
- elif sset == "trainrv":
- self.content = [x for x in datas["images"] if x["split"] == "train" or x["split"] == "restval"]
- elif sset == "val":
- self.content = [x for x in datas["images"] if x["split"] == "val"]
- else:
- self.content = [x for x in datas["images"] if x["split"] == "test"]
-
- self.content = [(os.path.join(y["filepath"], y["filename"]), [x["raw"] for x in y["sentences"]]) for y in self.content]
-
- path_params = os.path.join(word_dict_path, 'utable.npy')
- self.params = np.load(path_params, encoding='latin1')
- self.dico = _load_dictionary(word_dict_path)
-
- def __getitem__(self, index, raw=False):
- idx = index / 5
-
- idx_cap = index % 5
-
- path = self.content[int(idx)][0]
- target = self.content[int(idx)][1][idx_cap]
- if raw:
- return path, target
-
- img = Image.open(os.path.join(self.root, path)).convert('RGB')
-
- if self.transform is not None:
- img = self.transform(img)
-
- target = encode_sentence(target, self.params, self.dico)
-
- return img, target
-
- def __len__(self):
- return len(self.content) * 5
-
-
-class VgCaptions(data.Dataset):
-
- def __init__(self, coco_root=path["COCO_ROOT"], vg_path_ann=path["VG_ANN"], path_vg_img=path["VG_IMAGE"], coco_json_file_path=path["COCO_RESTVAL_SPLIT"], word_dict_path=path["WORD_DICT"], image=True, transform=None):
- self.transform = transform
- self.image = image
-
- path_params = os.path.join(word_dict_path, 'utable.npy')
- self.params = np.load(path_params, encoding='latin1')
- self.dico = _load_dictionary(word_dict_path)
-
- self.path_vg_img = path_vg_img
-
- ids = vg.get_all_image_data(vg_path_ann)
- regions = vg.get_all_region_descriptions(vg_path_ann)
-
- annFile = os.path.join(coco_root, "annotations/captions_val2014.json")
- coco = COCO(annFile)
- ids_val_coco = list(coco.imgs.keys())
-
- # Uncomment following bloc to evaluate only on validation set from Rest/Val split
- # with open(coco_json_file_path, 'r') as f: # coco_json_file_path = "/home/wp01/users/engilbergem/dev/trunk/CPLApplications/deep/PytorchApplications/coco/dataset.json"
- # datas = json.load(f)
- # ids_val_coco = [x['cocoid'] for x in datas["images"] if x["split"] == "val"] # list(coco.imgs.keys())
-
- self.data = [x for x in zip(ids, regions) if x[0].coco_id in ids_val_coco]
- self.imgs_paths = [x[0].id for x in self.data]
- self.nb_regions = [len([x.phrase for x in y[1]])
- for y in self.data]
- self.captions = [x.phrase for y in self.data for x in y[1]]
- # print()
- def __getitem__(self, index, raw=False):
-
- if self.image:
-
- id_vg = self.data[index][0].id
- img = Image.open(os.path.join(self.path_vg_img,
- str(id_vg) + ".jpg")).convert('RGB')
-
- if raw:
- return img
-
- if self.transform is not None:
- img = self.transform(img)
-
- return img
- else:
- target = self.captions[index]
-
- # If the caption is incomplete we set it to zero
- if len(target) < 3:
- target = torch.FloatTensor(1, 620)
- else:
- target = encode_sentence(target, self.params, self.dico)
-
- return target
-
- def __len__(self):
- if self.image:
- return len(self.data)
- else:
- return len(self.captions)
-
-
-class CocoSemantic(data.Dataset):
-
- def __init__(self, coco_root=path["COCO_ROOT"], word_dict_path=path["WORD_DICT"], transform=None):
- self.coco_root = coco_root
-
- annFile = os.path.join(coco_root, "annotations/instances_val2014.json")
- self.coco = COCO(annFile)
- self.ids = list(self.coco.imgs.keys())
- self.transform = transform
-
- path_params = os.path.join(word_dict_path, 'utable.npy')
- params = np.load(path_params, encoding='latin1')
- dico = _load_dictionary(word_dict_path)
-
- self.categories = self.coco.loadCats(self.coco.getCatIds())
- # repeats category with plural version
- categories_sent = [cat['name'] + " " + cat['name'] + "s" for cat in self.categories]
- self.categories_w2v = [encode_sentence(cat, params, dico, tokenize=True) for cat in categories_sent]
-
- def __getitem__(self, index, raw=False):
- img_id = self.ids[index]
- ann_ids = self.coco.getAnnIds(imgIds=img_id)
- anns = self.coco.loadAnns(ann_ids)
-
- target = dict()
-
- path = self.coco.loadImgs(img_id)[0]['file_name']
-
- img = Image.open(os.path.join(self.coco_root, "images/val2014/", path)).convert('RGB')
- img_size = img.size
-
- for ann in anns:
- key = [cat['name'] for cat in self.categories if cat['id'] == ann["category_id"]][0]
-
- if key not in target:
- target[key] = list()
-
- if type(ann['segmentation']) != list:
- if type(ann['segmentation']['counts']) == list:
- rle = maskUtils.frPyObjects(
- [ann['segmentation']], img_size[0], img_size[1])
- else:
- rle = [ann['segmentation']]
-
- target[key] += [("rle", rle)]
- else:
- target[key] += ann["segmentation"]
-
- if raw:
- return path, target
-
- if self.transform is not None:
- img = self.transform(img)
-
- return img, img_size, target
-
- def __len__(self):
- return len(self.ids)
-
-
-class FileDataset(data.Dataset):
-
- def __init__(self, img_dir_paths, imgs=None, transform=None):
- self.transform = transform
- self.root = img_dir_paths
- self.imgs = imgs or [os.path.join(img_dir_paths, f) for f in os.listdir(img_dir_paths) if re.match(r'.*\.jpg', f)]
-
- def __getitem__(self, index):
-
- img = Image.open(self.imgs[index]).convert('RGB')
-
- if self.transform is not None:
- img = self.transform(img)
-
- return img
-
- def get_image_list(self):
- return self.imgs
-
- def __len__(self):
- return len(self.imgs)
-
-
-class TextDataset(data.Dataset):
-
- def __init__(self, text_path, word_dict_path=path["WORD_DICT"]):
-
- with open(text_path) as f:
- lines = f.readlines()
-
- self.sent_list = [line.rstrip('\n') for line in lines]
-
- path_params = os.path.join(word_dict_path, 'utable.npy')
- self.params = np.load(path_params, encoding='latin1')
- self.dico = _load_dictionary(word_dict_path)
-
- def __getitem__(self, index):
-
- caption = self.sent_list[index]
-
- caption = encode_sentence(caption, self.params, self.dico)
-
- return caption
-
- def __len__(self):
- return len(self.sent_list)
-
-
-class TextEncoder(object):
-
- def __init__(self, word_dict_path=path["WORD_DICT"]):
-
- path_params = os.path.join(word_dict_path, 'utable.npy')
- self.params = np.load(path_params, encoding='latin1', allow_pickle=True)
- self.dico = _load_dictionary(word_dict_path)
-
- def encode(self, text):
-
- caption = encode_sentence(text, self.params, self.dico)
- return caption
diff --git a/spaces/auto-academic/auto-draft/latex_templates/Default/backgrounds.tex b/spaces/auto-academic/auto-draft/latex_templates/Default/backgrounds.tex
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/autonomous019/image_story_generator/app.py b/spaces/autonomous019/image_story_generator/app.py
deleted file mode 100644
index f08cf0080b9573bdecc80c4795eae92d96bc6642..0000000000000000000000000000000000000000
--- a/spaces/autonomous019/image_story_generator/app.py
+++ /dev/null
@@ -1,154 +0,0 @@
-from transformers import ViTConfig, ViTForImageClassification
-from transformers import ViTFeatureExtractor
-from PIL import Image
-import requests
-import matplotlib.pyplot as plt
-import gradio as gr
-from gradio.mix import Parallel
-from transformers import ImageClassificationPipeline, PerceiverForImageClassificationConvProcessing, PerceiverFeatureExtractor
-from transformers import VisionEncoderDecoderModel
-from transformers import AutoTokenizer
-import torch
-from transformers import (
- AutoModelForCausalLM,
- LogitsProcessorList,
- MinLengthLogitsProcessor,
- StoppingCriteriaList,
- MaxLengthCriteria,
-)
-
-# https://github.com/NielsRogge/Transformers-Tutorials/blob/master/HuggingFace_vision_ecosystem_overview_(June_2022).ipynb
-# option 1: load with randomly initialized weights (train from scratch)
-
-#tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
-#model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
-
-
-config = ViTConfig(num_hidden_layers=12, hidden_size=768)
-model = ViTForImageClassification(config)
-
-#print(config)
-
-feature_extractor = ViTFeatureExtractor()
-# or, to load one that corresponds to a checkpoint on the hub:
-#feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
-
-#the following gets called by classify_image()
-feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-conv")
-model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv")
-#google/vit-base-patch16-224, deepmind/vision-perceiver-conv
-image_pipe = ImageClassificationPipeline(model=model, feature_extractor=feature_extractor)
-
-def create_story(text_seed):
- #tokenizer = AutoTokenizer.from_pretrained("gpt2")
- #model = AutoModelForCausalLM.from_pretrained("gpt2")
-
- #eleutherAI gpt-3 based
- tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
- model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
-
- # set pad_token_id to eos_token_id because GPT2 does not have a EOS token
- model.config.pad_token_id = model.config.eos_token_id
-
- #input_prompt = "It might be possible to"
- input_prompt = text_seed
- input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
-
- # instantiate logits processors
- logits_processor = LogitsProcessorList(
- [
- MinLengthLogitsProcessor(10, eos_token_id=model.config.eos_token_id),
- ]
- )
- stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=100)])
-
- outputs = model.greedy_search(
- input_ids, logits_processor=logits_processor, stopping_criteria=stopping_criteria
- )
-
- result_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
- return result_text
-
-
-
-
-
-
-def self_caption(image):
- repo_name = "ydshieh/vit-gpt2-coco-en"
- #test_image = "cats.jpg"
- test_image = image
- #url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
- #test_image = Image.open(requests.get(url, stream=True).raw)
- #test_image.save("cats.png")
-
- feature_extractor2 = ViTFeatureExtractor.from_pretrained(repo_name)
- tokenizer = AutoTokenizer.from_pretrained(repo_name)
- model2 = VisionEncoderDecoderModel.from_pretrained(repo_name)
- pixel_values = feature_extractor2(test_image, return_tensors="pt").pixel_values
- print("Pixel Values")
- print(pixel_values)
- # autoregressively generate text (using beam search or other decoding strategy)
- generated_ids = model2.generate(pixel_values, max_length=16, num_beams=4, return_dict_in_generate=True)
-
- # decode into text
- preds = tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True)
- preds = [pred.strip() for pred in preds]
- print("Predictions")
- print(preds)
- print("The preds type is : ",type(preds))
- pred_keys = ["Prediction"]
- pred_value = preds
-
- pred_dictionary = dict(zip(pred_keys, pred_value))
- print("Pred dictionary")
- print(pred_dictionary)
- #return(pred_dictionary)
- preds = ' '.join(preds)
- story = create_story(preds)
- story = ' '.join(story)
- return story
-
-
-def classify_image(image):
- results = image_pipe(image)
-
- print("RESULTS")
- print(results)
- # convert to format Gradio expects
- output = {}
- for prediction in results:
- predicted_label = prediction['label']
- score = prediction['score']
- output[predicted_label] = score
- print("OUTPUT")
- print(output)
- return output
-
-
-image = gr.inputs.Image(type="pil")
-label = gr.outputs.Label(num_top_classes=5)
-examples = [ ["cats.jpg"], ["batter.jpg"],["drinkers.jpg"] ]
-title = "Generate a Story from an Image"
-description = "Demo for classifying images with Perceiver IO. To use it, simply upload an image and click 'submit', a story is autogenerated as well"
-article = ""
-
-img_info1 = gr.Interface(
- fn=classify_image,
- inputs=image,
- outputs=label,
-)
-
-img_info2 = gr.Interface(
- fn=self_caption,
- inputs=image,
- #outputs=label,
- outputs = [
- gr.outputs.Textbox(label = 'Story')
-],
-)
-
-Parallel(img_info1,img_info2, inputs=image, title=title, description=description, examples=examples, enable_queue=True).launch(debug=True)
-#Parallel(img_info1,img_info2, inputs=image, outputs=label, title=title, description=description, examples=examples, enable_queue=True).launch(debug=True)
-
-
diff --git a/spaces/avichr/HebEMO_demo/README.md b/spaces/avichr/HebEMO_demo/README.md
deleted file mode 100644
index 14d8ba106370eed5f63bd83ce7b4a66205deaaaf..0000000000000000000000000000000000000000
--- a/spaces/avichr/HebEMO_demo/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: HebEMO_demo
-emoji: 📚
-colorFrom: purple
-colorTo: pink
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/awacke1/MultiplayerImageRecognition/README.md b/spaces/awacke1/MultiplayerImageRecognition/README.md
deleted file mode 100644
index 8330f3bc36e94ca10ed3488804d5e7d18cfa9122..0000000000000000000000000000000000000000
--- a/spaces/awacke1/MultiplayerImageRecognition/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: MultiplayerImageRecognition
-emoji: 🏃
-colorFrom: pink
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/networks.py b/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/networks.py
deleted file mode 100644
index 3a0d13ad2d560278f16586da68d3a5eadb26e746..0000000000000000000000000000000000000000
--- a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/networks.py
+++ /dev/null
@@ -1,96 +0,0 @@
-from typing import Sequence
-
-from itertools import chain
-
-import torch
-import torch.nn as nn
-from torchvision import models
-
-from criteria.lpips.utils import normalize_activation
-
-
-def get_network(net_type: str):
- if net_type == 'alex':
- return AlexNet()
- elif net_type == 'squeeze':
- return SqueezeNet()
- elif net_type == 'vgg':
- return VGG16()
- else:
- raise NotImplementedError('choose net_type from [alex, squeeze, vgg].')
-
-
-class LinLayers(nn.ModuleList):
- def __init__(self, n_channels_list: Sequence[int]):
- super(LinLayers, self).__init__([
- nn.Sequential(
- nn.Identity(),
- nn.Conv2d(nc, 1, 1, 1, 0, bias=False)
- ) for nc in n_channels_list
- ])
-
- for param in self.parameters():
- param.requires_grad = False
-
-
-class BaseNet(nn.Module):
- def __init__(self):
- super(BaseNet, self).__init__()
-
- # register buffer
- self.register_buffer(
- 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None])
- self.register_buffer(
- 'std', torch.Tensor([.458, .448, .450])[None, :, None, None])
-
- def set_requires_grad(self, state: bool):
- for param in chain(self.parameters(), self.buffers()):
- param.requires_grad = state
-
- def z_score(self, x: torch.Tensor):
- return (x - self.mean) / self.std
-
- def forward(self, x: torch.Tensor):
- x = self.z_score(x)
-
- output = []
- for i, (_, layer) in enumerate(self.layers._modules.items(), 1):
- x = layer(x)
- if i in self.target_layers:
- output.append(normalize_activation(x))
- if len(output) == len(self.target_layers):
- break
- return output
-
-
-class SqueezeNet(BaseNet):
- def __init__(self):
- super(SqueezeNet, self).__init__()
-
- self.layers = models.squeezenet1_1(True).features
- self.target_layers = [2, 5, 8, 10, 11, 12, 13]
- self.n_channels_list = [64, 128, 256, 384, 384, 512, 512]
-
- self.set_requires_grad(False)
-
-
-class AlexNet(BaseNet):
- def __init__(self):
- super(AlexNet, self).__init__()
-
- self.layers = models.alexnet(True).features
- self.target_layers = [2, 5, 8, 10, 12]
- self.n_channels_list = [64, 192, 384, 256, 256]
-
- self.set_requires_grad(False)
-
-
-class VGG16(BaseNet):
- def __init__(self):
- super(VGG16, self).__init__()
-
- self.layers = models.vgg16(True).features
- self.target_layers = [4, 9, 16, 23, 30]
- self.n_channels_list = [64, 128, 256, 512, 512]
-
- self.set_requires_grad(False)
\ No newline at end of file
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222254.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222254.py
deleted file mode 100644
index fe2dcde2a2493f2ee33d4765b83a4724e98aeab7..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222254.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import os
-os.system("pip install gfpgan")
-
-#os.system("pip freeze")
-#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .")
-import random
-import gradio as gr
-from PIL import Image
-import torch
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg')
-
-
-import cv2
-import glob
-import numpy as np
-from basicsr.utils import imwrite
-from gfpgan import GFPGANer
-
-bg_upsampler = None
-
-
-
-# set up GFPGAN restorer
-restorer = GFPGANer(
- model_path='experiments/pretrained_models/GFPGANv1.3.pth',
- upscale=2,
- arch='clean',
- channel_multiplier=2,
- bg_upsampler=bg_upsampler)
-
-
-def inference(img):
- input_img = cv2.imread(img, cv2.IMREAD_COLOR)
- cropped_faces, restored_faces, restored_img = restorer.enhance(
- input_img, has_aligned=False, only_center_face=False, paste_back=True)
-
- #return Image.fromarray(restored_faces[0][:,:,::-1])
- return Image.fromarray(restored_img[:, :, ::-1])
-
-title = "让美好回忆更清晰"
-
-
-description = "上传老照片,点击Submit,稍等片刻,右侧Output将照片另存为即可。"
-
-article = ""
-
-gr.Interface(
- inference,
- [gr.inputs.Image(type="filepath", label="Input")],
- gr.outputs.Image(type="pil", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=[
- ['lincoln.jpg'],
- ['einstein.png'],
- ['edison.jpg'],
- ['Henry.jpg'],
- ['Frida.jpg']
- ]
- ).launch(enable_queue=True,cache_examples=True,share=True)
-
-
diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621104301.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621104301.py
deleted file mode 100644
index 94065d6b1f7a3bdd3a07a47847c07dbd1d70a745..0000000000000000000000000000000000000000
--- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621104301.py
+++ /dev/null
@@ -1,42 +0,0 @@
-#-*- coding : utf-8-*-
-import base64
-from subprocess import STDOUT
-import streamlit as st
-import pandas as pd
-import camelot as cam # extracting tables from PDFs
-
-st.title("PDF Table Extractor")
-
-input_pdf = st.file_uploader(label = "", type = 'pdf')
-
-background = st.selectbox("表格线条是否隐藏",(False,True))
-extractor_mode = st.selectbox("单页抽取 OR 全文抽取",("单页抽取","全文抽取"))
-
-if input_pdf is not None:
- # byte object into a PDF file
- with open("input.pdf", "wb") as f:
- base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8')
- f.write(base64.b64decode(base64_pdf))
- f.close()
- if extractor_mode == "单页抽取":
- page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1)
- # read the pdf and parse it using stream
- tables = cam.read_pdf("input.pdf", pages=page_number, process_background=background)
- result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter')
- #tables[1].to_excel(result,index=False)
- for i in range(0,len(tables)):
- table = tables[i].df
- sheetname = str(i)
- table.to_excel(result, sheetname,index=False)
-
- with open('result.xlsx','rb') as f:
- st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel")
- if extractor_mode == "全文抽取":
- tables_all= cam.read_pdf("input.pdf", pages="all", process_background=background)
- result_all = pd.ExcelWriter('result_all.xlsx', engine='xlsxwriter')
- for i in range(0,len(tables_all)):
- table = tables_all[i].df
- sheetname = str(i)
- table.to_excel(result_all, sheetname,index=False)
- with open('result_all.xlsx','rb') as f:
- st.download_button('抽取完成,点击下载!', f,file_name='result_all.xlsx',mime="application/vnd.ms-excel")
diff --git a/spaces/betterme/Nice/git_init.sh b/spaces/betterme/Nice/git_init.sh
deleted file mode 100644
index 4298a16845a89be8b2f401219dbb3fd4183cab20..0000000000000000000000000000000000000000
--- a/spaces/betterme/Nice/git_init.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-#!/usr/bin/env bash
-
-#git config --global credential.helper store
-
-git add ./*
-git commit -m "update" # git commit --amend -m 'recommit'
-
-git pull
-git push -f
\ No newline at end of file
diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py b/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py
deleted file mode 100644
index 766e2c2499f0a1866e88424a319b99a9df973fc4..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import html
-import json
-import os
-import urllib.parse
-
-from modules import shared, ui_extra_networks, sd_models
-
-
-class ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):
- def __init__(self):
- super().__init__('Checkpoints')
-
- def refresh(self):
- shared.refresh_checkpoints()
-
- def list_items(self):
- checkpoint: sd_models.CheckpointInfo
- for name, checkpoint in sd_models.checkpoints_list.items():
- path, ext = os.path.splitext(checkpoint.filename)
- previews = [path + ".png", path + ".preview.png"]
-
- preview = None
- for file in previews:
- if os.path.isfile(file):
- preview = self.link_preview(file)
- break
-
- yield {
- "name": checkpoint.name_for_extra,
- "filename": path,
- "preview": preview,
- "search_term": self.search_terms_from_path(checkpoint.filename) + " " + (checkpoint.sha256 or ""),
- "onclick": '"' + html.escape(f"""return selectCheckpoint({json.dumps(name)})""") + '"',
- "local_preview": path + ".png",
- }
-
- def allowed_directories_for_previews(self):
- return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]
-
diff --git a/spaces/binker/interpreter5/functional.py b/spaces/binker/interpreter5/functional.py
deleted file mode 100644
index fb433a9729741d434b1a6d2d4b7651cfa26427f4..0000000000000000000000000000000000000000
--- a/spaces/binker/interpreter5/functional.py
+++ /dev/null
@@ -1,116 +0,0 @@
-from bot_backend import *
-import base64
-import time
-
-
-def chat_completion(bot_backend: BotBackend):
- model_choice = bot_backend.gpt_model_choice
- config = bot_backend.config
- kwargs_for_chat_completion = bot_backend.kwargs_for_chat_completion
-
- assert config['model'][model_choice]['available'], f"{model_choice} is not available for your API key"
-
- response = openai.ChatCompletion.create(**kwargs_for_chat_completion)
- return response
-
-
-def add_function_response_to_bot_history(content_to_display, history, unique_id):
- images, text = [], []
-
- # terminal output
- error_occurred = False
- for mark, out_str in content_to_display:
- if mark in ('stdout', 'execute_result_text', 'display_text'):
- text.append(out_str)
- elif mark in ('execute_result_png', 'execute_result_jpeg', 'display_png', 'display_jpeg'):
- if 'png' in mark:
- images.append(('png', out_str))
- else:
- images.append(('jpg', out_str))
- elif mark == 'error':
- text.append(delete_color_control_char(out_str))
- error_occurred = True
- text = '\n'.join(text).strip('\n')
- if error_occurred:
- history.append([None, f'❌Terminal output:\n```shell\n\n{text}\n```'])
- else:
- history.append([None, f'✔️Terminal output:\n```shell\n{text}\n```'])
-
- # image output
- for filetype, img in images:
- image_bytes = base64.b64decode(img)
- temp_path = f'cache/temp_{unique_id}'
- if not os.path.exists(temp_path):
- os.mkdir(temp_path)
- path = f'{temp_path}/{hash(time.time())}.{filetype}'
- with open(path, 'wb') as f:
- f.write(image_bytes)
- history.append(
- [
- None,
- f''
- ]
- )
-
-
-def parse_json(function_args: str, finished: bool):
- """
- GPT may generate non-standard JSON format string, which contains '\n' in string value, leading to error when using
- `json.loads()`.
- Here we implement a parser to extract code directly from non-standard JSON string.
- :return: code string if successfully parsed otherwise None
- """
- parser_log = {
- 'met_begin_{': False,
- 'begin_"code"': False,
- 'end_"code"': False,
- 'met_:': False,
- 'met_end_}': False,
- 'met_end_code_"': False,
- "code_begin_index": 0,
- "code_end_index": 0
- }
- try:
- for index, char in enumerate(function_args):
- if char == '{':
- parser_log['met_begin_{'] = True
- elif parser_log['met_begin_{'] and char == '"':
- if parser_log['met_:']:
- if finished:
- parser_log['code_begin_index'] = index + 1
- break
- else:
- if index + 1 == len(function_args):
- return ''
- else:
- temp_code_str = function_args[index + 1:]
- if '\n' in temp_code_str:
- return temp_code_str.strip('\n')
- else:
- return json.loads(function_args + '"}')['code']
- elif parser_log['begin_"code"']:
- parser_log['end_"code"'] = True
- else:
- parser_log['begin_"code"'] = True
- elif parser_log['end_"code"'] and char == ':':
- parser_log['met_:'] = True
- else:
- continue
- if finished:
- for index, char in enumerate(function_args[::-1]):
- back_index = -1 - index
- if char == '}':
- parser_log['met_end_}'] = True
- elif parser_log['met_end_}'] and char == '"':
- parser_log['code_end_index'] = back_index - 1
- break
- else:
- continue
- code_str = function_args[parser_log['code_begin_index']: parser_log['code_end_index'] + 1]
- if '\n' in code_str:
- return code_str.strip('\n')
- else:
- return json.loads(function_args)['code']
-
- except Exception as e:
- return None
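-
-
-# Illustrative usage sketch (added for clarity; not part of the original file).
-# It assumes parse_json as defined above and shows why it exists: a raw newline
-# inside the "code" string value makes json.loads() fail, while parse_json still
-# recovers the code string.
-if __name__ == '__main__':
-    malformed = '{"code": "import math\nprint(math.pi)"}'  # literal newline inside the JSON string
-    print(parse_json(malformed, finished=True))
-    # expected output:
-    # import math
-    # print(math.pi)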
diff --git a/spaces/bioriAsaeru/text-to-voice/Download latest software of Kundli software and get horoscope matching rashifal panchang and more.md b/spaces/bioriAsaeru/text-to-voice/Download latest software of Kundli software and get horoscope matching rashifal panchang and more.md
deleted file mode 100644
index bef591721ca1818c102771e6fdf792ed0c4b0591..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Download latest software of Kundli software and get horoscope matching rashifal panchang and more.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
Free Kundli Software is one of the most sought-after features of AstroSage. AstroSage is the number one astrology website, and it has introduced many useful services for the convenience of its visitors. This software, which prepares a person's birth chart (Kundli) from their personal details, is one of the best of its kind. The Kundli software gives a comprehensive report of a person's complete life. It contains a detailed description of the planetary positions at the time of your birth as well as at the present time. Not a single detail that should be mentioned in one's Kundli is left out. A proper chart indicating the position of all the planets at the time a person was born is also provided for better understanding. To get your Kundli for free using this unique Kundli software, fill in your details in the following form:
The significance of the Kundli is known to almost everyone. The Kundli preparedby Kundli software of AstroSage illustrates the events of life with complete knowledgeof astrology. The language used to describe all the events is very easy to understand.The given sections in the prepared Kundli will take you by surprise as they notonly describe your events of life but also make you aware of many kinds of Doshas.It is better to know beforehand that your Kundli consists of any Dosha or not, asthey play a very considerable role in one's life. For example, if any kind of Doshais present in the Kundli of a person then that person is required to perform someremedies to avoid severe conditions in life. There is only one way of coming toknow about Dosha, that is, the birth chart. Kundli not only gives report of presentDosha but also describes that the position of a planet in your Kundli is maleficor positive for any case.
-
Preparing a Kundli is a wise thing to do in numerous ways. Since a Kundli covers all the aspects of life, it becomes easy to predict the upcoming events of life with its help. It is your own choice which area of life you are most interested in; as far as the Kundli is concerned, it will provide you predictions on all the desired areas of life. The best thing about the Kundli software is that it not only gives an analyzed report of your overall life but also tells you about your life from time to time. Through the Kundli software you will also come to know the period in which you are going to go through Shani Sade Sati. Shani Sade Sati is the period of Saturn that affects the life of a native tremendously; its effects can be very favourable or very adverse. Shani Sade Sati is divided into three phases. Your Kundli will give an accurate prediction of how your Shani Sade Sati is going to be, and its duration will also be told.
-
One more major feature of the Kundli software is that it also gives a prediction of one's life analyzed through Lal Kitab. Lal Kitab is one of the most acknowledged mediums of making predictions and is fondly followed. The remedies that could resolve your problems are provided along with it; they are very easy to perform and extremely effective.
-
In short, a Kundli is not only a way of coming to know about your life; it also helps in improving one's quality of life. The Kundli software available on AstroSage can be accessed without any charges. What could be better than a Kundli software that tells everything about life, leaving no stone unturned, for free? So, get ready to explore the facts of your own life.
-
The Kundli software download is easier than ever before and very user friendly. When downloading is just a click away, why wait? Download the Kundli software, get your Janam Kundli, and see what destiny has in store for you. Secure your child's future by getting a detailed Janam Kundli done. Get to know about the compatibility between a prospective bride and groom by downloading our professional Kundli software.
-
-
The free Kundali software download can make your calculations easier. One of the most tedious tasks in astrology is Vedic horoscope calculation and charting. As an extension of the calculation of planetary positions, this portal also provides free horoscope compatibility matching for matrimonial purposes. This process, also known as kundali milan or guna milap, is followed extensively in the marriage system in India.
-
What makes this Kundli software download so much in demand is that it is both simple to use and offers detailed professional analysis: it displays all the relevant information about a chart, namely the positions of planets, divisional chart positions, Nakshatras, Vimshottari dasha, Bhava and so on. Moreover, all of this is absolutely free.
-
Speedy, automated yet accurate, the Kundli software download offers an exhaustive study and expert recommendations based on the planetary positions. Find the exact longitude and latitude of the place of birth by using the place finder and locating the city from the menu.
-
This software was designed and written by P.V.R. Narasimha Rao. He is a software engineer and astrologer hailing from India and living near Boston, US. He has engineering degrees from IIT, Madras/Chennai and Rice University, Houston. He is also a Sanskrit scholar. He authored a textbook, many magazine articles and research articles and teaches astrology near Boston. You can read more about him here.
-
In terms of the range of calculations available, technical depth and breadth, level of customizability of calculations and ease of use, Jagannatha Hora is unsurpassed by any contemporary Vedic astrology software package. If interested, please check out a nearly complete list of the features.
-
Note: There is nothing wrong with the zip files themselves. Many people have successfully downloaded and installed the software. If you are unable to unzip them after downloading, or unable to install, it means that your download did not succeed for some reason. Keep trying until you succeed. We cannot help you with this, and there is no point in sending us an email.
-
We do not distribute or sell the software on CDs. You have to download the software from the internet. You may also try to find someone who has already downloaded it and ask them to make a CD for you.
-
Thank you for using this software. Please use it to help people and to conduct research to enrich our collective understanding of Vedic astrology. It is the author's earnest and sincere hope that your use of this software will result in a lot of souls being helped and also in a renaissance in the knowledge of Vedic astrology!
-
MB Janam Kundali Software is an advanced birth chart calculation software based on Vedic Astrology. It tells you accurately the astronomical location of planets at the time of an individual's birth. This program gives you the natal chart in both North Indian and South Indian Styles.
-
The software will install a desktop icon, and uninstalling the program will leave a folder in the user's Program Files. Overall, the program does what it should, but the interface could be cleaner, and novice astrologers will need additional resources to interpret their results.
-
The latest Kundli software works with Windows Vista, XP and 7. Its new features include export to PDF and JPEG, ratna vichar (stone suggestion), manglik vichar, and mantra/upaay recommendations.
-
It belongs to the religion & spirituality category and is licensed as shareware for the Windows 32-bit and 64-bit platforms; it can be used as a free trial until the trial period ends. The Kundli demo is available to all software users as a free download, with potential restrictions compared with the full version.
-
Kundli Software is one of the few astrology apps that have garnered 5 million downloads in the field of online astrology, i.e. the number of users across the country and abroad has now crossed the 50 lakh mark. This is the biggest success for this app, as the organic downloads of any app are highly valued and reflect its popularity. Get free astrology software, Kundli software and aaj ka rashifal by AstroSage.com in 11 languages.
-
Kundli for Windows is an astrology software. Main features: - Good presentation. - Accurate calculations. - Screen preview. - Storage of horoscopes and modules for future reference. - South/North Indian charting. - Ayanamsa N.C. Lahiri / K.P. / B.V. Raman. - Latitude and longitude databases, time zones database.
-
Kundli is an excellent astrology software for Windows operating systems. Downloading this app on your device will provide you with a good presentation, accurate calculations, storage of horoscopes and modules for the future, a wide range of references, Y2K compatibility, and a lot more.
-
It offers unlimited storage to preserve relevant astrological data and information. Also, personalized branding becomes very easy with Dhruv Astro Software, as your name and address details appear on each and every page of the report generated with the help of this software.
-
It provides you with an elaborate and coloured Kundli containing more than 200 pages. To top it all, this software puts to use the most detailed systems of astrology to make things more convenient for professional practitioners of astrology.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Harry Potter E A Ordem Da Fenix 720p Dublado Em Como Harry enfrentou a ameaa de Voldemort e a tirania de Umbridge.md b/spaces/bioriAsaeru/text-to-voice/Harry Potter E A Ordem Da Fenix 720p Dublado Em Como Harry enfrentou a ameaa de Voldemort e a tirania de Umbridge.md
deleted file mode 100644
index dc671aa2a8dfff6141ce59b41078f18626ffebe2..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Harry Potter E A Ordem Da Fenix 720p Dublado Em Como Harry enfrentou a ameaa de Voldemort e a tirania de Umbridge.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Journey to the West Conquering the Demons 720p A Must-See Movie for Fans of Stephen Chow and Chinese Mythology.md b/spaces/bioriAsaeru/text-to-voice/Journey to the West Conquering the Demons 720p A Must-See Movie for Fans of Stephen Chow and Chinese Mythology.md
deleted file mode 100644
index 68c59453d501c89e57898d9fde64b8bcf0f96d14..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Journey to the West Conquering the Demons 720p A Must-See Movie for Fans of Stephen Chow and Chinese Mythology.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
journey to the west conquering the demons 720p download
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_inference.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_inference.py
deleted file mode 100644
index deb886c0417285ed1d5ad85eb941fa1ac757cdab..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_inference.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import logging
-import numpy as np
-from itertools import count
-import torch
-from caffe2.proto import caffe2_pb2
-from caffe2.python import core
-
-from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format
-from .shared import ScopedWS, get_pb_arg_vali, get_pb_arg_vals, infer_device_type
-
-logger = logging.getLogger(__name__)
-
-
-# ===== ref: mobile-vision predictor's 'Caffe2Wrapper' class ======
-class ProtobufModel(torch.nn.Module):
- """
- Wrapper of a caffe2's protobuf model.
- It works just like nn.Module, but running caffe2 under the hood.
- Input/Output are tuple[tensor] that match the caffe2 net's external_input/output.
- """
-
- _ids = count(0)
-
- def __init__(self, predict_net, init_net):
- logger.info(f"Initializing ProtobufModel for: {predict_net.name} ...")
- super().__init__()
- assert isinstance(predict_net, caffe2_pb2.NetDef)
- assert isinstance(init_net, caffe2_pb2.NetDef)
- # create unique temporary workspace for each instance
- self.ws_name = "__tmp_ProtobufModel_{}__".format(next(self._ids))
- self.net = core.Net(predict_net)
-
- logger.info("Running init_net once to fill the parameters ...")
- with ScopedWS(self.ws_name, is_reset=True, is_cleanup=False) as ws:
- ws.RunNetOnce(init_net)
- uninitialized_external_input = []
- for blob in self.net.Proto().external_input:
- if blob not in ws.Blobs():
- uninitialized_external_input.append(blob)
- ws.CreateBlob(blob)
- ws.CreateNet(self.net)
-
- self._error_msgs = set()
- self._input_blobs = uninitialized_external_input
-
- def _infer_output_devices(self, inputs):
- """
- Returns:
-            list[str]: list of device types for each external output
- """
-
- def _get_device_type(torch_tensor):
- assert torch_tensor.device.type in ["cpu", "cuda"]
- assert torch_tensor.device.index == 0
- return torch_tensor.device.type
-
- predict_net = self.net.Proto()
- input_device_types = {
- (name, 0): _get_device_type(tensor) for name, tensor in zip(self._input_blobs, inputs)
- }
- device_type_map = infer_device_type(
- predict_net, known_status=input_device_types, device_name_style="pytorch"
- )
- ssa, versions = core.get_ssa(predict_net)
- versioned_outputs = [(name, versions[name]) for name in predict_net.external_output]
- output_devices = [device_type_map[outp] for outp in versioned_outputs]
- return output_devices
-
- def forward(self, inputs):
- """
- Args:
- inputs (tuple[torch.Tensor])
-
- Returns:
- tuple[torch.Tensor]
- """
- assert len(inputs) == len(self._input_blobs), (
- f"Length of inputs ({len(inputs)}) "
- f"doesn't match the required input blobs: {self._input_blobs}"
- )
-
- with ScopedWS(self.ws_name, is_reset=False, is_cleanup=False) as ws:
- for b, tensor in zip(self._input_blobs, inputs):
- ws.FeedBlob(b, tensor)
-
- try:
- ws.RunNet(self.net.Proto().name)
- except RuntimeError as e:
- if not str(e) in self._error_msgs:
- self._error_msgs.add(str(e))
- logger.warning("Encountered new RuntimeError: \n{}".format(str(e)))
- logger.warning("Catch the error and use partial results.")
-
- c2_outputs = [ws.FetchBlob(b) for b in self.net.Proto().external_output]
- # Remove outputs of current run, this is necessary in order to
- # prevent fetching the result from previous run if the model fails
- # in the middle.
- for b in self.net.Proto().external_output:
-                # Need to create an uninitialized blob to make the net runnable.
-                # This is "equivalent" to: ws.RemoveBlob(b) then ws.CreateBlob(b),
-                # but there's no such API.
- ws.FeedBlob(b, f"{b}, a C++ native class of type nullptr (uninitialized).")
-
- # Cast output to torch.Tensor on the desired device
- output_devices = (
- self._infer_output_devices(inputs)
- if any(t.device.type != "cpu" for t in inputs)
- else ["cpu" for _ in self.net.Proto().external_output]
- )
-
- outputs = []
- for name, c2_output, device in zip(
- self.net.Proto().external_output, c2_outputs, output_devices
- ):
- if not isinstance(c2_output, np.ndarray):
- raise RuntimeError(
- "Invalid output for blob {}, received: {}".format(name, c2_output)
- )
- outputs.append(torch.tensor(c2_output).to(device=device))
- return tuple(outputs)
-
-
-class ProtobufDetectionModel(torch.nn.Module):
- """
-    A class that works just like a pytorch meta arch in terms of inference, but runs a
-    caffe2 model under the hood.
- """
-
- def __init__(self, predict_net, init_net, *, convert_outputs=None):
- """
- Args:
-            predict_net, init_net (caffe2_pb2.NetDef): caffe2 nets
-            convert_outputs (callable): a function that converts caffe2
- outputs to the same format of the original pytorch model.
- By default, use the one defined in the caffe2 meta_arch.
- """
- super().__init__()
- self.protobuf_model = ProtobufModel(predict_net, init_net)
- self.size_divisibility = get_pb_arg_vali(predict_net, "size_divisibility", 0)
- self.device = get_pb_arg_vals(predict_net, "device", b"cpu").decode("ascii")
-
- if convert_outputs is None:
- meta_arch = get_pb_arg_vals(predict_net, "meta_architecture", b"GeneralizedRCNN")
- meta_arch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[meta_arch.decode("ascii")]
- self._convert_outputs = meta_arch.get_outputs_converter(predict_net, init_net)
- else:
- self._convert_outputs = convert_outputs
-
- def _convert_inputs(self, batched_inputs):
- # currently all models convert inputs in the same way
- return convert_batched_inputs_to_c2_format(
- batched_inputs, self.size_divisibility, self.device
- )
-
- def forward(self, batched_inputs):
- c2_inputs = self._convert_inputs(batched_inputs)
- c2_results = self.protobuf_model(c2_inputs)
- c2_results = dict(zip(self.protobuf_model.net.Proto().external_output, c2_results))
- return self._convert_outputs(batched_inputs, c2_inputs, c2_results)
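For context, a rough usage sketch of the wrapper above: the two arguments are deserialized `NetDef` protobufs, typically produced by a Caffe2 export. The file names, the helper, and the dummy input below are illustrative assumptions, and a caffe2-enabled PyTorch build is required.

```python
import torch
from caffe2.proto import caffe2_pb2


def load_netdef(path: str) -> caffe2_pb2.NetDef:
    """Deserialize a NetDef protobuf from disk (standard protobuf API)."""
    net = caffe2_pb2.NetDef()
    with open(path, "rb") as f:
        net.ParseFromString(f.read())
    return net


# Illustrative file names for an exported predict net and init net.
predict_net = load_netdef("model.pb")
init_net = load_netdef("model_init.pb")

model = ProtobufDetectionModel(predict_net, init_net)  # class defined above
# Dummy input in detectron2's standard batched format.
outputs = model([{"image": torch.zeros(3, 480, 640), "height": 480, "width": 640}])
```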
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/deformable/deform_conv.h b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/deformable/deform_conv.h
deleted file mode 100644
index 965c1bfd47b58f9802d1c3fd69a5962517b2da61..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/deformable/deform_conv.h
+++ /dev/null
@@ -1,377 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-#include
-
-namespace detectron2 {
-
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-int deform_conv_forward_cuda(
- at::Tensor input,
- at::Tensor weight,
- at::Tensor offset,
- at::Tensor output,
- at::Tensor columns,
- at::Tensor ones,
- int kW,
- int kH,
- int dW,
- int dH,
- int padW,
- int padH,
- int dilationW,
- int dilationH,
- int group,
- int deformable_group,
- int im2col_step);
-
-int deform_conv_backward_input_cuda(
- at::Tensor input,
- at::Tensor offset,
- at::Tensor gradOutput,
- at::Tensor gradInput,
- at::Tensor gradOffset,
- at::Tensor weight,
- at::Tensor columns,
- int kW,
- int kH,
- int dW,
- int dH,
- int padW,
- int padH,
- int dilationW,
- int dilationH,
- int group,
- int deformable_group,
- int im2col_step);
-
-int deform_conv_backward_parameters_cuda(
- at::Tensor input,
- at::Tensor offset,
- at::Tensor gradOutput,
- at::Tensor gradWeight, // at::Tensor gradBias,
- at::Tensor columns,
- at::Tensor ones,
- int kW,
- int kH,
- int dW,
- int dH,
- int padW,
- int padH,
- int dilationW,
- int dilationH,
- int group,
- int deformable_group,
- float scale,
- int im2col_step);
-
-void modulated_deform_conv_cuda_forward(
- at::Tensor input,
- at::Tensor weight,
- at::Tensor bias,
- at::Tensor ones,
- at::Tensor offset,
- at::Tensor mask,
- at::Tensor output,
- at::Tensor columns,
- int kernel_h,
- int kernel_w,
- const int stride_h,
- const int stride_w,
- const int pad_h,
- const int pad_w,
- const int dilation_h,
- const int dilation_w,
- const int group,
- const int deformable_group,
- const bool with_bias);
-
-void modulated_deform_conv_cuda_backward(
- at::Tensor input,
- at::Tensor weight,
- at::Tensor bias,
- at::Tensor ones,
- at::Tensor offset,
- at::Tensor mask,
- at::Tensor columns,
- at::Tensor grad_input,
- at::Tensor grad_weight,
- at::Tensor grad_bias,
- at::Tensor grad_offset,
- at::Tensor grad_mask,
- at::Tensor grad_output,
- int kernel_h,
- int kernel_w,
- int stride_h,
- int stride_w,
- int pad_h,
- int pad_w,
- int dilation_h,
- int dilation_w,
- int group,
- int deformable_group,
- const bool with_bias);
-
-#endif
-
-inline int deform_conv_forward(
- at::Tensor input,
- at::Tensor weight,
- at::Tensor offset,
- at::Tensor output,
- at::Tensor columns,
- at::Tensor ones,
- int kW,
- int kH,
- int dW,
- int dH,
- int padW,
- int padH,
- int dilationW,
- int dilationH,
- int group,
- int deformable_group,
- int im2col_step) {
- if (input.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
- TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
- return deform_conv_forward_cuda(
- input,
- weight,
- offset,
- output,
- columns,
- ones,
- kW,
- kH,
- dW,
- dH,
- padW,
- padH,
- dilationW,
- dilationH,
- group,
- deformable_group,
- im2col_step);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- AT_ERROR("This operator is not implemented on CPU");
-}
-
-inline int deform_conv_backward_input(
- at::Tensor input,
- at::Tensor offset,
- at::Tensor gradOutput,
- at::Tensor gradInput,
- at::Tensor gradOffset,
- at::Tensor weight,
- at::Tensor columns,
- int kW,
- int kH,
- int dW,
- int dH,
- int padW,
- int padH,
- int dilationW,
- int dilationH,
- int group,
- int deformable_group,
- int im2col_step) {
- if (gradOutput.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
- TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
- TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
- return deform_conv_backward_input_cuda(
- input,
- offset,
- gradOutput,
- gradInput,
- gradOffset,
- weight,
- columns,
- kW,
- kH,
- dW,
- dH,
- padW,
- padH,
- dilationW,
- dilationH,
- group,
- deformable_group,
- im2col_step);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- AT_ERROR("This operator is not implemented on CPU");
-}
-
-inline int deform_conv_backward_filter(
- at::Tensor input,
- at::Tensor offset,
- at::Tensor gradOutput,
- at::Tensor gradWeight, // at::Tensor gradBias,
- at::Tensor columns,
- at::Tensor ones,
- int kW,
- int kH,
- int dW,
- int dH,
- int padW,
- int padH,
- int dilationW,
- int dilationH,
- int group,
- int deformable_group,
- float scale,
- int im2col_step) {
- if (gradOutput.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
- TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
- return deform_conv_backward_parameters_cuda(
- input,
- offset,
- gradOutput,
- gradWeight,
- columns,
- ones,
- kW,
- kH,
- dW,
- dH,
- padW,
- padH,
- dilationW,
- dilationH,
- group,
- deformable_group,
- scale,
- im2col_step);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- AT_ERROR("This operator is not implemented on CPU");
-}
-
-inline void modulated_deform_conv_forward(
- at::Tensor input,
- at::Tensor weight,
- at::Tensor bias,
- at::Tensor ones,
- at::Tensor offset,
- at::Tensor mask,
- at::Tensor output,
- at::Tensor columns,
- int kernel_h,
- int kernel_w,
- const int stride_h,
- const int stride_w,
- const int pad_h,
- const int pad_w,
- const int dilation_h,
- const int dilation_w,
- const int group,
- const int deformable_group,
- const bool with_bias) {
- if (input.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
- TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!");
- TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
- return modulated_deform_conv_cuda_forward(
- input,
- weight,
- bias,
- ones,
- offset,
- mask,
- output,
- columns,
- kernel_h,
- kernel_w,
- stride_h,
- stride_w,
- pad_h,
- pad_w,
- dilation_h,
- dilation_w,
- group,
- deformable_group,
- with_bias);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- AT_ERROR("This operator is not implemented on CPU");
-}
-
-inline void modulated_deform_conv_backward(
- at::Tensor input,
- at::Tensor weight,
- at::Tensor bias,
- at::Tensor ones,
- at::Tensor offset,
- at::Tensor mask,
- at::Tensor columns,
- at::Tensor grad_input,
- at::Tensor grad_weight,
- at::Tensor grad_bias,
- at::Tensor grad_offset,
- at::Tensor grad_mask,
- at::Tensor grad_output,
- int kernel_h,
- int kernel_w,
- int stride_h,
- int stride_w,
- int pad_h,
- int pad_w,
- int dilation_h,
- int dilation_w,
- int group,
- int deformable_group,
- const bool with_bias) {
- if (grad_output.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
- TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
- TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!");
- TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
- return modulated_deform_conv_cuda_backward(
- input,
- weight,
- bias,
- ones,
- offset,
- mask,
- columns,
- grad_input,
- grad_weight,
- grad_bias,
- grad_offset,
- grad_mask,
- grad_output,
- kernel_h,
- kernel_w,
- stride_h,
- stride_w,
- pad_h,
- pad_w,
- dilation_h,
- dilation_w,
- group,
- deformable_group,
- with_bias);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- AT_ERROR("This operator is not implemented on CPU");
-}
-
-} // namespace detectron2
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/sampling.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/sampling.py
deleted file mode 100644
index a2d0f6648b349c5ea39fd29785b77c961a58fa22..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/sampling.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import torch
-
-from detectron2.layers import nonzero_tuple
-
-__all__ = ["subsample_labels"]
-
-
-def subsample_labels(
- labels: torch.Tensor, num_samples: int, positive_fraction: float, bg_label: int
-):
- """
- Return `num_samples` (or fewer, if not enough found)
- random samples from `labels` which is a mixture of positives & negatives.
- It will try to return as many positives as possible without
- exceeding `positive_fraction * num_samples`, and then try to
- fill the remaining slots with negatives.
-
- Args:
- labels (Tensor): (N, ) label vector with values:
- * -1: ignore
- * bg_label: background ("negative") class
- * otherwise: one or more foreground ("positive") classes
- num_samples (int): The total number of labels with value >= 0 to return.
- Values that are not sampled will be filled with -1 (ignore).
- positive_fraction (float): The number of subsampled labels with values > 0
- is `min(num_positives, int(positive_fraction * num_samples))`. The number
- of negatives sampled is `min(num_negatives, num_samples - num_positives_sampled)`.
-            In other words, if there are not enough positives, the sample is filled with
-            negatives. If there are also not enough negatives, then as many elements as
-            possible are sampled.
- bg_label (int): label index of background ("negative") class.
-
- Returns:
- pos_idx, neg_idx (Tensor):
- 1D vector of indices. The total length of both is `num_samples` or fewer.
- """
- positive = nonzero_tuple((labels != -1) & (labels != bg_label))[0]
- negative = nonzero_tuple(labels == bg_label)[0]
-
- num_pos = int(num_samples * positive_fraction)
- # protect against not enough positive examples
- num_pos = min(positive.numel(), num_pos)
- num_neg = num_samples - num_pos
- # protect against not enough negative examples
- num_neg = min(negative.numel(), num_neg)
-
- # randomly select positive and negative examples
- perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos]
- perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg]
-
- pos_idx = positive[perm1]
- neg_idx = negative[perm2]
- return pos_idx, neg_idx
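A small illustrative call of `subsample_labels`, assuming the function above is importable; the label values are made up.

```python
import torch

# -1 = ignore, 0 = background (bg_label here), >0 = foreground classes.
labels = torch.tensor([-1, 0, 0, 2, 1, 0, 3, -1, 0, 1])

pos_idx, neg_idx = subsample_labels(labels, num_samples=4, positive_fraction=0.5, bg_label=0)
# pos_idx: up to 2 indices drawn from the foreground positions {3, 4, 6, 9};
# neg_idx: the remaining slots filled from the background positions {1, 2, 5, 8}.
```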
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageQt.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageQt.py
deleted file mode 100644
index 9b7245454dfcccb4e822a6634168d405c0e791bb..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageQt.py
+++ /dev/null
@@ -1,216 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# a simple Qt image interface.
-#
-# history:
-# 2006-06-03 fl: created
-# 2006-06-04 fl: inherit from QImage instead of wrapping it
-# 2006-06-05 fl: removed toimage helper; move string support to ImageQt
-# 2013-11-13 fl: add support for Qt5 (aurelien.ballier@cyclonit.com)
-#
-# Copyright (c) 2006 by Secret Labs AB
-# Copyright (c) 2006 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import sys
-from io import BytesIO
-
-from . import Image
-from ._util import is_path
-
-qt_versions = [
- ["6", "PyQt6"],
- ["side6", "PySide6"],
-]
-
-# If a version has already been imported, attempt it first
-qt_versions.sort(key=lambda qt_version: qt_version[1] in sys.modules, reverse=True)
-for qt_version, qt_module in qt_versions:
- try:
- if qt_module == "PyQt6":
- from PyQt6.QtCore import QBuffer, QIODevice
- from PyQt6.QtGui import QImage, QPixmap, qRgba
- elif qt_module == "PySide6":
- from PySide6.QtCore import QBuffer, QIODevice
- from PySide6.QtGui import QImage, QPixmap, qRgba
- except (ImportError, RuntimeError):
- continue
- qt_is_installed = True
- break
-else:
- qt_is_installed = False
- qt_version = None
-
-
-def rgb(r, g, b, a=255):
- """(Internal) Turns an RGB color into a Qt compatible color integer."""
- # use qRgb to pack the colors, and then turn the resulting long
- # into a negative integer with the same bitpattern.
- return qRgba(r, g, b, a) & 0xFFFFFFFF
-
-
-def fromqimage(im):
- """
- :param im: QImage or PIL ImageQt object
- """
- buffer = QBuffer()
- if qt_version == "6":
- try:
- qt_openmode = QIODevice.OpenModeFlag
- except AttributeError:
- qt_openmode = QIODevice.OpenMode
- else:
- qt_openmode = QIODevice
- buffer.open(qt_openmode.ReadWrite)
- # preserve alpha channel with png
- # otherwise ppm is more friendly with Image.open
- if im.hasAlphaChannel():
- im.save(buffer, "png")
- else:
- im.save(buffer, "ppm")
-
- b = BytesIO()
- b.write(buffer.data())
- buffer.close()
- b.seek(0)
-
- return Image.open(b)
-
-
-def fromqpixmap(im):
- return fromqimage(im)
- # buffer = QBuffer()
- # buffer.open(QIODevice.ReadWrite)
- # # im.save(buffer)
- # # What if png doesn't support some image features like animation?
- # im.save(buffer, 'ppm')
- # bytes_io = BytesIO()
- # bytes_io.write(buffer.data())
- # buffer.close()
- # bytes_io.seek(0)
- # return Image.open(bytes_io)
-
-
-def align8to32(bytes, width, mode):
- """
- converts each scanline of data from 8 bit to 32 bit aligned
- """
-
- bits_per_pixel = {"1": 1, "L": 8, "P": 8, "I;16": 16}[mode]
-
- # calculate bytes per line and the extra padding if needed
- bits_per_line = bits_per_pixel * width
- full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8)
- bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0)
-
- extra_padding = -bytes_per_line % 4
-
- # already 32 bit aligned by luck
- if not extra_padding:
- return bytes
-
- new_data = []
- for i in range(len(bytes) // bytes_per_line):
- new_data.append(
- bytes[i * bytes_per_line : (i + 1) * bytes_per_line]
- + b"\x00" * extra_padding
- )
-
- return b"".join(new_data)
-
-
-def _toqclass_helper(im):
- data = None
- colortable = None
- exclusive_fp = False
-
- # handle filename, if given instead of image name
- if hasattr(im, "toUtf8"):
- # FIXME - is this really the best way to do this?
- im = str(im.toUtf8(), "utf-8")
- if is_path(im):
- im = Image.open(im)
- exclusive_fp = True
-
- qt_format = QImage.Format if qt_version == "6" else QImage
- if im.mode == "1":
- format = qt_format.Format_Mono
- elif im.mode == "L":
- format = qt_format.Format_Indexed8
- colortable = []
- for i in range(256):
- colortable.append(rgb(i, i, i))
- elif im.mode == "P":
- format = qt_format.Format_Indexed8
- colortable = []
- palette = im.getpalette()
- for i in range(0, len(palette), 3):
- colortable.append(rgb(*palette[i : i + 3]))
- elif im.mode == "RGB":
- # Populate the 4th channel with 255
- im = im.convert("RGBA")
-
- data = im.tobytes("raw", "BGRA")
- format = qt_format.Format_RGB32
- elif im.mode == "RGBA":
- data = im.tobytes("raw", "BGRA")
- format = qt_format.Format_ARGB32
- elif im.mode == "I;16" and hasattr(qt_format, "Format_Grayscale16"): # Qt 5.13+
- im = im.point(lambda i: i * 256)
-
- format = qt_format.Format_Grayscale16
- else:
- if exclusive_fp:
- im.close()
- msg = f"unsupported image mode {repr(im.mode)}"
- raise ValueError(msg)
-
- size = im.size
- __data = data or align8to32(im.tobytes(), size[0], im.mode)
- if exclusive_fp:
- im.close()
- return {"data": __data, "size": size, "format": format, "colortable": colortable}
-
-
-if qt_is_installed:
-
- class ImageQt(QImage):
- def __init__(self, im):
- """
-            A PIL image wrapper for Qt. This is a subclass of PyQt's QImage
-            class.
-
- :param im: A PIL Image object, or a file name (given either as
- Python string or a PyQt string object).
- """
- im_data = _toqclass_helper(im)
- # must keep a reference, or Qt will crash!
- # All QImage constructors that take data operate on an existing
- # buffer, so this buffer has to hang on for the life of the image.
- # Fixes https://github.com/python-pillow/Pillow/issues/1370
- self.__data = im_data["data"]
- super().__init__(
- self.__data,
- im_data["size"][0],
- im_data["size"][1],
- im_data["format"],
- )
- if im_data["colortable"]:
- self.setColorTable(im_data["colortable"])
-
-
-def toqimage(im):
- return ImageQt(im)
-
-
-def toqpixmap(im):
- # # This doesn't work. For now using a dumb approach.
- # im_data = _toqclass_helper(im)
- # result = QPixmap(im_data["size"][0], im_data["size"][1])
- # result.loadFromData(im_data["data"])
- qimage = toqimage(im)
- return QPixmap.fromImage(qimage)
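A brief usage sketch for the module above, assuming Pillow plus PyQt6 or PySide6 are installed; the file name is illustrative.

```python
from PIL import Image
from PIL.ImageQt import ImageQt, toqpixmap  # requires PyQt6 or PySide6

im = Image.open("example.png")   # illustrative path
qimage = ImageQt(im)             # QImage subclass backed by the PIL pixel data

# QPixmap conversion generally needs a running QApplication/QGuiApplication:
# pixmap = toqpixmap(im)
```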
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco_panoptic_separated.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco_panoptic_separated.py
deleted file mode 100644
index 5ccbc77e64d1c92c99cbd7158d047bab54cb9f3d..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco_panoptic_separated.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from detectron2.config import LazyCall as L
-from detectron2.evaluation import (
- COCOEvaluator,
- COCOPanopticEvaluator,
- DatasetEvaluators,
- SemSegEvaluator,
-)
-
-from .coco import dataloader
-
-dataloader.train.dataset.names = "coco_2017_train_panoptic_separated"
-dataloader.train.dataset.filter_empty = False
-dataloader.test.dataset.names = "coco_2017_val_panoptic_separated"
-
-
-dataloader.evaluator = [
- L(COCOEvaluator)(
- dataset_name="${...test.dataset.names}",
- ),
- L(SemSegEvaluator)(
- dataset_name="${...test.dataset.names}",
- ),
- L(COCOPanopticEvaluator)(
- dataset_name="${...test.dataset.names}",
- ),
-]
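For context, a hedged sketch of how a LazyCall config like the one above is typically materialized with detectron2's LazyConfig utilities; the config path is illustrative.

```python
from detectron2.config import LazyConfig, instantiate

# Illustrative path; LazyConfig.load resolves relative imports such as `from .coco import dataloader`.
cfg = LazyConfig.load("configs/common/data/coco_panoptic_separated.py")

test_loader = instantiate(cfg.dataloader.test)                      # the actual DataLoader
evaluators = [instantiate(ev) for ev in cfg.dataloader.evaluator]   # COCO / SemSeg / Panoptic evaluators
```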
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/transform_data.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/transform_data.py
deleted file mode 100644
index 7cac1bb7663b985165000b2b351d6ff630d2ba3f..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/transform_data.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from typing import BinaryIO, Dict, Union
-import torch
-
-
-def normalized_coords_transform(x0, y0, w, h):
- """
- Coordinates transform that maps top left corner to (-1, -1) and bottom
- right corner to (1, 1). Used for torch.grid_sample to initialize the
- grid
- """
-
- def f(p):
- return (2 * (p[0] - x0) / w - 1, 2 * (p[1] - y0) / h - 1)
-
- return f
-
-
-class DensePoseTransformData(object):
-
- # Horizontal symmetry label transforms used for horizontal flip
- MASK_LABEL_SYMMETRIES = [0, 1, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 14]
- # fmt: off
- POINT_LABEL_SYMMETRIES = [ 0, 1, 2, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15, 18, 17, 20, 19, 22, 21, 24, 23] # noqa
- # fmt: on
-
- def __init__(self, uv_symmetries: Dict[str, torch.Tensor], device: torch.device):
- self.mask_label_symmetries = DensePoseTransformData.MASK_LABEL_SYMMETRIES
- self.point_label_symmetries = DensePoseTransformData.POINT_LABEL_SYMMETRIES
- self.uv_symmetries = uv_symmetries
- self.device = torch.device("cpu")
-
- def to(self, device: torch.device, copy: bool = False) -> "DensePoseTransformData":
- """
- Convert transform data to the specified device
-
- Args:
- device (torch.device): device to convert the data to
- copy (bool): flag that specifies whether to copy or to reference the data
- in case the device is the same
- Return:
- An instance of `DensePoseTransformData` with data stored on the specified device
- """
- if self.device == device and not copy:
- return self
- uv_symmetry_map = {}
- for key in self.uv_symmetries:
- uv_symmetry_map[key] = self.uv_symmetries[key].to(device=device, copy=copy)
- return DensePoseTransformData(uv_symmetry_map, device)
-
- @staticmethod
- def load(io: Union[str, BinaryIO]):
- """
- Args:
- io: (str or binary file-like object): input file to load data from
- Returns:
- An instance of `DensePoseTransformData` with transforms loaded from the file
- """
- import scipy.io
-
- uv_symmetry_map = scipy.io.loadmat(io)
- uv_symmetry_map_torch = {}
- for key in ["U_transforms", "V_transforms"]:
- uv_symmetry_map_torch[key] = []
- map_src = uv_symmetry_map[key]
- map_dst = uv_symmetry_map_torch[key]
- for i in range(map_src.shape[1]):
- map_dst.append(torch.from_numpy(map_src[0, i]).to(dtype=torch.float))
- uv_symmetry_map_torch[key] = torch.stack(map_dst, dim=0)
- transform_data = DensePoseTransformData(uv_symmetry_map_torch, device=torch.device("cpu"))
- return transform_data
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/export_model.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/export_model.py
deleted file mode 100644
index f507dffe56a4121756874186eacdc9be0cbcdee1..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/export_model.py
+++ /dev/null
@@ -1,240 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import os
-from typing import Dict, List, Tuple
-import torch
-from torch import Tensor, nn
-
-import detectron2.data.transforms as T
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import build_detection_test_loader, detection_utils
-from detectron2.evaluation import COCOEvaluator, inference_on_dataset, print_csv_format
-from detectron2.export import (
- STABLE_ONNX_OPSET_VERSION,
- TracingAdapter,
- dump_torchscript_IR,
- scripting_with_instances,
-)
-from detectron2.modeling import GeneralizedRCNN, RetinaNet, build_model
-from detectron2.modeling.postprocessing import detector_postprocess
-from detectron2.projects.point_rend import add_pointrend_config
-from detectron2.structures import Boxes
-from detectron2.utils.env import TORCH_VERSION
-from detectron2.utils.file_io import PathManager
-from detectron2.utils.logger import setup_logger
-
-
-def setup_cfg(args):
- cfg = get_cfg()
- # cuda context is initialized before creating dataloader, so we don't fork anymore
- cfg.DATALOADER.NUM_WORKERS = 0
- add_pointrend_config(cfg)
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- return cfg
-
-
-def export_caffe2_tracing(cfg, torch_model, inputs):
- from detectron2.export import Caffe2Tracer
-
- tracer = Caffe2Tracer(cfg, torch_model, inputs)
- if args.format == "caffe2":
- caffe2_model = tracer.export_caffe2()
- caffe2_model.save_protobuf(args.output)
- # draw the caffe2 graph
- caffe2_model.save_graph(os.path.join(args.output, "model.svg"), inputs=inputs)
- return caffe2_model
- elif args.format == "onnx":
- import onnx
-
- onnx_model = tracer.export_onnx()
- onnx.save(onnx_model, os.path.join(args.output, "model.onnx"))
- elif args.format == "torchscript":
- ts_model = tracer.export_torchscript()
- with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f:
- torch.jit.save(ts_model, f)
- dump_torchscript_IR(ts_model, args.output)
-
-
-# experimental. API not yet final
-def export_scripting(torch_model):
- assert TORCH_VERSION >= (1, 8)
- fields = {
- "proposal_boxes": Boxes,
- "objectness_logits": Tensor,
- "pred_boxes": Boxes,
- "scores": Tensor,
- "pred_classes": Tensor,
- "pred_masks": Tensor,
- "pred_keypoints": torch.Tensor,
- "pred_keypoint_heatmaps": torch.Tensor,
- }
- assert args.format == "torchscript", "Scripting only supports torchscript format."
-
- class ScriptableAdapterBase(nn.Module):
-        # Use this adapter to work around https://github.com/pytorch/pytorch/issues/46944
-        # by returning dicts instead of instances. Otherwise the exported model is not deployable.
- def __init__(self):
- super().__init__()
- self.model = torch_model
- self.eval()
-
- if isinstance(torch_model, GeneralizedRCNN):
-
- class ScriptableAdapter(ScriptableAdapterBase):
- def forward(self, inputs: Tuple[Dict[str, torch.Tensor]]) -> List[Dict[str, Tensor]]:
- instances = self.model.inference(inputs, do_postprocess=False)
- return [i.get_fields() for i in instances]
-
- else:
-
- class ScriptableAdapter(ScriptableAdapterBase):
- def forward(self, inputs: Tuple[Dict[str, torch.Tensor]]) -> List[Dict[str, Tensor]]:
- instances = self.model(inputs)
- return [i.get_fields() for i in instances]
-
- ts_model = scripting_with_instances(ScriptableAdapter(), fields)
- with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f:
- torch.jit.save(ts_model, f)
- dump_torchscript_IR(ts_model, args.output)
- # TODO inference in Python now missing postprocessing glue code
- return None
-
-
-# experimental. API not yet final
-def export_tracing(torch_model, inputs):
- assert TORCH_VERSION >= (1, 8)
- image = inputs[0]["image"]
- inputs = [{"image": image}] # remove other unused keys
-
- if isinstance(torch_model, GeneralizedRCNN):
-
- def inference(model, inputs):
- # use do_postprocess=False so it returns ROI mask
- inst = model.inference(inputs, do_postprocess=False)[0]
- return [{"instances": inst}]
-
- else:
- inference = None # assume that we just call the model directly
-
- traceable_model = TracingAdapter(torch_model, inputs, inference)
-
- if args.format == "torchscript":
- ts_model = torch.jit.trace(traceable_model, (image,))
- with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f:
- torch.jit.save(ts_model, f)
- dump_torchscript_IR(ts_model, args.output)
- elif args.format == "onnx":
- with PathManager.open(os.path.join(args.output, "model.onnx"), "wb") as f:
- torch.onnx.export(traceable_model, (image,), f, opset_version=STABLE_ONNX_OPSET_VERSION)
- logger.info("Inputs schema: " + str(traceable_model.inputs_schema))
- logger.info("Outputs schema: " + str(traceable_model.outputs_schema))
-
- if args.format != "torchscript":
- return None
- if not isinstance(torch_model, (GeneralizedRCNN, RetinaNet)):
- return None
-
- def eval_wrapper(inputs):
- """
- The exported model does not contain the final resize step, which is typically
- unused in deployment but needed for evaluation. We add it manually here.
- """
- input = inputs[0]
- instances = traceable_model.outputs_schema(ts_model(input["image"]))[0]["instances"]
- postprocessed = detector_postprocess(instances, input["height"], input["width"])
- return [{"instances": postprocessed}]
-
- return eval_wrapper
-
-
-def get_sample_inputs(args):
-
- if args.sample_image is None:
- # get a first batch from dataset
- data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
- first_batch = next(iter(data_loader))
- return first_batch
- else:
- # get a sample data
- original_image = detection_utils.read_image(args.sample_image, format=cfg.INPUT.FORMAT)
- # Do same preprocessing as DefaultPredictor
- aug = T.ResizeShortestEdge(
- [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
- )
- height, width = original_image.shape[:2]
- image = aug.get_transform(original_image).apply_image(original_image)
- image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
-
- inputs = {"image": image, "height": height, "width": width}
-
- # Sample ready
- sample_inputs = [inputs]
- return sample_inputs
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Export a model for deployment.")
- parser.add_argument(
- "--format",
- choices=["caffe2", "onnx", "torchscript"],
- help="output format",
- default="torchscript",
- )
- parser.add_argument(
- "--export-method",
- choices=["caffe2_tracing", "tracing", "scripting"],
- help="Method to export models",
- default="tracing",
- )
- parser.add_argument("--config-file", default="", metavar="FILE", help="path to config file")
- parser.add_argument("--sample-image", default=None, type=str, help="sample image for input")
- parser.add_argument("--run-eval", action="store_true")
- parser.add_argument("--output", help="output directory for the converted model")
- parser.add_argument(
- "opts",
- help="Modify config options using the command-line",
- default=None,
- nargs=argparse.REMAINDER,
- )
- args = parser.parse_args()
- logger = setup_logger()
- logger.info("Command line arguments: " + str(args))
- PathManager.mkdirs(args.output)
- # Disable re-specialization on new shapes. Otherwise --run-eval will be slow
- torch._C._jit_set_bailout_depth(1)
-
- cfg = setup_cfg(args)
-
- # create a torch model
- torch_model = build_model(cfg)
- DetectionCheckpointer(torch_model).resume_or_load(cfg.MODEL.WEIGHTS)
- torch_model.eval()
-
- # convert and save model
- if args.export_method == "caffe2_tracing":
- sample_inputs = get_sample_inputs(args)
- exported_model = export_caffe2_tracing(cfg, torch_model, sample_inputs)
- elif args.export_method == "scripting":
- exported_model = export_scripting(torch_model)
- elif args.export_method == "tracing":
- sample_inputs = get_sample_inputs(args)
- exported_model = export_tracing(torch_model, sample_inputs)
-
- # run evaluation with the converted model
- if args.run_eval:
- assert exported_model is not None, (
- "Python inference is not yet implemented for "
- f"export_method={args.export_method}, format={args.format}."
- )
- logger.info("Running evaluation ... this takes a long time if you export to CPU.")
- dataset = cfg.DATASETS.TEST[0]
- data_loader = build_detection_test_loader(cfg, dataset)
- # NOTE: hard-coded evaluator. change to the evaluator for your dataset
- evaluator = COCOEvaluator(dataset, output_dir=args.output)
- metrics = inference_on_dataset(exported_model, data_loader, evaluator)
- print_csv_format(metrics)
- logger.info("Success.")
diff --git a/spaces/chenglu/chenglu-my_awesome_model/README.md b/spaces/chenglu/chenglu-my_awesome_model/README.md
deleted file mode 100644
index 413ae078f4b04759cb5cba58045548fecc460c54..0000000000000000000000000000000000000000
--- a/spaces/chenglu/chenglu-my_awesome_model/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Chenglu-my Awesome Model
-emoji: 👀
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/tone_sandhi.py b/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/tone_sandhi.py
deleted file mode 100644
index 6a6e4c3e64f1a9e8b9da73fc6fbebf8a33e5602d..0000000000000000000000000000000000000000
--- a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/tone_sandhi.py
+++ /dev/null
@@ -1,769 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import List
-from typing import Tuple
-
-import jieba
-from pypinyin import lazy_pinyin
-from pypinyin import Style
-
-
-class ToneSandhi:
- def __init__(self):
- self.must_neural_tone_words = {
- "麻烦",
- "麻利",
- "鸳鸯",
- "高粱",
- "骨头",
- "骆驼",
- "马虎",
- "首饰",
- "馒头",
- "馄饨",
- "风筝",
- "难为",
- "队伍",
- "阔气",
- "闺女",
- "门道",
- "锄头",
- "铺盖",
- "铃铛",
- "铁匠",
- "钥匙",
- "里脊",
- "里头",
- "部分",
- "那么",
- "道士",
- "造化",
- "迷糊",
- "连累",
- "这么",
- "这个",
- "运气",
- "过去",
- "软和",
- "转悠",
- "踏实",
- "跳蚤",
- "跟头",
- "趔趄",
- "财主",
- "豆腐",
- "讲究",
- "记性",
- "记号",
- "认识",
- "规矩",
- "见识",
- "裁缝",
- "补丁",
- "衣裳",
- "衣服",
- "衙门",
- "街坊",
- "行李",
- "行当",
- "蛤蟆",
- "蘑菇",
- "薄荷",
- "葫芦",
- "葡萄",
- "萝卜",
- "荸荠",
- "苗条",
- "苗头",
- "苍蝇",
- "芝麻",
- "舒服",
- "舒坦",
- "舌头",
- "自在",
- "膏药",
- "脾气",
- "脑袋",
- "脊梁",
- "能耐",
- "胳膊",
- "胭脂",
- "胡萝",
- "胡琴",
- "胡同",
- "聪明",
- "耽误",
- "耽搁",
- "耷拉",
- "耳朵",
- "老爷",
- "老实",
- "老婆",
- "老头",
- "老太",
- "翻腾",
- "罗嗦",
- "罐头",
- "编辑",
- "结实",
- "红火",
- "累赘",
- "糨糊",
- "糊涂",
- "精神",
- "粮食",
- "簸箕",
- "篱笆",
- "算计",
- "算盘",
- "答应",
- "笤帚",
- "笑语",
- "笑话",
- "窟窿",
- "窝囊",
- "窗户",
- "稳当",
- "稀罕",
- "称呼",
- "秧歌",
- "秀气",
- "秀才",
- "福气",
- "祖宗",
- "砚台",
- "码头",
- "石榴",
- "石头",
- "石匠",
- "知识",
- "眼睛",
- "眯缝",
- "眨巴",
- "眉毛",
- "相声",
- "盘算",
- "白净",
- "痢疾",
- "痛快",
- "疟疾",
- "疙瘩",
- "疏忽",
- "畜生",
- "生意",
- "甘蔗",
- "琵琶",
- "琢磨",
- "琉璃",
- "玻璃",
- "玫瑰",
- "玄乎",
- "狐狸",
- "状元",
- "特务",
- "牲口",
- "牙碜",
- "牌楼",
- "爽快",
- "爱人",
- "热闹",
- "烧饼",
- "烟筒",
- "烂糊",
- "点心",
- "炊帚",
- "灯笼",
- "火候",
- "漂亮",
- "滑溜",
- "溜达",
- "温和",
- "清楚",
- "消息",
- "浪头",
- "活泼",
- "比方",
- "正经",
- "欺负",
- "模糊",
- "槟榔",
- "棺材",
- "棒槌",
- "棉花",
- "核桃",
- "栅栏",
- "柴火",
- "架势",
- "枕头",
- "枇杷",
- "机灵",
- "本事",
- "木头",
- "木匠",
- "朋友",
- "月饼",
- "月亮",
- "暖和",
- "明白",
- "时候",
- "新鲜",
- "故事",
- "收拾",
- "收成",
- "提防",
- "挖苦",
- "挑剔",
- "指甲",
- "指头",
- "拾掇",
- "拳头",
- "拨弄",
- "招牌",
- "招呼",
- "抬举",
- "护士",
- "折腾",
- "扫帚",
- "打量",
- "打算",
- "打点",
- "打扮",
- "打听",
- "打发",
- "扎实",
- "扁担",
- "戒指",
- "懒得",
- "意识",
- "意思",
- "情形",
- "悟性",
- "怪物",
- "思量",
- "怎么",
- "念头",
- "念叨",
- "快活",
- "忙活",
- "志气",
- "心思",
- "得罪",
- "张罗",
- "弟兄",
- "开通",
- "应酬",
- "庄稼",
- "干事",
- "帮手",
- "帐篷",
- "希罕",
- "师父",
- "师傅",
- "巴结",
- "巴掌",
- "差事",
- "工夫",
- "岁数",
- "屁股",
- "尾巴",
- "少爷",
- "小气",
- "小伙",
- "将就",
- "对头",
- "对付",
- "寡妇",
- "家伙",
- "客气",
- "实在",
- "官司",
- "学问",
- "学生",
- "字号",
- "嫁妆",
- "媳妇",
- "媒人",
- "婆家",
- "娘家",
- "委屈",
- "姑娘",
- "姐夫",
- "妯娌",
- "妥当",
- "妖精",
- "奴才",
- "女婿",
- "头发",
- "太阳",
- "大爷",
- "大方",
- "大意",
- "大夫",
- "多少",
- "多么",
- "外甥",
- "壮实",
- "地道",
- "地方",
- "在乎",
- "困难",
- "嘴巴",
- "嘱咐",
- "嘟囔",
- "嘀咕",
- "喜欢",
- "喇嘛",
- "喇叭",
- "商量",
- "唾沫",
- "哑巴",
- "哈欠",
- "哆嗦",
- "咳嗽",
- "和尚",
- "告诉",
- "告示",
- "含糊",
- "吓唬",
- "后头",
- "名字",
- "名堂",
- "合同",
- "吆喝",
- "叫唤",
- "口袋",
- "厚道",
- "厉害",
- "千斤",
- "包袱",
- "包涵",
- "匀称",
- "勤快",
- "动静",
- "动弹",
- "功夫",
- "力气",
- "前头",
- "刺猬",
- "刺激",
- "别扭",
- "利落",
- "利索",
- "利害",
- "分析",
- "出息",
- "凑合",
- "凉快",
- "冷战",
- "冤枉",
- "冒失",
- "养活",
- "关系",
- "先生",
- "兄弟",
- "便宜",
- "使唤",
- "佩服",
- "作坊",
- "体面",
- "位置",
- "似的",
- "伙计",
- "休息",
- "什么",
- "人家",
- "亲戚",
- "亲家",
- "交情",
- "云彩",
- "事情",
- "买卖",
- "主意",
- "丫头",
- "丧气",
- "两口",
- "东西",
- "东家",
- "世故",
- "不由",
- "不在",
- "下水",
- "下巴",
- "上头",
- "上司",
- "丈夫",
- "丈人",
- "一辈",
- "那个",
- "菩萨",
- "父亲",
- "母亲",
- "咕噜",
- "邋遢",
- "费用",
- "冤家",
- "甜头",
- "介绍",
- "荒唐",
- "大人",
- "泥鳅",
- "幸福",
- "熟悉",
- "计划",
- "扑腾",
- "蜡烛",
- "姥爷",
- "照顾",
- "喉咙",
- "吉他",
- "弄堂",
- "蚂蚱",
- "凤凰",
- "拖沓",
- "寒碜",
- "糟蹋",
- "倒腾",
- "报复",
- "逻辑",
- "盘缠",
- "喽啰",
- "牢骚",
- "咖喱",
- "扫把",
- "惦记",
- }
- self.must_not_neural_tone_words = {
- "男子",
- "女子",
- "分子",
- "原子",
- "量子",
- "莲子",
- "石子",
- "瓜子",
- "电子",
- "人人",
- "虎虎",
- }
- self.punc = ":,;。?!“”‘’':,;.?!"
-
- # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
- # e.g.
- # word: "家里"
- # pos: "s"
- # finals: ['ia1', 'i3']
- def _neural_sandhi(self, word: str, pos: str, finals: List[str]) -> List[str]:
- # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
- for j, item in enumerate(word):
- if (
- j - 1 >= 0
- and item == word[j - 1]
- and pos[0] in {"n", "v", "a"}
- and word not in self.must_not_neural_tone_words
- ):
- finals[j] = finals[j][:-1] + "5"
- ge_idx = word.find("个")
- if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) >= 1 and word[-1] in "的地得":
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 走了, 看着, 去过
- # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
- # finals[-1] = finals[-1][:-1] + "5"
- elif (
- len(word) > 1
- and word[-1] in "们子"
- and pos in {"r", "n"}
- and word not in self.must_not_neural_tone_words
- ):
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 桌上, 地下, 家里
- elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 上来, 下去
- elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
- finals[-1] = finals[-1][:-1] + "5"
-        # "个" used as a measure word
- elif (
- ge_idx >= 1
- and (word[ge_idx - 1].isnumeric() or word[ge_idx - 1] in "几有两半多各整每做是")
- ) or word == "个":
- finals[ge_idx] = finals[ge_idx][:-1] + "5"
- else:
- if (
- word in self.must_neural_tone_words
- or word[-2:] in self.must_neural_tone_words
- ):
- finals[-1] = finals[-1][:-1] + "5"
-
- word_list = self._split_word(word)
- finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]]
- for i, word in enumerate(word_list):
-            # conventional neutral tone words in Chinese
- if (
- word in self.must_neural_tone_words
- or word[-2:] in self.must_neural_tone_words
- ):
- finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
- finals = sum(finals_list, [])
- return finals
-
- def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # e.g. 看不懂
- if len(word) == 3 and word[1] == "不":
- finals[1] = finals[1][:-1] + "5"
- else:
- for i, char in enumerate(word):
- # "不" before tone4 should be bu2, e.g. 不怕
- if char == "不" and i + 1 < len(word) and finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- return finals
-
- def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # "一" in number sequences, e.g. 一零零, 二一零
- if word.find("一") != -1 and all(
- [item.isnumeric() for item in word if item != "一"]
- ):
- return finals
- # "一" between reduplication words should be yi5, e.g. 看一看
- elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
- finals[1] = finals[1][:-1] + "5"
- # when "一" is ordinal word, it should be yi1
- elif word.startswith("第一"):
- finals[1] = finals[1][:-1] + "1"
- else:
- for i, char in enumerate(word):
- if char == "一" and i + 1 < len(word):
- # "一" before tone4 should be yi2, e.g. 一段
- if finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- # "一" before non-tone4 should be yi4, e.g. 一天
- else:
-                        # if "一" is followed by punctuation, it is still read with tone 1
- if word[i + 1] not in self.punc:
- finals[i] = finals[i][:-1] + "4"
- return finals
-
- def _split_word(self, word: str) -> List[str]:
- word_list = jieba.cut_for_search(word)
- word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
- first_subword = word_list[0]
- first_begin_idx = word.find(first_subword)
- if first_begin_idx == 0:
- second_subword = word[len(first_subword) :]
- new_word_list = [first_subword, second_subword]
- else:
- second_subword = word[: -len(first_subword)]
- new_word_list = [second_subword, first_subword]
- return new_word_list
-
- def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
- if len(word) == 2 and self._all_tone_three(finals):
- finals[0] = finals[0][:-1] + "2"
- elif len(word) == 3:
- word_list = self._split_word(word)
- if self._all_tone_three(finals):
- # disyllabic + monosyllabic, e.g. 蒙古/包
- if len(word_list[0]) == 2:
- finals[0] = finals[0][:-1] + "2"
- finals[1] = finals[1][:-1] + "2"
- # monosyllabic + disyllabic, e.g. 纸/老虎
- elif len(word_list[0]) == 1:
- finals[1] = finals[1][:-1] + "2"
- else:
- finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]]
- if len(finals_list) == 2:
- for i, sub in enumerate(finals_list):
- # e.g. 所有/人
- if self._all_tone_three(sub) and len(sub) == 2:
- finals_list[i][0] = finals_list[i][0][:-1] + "2"
- # e.g. 好/喜欢
- elif (
- i == 1
- and not self._all_tone_three(sub)
- and finals_list[i][0][-1] == "3"
- and finals_list[0][-1][-1] == "3"
- ):
- finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
- finals = sum(finals_list, [])
-        # split an idiom into two words whose length is 2
- elif len(word) == 4:
- finals_list = [finals[:2], finals[2:]]
- finals = []
- for sub in finals_list:
- if self._all_tone_three(sub):
- sub[0] = sub[0][:-1] + "2"
- finals += sub
-
- return finals
-
- def _all_tone_three(self, finals: List[str]) -> bool:
- return all(x[-1] == "3" for x in finals)
-
- # merge "不" and the word behind it
-    # if we don't merge, "不" sometimes appears alone according to jieba, which may cause sandhi errors
- def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- last_word = ""
- for word, pos in seg:
- if last_word == "不":
- word = last_word + word
- if word != "不":
- new_seg.append((word, pos))
- last_word = word[:]
- if last_word == "不":
- new_seg.append((last_word, "d"))
- last_word = ""
- return new_seg
-
-    # function 1: merge "一" and the reduplication words on its left and right, e.g. "听","一","听" -> "听一听"
-    # function 2: merge a single "一" and the word behind it
-    # if we don't merge, "一" sometimes appears alone according to jieba, which may cause sandhi errors
- # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if (
- i - 1 >= 0
- and word == "一"
- and i + 1 < len(seg)
- and seg[i - 1][0] == seg[i + 1][0]
- and seg[i - 1][1] == "v"
- ):
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if (
- i - 2 >= 0
- and seg[i - 1][0] == "一"
- and seg[i - 2][0] == word
- and pos == "v"
- ):
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # merge adjacent words when both the first and the second word are all tone three
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]
- ) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if (
- i - 1 >= 0
- and self._all_tone_three(sub_finals_list[i - 1])
- and self._all_tone_three(sub_finals_list[i])
- and not merge_last[i - 1]
- ):
- # if the previous word is a reduplication, do not merge, because reduplications need to go through _neural_sandhi
- if (
- not self._is_reduplication(seg[i - 1][0])
- and len(seg[i - 1][0]) + len(seg[i][0]) <= 3
- ):
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
- # the last char of the first word and the first char of the second word are both tone three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]
- ) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if (
- i - 1 >= 0
- and sub_finals_list[i - 1][-1][-1] == "3"
- and sub_finals_list[i][0][-1] == "3"
- and not merge_last[i - 1]
- ):
- # if the previous word is a reduplication, do not merge, because reduplications need to go through _neural_sandhi
- if (
- not self._is_reduplication(seg[i - 1][0])
- and len(seg[i - 1][0]) + len(seg[i][0]) <= 3
- ):
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i - 1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- try:
- seg = self._merge_yi(seg)
- except Exception:
- print("_merge_yi failed")
- seg = self._merge_reduplication(seg)
- seg = self._merge_continuous_three_tones(seg)
- seg = self._merge_continuous_three_tones_2(seg)
- seg = self._merge_er(seg)
- return seg
-
- def modified_tone(self, word: str, pos: str, finals: List[str]) -> List[str]:
- finals = self._bu_sandhi(word, finals)
- finals = self._yi_sandhi(word, finals)
- finals = self._neural_sandhi(word, pos, finals)
- finals = self._three_sandhi(word, finals)
- return finals
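For orientation, here is a minimal usage sketch of the sandhi pipeline above: segment with jieba.posseg, pre-merge the segments, then rewrite the pypinyin finals word by word. The class name `ToneSandhi`, its direct instantiation, and the example sentence are assumptions for illustration; only `pre_merge_for_modify` and `modified_tone` come from the code above.

```python
# Minimal usage sketch (assumes jieba and pypinyin are installed, and that the
# methods above belong to a class named ToneSandhi, as in similar TTS front ends).
import jieba.posseg as psg
from pypinyin import lazy_pinyin, Style

sandhi = ToneSandhi()  # hypothetical instantiation of the class shown above

text = "我们一起看一看"  # example sentence (assumption)
seg = [(p.word, p.flag) for p in psg.cut(text)]
seg = sandhi.pre_merge_for_modify(seg)  # merge 不 / 一 / reduplications first

for word, pos in seg:
    # finals with tone digits, e.g. ["o3", "men5"]; the neutral tone is "5"
    finals = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
    finals = sandhi.modified_tone(word, pos, finals)  # apply 不 / 一 / neutral / third-tone sandhi
    print(word, finals)
```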
diff --git a/spaces/chronopt-research/ViTExCo/app.py b/spaces/chronopt-research/ViTExCo/app.py
deleted file mode 100644
index 52e4d9fd2fdbac5623040708fa77ba7b6b7b349d..0000000000000000000000000000000000000000
--- a/spaces/chronopt-research/ViTExCo/app.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import numpy as np
-import shutil
-import os
-import argparse
-import torch
-import glob
-from tqdm import tqdm
-from PIL import Image
-from collections import OrderedDict
-from src.models.vit.config import load_config
-import torchvision.transforms as transforms
-import cv2
-from skimage import io
-
-from src.models.CNN.ColorVidNet import GeneralColorVidNet
-from src.models.vit.embed import GeneralEmbedModel
-from src.models.CNN.NonlocalNet import GeneralWarpNet
-from src.models.CNN.FrameColor import frame_colorization
-from src.utils import (
- RGB2Lab,
- ToTensor,
- Normalize,
- uncenter_l,
- tensor_lab2rgb,
- SquaredPadding,
- UnpaddingSquare
-)
-
-import gradio as gr
-
-def load_params(ckpt_file):
- params = torch.load(ckpt_file, map_location=device)
- new_params = []
- for key, value in params.items():
- new_params.append((key, value))
- return OrderedDict(new_params)
-
-def custom_transform(transforms, img):
- for transform in transforms:
- if isinstance(transform, SquaredPadding):
- img, padding = transform(img, return_paddings=True)
- else:
- img = transform(img)
- return img.to(device), padding
-
-def save_frames(predicted_rgb, video_name, frame_name):
- if predicted_rgb is not None:
- predicted_rgb = np.clip(predicted_rgb, 0, 255).astype(np.uint8)
- # frame_path_parts = frame_path.split(os.sep)
- # if os.path.exists(os.path.join(OUTPUT_RESULT_PATH, frame_path_parts[-2])):
- # shutil.rmtree(os.path.join(OUTPUT_RESULT_PATH, frame_path_parts[-2]))
- # os.makedirs(os.path.join(OUTPUT_RESULT_PATH, frame_path_parts[-2]), exist_ok=True)
- predicted_rgb = np.transpose(predicted_rgb, (1,2,0))
- pil_img = Image.fromarray(predicted_rgb)
- pil_img.save(os.path.join(OUTPUT_RESULT_PATH, video_name, frame_name))
-
-def extract_frames_from_video(video_path):
- cap = cv2.VideoCapture(video_path)
- fps = cap.get(cv2.CAP_PROP_FPS)
-
- # remove if exists folder
- output_frames_path = os.path.join(INPUT_VIDEO_FRAMES_PATH, os.path.basename(video_path))
- if os.path.exists(output_frames_path):
- shutil.rmtree(output_frames_path)
-
- # make new folder
- os.makedirs(output_frames_path)
-
- currentframe = 0
- frame_path_list = []
- while True:
-
- # reading from frame
- ret, frame = cap.read()
-
- if ret:
- name = os.path.join(output_frames_path, f'{currentframe:09d}.jpg')
- frame_path_list.append(name)
- cv2.imwrite(name, frame)
- currentframe += 1
- else:
- break
-
- cap.release()
- cv2.destroyAllWindows()
-
- return frame_path_list, fps
-
-def combine_frames_from_folder(frames_list_path, fps = 30):
- frames_list = glob.glob(f'{frames_list_path}/*.jpg')
- frames_list.sort()
-
- sample_shape = cv2.imread(frames_list[0]).shape
-
- output_video_path = os.path.join(frames_list_path, 'output_video.mp4')
- out = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (sample_shape[1], sample_shape[0]))
- for filename in frames_list:
- img = cv2.imread(filename)
- out.write(img)
-
- out.release()
- return output_video_path
-
-
-def upscale_image(I_current_rgb, I_current_ab_predict):
- H, W = I_current_rgb.size
- high_lab_transforms = [
- SquaredPadding(target_size=max(H,W)),
- RGB2Lab(),
- ToTensor(),
- Normalize()
- ]
- # current_frame_pil_rgb = Image.fromarray(np.clip(I_current_rgb.squeeze(0).permute(1,2,0).cpu().numpy() * 255, 0, 255).astype('uint8'))
- high_lab_current, paddings = custom_transform(high_lab_transforms, I_current_rgb)
- high_lab_current = torch.unsqueeze(high_lab_current, dim=0).to(device)
- high_l_current = high_lab_current[:, 0:1, :, :]
- high_ab_current = high_lab_current[:, 1:3, :, :]
- upsampler = torch.nn.Upsample(scale_factor=max(H, W) / 224, mode="bilinear")
- high_ab_predict = upsampler(I_current_ab_predict)
- I_predict_rgb = tensor_lab2rgb(torch.cat((uncenter_l(high_l_current), high_ab_predict), dim=1))
- upadded = UnpaddingSquare()
- I_predict_rgb = upadded(I_predict_rgb, paddings)
- return I_predict_rgb
-
-def colorize_video(video_path, ref_np):
- frames_list, fps = extract_frames_from_video(video_path)
-
- frame_ref = Image.fromarray(ref_np).convert("RGB")
- I_last_lab_predict = None
- IB_lab, IB_paddings = custom_transform(transforms, frame_ref)
- IB_lab = IB_lab.unsqueeze(0).to(device)
- IB_l = IB_lab[:, 0:1, :, :]
- IB_ab = IB_lab[:, 1:3, :, :]
-
- with torch.no_grad():
- I_reference_lab = IB_lab
- I_reference_l = I_reference_lab[:, 0:1, :, :]
- I_reference_ab = I_reference_lab[:, 1:3, :, :]
- I_reference_rgb = tensor_lab2rgb(torch.cat((uncenter_l(I_reference_l), I_reference_ab), dim=1)).to(device)
- features_B = embed_net(I_reference_rgb)
-
- video_path_parts = frames_list[0].split(os.sep)
-
- if os.path.exists(os.path.join(OUTPUT_RESULT_PATH, video_path_parts[-2])):
- shutil.rmtree(os.path.join(OUTPUT_RESULT_PATH, video_path_parts[-2]))
- os.makedirs(os.path.join(OUTPUT_RESULT_PATH, video_path_parts[-2]), exist_ok=True)
-
- for frame_path in tqdm(frames_list):
- curr_frame = Image.open(frame_path).convert("RGB")
- IA_lab, IA_paddings = custom_transform(transforms, curr_frame)
- IA_lab = IA_lab.unsqueeze(0).to(device)
- IA_l = IA_lab[:, 0:1, :, :]
- IA_ab = IA_lab[:, 1:3, :, :]
-
- if I_last_lab_predict is None:
- I_last_lab_predict = torch.zeros_like(IA_lab).to(device)
-
- with torch.no_grad():
- I_current_lab = IA_lab
- I_current_ab_predict, _ = frame_colorization(
- IA_l,
- I_reference_lab,
- I_last_lab_predict,
- features_B,
- embed_net,
- nonlocal_net,
- colornet,
- luminance_noise=0,
- temperature=1e-10,
- joint_training=False
- )
- I_last_lab_predict = torch.cat((IA_l, I_current_ab_predict), dim=1)
-
- # IA_predict_rgb = tensor_lab2rgb(torch.cat((uncenter_l(IA_l), I_current_ab_predict), dim=1))
- IA_predict_rgb = upscale_image(curr_frame, I_current_ab_predict)
- #IA_predict_rgb = torch.nn.functional.upsample_bilinear(IA_predict_rgb, scale_factor=2)
- save_frames(IA_predict_rgb.squeeze(0).cpu().numpy() * 255, video_path_parts[-2], os.path.basename(frame_path))
- return combine_frames_from_folder(os.path.join(OUTPUT_RESULT_PATH, video_path_parts[-2]), fps)
-
-if __name__ == '__main__':
- # Init global variables
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- INPUT_VIDEO_FRAMES_PATH = 'inputs'
- OUTPUT_RESULT_PATH = 'outputs'
- weight_path = 'checkpoints'
-
- embed_net = GeneralEmbedModel(pretrained_model="swin-tiny", device=device).to(device)
- nonlocal_net = GeneralWarpNet(feature_channel=128).to(device)
- colornet = GeneralColorVidNet(7).to(device)
-
- embed_net.eval()
- nonlocal_net.eval()
- colornet.eval()
-
- # Load weights
- # embed_net_params = load_params(os.path.join(weight_path, "embed_net.pth"))
- nonlocal_net_params = load_params(os.path.join(weight_path, "nonlocal_net.pth"))
- colornet_params = load_params(os.path.join(weight_path, "colornet.pth"))
-
- # embed_net.load_state_dict(embed_net_params, strict=True)
- nonlocal_net.load_state_dict(nonlocal_net_params, strict=True)
- colornet.load_state_dict(colornet_params, strict=True)
-
- transforms = [SquaredPadding(target_size=224),
- RGB2Lab(),
- ToTensor(),
- Normalize()]
-
- #examples = [[vid, ref] for vid, ref in zip(sorted(glob.glob('examples/*/*.mp4')), sorted(glob.glob('examples/*/*.jpg')))]
- demo = gr.Interface(colorize_video,
- inputs=[gr.Video(), gr.Image()],
- outputs="playable_video")#,
- #examples=examples,
- #cache_examples=True)
- demo.launch()
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/upload_button.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/upload_button.py
deleted file mode 100644
index fb75d5a3723fa5247ae864114a355b60c9fb870d..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/upload_button.py
+++ /dev/null
@@ -1,211 +0,0 @@
-"""gr.UploadButton() component."""
-
-from __future__ import annotations
-
-import tempfile
-import warnings
-from typing import Any, Callable, Literal
-
-from gradio_client import utils as client_utils
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import FileSerializable
-
-from gradio import utils
-from gradio.components.base import Component, IOComponent, _Keywords
-from gradio.deprecation import warn_deprecation, warn_style_method_deprecation
-from gradio.events import Clickable, Uploadable
-
-set_documentation_group("component")
-
-
-@document()
-class UploadButton(Clickable, Uploadable, IOComponent, FileSerializable):
- """
- Used to create an upload button which, when clicked, allows a user to upload files that satisfy the specified file types, or generic files (if `file_types` is not set).
- Preprocessing: passes the uploaded file as a {file-object} or {List[file-object]} depending on `file_count` (or a {bytes}/{List[bytes]} depending on `type`)
- Postprocessing: expects function to return a {str} path to a file, or {List[str]} consisting of paths to files.
- Examples-format: a {str} path to a local file that populates the component.
- Demos: upload_button
- """
-
- def __init__(
- self,
- label: str = "Upload a File",
- value: str | list[str] | Callable | None = None,
- *,
- variant: Literal["primary", "secondary", "stop"] = "secondary",
- visible: bool = True,
- size: Literal["sm", "lg"] | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- interactive: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- type: Literal["file", "bytes"] = "file",
- file_count: Literal["single", "multiple", "directory"] = "single",
- file_types: list[str] | None = None,
- **kwargs,
- ):
- """
- Parameters:
- label: Text to display on the button. Defaults to "Upload a File".
- value: File or list of files to upload by default.
- variant: 'primary' for main call-to-action, 'secondary' for a more subdued style, 'stop' for a stop button.
- visible: If False, component will be hidden.
- size: Size of the button. Can be "sm" or "lg".
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- interactive: If False, the UploadButton will be in a disabled state.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- type: Type of value to be returned by component. "file" returns a temporary file object with the same base name as the uploaded file, whose full path can be retrieved by file_obj.name, "bytes" returns a bytes object.
- file_count: if "single", allows the user to upload one file. If "multiple", the user uploads multiple files. If "directory", the user uploads all files in the selected directory. The return type will be a list for each file in the case of "multiple" or "directory".
- file_types: List of types of files to be uploaded. "file" allows any file to be uploaded, "image" allows only image files, "audio" allows only audio files, "video" allows only video files, and "text" allows only text files.
- """
- self.type = type
- self.file_count = file_count
- if file_count == "directory" and file_types is not None:
- warnings.warn(
- "The `file_types` parameter is ignored when `file_count` is 'directory'."
- )
- if file_types is not None and not isinstance(file_types, list):
- raise ValueError(
- f"Parameter file_types must be a list. Received {file_types.__class__.__name__}"
- )
- self.size = size
- self.file_types = file_types
- self.label = label
- self.variant = variant
- IOComponent.__init__(
- self,
- label=label,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- scale=scale,
- min_width=min_width,
- interactive=interactive,
- **kwargs,
- )
-
- def get_config(self):
- return {
- "label": self.label,
- "value": self.value,
- "size": self.size,
- "file_count": self.file_count,
- "file_types": self.file_types,
- "scale": self.scale,
- "min_width": self.min_width,
- "variant": self.variant,
- "interactive": self.interactive,
- **Component.get_config(self),
- }
-
- @staticmethod
- def update(
- value: str
- | list[str]
- | Literal[_Keywords.NO_VALUE]
- | None = _Keywords.NO_VALUE,
- size: Literal["sm", "lg"] | None = None,
- variant: Literal["primary", "secondary", "stop"] | None = None,
- interactive: bool | None = None,
- visible: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- ):
- return {
- "variant": variant,
- "interactive": interactive,
- "size": size,
- "visible": visible,
- "value": value,
- "scale": scale,
- "min_width": min_width,
- "__type__": "update",
- }
-
- def preprocess(
- self, x: list[dict[str, Any]] | None
- ) -> (
- bytes
- | tempfile._TemporaryFileWrapper
- | list[bytes | tempfile._TemporaryFileWrapper]
- | None
- ):
- """
- Parameters:
- x: List of JSON objects with filename as 'name' property and base64 data as 'data' property
- Returns:
- File objects in requested format
- """
- if x is None:
- return None
-
- def process_single_file(f) -> bytes | tempfile._TemporaryFileWrapper:
- file_name, data, is_file = (
- f["name"],
- f["data"],
- f.get("is_file", False),
- )
- if self.type == "file":
- if is_file:
- path = self.make_temp_copy_if_needed(file_name)
- else:
- data, _ = client_utils.decode_base64_to_binary(data)
- path = self.file_bytes_to_file(
- data, dir=self.DEFAULT_TEMP_DIR, file_name=file_name
- )
- path = str(utils.abspath(path))
- self.temp_files.add(path)
- file = tempfile.NamedTemporaryFile(
- delete=False, dir=self.DEFAULT_TEMP_DIR
- )
- file.name = path
- file.orig_name = file_name # type: ignore
- return file
- elif self.type == "bytes":
- if is_file:
- with open(file_name, "rb") as file_data:
- return file_data.read()
- return client_utils.decode_base64_to_binary(data)[0]
- else:
- raise ValueError(
- "Unknown type: "
- + str(self.type)
- + ". Please choose from: 'file', 'bytes'."
- )
-
- if self.file_count == "single":
- if isinstance(x, list):
- return process_single_file(x[0])
- else:
- return process_single_file(x)
- else:
- if isinstance(x, list):
- return [process_single_file(f) for f in x]
- else:
- return process_single_file(x)
-
- def style(
- self,
- *,
- full_width: bool | None = None,
- size: Literal["sm", "lg"] | None = None,
- **kwargs,
- ):
- """
- This method is deprecated. Please set these arguments in the constructor instead.
- """
- warn_style_method_deprecation()
- if full_width is not None:
- warn_deprecation(
- "Use `scale` in place of full_width in the constructor. "
- "scale=1 will make the button expand, whereas 0 will not."
- )
- self.scale = 1 if full_width else None
- if size is not None:
- self.size = size
- return self
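To make the preprocessing contract described in the docstring concrete, here is a small, hypothetical Gradio app wired around UploadButton. The surrounding Blocks layout and handler are assumptions, not part of this file, and it presumes a Gradio 3.x install matching this component.

```python
# Hedged sketch of typical UploadButton usage (assumes gradio 3.x is installed).
import gradio as gr

def show_paths(files):
    # with file_count="multiple" and the default type="file", `files` is a list of
    # temporary file wrappers; .name holds the temporary path on disk
    return "\n".join(f.name for f in files)

with gr.Blocks() as demo:
    out = gr.Textbox(label="Uploaded paths")
    btn = gr.UploadButton("Upload images", file_types=["image"], file_count="multiple")
    btn.upload(show_paths, btn, out)  # the Uploadable mixin provides the .upload event

demo.launch()
```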
diff --git a/spaces/cifkao/context-probing/README.md b/spaces/cifkao/context-probing/README.md
deleted file mode 100644
index 91cf7861e877ddf264db20ef666373d69bccf502..0000000000000000000000000000000000000000
--- a/spaces/cifkao/context-probing/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Context Length Probing
-emoji: 🔎
-colorFrom: green
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: mit
-models:
-- distilgpt2
-- gpt2
-- EleutherAI/gpt-neo-125m
-- roneneldan/TinyStories-8M
-- roneneldan/TinyStories-33M
----
diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Free Latest Games For Nokia 6300 The Ultimate Guide to Gaming on Your Phone.md b/spaces/cihyFjudo/fairness-paper-search/Download Free Latest Games For Nokia 6300 The Ultimate Guide to Gaming on Your Phone.md
deleted file mode 100644
index 3f0ce2a5fb1033c99934d7af74f4413fd7447956..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Download Free Latest Games For Nokia 6300 The Ultimate Guide to Gaming on Your Phone.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Found 21706 Free Nokia 6300 Java Games. Download Nokia 6300 Software for free to your mobile phone or tablet.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/English Hate Story 3 Movie Download Bluray Hindi Movies A Revenge Saga with a Twist.md b/spaces/cihyFjudo/fairness-paper-search/English Hate Story 3 Movie Download Bluray Hindi Movies A Revenge Saga with a Twist.md
deleted file mode 100644
index 18478f916ea547b804cf9d5f0c6d50c72d3d7afb..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/English Hate Story 3 Movie Download Bluray Hindi Movies A Revenge Saga with a Twist.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
English Hate Story 3 Movie Download Bluray Hindi Movies
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/FULL Architect 3D Ultimate Plus 2017 19.0.1.1001 License Keys A Complete Guide.md b/spaces/cihyFjudo/fairness-paper-search/FULL Architect 3D Ultimate Plus 2017 19.0.1.1001 License Keys A Complete Guide.md
deleted file mode 100644
index fb76257ca4bb915b19b5fc51def37eae05b1bf4d..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/FULL Architect 3D Ultimate Plus 2017 19.0.1.1001 License Keys A Complete Guide.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
Same here... Quite the mistake. I booted Ophcrack ASAP when I got to my new school this year and cracked the pass within seconds, it's "envision" WTF lol but I also think they should disable LM hash in the first place because there's nothing older than XP SP3, leaving LM hash to be only a security issue
Student for LanSchool Classic is a free app for Android published in the Teaching & Training Tools list of apps, part of Education.
The company that develops Student for LanSchool Classic is Lenovo Software. The latest version released by its developer is 9.1.0.66. This app was rated by 1 user of our site and has an average rating of 0.5.
To install Student for LanSchool Classic on your Android device, just click the green Continue To App button above to start the installation process. The app has been listed on our website since 2021-08-11 and has been downloaded 1134 times. We have already checked that the download link is safe; however, for your own protection we recommend that you scan the downloaded app with your antivirus. Your antivirus may detect Student for LanSchool Classic as malware if the download link to com.lanschool.student is broken.
How to install Student for LanSchool Classic on your Android device:
Click on the Continue To App button on our website. This will redirect you to Google Play.
Once the Student for LanSchool Classic is shown in the Google Play listing of your Android device, you can start its download and installation. Tap on the Install button located below the search bar and to the right of the app icon.
A pop-up window with the permissions required by Student for LanSchool Classic will be shown. Click on Accept to continue the process.
Student for LanSchool Classic will be downloaded onto your device, displaying a progress. Once the download completes, the installation will start and you'll get a notification after the installation is finished.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Manual For Singer 6038C Learn How to Sew with Your Machine.md b/spaces/cihyFjudo/fairness-paper-search/Manual For Singer 6038C Learn How to Sew with Your Machine.md
deleted file mode 100644
index 7e2fb4f99da032bb6d5e196eb38b7d92d49cfe25..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Manual For Singer 6038C Learn How to Sew with Your Machine.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
Need a manual for your Singer 6038 Sewing Machine? Below you can view and download the PDF manual for free. There are also frequently asked questions, a product rating and feedback from users to enable you to optimally use your product. If this is not the manual you want, please contact us.
-
According to the 20U Singer Sewing Machine manual, "How often you will need to clean and lubricate the machine will depend on how often you will use it. When in regular use, the machine should be cleaned periodically to remove lint and fluff which may have accumulated around the working parts."
Unplug the machine. Open the motor compartment. Place a paper towel beneath the pressure foot, covering the arm/table of the machine. Put the sewing machine oil nozzle against the motor shaft (a long arm that goes up and down) and squirt a drop of oil. Squirt a drop of oil against any moving cog, wheel or part. Use the hand wheel to manually move the Singer machine gears, distributing the oil. Wipe excess oil away with paper towels. Plug in machine and test.
-
Unplug the machine. Consult the Singer sewing machine manual for the belt replacement instructions and replace the belt. Belts that are too tight or too loose will slow the machine. Plug in the machine and test. If the machine continues to run slowly, take the machine to a Singer repair shop.
-
You will see a list of service manuals or instruction manuals. If the manual you are requesting is not available and can be substituted with a similar machine manual that offers the same information, a substitute will be provided.
-
Please Note: Your Sewing Machine or Serger Manual will arrive within 7-9 working days, generally sooner, as many ship the same day. On the other hand, a hard-to-find manual will take a minimum of 2 to 3 weeks to arrive, since it takes time for the manufacturer to locate your specific manual.
Also note that some manuals may be photocopies and not originals because some manuals are out-of-print.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Pronest 2012 Full License Crack !!HOT!! 41 31.md b/spaces/cihyFjudo/fairness-paper-search/Pronest 2012 Full License Crack !!HOT!! 41 31.md
deleted file mode 100644
index b416794d42e5ee8ff7861274d5900fee3b135bc1..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Pronest 2012 Full License Crack !!HOT!! 41 31.md
+++ /dev/null
@@ -1,70 +0,0 @@
-## Pronest 2012 Full License Crack 41 31
-
-
-
-
-
- 
-
-
-
-
-
-**Pronest 2012 Full License Crack 41 31 >>>>> [https://venemena.blogspot.com/?download=2txRfK](https://venemena.blogspot.com/?download=2txRfK)**
-
-
-
-
-
-
-
-
-
-
-
- Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Pronest 2012 Full License Crack 41 31":
-
-# How to Get ProNest 2012 Full License Crack 41 31 for Free
-
-
-
-ProNest 2012 is a powerful nesting software that helps you optimize your cutting process and reduce material waste. It supports various cutting technologies, such as plasma, laser, waterjet, oxyfuel, and punch. It also integrates with Hypertherm SureCut™ technology, which provides advanced features like True Hole®, Rapid Part™, and True Bevel™.
-
-
-
-However, ProNest 2012 is not a cheap software. It requires a license key to activate and use all its features. If you are looking for a way to get ProNest 2012 full license crack 41 31 for free, you may be tempted to download it from some shady websites or torrent sites. But beware, these sources may contain viruses, malware, or spyware that can harm your computer or steal your personal information.
-
-
-
-So, how can you get ProNest 2012 full license crack 41 31 for free without risking your security and privacy? The answer is simple: you can't. There is no such thing as a free lunch. If you want to use ProNest 2012 legally and safely, you have to purchase it from the official website or an authorized reseller. This way, you can enjoy the benefits of ProNest 2012 without worrying about any legal or technical issues.
-
-
-
-ProNest 2012 is worth every penny you spend on it. It can help you improve your productivity, quality, and profitability. It can also save you time and money by reducing material waste and cutting costs. It is compatible with most CAD/CAM software and CNC machines. It has a user-friendly interface and a comprehensive online help system.
-
-
-
-If you are still not convinced, you can try ProNest 2012 for free for 14 days. You can download the trial version from the official website and see for yourself how ProNest 2012 can enhance your cutting performance. You can also contact the customer support team if you have any questions or issues.
-
-
-
-Don't fall for the trap of ProNest 2012 full license crack 41 31. It is illegal, unsafe, and unreliable. Instead, invest in ProNest 2012 and get the best nesting software for your cutting needs.
-
-Here are a few more paragraphs for the article:
-
-ProNest 2012 is not only a nesting software, but also a comprehensive CAD/CAM solution that can help you design, edit, and optimize your parts for cutting. You can use the integrated 2D CAD program to create and modify CAD files, or import them from various industry-standard file formats. You can also use the Variable Shape Parts library to generate common parts from templates. ProNest 2012 can automatically correct and smooth CAD files, map CAD layers to processes, and update nests for part revisions.
-
-
-
-ProNest 2012 also offers a work order processing module that can help you manage your cutting jobs more efficiently and effectively. You can create work orders with multiple parts and plates of different grades and gauges, and assign them to different machines and operators. You can also track the status of your work orders, generate reports, and export data to your ERP or MRP system. ProNest 2012 can help you improve your material utilization, reduce your inventory, and increase your on-time delivery.
-
-
-
-ProNest 2012 is compatible with all major brands and models of cutting machines, including plasma, laser, oxyfuel, waterjet, and combination punch. It supports advanced cutting features like beveling, drilling, tapping, marking, and repositioning. It also supports Hypertherm's SureCut™ technologies, which can improve your cut quality and reduce your operating costs. For example, True Hole® technology can produce significantly better hole quality than conventional plasma cutting methods.
-
- dfd1c89656
-
-
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/PsdImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/PsdImagePlugin.py
deleted file mode 100644
index 5a5d60d568c78b1546d0564b38a64fec2e2ca0b1..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/PsdImagePlugin.py
+++ /dev/null
@@ -1,303 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# Adobe PSD 2.5/3.0 file handling
-#
-# History:
-# 1995-09-01 fl Created
-# 1997-01-03 fl Read most PSD images
-# 1997-01-18 fl Fixed P and CMYK support
-# 2001-10-21 fl Added seek/tell support (for layers)
-#
-# Copyright (c) 1997-2001 by Secret Labs AB.
-# Copyright (c) 1995-2001 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import io
-
-from . import Image, ImageFile, ImagePalette
-from ._binary import i8
-from ._binary import i16be as i16
-from ._binary import i32be as i32
-from ._binary import si16be as si16
-
-MODES = {
- # (photoshop mode, bits) -> (pil mode, required channels)
- (0, 1): ("1", 1),
- (0, 8): ("L", 1),
- (1, 8): ("L", 1),
- (2, 8): ("P", 1),
- (3, 8): ("RGB", 3),
- (4, 8): ("CMYK", 4),
- (7, 8): ("L", 1), # FIXME: multilayer
- (8, 8): ("L", 1), # duotone
- (9, 8): ("LAB", 3),
-}
-
-
-# --------------------------------------------------------------------.
-# read PSD images
-
-
-def _accept(prefix):
- return prefix[:4] == b"8BPS"
-
-
-##
-# Image plugin for Photoshop images.
-
-
-class PsdImageFile(ImageFile.ImageFile):
- format = "PSD"
- format_description = "Adobe Photoshop"
- _close_exclusive_fp_after_loading = False
-
- def _open(self):
- read = self.fp.read
-
- #
- # header
-
- s = read(26)
- if not _accept(s) or i16(s, 4) != 1:
- msg = "not a PSD file"
- raise SyntaxError(msg)
-
- psd_bits = i16(s, 22)
- psd_channels = i16(s, 12)
- psd_mode = i16(s, 24)
-
- mode, channels = MODES[(psd_mode, psd_bits)]
-
- if channels > psd_channels:
- msg = "not enough channels"
- raise OSError(msg)
- if mode == "RGB" and psd_channels == 4:
- mode = "RGBA"
- channels = 4
-
- self.mode = mode
- self._size = i32(s, 18), i32(s, 14)
-
- #
- # color mode data
-
- size = i32(read(4))
- if size:
- data = read(size)
- if mode == "P" and size == 768:
- self.palette = ImagePalette.raw("RGB;L", data)
-
- #
- # image resources
-
- self.resources = []
-
- size = i32(read(4))
- if size:
- # load resources
- end = self.fp.tell() + size
- while self.fp.tell() < end:
- read(4) # signature
- id = i16(read(2))
- name = read(i8(read(1)))
- if not (len(name) & 1):
- read(1) # padding
- data = read(i32(read(4)))
- if len(data) & 1:
- read(1) # padding
- self.resources.append((id, name, data))
- if id == 1039: # ICC profile
- self.info["icc_profile"] = data
-
- #
- # layer and mask information
-
- self.layers = []
-
- size = i32(read(4))
- if size:
- end = self.fp.tell() + size
- size = i32(read(4))
- if size:
- _layer_data = io.BytesIO(ImageFile._safe_read(self.fp, size))
- self.layers = _layerinfo(_layer_data, size)
- self.fp.seek(end)
- self.n_frames = len(self.layers)
- self.is_animated = self.n_frames > 1
-
- #
- # image descriptor
-
- self.tile = _maketile(self.fp, mode, (0, 0) + self.size, channels)
-
- # keep the file open
- self._fp = self.fp
- self.frame = 1
- self._min_frame = 1
-
- def seek(self, layer):
- if not self._seek_check(layer):
- return
-
- # seek to given layer (1..max)
- try:
- name, mode, bbox, tile = self.layers[layer - 1]
- self.mode = mode
- self.tile = tile
- self.frame = layer
- self.fp = self._fp
- return name, bbox
- except IndexError as e:
- msg = "no such layer"
- raise EOFError(msg) from e
-
- def tell(self):
- # return layer number (0=image, 1..max=layers)
- return self.frame
-
-
-def _layerinfo(fp, ct_bytes):
- # read layerinfo block
- layers = []
-
- def read(size):
- return ImageFile._safe_read(fp, size)
-
- ct = si16(read(2))
-
- # sanity check
- if ct_bytes < (abs(ct) * 20):
- msg = "Layer block too short for number of layers requested"
- raise SyntaxError(msg)
-
- for _ in range(abs(ct)):
- # bounding box
- y0 = i32(read(4))
- x0 = i32(read(4))
- y1 = i32(read(4))
- x1 = i32(read(4))
-
- # image info
- mode = []
- ct_types = i16(read(2))
- types = list(range(ct_types))
- if len(types) > 4:
- continue
-
- for _ in types:
- type = i16(read(2))
-
- if type == 65535:
- m = "A"
- else:
- m = "RGBA"[type]
-
- mode.append(m)
- read(4) # size
-
- # figure out the image mode
- mode.sort()
- if mode == ["R"]:
- mode = "L"
- elif mode == ["B", "G", "R"]:
- mode = "RGB"
- elif mode == ["A", "B", "G", "R"]:
- mode = "RGBA"
- else:
- mode = None # unknown
-
- # skip over blend flags and extra information
- read(12) # filler
- name = ""
- size = i32(read(4)) # length of the extra data field
- if size:
- data_end = fp.tell() + size
-
- length = i32(read(4))
- if length:
- fp.seek(length - 16, io.SEEK_CUR)
-
- length = i32(read(4))
- if length:
- fp.seek(length, io.SEEK_CUR)
-
- length = i8(read(1))
- if length:
- # Don't know the proper encoding,
- # Latin-1 should be a good guess
- name = read(length).decode("latin-1", "replace")
-
- fp.seek(data_end)
- layers.append((name, mode, (x0, y0, x1, y1)))
-
- # get tiles
- for i, (name, mode, bbox) in enumerate(layers):
- tile = []
- for m in mode:
- t = _maketile(fp, m, bbox, 1)
- if t:
- tile.extend(t)
- layers[i] = name, mode, bbox, tile
-
- return layers
-
-
-def _maketile(file, mode, bbox, channels):
- tile = None
- read = file.read
-
- compression = i16(read(2))
-
- xsize = bbox[2] - bbox[0]
- ysize = bbox[3] - bbox[1]
-
- offset = file.tell()
-
- if compression == 0:
- #
- # raw compression
- tile = []
- for channel in range(channels):
- layer = mode[channel]
- if mode == "CMYK":
- layer += ";I"
- tile.append(("raw", bbox, offset, layer))
- offset = offset + xsize * ysize
-
- elif compression == 1:
- #
- # packbits compression
- i = 0
- tile = []
- bytecount = read(channels * ysize * 2)
- offset = file.tell()
- for channel in range(channels):
- layer = mode[channel]
- if mode == "CMYK":
- layer += ";I"
- tile.append(("packbits", bbox, offset, layer))
- for y in range(ysize):
- offset = offset + i16(bytecount, i)
- i += 2
-
- file.seek(offset)
-
- if offset & 1:
- read(1) # padding
-
- return tile
-
-
-# --------------------------------------------------------------------
-# registry
-
-
-Image.register_open(PsdImageFile.format, PsdImageFile, _accept)
-
-Image.register_extension(PsdImageFile.format, ".psd")
-
-Image.register_mime(PsdImageFile.format, "image/vnd.adobe.photoshop")
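A brief, hedged example of how this plugin is exercised through the ordinary PIL entry points; the file name `layers.psd` is a placeholder, and the layer tuple layout follows `_layerinfo()` above.

```python
# Minimal sketch (assumes Pillow is installed and a local "layers.psd" exists).
from PIL import Image

with Image.open("layers.psd") as im:        # dispatches to PsdImageFile via the 8BPS magic
    print(im.mode, im.size)
    print("layers:", im.n_frames, "animated:", im.is_animated)
    for name, mode, bbox, _tile in im.layers:   # populated by _layerinfo() above
        print(f"  {name!r}: mode={mode} bbox={bbox}")
    if im.n_frames > 1:
        im.seek(2)          # per seek() above, seek(n) selects self.layers[n - 1]
        print("current frame:", im.tell())
```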
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/async_timeout/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/async_timeout/__init__.py
deleted file mode 100644
index 1ffb069fce9b2b9a03515404155a7e5cc439484a..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/async_timeout/__init__.py
+++ /dev/null
@@ -1,239 +0,0 @@
-import asyncio
-import enum
-import sys
-import warnings
-from types import TracebackType
-from typing import Optional, Type
-
-
-if sys.version_info >= (3, 8):
- from typing import final
-else:
- from typing_extensions import final
-
-
-if sys.version_info >= (3, 11):
-
- def _uncancel_task(task: "asyncio.Task[object]") -> None:
- task.uncancel()
-
-else:
-
- def _uncancel_task(task: "asyncio.Task[object]") -> None:
- pass
-
-
-__version__ = "4.0.3"
-
-
-__all__ = ("timeout", "timeout_at", "Timeout")
-
-
-def timeout(delay: Optional[float]) -> "Timeout":
- """timeout context manager.
-
- Useful in cases when you want to apply timeout logic around block
- of code or in cases when asyncio.wait_for is not suitable. For example:
-
- >>> async with timeout(0.001):
- ... async with aiohttp.get('https://github.com') as r:
- ... await r.text()
-
-
- delay - value in seconds or None to disable timeout logic
- """
- loop = asyncio.get_running_loop()
- if delay is not None:
- deadline = loop.time() + delay # type: Optional[float]
- else:
- deadline = None
- return Timeout(deadline, loop)
-
-
-def timeout_at(deadline: Optional[float]) -> "Timeout":
- """Schedule the timeout at absolute time.
-
- deadline argument points on the time in the same clock system
- as loop.time().
-
- Please note: it is not POSIX time but a time with
- undefined starting base, e.g. the time of the system power on.
-
- >>> async with timeout_at(loop.time() + 10):
- ... async with aiohttp.get('https://github.com') as r:
- ... await r.text()
-
-
- """
- loop = asyncio.get_running_loop()
- return Timeout(deadline, loop)
-
-
-class _State(enum.Enum):
- INIT = "INIT"
- ENTER = "ENTER"
- TIMEOUT = "TIMEOUT"
- EXIT = "EXIT"
-
-
-@final
-class Timeout:
- # Internal class, please don't instantiate it directly
- # Use timeout() and timeout_at() public factories instead.
- #
- # Implementation note: `async with timeout()` is preferred
- # over `with timeout()`.
- # While technically the Timeout class implementation
- # doesn't need to be async at all,
- # the `async with` statement explicitly points that
- # the context manager should be used from async function context.
- #
- # This design allows to avoid many silly misusages.
- #
- # TimeoutError is raised immediately when scheduled
- # if the deadline is passed.
- # The purpose is to time out as soon as possible
- # without waiting for the next await expression.
-
- __slots__ = ("_deadline", "_loop", "_state", "_timeout_handler", "_task")
-
- def __init__(
- self, deadline: Optional[float], loop: asyncio.AbstractEventLoop
- ) -> None:
- self._loop = loop
- self._state = _State.INIT
-
- self._task: Optional["asyncio.Task[object]"] = None
- self._timeout_handler = None # type: Optional[asyncio.Handle]
- if deadline is None:
- self._deadline = None # type: Optional[float]
- else:
- self.update(deadline)
-
- def __enter__(self) -> "Timeout":
- warnings.warn(
- "with timeout() is deprecated, use async with timeout() instead",
- DeprecationWarning,
- stacklevel=2,
- )
- self._do_enter()
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> Optional[bool]:
- self._do_exit(exc_type)
- return None
-
- async def __aenter__(self) -> "Timeout":
- self._do_enter()
- return self
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> Optional[bool]:
- self._do_exit(exc_type)
- return None
-
- @property
- def expired(self) -> bool:
- """Is timeout expired during execution?"""
- return self._state == _State.TIMEOUT
-
- @property
- def deadline(self) -> Optional[float]:
- return self._deadline
-
- def reject(self) -> None:
- """Reject scheduled timeout if any."""
- # cancel is maybe better name but
- # task.cancel() raises CancelledError in asyncio world.
- if self._state not in (_State.INIT, _State.ENTER):
- raise RuntimeError(f"invalid state {self._state.value}")
- self._reject()
-
- def _reject(self) -> None:
- self._task = None
- if self._timeout_handler is not None:
- self._timeout_handler.cancel()
- self._timeout_handler = None
-
- def shift(self, delay: float) -> None:
- """Advance timeout on delay seconds.
-
- The delay can be negative.
-
- Raise RuntimeError if shift is called when deadline is not scheduled
- """
- deadline = self._deadline
- if deadline is None:
- raise RuntimeError("cannot shift timeout if deadline is not scheduled")
- self.update(deadline + delay)
-
- def update(self, deadline: float) -> None:
- """Set deadline to absolute value.
-
- deadline argument points on the time in the same clock system
- as loop.time().
-
- If new deadline is in the past the timeout is raised immediately.
-
- Please note: it is not POSIX time but a time with
- undefined starting base, e.g. the time of the system power on.
- """
- if self._state == _State.EXIT:
- raise RuntimeError("cannot reschedule after exit from context manager")
- if self._state == _State.TIMEOUT:
- raise RuntimeError("cannot reschedule expired timeout")
- if self._timeout_handler is not None:
- self._timeout_handler.cancel()
- self._deadline = deadline
- if self._state != _State.INIT:
- self._reschedule()
-
- def _reschedule(self) -> None:
- assert self._state == _State.ENTER
- deadline = self._deadline
- if deadline is None:
- return
-
- now = self._loop.time()
- if self._timeout_handler is not None:
- self._timeout_handler.cancel()
-
- self._task = asyncio.current_task()
- if deadline <= now:
- self._timeout_handler = self._loop.call_soon(self._on_timeout)
- else:
- self._timeout_handler = self._loop.call_at(deadline, self._on_timeout)
-
- def _do_enter(self) -> None:
- if self._state != _State.INIT:
- raise RuntimeError(f"invalid state {self._state.value}")
- self._state = _State.ENTER
- self._reschedule()
-
- def _do_exit(self, exc_type: Optional[Type[BaseException]]) -> None:
- if exc_type is asyncio.CancelledError and self._state == _State.TIMEOUT:
- assert self._task is not None
- _uncancel_task(self._task)
- self._timeout_handler = None
- self._task = None
- raise asyncio.TimeoutError
- # timeout has not expired
- self._state = _State.EXIT
- self._reject()
- return None
-
- def _on_timeout(self) -> None:
- assert self._task is not None
- self._task.cancel()
- self._state = _State.TIMEOUT
- # drop the reference early
- self._timeout_handler = None
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/commontypes.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/commontypes.py
deleted file mode 100644
index 8ec97c756a4b1023fd3963dd39b706f7c0e34373..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/commontypes.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import sys
-from . import model
-from .error import FFIError
-
-
-COMMON_TYPES = {}
-
-try:
- # fetch "bool" and all simple Windows types
- from _cffi_backend import _get_common_types
- _get_common_types(COMMON_TYPES)
-except ImportError:
- pass
-
-COMMON_TYPES['FILE'] = model.unknown_type('FILE', '_IO_FILE')
-COMMON_TYPES['bool'] = '_Bool' # in case we got ImportError above
-
-for _type in model.PrimitiveType.ALL_PRIMITIVE_TYPES:
- if _type.endswith('_t'):
- COMMON_TYPES[_type] = _type
-del _type
-
-_CACHE = {}
-
-def resolve_common_type(parser, commontype):
- try:
- return _CACHE[commontype]
- except KeyError:
- cdecl = COMMON_TYPES.get(commontype, commontype)
- if not isinstance(cdecl, str):
- result, quals = cdecl, 0 # cdecl is already a BaseType
- elif cdecl in model.PrimitiveType.ALL_PRIMITIVE_TYPES:
- result, quals = model.PrimitiveType(cdecl), 0
- elif cdecl == 'set-unicode-needed':
- raise FFIError("The Windows type %r is only available after "
- "you call ffi.set_unicode()" % (commontype,))
- else:
- if commontype == cdecl:
- raise FFIError(
- "Unsupported type: %r. Please look at "
- "http://cffi.readthedocs.io/en/latest/cdef.html#ffi-cdef-limitations "
- "and file an issue if you think this type should really "
- "be supported." % (commontype,))
- result, quals = parser.parse_type_and_quals(cdecl) # recursive
-
- assert isinstance(result, model.BaseTypeByIdentity)
- _CACHE[commontype] = result, quals
- return result, quals
-
-
-# ____________________________________________________________
-# extra types for Windows (most of them are in commontypes.c)
-
-
-def win_common_types():
- return {
- "UNICODE_STRING": model.StructType(
- "_UNICODE_STRING",
- ["Length",
- "MaximumLength",
- "Buffer"],
- [model.PrimitiveType("unsigned short"),
- model.PrimitiveType("unsigned short"),
- model.PointerType(model.PrimitiveType("wchar_t"))],
- [-1, -1, -1]),
- "PUNICODE_STRING": "UNICODE_STRING *",
- "PCUNICODE_STRING": "const UNICODE_STRING *",
-
- "TBYTE": "set-unicode-needed",
- "TCHAR": "set-unicode-needed",
- "LPCTSTR": "set-unicode-needed",
- "PCTSTR": "set-unicode-needed",
- "LPTSTR": "set-unicode-needed",
- "PTSTR": "set-unicode-needed",
- "PTBYTE": "set-unicode-needed",
- "PTCHAR": "set-unicode-needed",
- }
-
-if sys.platform == 'win32':
- COMMON_TYPES.update(win_common_types())
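The practical effect of COMMON_TYPES is that names such as `bool`, `FILE`, and the fixed-width `*_t` primitives can appear directly in `ffi.cdef()` without a typedef. A minimal sketch, assuming the cffi package is installed:

```python
# Minimal sketch (assumes the cffi package is installed).
import cffi

ffi = cffi.FFI()
# "bool" resolves to _Bool and "FILE" to an opaque type via resolve_common_type()
ffi.cdef("""
    bool enabled;
    FILE *log_stream;
    int16_t counter;        /* the *_t primitives are registered above as well */
""")
print(ffi.typeof("FILE *"))   # prints something like <ctype 'FILE *'>
```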
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/_winconsole.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/_winconsole.py
deleted file mode 100644
index 6b20df315b23ecd1e3d0ec32c11c0b5ced577efe..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/_winconsole.py
+++ /dev/null
@@ -1,279 +0,0 @@
-# This module is based on the excellent work by Adam Bartoš who
-# provided a lot of what went into the implementation here in
-# the discussion to issue1602 in the Python bug tracker.
-#
-# There are some general differences in regards to how this works
-# compared to the original patches as we do not need to patch
-# the entire interpreter but just work in our little world of
-# echo and prompt.
-import io
-import sys
-import time
-import typing as t
-from ctypes import byref
-from ctypes import c_char
-from ctypes import c_char_p
-from ctypes import c_int
-from ctypes import c_ssize_t
-from ctypes import c_ulong
-from ctypes import c_void_p
-from ctypes import POINTER
-from ctypes import py_object
-from ctypes import Structure
-from ctypes.wintypes import DWORD
-from ctypes.wintypes import HANDLE
-from ctypes.wintypes import LPCWSTR
-from ctypes.wintypes import LPWSTR
-
-from ._compat import _NonClosingTextIOWrapper
-
-assert sys.platform == "win32"
-import msvcrt # noqa: E402
-from ctypes import windll # noqa: E402
-from ctypes import WINFUNCTYPE # noqa: E402
-
-c_ssize_p = POINTER(c_ssize_t)
-
-kernel32 = windll.kernel32
-GetStdHandle = kernel32.GetStdHandle
-ReadConsoleW = kernel32.ReadConsoleW
-WriteConsoleW = kernel32.WriteConsoleW
-GetConsoleMode = kernel32.GetConsoleMode
-GetLastError = kernel32.GetLastError
-GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32))
-CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))(
- ("CommandLineToArgvW", windll.shell32)
-)
-LocalFree = WINFUNCTYPE(c_void_p, c_void_p)(("LocalFree", windll.kernel32))
-
-STDIN_HANDLE = GetStdHandle(-10)
-STDOUT_HANDLE = GetStdHandle(-11)
-STDERR_HANDLE = GetStdHandle(-12)
-
-PyBUF_SIMPLE = 0
-PyBUF_WRITABLE = 1
-
-ERROR_SUCCESS = 0
-ERROR_NOT_ENOUGH_MEMORY = 8
-ERROR_OPERATION_ABORTED = 995
-
-STDIN_FILENO = 0
-STDOUT_FILENO = 1
-STDERR_FILENO = 2
-
-EOF = b"\x1a"
-MAX_BYTES_WRITTEN = 32767
-
-try:
- from ctypes import pythonapi
-except ImportError:
- # On PyPy we cannot get buffers so our ability to operate here is
- # severely limited.
- get_buffer = None
-else:
-
- class Py_buffer(Structure):
- _fields_ = [
- ("buf", c_void_p),
- ("obj", py_object),
- ("len", c_ssize_t),
- ("itemsize", c_ssize_t),
- ("readonly", c_int),
- ("ndim", c_int),
- ("format", c_char_p),
- ("shape", c_ssize_p),
- ("strides", c_ssize_p),
- ("suboffsets", c_ssize_p),
- ("internal", c_void_p),
- ]
-
- PyObject_GetBuffer = pythonapi.PyObject_GetBuffer
- PyBuffer_Release = pythonapi.PyBuffer_Release
-
- def get_buffer(obj, writable=False):
- buf = Py_buffer()
- flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE
- PyObject_GetBuffer(py_object(obj), byref(buf), flags)
-
- try:
- buffer_type = c_char * buf.len
- return buffer_type.from_address(buf.buf)
- finally:
- PyBuffer_Release(byref(buf))
-
-
-class _WindowsConsoleRawIOBase(io.RawIOBase):
- def __init__(self, handle):
- self.handle = handle
-
- def isatty(self):
- super().isatty()
- return True
-
-
-class _WindowsConsoleReader(_WindowsConsoleRawIOBase):
- def readable(self):
- return True
-
- def readinto(self, b):
- bytes_to_be_read = len(b)
- if not bytes_to_be_read:
- return 0
- elif bytes_to_be_read % 2:
- raise ValueError(
- "cannot read odd number of bytes from UTF-16-LE encoded console"
- )
-
- buffer = get_buffer(b, writable=True)
- code_units_to_be_read = bytes_to_be_read // 2
- code_units_read = c_ulong()
-
- rv = ReadConsoleW(
- HANDLE(self.handle),
- buffer,
- code_units_to_be_read,
- byref(code_units_read),
- None,
- )
- if GetLastError() == ERROR_OPERATION_ABORTED:
- # wait for KeyboardInterrupt
- time.sleep(0.1)
- if not rv:
- raise OSError(f"Windows error: {GetLastError()}")
-
- if buffer[0] == EOF:
- return 0
- return 2 * code_units_read.value
-
-
-class _WindowsConsoleWriter(_WindowsConsoleRawIOBase):
- def writable(self):
- return True
-
- @staticmethod
- def _get_error_message(errno):
- if errno == ERROR_SUCCESS:
- return "ERROR_SUCCESS"
- elif errno == ERROR_NOT_ENOUGH_MEMORY:
- return "ERROR_NOT_ENOUGH_MEMORY"
- return f"Windows error {errno}"
-
- def write(self, b):
- bytes_to_be_written = len(b)
- buf = get_buffer(b)
- code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2
- code_units_written = c_ulong()
-
- WriteConsoleW(
- HANDLE(self.handle),
- buf,
- code_units_to_be_written,
- byref(code_units_written),
- None,
- )
- bytes_written = 2 * code_units_written.value
-
- if bytes_written == 0 and bytes_to_be_written > 0:
- raise OSError(self._get_error_message(GetLastError()))
- return bytes_written
-
-
-class ConsoleStream:
- def __init__(self, text_stream: t.TextIO, byte_stream: t.BinaryIO) -> None:
- self._text_stream = text_stream
- self.buffer = byte_stream
-
- @property
- def name(self) -> str:
- return self.buffer.name
-
- def write(self, x: t.AnyStr) -> int:
- if isinstance(x, str):
- return self._text_stream.write(x)
- try:
- self.flush()
- except Exception:
- pass
- return self.buffer.write(x)
-
- def writelines(self, lines: t.Iterable[t.AnyStr]) -> None:
- for line in lines:
- self.write(line)
-
- def __getattr__(self, name: str) -> t.Any:
- return getattr(self._text_stream, name)
-
- def isatty(self) -> bool:
- return self.buffer.isatty()
-
- def __repr__(self):
- return f""
-
-
-def _get_text_stdin(buffer_stream: t.BinaryIO) -> t.TextIO:
- text_stream = _NonClosingTextIOWrapper(
- io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),
- "utf-16-le",
- "strict",
- line_buffering=True,
- )
- return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
-
-
-def _get_text_stdout(buffer_stream: t.BinaryIO) -> t.TextIO:
- text_stream = _NonClosingTextIOWrapper(
- io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),
- "utf-16-le",
- "strict",
- line_buffering=True,
- )
- return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
-
-
-def _get_text_stderr(buffer_stream: t.BinaryIO) -> t.TextIO:
- text_stream = _NonClosingTextIOWrapper(
- io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),
- "utf-16-le",
- "strict",
- line_buffering=True,
- )
- return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream))
-
-
-_stream_factories: t.Mapping[int, t.Callable[[t.BinaryIO], t.TextIO]] = {
- 0: _get_text_stdin,
- 1: _get_text_stdout,
- 2: _get_text_stderr,
-}
-
-
-def _is_console(f: t.TextIO) -> bool:
- if not hasattr(f, "fileno"):
- return False
-
- try:
- fileno = f.fileno()
- except (OSError, io.UnsupportedOperation):
- return False
-
- handle = msvcrt.get_osfhandle(fileno)
- return bool(GetConsoleMode(handle, byref(DWORD())))
-
-
-def _get_windows_console_stream(
- f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str]
-) -> t.Optional[t.TextIO]:
- if (
- get_buffer is not None
- and encoding in {"utf-16-le", None}
- and errors in {"strict", None}
- and _is_console(f)
- ):
- func = _stream_factories.get(f.fileno())
- if func is not None:
- b = getattr(f, "buffer", None)
-
- if b is None:
- return None
-
- return func(b)
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/timeTools.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/timeTools.py
deleted file mode 100644
index 175ce81563daf3e9a924701dd2c9d4b71084c286..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/timeTools.py
+++ /dev/null
@@ -1,88 +0,0 @@
-"""fontTools.misc.timeTools.py -- tools for working with OpenType timestamps.
-"""
-
-import os
-import time
-from datetime import datetime, timezone
-import calendar
-
-
-epoch_diff = calendar.timegm((1904, 1, 1, 0, 0, 0, 0, 0, 0))
-
-DAYNAMES = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
-MONTHNAMES = [
- None,
- "Jan",
- "Feb",
- "Mar",
- "Apr",
- "May",
- "Jun",
- "Jul",
- "Aug",
- "Sep",
- "Oct",
- "Nov",
- "Dec",
-]
-
-
-def asctime(t=None):
- """
- Convert a tuple or struct_time representing a time as returned by gmtime()
- or localtime() to a 24-character string of the following form:
-
- >>> asctime(time.gmtime(0))
- 'Thu Jan 1 00:00:00 1970'
-
- If t is not provided, the current time as returned by localtime() is used.
- Locale information is not used by asctime().
-
- This is meant to normalise the output of the built-in time.asctime() across
- different platforms and Python versions.
- In Python 3.x, the day of the month is right-justified, whereas on Windows
- Python 2.7 it is padded with zeros.
-
- See https://github.com/fonttools/fonttools/issues/455
- """
- if t is None:
- t = time.localtime()
- s = "%s %s %2s %s" % (
- DAYNAMES[t.tm_wday],
- MONTHNAMES[t.tm_mon],
- t.tm_mday,
- time.strftime("%H:%M:%S %Y", t),
- )
- return s
-
-
-def timestampToString(value):
- return asctime(time.gmtime(max(0, value + epoch_diff)))
-
-
-def timestampFromString(value):
- wkday, mnth = value[:7].split()
- t = datetime.strptime(value[7:], " %d %H:%M:%S %Y")
- t = t.replace(month=MONTHNAMES.index(mnth), tzinfo=timezone.utc)
- wkday_idx = DAYNAMES.index(wkday)
- assert t.weekday() == wkday_idx, '"' + value + '" has inconsistent weekday'
- return int(t.timestamp()) - epoch_diff
-
-
-def timestampNow():
- # https://reproducible-builds.org/specs/source-date-epoch/
- source_date_epoch = os.environ.get("SOURCE_DATE_EPOCH")
- if source_date_epoch is not None:
- return int(source_date_epoch) - epoch_diff
- return int(time.time() - epoch_diff)
-
-
-def timestampSinceEpoch(value):
- return int(value - epoch_diff)
-
-
-if __name__ == "__main__":
- import sys
- import doctest
-
- sys.exit(doctest.testmod().failed)
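fontTools.misc.timeTools (the module removed above) converts between OpenType timestamps and Unix time: OpenType headers count seconds from 1904-01-01 00:00:00 UTC, so each helper simply shifts ordinary Unix time by the negative constant epoch_diff. A minimal usage sketch, assuming the fontTools package is installed (all names come from the module itself):

```python
# Minimal sketch, assuming fontTools is installed; names are taken from the
# fontTools.misc.timeTools module shown in the hunk above.
from fontTools.misc.timeTools import (
    epoch_diff,            # calendar.timegm((1904, 1, 1, ...)): a negative constant
    timestampNow,          # seconds since 1904-01-01 00:00:00 UTC
    timestampToString,     # OpenType timestamp -> 'Thu Jan  1 00:00:00 1970' style string
    timestampFromString,   # inverse of timestampToString
)

ts = timestampNow()                  # OpenType-epoch seconds for "now"
unix_seconds = ts + epoch_diff       # the same instant counted from the Unix epoch (1970-01-01)

s = timestampToString(ts)
assert timestampFromString(s) == ts  # the 24-character string form round-trips exactly

print(ts, unix_seconds, s)
```

Note that timestampNow() honours the SOURCE_DATE_EPOCH environment variable (per the reproducible-builds link in the source), so the value above may be a pinned build timestamp rather than the current wall-clock time.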
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_sei.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_sei.h
deleted file mode 100644
index 4189f5e6f7446b7f0066a28987a759ed4034ceb8..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_sei.h
+++ /dev/null
@@ -1,127 +0,0 @@
-/*
- * HEVC Supplementary Enhancement Information messages
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_HEVC_SEI_H
-#define AVCODEC_HEVC_SEI_H
-
-#include <stdint.h>
-
-#include "libavutil/buffer.h"
-
-#include "get_bits.h"
-#include "hevc.h"
-#include "h2645_sei.h"
-#include "sei.h"
-
-
-typedef enum {
- HEVC_SEI_PIC_STRUCT_FRAME_DOUBLING = 7,
- HEVC_SEI_PIC_STRUCT_FRAME_TRIPLING = 8
-} HEVC_SEI_PicStructType;
-
-typedef struct HEVCSEIPictureHash {
- uint8_t md5[3][16];
- uint8_t is_md5;
-} HEVCSEIPictureHash;
-
-typedef struct HEVCSEIFramePacking {
- int present;
- int arrangement_type;
- int content_interpretation_type;
- int quincunx_subsampling;
- int current_frame_is_frame0_flag;
-} HEVCSEIFramePacking;
-
-typedef struct HEVCSEIPictureTiming {
- int picture_struct;
-} HEVCSEIPictureTiming;
-
-typedef struct HEVCSEIMasteringDisplay {
- int present;
- uint16_t display_primaries[3][2];
- uint16_t white_point[2];
- uint32_t max_luminance;
- uint32_t min_luminance;
-} HEVCSEIMasteringDisplay;
-
-typedef struct HEVCSEIContentLight {
- int present;
- uint16_t max_content_light_level;
- uint16_t max_pic_average_light_level;
-} HEVCSEIContentLight;
-
-typedef struct HEVCSEIAlternativeTransfer {
- int present;
- int preferred_transfer_characteristics;
-} HEVCSEIAlternativeTransfer;
-
-typedef struct HEVCSEITimeCode {
- int present;
- uint8_t num_clock_ts;
- uint8_t clock_timestamp_flag[3];
- uint8_t units_field_based_flag[3];
- uint8_t counting_type[3];
- uint8_t full_timestamp_flag[3];
- uint8_t discontinuity_flag[3];
- uint8_t cnt_dropped_flag[3];
- uint16_t n_frames[3];
- uint8_t seconds_value[3];
- uint8_t minutes_value[3];
- uint8_t hours_value[3];
- uint8_t seconds_flag[3];
- uint8_t minutes_flag[3];
- uint8_t hours_flag[3];
- uint8_t time_offset_length[3];
- int32_t time_offset_value[3];
-} HEVCSEITimeCode;
-
-typedef struct HEVCSEI {
- H2645SEI common;
- HEVCSEIPictureHash picture_hash;
- HEVCSEIPictureTiming picture_timing;
- HEVCSEIMasteringDisplay mastering_display;
- HEVCSEIContentLight content_light;
- int active_seq_parameter_set_id;
- HEVCSEITimeCode timecode;
-} HEVCSEI;
-
-struct HEVCParamSets;
-
-int ff_hevc_decode_nal_sei(GetBitContext *gb, void *logctx, HEVCSEI *s,
- const struct HEVCParamSets *ps, enum HEVCNALUnitType type);
-
-static inline int ff_hevc_sei_ctx_replace(HEVCSEI *dst, const HEVCSEI *src)
-{
- return ff_h2645_sei_ctx_replace(&dst->common, &src->common);
-}
-
-/**
- * Reset SEI values that are stored on the Context.
- * e.g. Caption data that was extracted during NAL
- * parsing.
- *
- * @param sei HEVCSEI.
- */
-static inline void ff_hevc_reset_sei(HEVCSEI *sei)
-{
- ff_h2645_sei_reset(&sei->common);
-}
-
-#endif /* AVCODEC_HEVC_SEI_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.c
deleted file mode 100644
index 0befaaa5306ec5bca79be9c2587efdcbf6abce20..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.c
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * AOM common functions
- */
-
-#include "libavutil/pixdesc.h"
-#include "libaom.h"
-
-void ff_aom_image_copy_16_to_8(AVFrame *pic, struct aom_image *img)
-{
- const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pic->format);
- int i;
-
- for (i = 0; i < desc->nb_components; i++) {
- int w = img->d_w;
- int h = img->d_h;
- int x, y;
-
- if (i) {
- w = (w + img->x_chroma_shift) >> img->x_chroma_shift;
- h = (h + img->y_chroma_shift) >> img->y_chroma_shift;
- }
-
- for (y = 0; y < h; y++) {
- uint16_t *src = (uint16_t *)(img->planes[i] + y * img->stride[i]);
- uint8_t *dst = pic->data[i] + y * pic->linesize[i];
- for (x = 0; x < w; x++)
- *dst++ = *src++;
- }
- }
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexdec.c
deleted file mode 100644
index 47fc5d6a4b2797fa222096e5961df1ae8c62f4d7..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexdec.c
+++ /dev/null
@@ -1,206 +0,0 @@
-/*
- * Copyright (C) 2008 David Conrad
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <speex/speex.h>
-#include <speex/speex_header.h>
-#include <speex/speex_stereo.h>
-#include <speex/speex_callbacks.h>
-
-#include "libavutil/channel_layout.h"
-#include "libavutil/common.h"
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "decode.h"
-
-typedef struct LibSpeexContext {
- SpeexBits bits;
- SpeexStereoState stereo;
- void *dec_state;
- int frame_size;
- int pktsize;
-} LibSpeexContext;
-
-
-static av_cold int libspeex_decode_init(AVCodecContext *avctx)
-{
- LibSpeexContext *s = avctx->priv_data;
- const SpeexMode *mode;
- SpeexHeader *header = NULL;
- int spx_mode, channels = avctx->ch_layout.nb_channels;
-
- if (avctx->extradata && avctx->extradata_size >= 80) {
- header = speex_packet_to_header(avctx->extradata,
- avctx->extradata_size);
- if (!header)
- av_log(avctx, AV_LOG_WARNING, "Invalid Speex header\n");
- }
- if (avctx->codec_tag == MKTAG('S', 'P', 'X', 'N')) {
- int quality;
- if (!avctx->extradata || avctx->extradata && avctx->extradata_size < 47) {
- av_log(avctx, AV_LOG_ERROR, "Missing or invalid extradata.\n");
- return AVERROR_INVALIDDATA;
- }
-
- quality = avctx->extradata[37];
- if (quality > 10) {
- av_log(avctx, AV_LOG_ERROR, "Unsupported quality mode %d.\n", quality);
- return AVERROR_PATCHWELCOME;
- }
-
- s->pktsize = ((const int[]){5,10,15,20,20,28,28,38,38,46,62})[quality];
-
- spx_mode = 0;
- } else if (header) {
- avctx->sample_rate = header->rate;
- channels = header->nb_channels;
- spx_mode = header->mode;
- speex_header_free(header);
- } else {
- switch (avctx->sample_rate) {
- case 8000: spx_mode = 0; break;
- case 16000: spx_mode = 1; break;
- case 32000: spx_mode = 2; break;
- default:
- /* libspeex can handle any mode if initialized as ultra-wideband */
- av_log(avctx, AV_LOG_WARNING, "Invalid sample rate: %d\n"
- "Decoding as 32kHz ultra-wideband\n",
- avctx->sample_rate);
- spx_mode = 2;
- }
- }
-
- mode = speex_lib_get_mode(spx_mode);
- if (!mode) {
- av_log(avctx, AV_LOG_ERROR, "Unknown Speex mode %d", spx_mode);
- return AVERROR_INVALIDDATA;
- }
- s->frame_size = 160 << spx_mode;
- if (!avctx->sample_rate)
- avctx->sample_rate = 8000 << spx_mode;
-
- if (channels < 1 || channels > 2) {
- /* libspeex can handle mono or stereo if initialized as stereo */
- av_log(avctx, AV_LOG_ERROR, "Invalid channel count: %d.\n"
- "Decoding as stereo.\n", channels);
- channels = 2;
- }
- av_channel_layout_uninit(&avctx->ch_layout);
- avctx->ch_layout = channels == 2 ? (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO :
- (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO;
-
- speex_bits_init(&s->bits);
- s->dec_state = speex_decoder_init(mode);
- if (!s->dec_state) {
- av_log(avctx, AV_LOG_ERROR, "Error initializing libspeex decoder.\n");
- return -1;
- }
-
- if (channels == 2) {
- SpeexCallback callback;
- callback.callback_id = SPEEX_INBAND_STEREO;
- callback.func = speex_std_stereo_request_handler;
- callback.data = &s->stereo;
- s->stereo = (SpeexStereoState)SPEEX_STEREO_STATE_INIT;
- speex_decoder_ctl(s->dec_state, SPEEX_SET_HANDLER, &callback);
- }
-
- return 0;
-}
-
-static int libspeex_decode_frame(AVCodecContext *avctx, AVFrame *frame,
- int *got_frame_ptr, AVPacket *avpkt)
-{
- uint8_t *buf = avpkt->data;
- int buf_size = avpkt->size;
- LibSpeexContext *s = avctx->priv_data;
- int16_t *output;
- int ret, consumed = 0;
- avctx->sample_fmt = AV_SAMPLE_FMT_S16;
-
- /* get output buffer */
- frame->nb_samples = s->frame_size;
- if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
- return ret;
- output = (int16_t *)frame->data[0];
-
- /* if there is not enough data left for the smallest possible frame or the
- next 5 bits are a terminator code, reset the libspeex buffer using the
- current packet, otherwise ignore the current packet and keep decoding
- frames from the libspeex buffer. */
- if (speex_bits_remaining(&s->bits) < 5 ||
- speex_bits_peek_unsigned(&s->bits, 5) == 0xF) {
- /* check for flush packet */
- if (!buf || !buf_size) {
- *got_frame_ptr = 0;
- return buf_size;
- }
- if (s->pktsize && buf_size == 62)
- buf_size = s->pktsize;
- /* set new buffer */
- speex_bits_read_from(&s->bits, buf, buf_size);
- consumed = avpkt->size;
- }
-
- /* decode a single frame */
- ret = speex_decode_int(s->dec_state, &s->bits, output);
- if (ret <= -2) {
- av_log(avctx, AV_LOG_ERROR, "Error decoding Speex frame.\n");
- return AVERROR_INVALIDDATA;
- }
- if (avctx->ch_layout.nb_channels == 2)
- speex_decode_stereo_int(output, s->frame_size, &s->stereo);
-
- *got_frame_ptr = 1;
-
- if (!avctx->bit_rate)
- speex_decoder_ctl(s->dec_state, SPEEX_GET_BITRATE, &avctx->bit_rate);
- return consumed;
-}
-
-static av_cold int libspeex_decode_close(AVCodecContext *avctx)
-{
- LibSpeexContext *s = avctx->priv_data;
-
- speex_bits_destroy(&s->bits);
- speex_decoder_destroy(s->dec_state);
-
- return 0;
-}
-
-static av_cold void libspeex_decode_flush(AVCodecContext *avctx)
-{
- LibSpeexContext *s = avctx->priv_data;
- speex_bits_reset(&s->bits);
-}
-
-const FFCodec ff_libspeex_decoder = {
- .p.name = "libspeex",
- CODEC_LONG_NAME("libspeex Speex"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_SPEEX,
- .p.capabilities = AV_CODEC_CAP_SUBFRAMES | AV_CODEC_CAP_DELAY | AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF,
- .p.wrapper_name = "libspeex",
- .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE,
- .priv_data_size = sizeof(LibSpeexContext),
- .init = libspeex_decode_init,
- .close = libspeex_decode_close,
- FF_CODEC_DECODE_CB(libspeex_decode_frame),
- .flush = libspeex_decode_flush,
-};
diff --git a/spaces/congsaPfin/Manga-OCR/logs/BSEB Class 12 Dummy Registration Card 2024 Out Steps to Download and Edit.md b/spaces/congsaPfin/Manga-OCR/logs/BSEB Class 12 Dummy Registration Card 2024 Out Steps to Download and Edit.md
deleted file mode 100644
index 0237af66d242840e6e2f6eff1d3b768566bca27d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/BSEB Class 12 Dummy Registration Card 2024 Out Steps to Download and Edit.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
How to Download Dummy Registration Card for Bihar Board Exams 2024
-
If you are a student who is going to appear for the Bihar Board exams in 2024, you must download and check your dummy registration card before the final registration card is issued. A dummy registration card is a provisional document that contains your personal and academic details for the exams. It helps you to verify and correct any errors in your name, photo, date of birth, nationality, gender, caste, religion, or subjects. In this article, we will tell you how to download dummy registration card for BSEB Matric exam 2024 and BSEB Intermediate exam 2024. We will also tell you how to make corrections in dummy registration card, what are the benefits and risks of dummy registration card, and show you some examples of dummy registration card.
What is a Dummy Registration Card and Why You Need It?
-
A dummy registration card is a provisional document that contains your personal and academic details for the Bihar Board exams 2024. It is issued by the Bihar School Examination Board (BSEB) after you register yourself for the exams through your school. The dummy registration card has information such as:
-
-
Your name
-
Your photo
-
Your date of birth
-
Your nationality
-
Your gender
-
Your caste
-
Your religion
-
Your subjects
-
Your exam center details
-
-
You need to download and check your dummy registration card to verify and correct any errors before the final registration card is issued. The final registration card is the official document that confirms your eligibility for the exams.
How to Download Dummy Registration Card for BSEB Matric Exam 2024?
-
If you are a student of class 10th who is going to appear for the BSEB Matric exam 2024, you can download your dummy registration card by following these simple steps:
Go to the official website of BSEB at biharboardonline.com and click on the link that says "Student Registration Card"
-
Enter your school code, name, father's name, and date of birth and click on login
-
Your BSEB Matric Dummy Registration Card 2024 will be displayed on the screen
-
Download and print the card for future reference
-
-
You should download your dummy registration card as soon as possible and check it carefully for any errors. If you find any errors, you should contact your school principal and apply for corrections through them. The deadline for making corrections is June 26, 2023 for Matric students.
-
-
How to Download Dummy Registration Card for BSEB Intermediate Exam 2024?
-
If you are a student of class 12th who is going to appear for the BSEB Intermediate exam 2024, you can download your dummy registration card by following these simple steps:
Go to the official website of BSEB at biharboardonline.com and click on the link that says "Student Registration Card"
-
Enter your school code, father's name, and date of birth and click on login
-
Your BSEB Intermediate Dummy Registration Card 2024 will be displayed on the screen
-
Download and print the card for future reference
-
-
You should download your dummy registration card as soon as possible and check it carefully for any errors. If you find any errors, you should contact your school principal and apply for corrections through them. The deadline for making corrections is June 23, 2023 for Intermediate students.
-
How to Make Corrections in Dummy Registration Card?
-
It is very important to make corrections in your dummy registration card if you find any errors in your name, photo, date of birth, nationality, gender, caste, religion, or subjects. These errors can affect your eligibility for the exams or result declaration. To make corrections in your dummy registration card, you need to follow these steps:
-
-
Check your dummy registration card carefully for any errors in your personal or academic details
-
If you find any errors, contact your school principal and apply for corrections through them
-
You need to submit a written application along with the proof of the correct details to your school principal
-
Your school principal will forward your application to the BSEB office for verification and correction
-
You will receive a confirmation message from BSEB after the correction is done
-
You can download your corrected dummy registration card from the official website of BSEB
-
-
You should make corrections in your dummy registration card within the deadline specified by BSEB. The deadline for making corrections is June 26, 2023 for Matric students and June 23, 2023 for Intermediate students.
-
Benefits of Dummy Registration Card
-
Dummy registration card is a useful document that helps you to avoid any mistakes in your final registration card that can affect your eligibility for the exams. Some of the benefits of dummy registration card are:
-
-
Dummy registration card helps you to verify and correct any errors in your personal or academic details before the final registration card is issued
-
Dummy registration card also helps you to prepare for the exams by knowing your subjects and exam center details
-
Dummy registration card acts as a proof of your registration for the exams and can be used in case of any discrepancy or dispute
-
Dummy registration card also helps you to get admission in colleges or universities after passing the exams
-
-
Risks of Dummy Registration Card
-
Dummy registration card is a provisional document that needs to be checked and corrected before the final registration card is issued. If you do not download or check your dummy registration card, you may face some risks such as:
-
-
If you do not download or check your dummy registration card, you may miss the opportunity to correct any errors in your final registration card
-
If you do not make corrections in your dummy registration card within the deadline, you may face problems during the exams or result declaration
-
If you do not have a valid dummy registration card, you may not be able to appear for the exams or get your result
-
If you have any mismatch or discrepancy in your dummy registration card and final registration card, you may face legal action or cancellation of your candidature
-
-
Therefore, it is very important to download and check your dummy registration card and make corrections if needed before the final registration card is issued.
-
Examples of Dummy Registration Card
-
Here are two examples of what a dummy registration card looks like for Matric and Intermediate students. You can see details such as name, photo, date of birth, nationality, gender, caste, religion, subjects, and exam center on each card.
-
[Sample image: Matric Dummy Registration Card]
-
[Sample image: Intermediate Dummy Registration Card]
-
Conclusion
-
In this article, we have explained how to download dummy registration card for Bihar Board exams 2024. We have also told you why you need to download and check your dummy registration card, how to make corrections in dummy registration card, what are the benefits and risks of dummy registration card, and shown you some examples of dummy registration card. We hope this article has helped you to understand the importance and process of downloading and checking your dummy registration card for BSEB Matric exam 2024 and BSEB Intermediate exam 2024. If you have any queries or doubts, you can ask us in the comments section below. We wish you all the best for your exams!
-
FAQs
-
Here are some frequently asked questions and answers about dummy registration card for Bihar Board exams 2024.
-
-
What is the official website to download dummy registration card for Bihar Board exams 2024?
-
The official website to download dummy registration card for Bihar Board exams 2024 is biharboardonline.com. You can visit the website and click on the link that says "Student Registration Card" to download your dummy registration card.
-
What is the deadline to make corrections in dummy registration card for Bihar Board exams 2024?
-
The deadline to make corrections in dummy registration card for Bihar Board exams 2024 is June 26, 2023 for Matric students and June 23, 2023 for Intermediate students. You need to contact your school principal and apply for corrections through them before the deadline.
-
What are the documents required to make corrections in dummy registration card for Bihar Board exams 2024?
-
You need to submit a written application along with the proof of the correct details to your school principal to make corrections in dummy registration card for Bihar Board exams 2024. The proof can be your Aadhaar card, birth certificate, caste certificate, school leaving certificate, or any other relevant document.
-
What are the consequences of not downloading or checking dummy registration card for Bihar Board exams 2024?
-
If you do not download or check your dummy registration card for Bihar Board exams 2024, you may face some consequences such as missing the opportunity to correct any errors in your final registration card, facing problems during the exams or result declaration, or having a mismatch or discrepancy in your dummy registration card and final registration card.
-
How can I contact BSEB if I have any issue with my dummy registration card for Bihar Board exams 2024?
-
You can contact BSEB if you have any issue with your dummy registration card for Bihar Board exams 2024 by calling their helpline number at +91-612-2232074 or +91-612-2232257. You can also email them at bsebsehelpdesk@gmail.com or visit their office at Sinha Library Road, Patna - 800017.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Easy Drawing for Kids PDFs and Have Fun Learning to Draw.md b/spaces/congsaPfin/Manga-OCR/logs/Download Easy Drawing for Kids PDFs and Have Fun Learning to Draw.md
deleted file mode 100644
index 52a64d1ba841b6dd611f4bf43e75d7c7a664ca25..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Easy Drawing for Kids PDFs and Have Fun Learning to Draw.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
Easy Drawing for Kids: How to Download Free PDFs
-
Do you want to help your kids develop their creativity, imagination, and fine motor skills? Do you want to find some fun and easy drawing activities that they can enjoy at home or on the go? Do you want to access hundreds of free PDFs of easy drawing for kids that you can print or view on any device? If you answered yes to any of these questions, then this article is for you.
-
Introduction
-
What is easy drawing for kids?
-
Easy drawing for kids is a type of art activity that involves simple and step-by-step instructions on how to draw various objects, animals, characters, and scenes. Easy drawing for kids is suitable for children of all ages and skill levels, as it helps them learn the basic shapes, proportions, colors, and techniques of drawing. Easy drawing for kids can also boost their confidence, self-expression, and concentration.
Easy drawing for kids has many benefits for your child's development and well-being. Some of the benefits are:
-
-
It stimulates their brain and enhances their cognitive abilities, such as memory, problem-solving, and spatial awareness.
-
It improves their hand-eye coordination, fine motor skills, and dexterity.
-
It fosters their creativity, imagination, and originality.
-
It helps them express their emotions, feelings, and thoughts.
-
It relaxes them and reduces their stress and anxiety.
-
It makes them happy and proud of their achievements.
-
-
How to find and download free PDFs of easy drawing for kids?
-
One of the best ways to access easy drawing for kids activities is to download free PDFs from the internet. PDFs are portable document format files that can be viewed, printed, or shared on any device. PDFs are also easy to store, organize, and access on your computer or mobile device. To find and download free PDFs of easy drawing for kids, you need two things: a reliable source of PDFs and a good PDF reader app. In the next section, we will show you some of the best websites and apps for easy drawing for kids PDFs.
-
Main Body
-
Best websites for easy drawing for kids PDFs
-
There are many websites that offer free PDFs of easy drawing for kids activities. However, not all of them are safe, reliable, or high-quality. To help you find the best ones, we have selected three websites that we think are worth checking out. Here they are:
-
Art for Kids Hub
-
Art for Kids Hub is a popular YouTube channel that features hundreds of videos on how to draw various things in a fun and easy way. The channel is hosted by Rob, a father of four kids who loves doing art together with them. On their website, you can find free PDFs of each video lesson that you can download or print. The PDFs include a list of supplies, a grid guide, and step-by-step instructions with pictures. You can also browse the PDFs by category, such as animals, cartoons, holidays, seasons, etc.
-
-
Easy Drawings for Kids
-
Easy Drawings for Kids is another YouTube channel that teaches kids how to draw cute and simple things. The channel has over 400 videos on topics such as food, fruits, flowers , vehicles, etc. On their website, you can find free PDFs of each video lesson that you can download or print. The PDFs include a grid guide and step-by-step instructions with pictures. You can also search the PDFs by keyword or browse them by category.
-
Art Projects for Kids
-
Art Projects for Kids is a website created by Kathy Barbro, an art teacher who shares her ideas and resources for kids' art projects. On her website, you can find over 1000 free PDFs of easy drawing for kids activities that you can download or print. The PDFs include a list of supplies, a grid guide, and step-by-step instructions with pictures. You can also filter the PDFs by grade level, subject, theme, medium, etc.
-
Best apps for easy drawing for kids PDFs
-
Once you have downloaded some PDFs of easy drawing for kids activities, you need a good app to view, edit, or share them on your device. There are many apps that can handle PDF files, but not all of them are user-friendly, secure, or feature-rich. To help you find the best ones, we have selected three apps that we think are worth trying out. Here they are:
-
Adobe Acrobat Reader
-
Adobe Acrobat Reader is one of the most popular and trusted apps for viewing, annotating, and signing PDF files. It is available for free on Windows, Mac, iOS, Android, and web browsers. With this app, you can easily open and view any PDF file on your device. You can also add comments, highlights, stamps, or drawings to your PDFs. You can also fill out forms, sign documents, or scan paper documents with your camera. You can also share your PDFs with others via email, cloud services, or social media.
-
PDF Reader by Kdan Mobile
-
PDF Reader by Kdan Mobile is another great app for managing and editing PDF files. It is available for free on Windows, Mac, iOS, Android, and web browsers. With this app, you can easily open and view any PDF file on your device. You can also annotate, highlight, underline, or strikeout text on your PDFs. You can also add shapes, stamps, signatures, or drawings to your PDFs. You can also merge, split, rotate, or reorder pages on your PDFs. You can also convert your PDFs to other formats such as Word, Excel , PowerPoint, or image files. You can also share your PDFs with others via email, cloud services, or QR code.
-
Xodo PDF Reader & Editor
-
Xodo PDF Reader & Editor is a powerful and versatile app for working with PDF files. It is available for free on Windows, Mac, iOS, Android, and web browsers. With this app, you can easily open and view any PDF file on your device. You can also annotate, highlight, bookmark, or search text on your PDFs. You can also add signatures, stamps, shapes, or drawings to your PDFs. You can also create, edit, or fill out forms on your PDFs. You can also collaborate with others on your PDFs in real-time via chat or voice call.
-
Conclusion
-
Summary of the main points
-
In this article, we have shown you how to find and download free PDFs of easy drawing for kids activities. We have also recommended some of the best websites and apps for easy drawing for kids PDFs. Easy drawing for kids is a fun and beneficial art activity that can help your kids develop their skills and express their creativity. By downloading free PDFs of easy drawing for kids activities, you can provide your kids with endless hours of entertainment and learning.
-
Call to action
-
Now that you know how to download free PDFs of easy drawing for kids activities, why not give it a try? You can start by visiting one of the websites we mentioned above and downloading some PDFs that interest you and your kids. Then, you can open them with one of the apps we suggested and enjoy drawing together with your kids. You will be amazed by how much fun and satisfaction you and your kids will get from easy drawing for kids.
-
If you liked this article, please share it with your friends and family who might also be interested in easy drawing for kids. Also, feel free to leave a comment below and let us know what you think about easy drawing for kids and the websites and apps we recommended. We would love to hear from you!
-
FAQs
-
Here are some of the frequently asked questions about easy drawing for kids:
-
-
Q: How do I print the PDFs of easy drawing for kids?
-
A: To print the PDFs of easy drawing for kids, you need a printer that is connected to your device. Then, you can open the PDF file with one of the apps we mentioned above and select the print option. You can also adjust the settings such as paper size, orientation, quality, etc. before printing.
-
Q: How do I save the PDFs of easy drawing for kids on my device?
-
A: To save the PDFs of easy drawing for kids on your device, you need to download them from one of the websites we mentioned above. Then, you can choose where to save them on your device's storage or cloud service. You can also rename them or create folders to organize them.
-
Q: How do I edit the PDFs of easy drawing for kids?
-
A: To edit the PDFs of easy drawing for kids, you need one of the apps we mentioned above that allows editing features. Then, you can open the PDF file with the app and use the tools to add annotations, comments, highlights, stamps, signatures, shapes, drawings, etc. You can also edit the text or images on the PDF file if they are editable.
-
Q: How do I share the PDFs of easy drawing for kids with others?
-
A: To share the PDFs of easy drawing for kids with others, you need one of the apps we mentioned above that allows sharing features. Then, you can open the PDF file with the app and select the share option. You can then choose how to share it with others via email, cloud service, social media, QR code, etc.
-
Q: How do I find more PDFs of easy drawing for kids?
-
A: To find more PDFs of easy drawing for kids, you can visit more websites that offer free PDFs of easy drawing for kids activities. You can also search online using keywords such as "easy drawing for kids pdf", "how to draw pdf", "drawing tutorials pdf", etc.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Rebaixados Elite Brasil and Join the Demoted Car Community.md b/spaces/congsaPfin/Manga-OCR/logs/Download Rebaixados Elite Brasil and Join the Demoted Car Community.md
deleted file mode 100644
index 02caf2deda3feb0ab8c90fa2087a6b5dd3b4768b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Rebaixados Elite Brasil and Join the Demoted Car Community.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
Rebaixados Elite Brasil: A Fun and Customizable Car Game
-
If you are a fan of car games, you might want to check out Rebaixados Elite Brasil, a Brazil-inspired demoted car game where you can customize your car and character. In this game, you can lower your car to the floor, change the color of the xenon, turn up the bass of the music, and drive around a realistic Brazilian city. You can also choose from various accessories for your car and character, such as wheels, speakers, shirts, glasses, caps, and shoes. Rebaixados Elite Brasil is a game that has the most modification options on your car and character.
-
What is Rebaixados Elite Brasil?
-
A Brazil-inspired demoted car game
-
Rebaixados Elite Brasil (REB) is a game that simulates the culture of demoted cars in Brazil. Demoted cars are cars that have been lowered to the ground, usually with modified suspensions, wheels, tires, exhausts, and sound systems. These cars are often driven by young people who enjoy music, speed, and style. REB is a game that lets you experience this culture virtually.
REB is a game that has the most modification options on your car and character. You can customize your car's color, wheels, glass, xenon, neon, speakers, LED, trunk, hood, doors, windows, wipers, and more. You can also customize your character's shirt, glasses, cap, shorts, shoes, and more. You can create your own unique style and personality in this game.
-
A game with realistic physics and graphics
-
REB is a game that has realistic physics and graphics. The cars have detailed models and interiors that you can view in 360 degrees. The cars also have interactive elements such as opening doors, hood, trunk, windows, turning on wipers, etc. The cars behave according to the laws of physics such as gravity, inertia, friction, etc. The game also has day and night mode and filters for the camera that enhance the visual effects.
-
How to download and play Rebaixados Elite Brasil?
-
Download from Google Play or App Store
-
REB is available for both Android and iOS devices. You can download it from Google Play or App Store for free. The game has over 50 million downloads on Google Play and over 1 million downloads on App Store. The game requires iOS 11.0 or later or Android 4.4 or later to run.
-
Install and launch the game
-
After downloading the game, you can install it on your device and launch it. The game will ask you to grant some permissions such as access to your storage, microphone, and camera. You can allow or deny these permissions according to your preference. The game will also ask you to choose your language from English, Portuguese, or Spanish.
-
Choose your car and character
-
When you start the game, you will see a garage with several cars to choose from. You can swipe left or right to see the different models and tap on the one you like. You can also tap on the character icon on the top left corner to choose your character. You can change the gender, skin color, hair style, and facial features of your character.
-
Explore the city and customize your car
-
After choosing your car and character, you can tap on the play button on the bottom right corner to enter the city. You can drive around the city and explore different locations such as gas stations, shops, parks, etc. You can also tap on the menu button on the top right corner to access various options such as customization, settings, camera, music, etc. You can customize your car and character by tapping on the customization button and selecting the category you want to modify. You can also buy new cars and accessories with the coins you earn by playing the game.
-
-
What are the features of Rebaixados Elite Brasil?
-
Detailed car models and interiors
-
REB has over 30 car models to choose from, each with its own unique design and features. The cars have detailed interiors that you can view in 360 degrees by tapping on the camera button and selecting the interior mode. You can also interact with some elements of the car such as opening doors, hood, trunk, windows, turning on wipers, etc.
-
Various accessories for the car and character
-
REB has a lot of accessories for the car and character that you can buy with coins or watch ads to get for free. For the car, you can buy different types of wheels, speakers, neon lights, xenon lights, LED lights, stickers, license plates, etc. For the character, you can buy different types of shirts, glasses, caps, shorts, shoes, etc. You can also change the color of some accessories by tapping on them and selecting the color palette.
-
Neon, xenon, speakers, and wheels
-
REB has some special features that make the game more fun and realistic. You can turn on neon lights under your car by tapping on the neon button on the bottom left corner. You can change the color of the neon lights by tapping on them and selecting the color palette. You can also turn on xenon lights on your headlights by tapping on the xenon button on the bottom left corner. You can change the color of the xenon lights by tapping on them and selecting the color palette. You can also turn up the bass of the music by tapping on the speaker button on the bottom left corner. You can choose from different genres of music such as funk, rap, rock, etc. You can also change the size and style of your wheels by tapping on the wheel button on the bottom left corner. You can choose from different types of wheels such as steel, alloy, chrome, etc.
-
Day and night mode and camera filters
-
REB has a day and night mode that changes according to the time of the day. You can see the sun setting and rising in the game and enjoy the different lighting effects. You can also change the camera filters by tapping on the camera button and selecting the filter mode. You can choose from different filters such as sepia, black and white, vintage, etc.
-
Functional gas station and steering wheel control
-
REB has a functional gas station where you can refuel your car. You can see the gas level of your car on the top left corner of the screen. You can drive to the gas station and park your car near the pump. You can then tap on the gas button on the bottom left corner and drag it to your car. You can see the gas level increasing as you fill up your car. You can also control your car with a steering wheel by tapping on the steering wheel button on the bottom left corner. You can tilt your device to steer your car or use the arrows on the screen.
-
What are some tips and tricks for Rebaixados Elite Brasil?
-
Refuel your car regularly
-
One of the tips for playing REB is to refuel your car regularly. Your car consumes gas as you drive around the city and if you run out of gas, you will not be able to move your car. You can see the gas level of your car on the top left corner of the screen. You can refuel your car at the gas station by following the steps mentioned above.
-
Use the accelerometer or arrows to control your car
-
Another tip for playing REB is to use the accelerometer or arrows to control your car. You can choose between two modes of control: accelerometer or arrows. You can change the mode by tapping on the settings button on the top right corner and selecting the control option. If you choose accelerometer, you can tilt your device to steer your car. If you choose arrows, you can use the arrows on the screen to steer your car.
-
Turn up the bass of the music
-
A third tip for playing REB is to turn up the bass of the music. REB has a feature that lets you adjust the bass of the music by tapping on the speaker button on the bottom left corner. You can choose from different genres of music such as funk, rap, rock, etc. You can also change the volume of the music by tapping on the volume button on the bottom left corner. The music adds to the atmosphere and mood of the game and makes it more enjoyable.
-
Change the color of the xenon and neon
-
A fourth tip for playing REB is to change the color of the xenon and neon lights. REB has a feature that lets you turn on and off the xenon and neon lights by tapping on the xenon and neon buttons on the bottom left corner. You can also change the color of the lights by tapping on them and selecting the color palette. You can choose from different colors such as red, blue, green, yellow, etc. The lights make your car look more cool and stylish.
-
Join the Facebook group and YouTube channel for more updates
-
A fifth tip for playing REB is to join the Facebook group and YouTube channel for more updates. REB has a Facebook group and a YouTube channel where you can interact with other players, share your screenshots and videos, get tips and tricks, and get news and updates about the game. You can join the Facebook group by tapping on the Facebook button on the top right corner and following the link. You can subscribe to the YouTube channel by tapping on the YouTube button on the top right corner and following the link.
-
What are some reviews of Rebaixados Elite Brasil?
-
Positive reviews from users who enjoy the game
-
REB has received many positive reviews from users who enjoy the game. Here are some examples of positive reviews:
-
-
"This game is awesome! I love how you can customize your car and character. The graphics are amazing and the music is lit. I recommend this game to anyone who likes car games."
-
"This is one of the best car games I have ever played. The cars are realistic and have many options to modify. The city is beautiful and has many places to explore. The game is very fun and addictive."
-
"This game is very cool and fun. I like how you can lower your car to the floor, change the color of the xenon and neon, and turn up the bass of the music. The game is very realistic and has a lot of features."
-
-
Negative reviews from users who encounter glitches or data issues
-
REB has also received some negative reviews from users who encounter glitches or data issues. Here are some examples of negative reviews:
-
-
"This game is good but it has a lot of bugs and glitches. Sometimes the game crashes or freezes. Sometimes the car gets stuck or flips over. Sometimes the accessories disappear or change color. Please fix these issues."
-
"This game is nice but it has a problem with the data. I lost all my progress and coins after I updated the game. I had a lot of cars and accessories that I bought with real money. I contacted the support but they did not help me. This is unfair and frustrating."
-
"This game is boring and repetitive. It has no missions or challenges. It has no multiplayer mode or online chat. It has no traffic or police. It has no variety or excitement. It is just driving around the same city with the same cars."
-
-
Overall rating of 4.4 out of 5 stars on App Store and 4.3 out of 5 stars on Google Play
-
REB has an overall rating of 4.4 out of 5 stars on App Store and 4.3 out of 5 stars on Google Play, based on thousands of user reviews. The game has been praised for its graphics, customization, realism, and fun factor. The game has also been criticized for its bugs, glitches, data issues, and lack of variety.
-
Conclusion
-
Rebaixados Elite Brasil is a fun and customizable car game that simulates the culture of demoted cars in Brazil. You can lower your car to the floor, change the color of the xenon and neon, turn up the bass of the music, and drive around a realistic Brazilian city. You can also choose from various accessories for your car and character, such as wheels, speakers, shirts, glasses, caps, and shoes. You can download the game from Google Play or App Store for free and enjoy its features such as detailed car models and interiors, day and night mode and camera filters, functional gas station and steering wheel control, etc.
-
FAQs
-
-
Q: How can I earn more coins in Rebaixados Elite Brasil?
-
A: You can earn more coins by playing the game regularly, watching ads, completing tasks, or buying them with real money.
-
Q: How can I change the language of Rebaixados Elite Brasil?
-
A: You can change the language by tapping on the settings button on the top right corner and selecting the language option.
-
Q: How can I share my screenshots and videos of Rebaixados Elite Brasil?
-
A: You can share your screenshots and videos by tapping on the camera button on the top right corner and selecting the share option.
-
Q: How can I contact the developers of Rebaixados Elite Brasil?
-
A: You can contact the developers by tapping on the settings button on the top right corner and selecting the contact option.
-
Q: How can I rate and review Rebaixados Elite Brasil?
-
A: You can rate and review Rebaixados Elite Brasil by tapping on the settings button on the top right corner and selecting the rate option.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Subway Surfers for Windows in Minutes Follow These Easy Steps.md b/spaces/congsaPfin/Manga-OCR/logs/Download Subway Surfers for Windows in Minutes Follow These Easy Steps.md
deleted file mode 100644
index e44767614c13988af5a55a66c3eb3aad3f93f730..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Subway Surfers for Windows in Minutes Follow These Easy Steps.md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-
How to Download Subway Surfers on Windows
-
Subway Surfers is a popular arcade game that lets you run, jump, slide, and surf through various subway tracks while dodging trains, obstacles, and an angry inspector. The game is available for Android and iOS devices, but you might want to play it on your PC for a bigger screen, better graphics, and more comfortable controls. In this article, we will show you three methods to download Subway Surfers on Windows using different tools. You can choose the one that suits you best and enjoy this fun and addictive game.
Method 1: Using BlueStacks
-
BlueStacks is a powerful emulator that allows you to run Android apps and games on your PC. It has many features that enhance your gaming experience, such as Airplane Mode, Macros, Multi Instance, Script, etc. You can also play Subway Surfers online in your browser without downloading it. Here are the steps to download Subway Surfers on Windows using BlueStacks:
-
Step 1: Download and install BlueStacks on your PC
-
Go to the official website of BlueStacks and click on the "Download" button. This will download a .exe file that you need to run. Follow the instructions in the pop-up window to install BlueStacks on your PC.
-
Step 2: Complete Google sign-in to access the Play Store
-
After installing BlueStacks on your PC, you will see a welcome screen. Click on the "Google Play" icon and sign in with your Google account. This will give you access to the Play Store, where you can find and download Subway Surfers.
-
Step 3: Search for Subway Surfers in the Play Store and install it
-
In the Play Store, type "Subway Surfers" in the search bar and hit enter. You will see a list of results. Click on the one that says "Subway Surfers" by SYBO Games. This will take you to the game's page, where you can see its details, ratings, reviews, screenshots, etc. Click on the "Install" button to download Subway Surfers on your PC.
-
Step 4: Click on the Subway Surfers icon on the home screen and start playing
-
Once Subway Surfers is installed, you will see its icon on the home screen of BlueStacks. Click on it to launch the game. You can use your mouse or keyboard to control your character and perform various actions. You can also customize your settings, such as sound, graphics, language, etc. Enjoy playing Subway Surfers on your PC with BlueStacks.
-
Method 2: Using APKPure App Store
-
APKPure App Store is an alternative app store that allows you to download and install Android apps and games on your PC. It has a large collection of apps and games that are updated regularly. You can also download older versions of apps and games if you want. Here are the steps to download Subway Surfers on Windows using APKPure App Store:
-
-
Step 1: Download and install APKPure App Store on your PC
-
Go to the official website of APKPure App Store and click on the "Download" button. This will download a .exe file that you need to run. Follow the instructions in the pop-up window to install APKPure App Store on your PC.
-
Step 2: Search for Subway Surfers in the APKPure App Store and download it
-
After installing APKPure App Store on your PC, you will see a user-friendly interface. Click on the "Search" icon and type "Subway Surfers" in the search bar and hit enter. You will see a list of results. Click on the one that says "Subway Surfers" by SYBO Games. This will take you to the game's page, where you can see its details, ratings, reviews, screenshots, etc. Click on the "Download" button to download Subway Surfers on your PC.
-
Step 3: Open the downloaded file and install Subway Surfers on your PC
-
Once Subway Surfers is downloaded, you will see a notification in the bottom right corner of your screen. Click on it to open the downloaded file. You will see a pop-up window that asks you to confirm the installation of Subway Surfers on your PC. Click on "Yes" to proceed. Follow the instructions in the pop-up window to install Subway Surfers on your PC.
-
Step 4: Click on the Subway Surfers icon on the desktop and start playing
-
Once Subway Surfers is installed, you will see its icon on your desktop. Click on it to launch the game. You can use your mouse or keyboard to control your character and perform various actions. You can also customize your settings, such as sound, graphics, language, etc. Enjoy playing Subway Surfers on your PC with APKPure App Store.
-
Method 3: Using YouTube Video Guide
-
If you prefer watching a video guide rather than reading text instructions, you can use YouTube to find a video guide that shows you how to download Subway Surfers on Windows using different tools. There are many video guides available on YouTube that cover this topic, but you need to choose one that is clear, reliable, and up-to-date. Here are the steps to download Subway Surfers on Windows using YouTube video guide:
-
Step 1: Go to YouTube and search for "How to Download Subway Surfers game in PC or Laptop"
-
Go to YouTube and type "How to Download Subway Surfers game in PC or Laptop" in the search bar, then press Enter. You will see a list of results that match your query.
-
Step 2: Choose a video guide that suits your preferences and watch it carefully
-
Browse through the results and choose a video guide that suits your preferences. Some factors that you might want to consider are:
- The length of the video
- The quality of the video
- The credibility of the source
- The date of the video
- The number of views, likes, and comments
Choose a video guide that has a high rating, a large number of views, a recent date, and a trustworthy source. Watch the video carefully and pay attention to the steps and instructions given by the narrator.
-
Step 3: Follow the instructions given in the video guide and download Subway Surfers on your PC
-
After watching the video guide, follow the instructions given by the narrator and download Subway Surfers on your PC. Depending on the video guide you chose, you might need to use different tools, such as emulators, app stores, or websites. Make sure you follow the steps correctly and download Subway Surfers from a safe and secure source.
-
Step 4: Click on the Subway Surfers icon on your PC and start playing
-
Once Subway Surfers is downloaded, you will see its icon on your PC. Click on it to launch the game. You can use your mouse or keyboard to control your character and perform various actions. You can also customize your settings, such as sound, graphics, and language. Enjoy playing Subway Surfers on your PC with the help of the YouTube video guide.
-
Conclusion
-
In this article, we have shown you three methods to download Subway Surfers on Windows using different tools. You can choose the one that suits you best and enjoy this fun and addictive game. Here are some tips for playing Subway Surfers on PC:
- Collect coins and power-ups to boost your score and unlock new characters and items
- Avoid crashing into trains, barriers, and other obstacles that will slow you down or end your run
- Use hoverboards, jetpacks, magnets, and other gadgets to enhance your gameplay
- Complete missions and challenges to earn rewards and achievements
- Join events and seasons to experience new themes and locations
We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about downloading Subway Surfers on PC:
-
Q: Is Subway Surfers free to play on PC?
-
A: Yes, Subway Surfers is free to play on PC. You don't need to pay anything to download or play it. However, you can make in-app purchases to buy coins, keys, or other items if you want.
-
Q: Is Subway Surfers safe to download on PC?
-
A: Yes, Subway Surfers is safe to download on PC as long as you use a reliable source, such as the BlueStacks emulator, the APKPure App Store, or a trustworthy YouTube video guide. Avoid downloading Subway Surfers from unknown or suspicious sources that might contain malware or viruses.
-
Q: Can I play Subway Surfers offline on PC?
-
A: Yes, you can play Subway Surfers offline on PC. You don't need an internet connection to play it. However, you might need an internet connection to download it or access some features, such as online leaderboards or events.
-
Q: Can I sync my progress between my PC and my mobile device?
-
A: Yes, you can sync your progress between your PC and your mobile device if you use the same Google account to sign in to both devices. This way, you can continue your game from where you left off on either device.
-
Q: How can I update Subway Surfers on PC?
-
A: You can update Subway Surfers on PC by following the same method that you used to download it. For example, if you used the BlueStacks emulator, go to the Play Store and check for updates. If you used the APKPure App Store, open the app store and check for updates. If you followed a YouTube video guide, look for a newer video that shows how to update Subway Surfers on PC.
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/aspp.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/aspp.py
deleted file mode 100644
index 14861aa9ede4fea6a69a49f189bcab997b558148..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/aspp.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from copy import deepcopy
-import fvcore.nn.weight_init as weight_init
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from .batch_norm import get_norm
-from .blocks import DepthwiseSeparableConv2d
-from .wrappers import Conv2d
-
-
-class ASPP(nn.Module):
- """
- Atrous Spatial Pyramid Pooling (ASPP).
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- dilations,
- *,
- norm,
- activation,
- pool_kernel_size=None,
- dropout: float = 0.0,
- use_depthwise_separable_conv=False,
- ):
- """
- Args:
- in_channels (int): number of input channels for ASPP.
- out_channels (int): number of output channels.
- dilations (list): a list of 3 dilations in ASPP.
- norm (str or callable): normalization for all conv layers.
- See :func:`layers.get_norm` for supported format. norm is
- applied to all conv layers except the conv following
- global average pooling.
- activation (callable): activation function.
- pool_kernel_size (tuple, list): the average pooling size (kh, kw)
- for image pooling layer in ASPP. If set to None, it always
- performs global average pooling. If not None, it must be
- divisible by the shape of inputs in forward(). It is recommended
- to use a fixed input feature size in training, and set this
- option to match this size, so that it performs global average
- pooling in training, and the size of the pooling window stays
- consistent in inference.
- dropout (float): apply dropout on the output of ASPP. It is used in
- the official DeepLab implementation with a rate of 0.1:
- https://github.com/tensorflow/models/blob/21b73d22f3ed05b650e85ac50849408dd36de32e/research/deeplab/model.py#L532 # noqa
- use_depthwise_separable_conv (bool): use DepthwiseSeparableConv2d
- for 3x3 convs in ASPP, proposed in :paper:`DeepLabV3+`.
- """
- super(ASPP, self).__init__()
- assert len(dilations) == 3, "ASPP expects 3 dilations, got {}".format(len(dilations))
- self.pool_kernel_size = pool_kernel_size
- self.dropout = dropout
- use_bias = norm == ""
- self.convs = nn.ModuleList()
- # conv 1x1
- self.convs.append(
- Conv2d(
- in_channels,
- out_channels,
- kernel_size=1,
- bias=use_bias,
- norm=get_norm(norm, out_channels),
- activation=deepcopy(activation),
- )
- )
- weight_init.c2_xavier_fill(self.convs[-1])
- # atrous convs
- for dilation in dilations:
- if use_depthwise_separable_conv:
- self.convs.append(
- DepthwiseSeparableConv2d(
- in_channels,
- out_channels,
- kernel_size=3,
- padding=dilation,
- dilation=dilation,
- norm1=norm,
- activation1=deepcopy(activation),
- norm2=norm,
- activation2=deepcopy(activation),
- )
- )
- else:
- self.convs.append(
- Conv2d(
- in_channels,
- out_channels,
- kernel_size=3,
- padding=dilation,
- dilation=dilation,
- bias=use_bias,
- norm=get_norm(norm, out_channels),
- activation=deepcopy(activation),
- )
- )
- weight_init.c2_xavier_fill(self.convs[-1])
- # image pooling
- # We do not add BatchNorm because the spatial resolution is 1x1,
- # the original TF implementation has BatchNorm.
- if pool_kernel_size is None:
- image_pooling = nn.Sequential(
- nn.AdaptiveAvgPool2d(1),
- Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)),
- )
- else:
- image_pooling = nn.Sequential(
- nn.AvgPool2d(kernel_size=pool_kernel_size, stride=1),
- Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)),
- )
- weight_init.c2_xavier_fill(image_pooling[1])
- self.convs.append(image_pooling)
-
- self.project = Conv2d(
- 5 * out_channels,
- out_channels,
- kernel_size=1,
- bias=use_bias,
- norm=get_norm(norm, out_channels),
- activation=deepcopy(activation),
- )
- weight_init.c2_xavier_fill(self.project)
-
- def forward(self, x):
- size = x.shape[-2:]
- if self.pool_kernel_size is not None:
- if size[0] % self.pool_kernel_size[0] or size[1] % self.pool_kernel_size[1]:
- raise ValueError(
- "`pool_kernel_size` must be divisible by the shape of inputs. "
- "Input size: {} `pool_kernel_size`: {}".format(size, self.pool_kernel_size)
- )
- res = []
- for conv in self.convs:
- res.append(conv(x))
- res[-1] = F.interpolate(res[-1], size=size, mode="bilinear", align_corners=False)
- res = torch.cat(res, dim=1)
- res = self.project(res)
- res = F.dropout(res, self.dropout, training=self.training) if self.dropout > 0 else res
- return res
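For readers skimming this removed layer, here is a minimal usage sketch (not part of the deleted file) showing how the ASPP module above is typically constructed and called. The import path, channel sizes, the "BN" norm string, and the dilation choices are assumptions; the vendored copy in this space lives under annotator.oneformer.detectron2 rather than a plain detectron2 install.

```python
import torch
from torch import nn

# assumes a standard detectron2 installation; adjust the import to the vendored path if needed
from detectron2.layers.aspp import ASPP

# three atrous branches plus a 1x1 conv and image pooling, projected back to 256 channels
aspp = ASPP(
    in_channels=256,
    out_channels=256,
    dilations=[6, 12, 18],
    norm="BN",
    activation=nn.ReLU(),
    dropout=0.1,
)

x = torch.randn(2, 256, 32, 32)
y = aspp(x)      # five parallel branches are concatenated (5 * 256 channels), then projected
print(y.shape)   # torch.Size([2, 256, 32, 32])
```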
diff --git a/spaces/crashedice/signify/signify/__init__.py b/spaces/crashedice/signify/signify/__init__.py
deleted file mode 100644
index f102a9cadfa89ce554b3b26d2b90bfba2e05273c..0000000000000000000000000000000000000000
--- a/spaces/crashedice/signify/signify/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-__version__ = "0.0.1"
diff --git a/spaces/danterivers/music-generation-samples/tests/modules/test_conv.py b/spaces/danterivers/music-generation-samples/tests/modules/test_conv.py
deleted file mode 100644
index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000
--- a/spaces/danterivers/music-generation-samples/tests/modules/test_conv.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-import math
-import random
-
-import pytest
-import torch
-from torch import nn
-
-from audiocraft.modules import (
- NormConv1d,
- NormConvTranspose1d,
- StreamableConv1d,
- StreamableConvTranspose1d,
- pad1d,
- unpad1d,
-)
-
-
-def test_get_extra_padding_for_conv1d():
- # TODO: Implement me!
- pass
-
-
-def test_pad1d_zeros():
- x = torch.randn(1, 1, 20)
-
- xp1 = pad1d(x, (0, 5), mode='constant', value=0.)
- assert xp1.shape[-1] == 25
- xp2 = pad1d(x, (5, 5), mode='constant', value=0.)
- assert xp2.shape[-1] == 30
- xp3 = pad1d(x, (0, 0), mode='constant', value=0.)
- assert xp3.shape[-1] == 20
- xp4 = pad1d(x, (10, 30), mode='constant', value=0.)
- assert xp4.shape[-1] == 60
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, 0), mode='constant', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (0, -1), mode='constant', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, -1), mode='constant', value=0.)
-
-
-def test_pad1d_reflect():
- x = torch.randn(1, 1, 20)
-
- xp1 = pad1d(x, (0, 5), mode='reflect', value=0.)
- assert xp1.shape[-1] == 25
- xp2 = pad1d(x, (5, 5), mode='reflect', value=0.)
- assert xp2.shape[-1] == 30
- xp3 = pad1d(x, (0, 0), mode='reflect', value=0.)
- assert xp3.shape[-1] == 20
- xp4 = pad1d(x, (10, 30), mode='reflect', value=0.)
- assert xp4.shape[-1] == 60
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, 0), mode='reflect', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (0, -1), mode='reflect', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, -1), mode='reflect', value=0.)
-
-
-def test_unpad1d():
- x = torch.randn(1, 1, 20)
-
- u1 = unpad1d(x, (5, 5))
- assert u1.shape[-1] == 10
- u2 = unpad1d(x, (0, 5))
- assert u2.shape[-1] == 15
- u3 = unpad1d(x, (5, 0))
- assert u3.shape[-1] == 15
- u4 = unpad1d(x, (0, 0))
- assert u4.shape[-1] == x.shape[-1]
-
- with pytest.raises(AssertionError):
- unpad1d(x, (-1, 0))
-
- with pytest.raises(AssertionError):
- unpad1d(x, (0, -1))
-
- with pytest.raises(AssertionError):
- unpad1d(x, (-1, -1))
-
-
-class TestNormConv1d:
-
- def test_norm_conv1d_modules(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out, kernel_size, stride = 1, 4, 1
- expected_out_length = int((T - kernel_size) / stride + 1)
- wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm')
- gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm')
- nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none')
-
- assert isinstance(wn_conv.norm, nn.Identity)
- assert isinstance(wn_conv.conv, nn.Conv1d)
-
- assert isinstance(gn_conv.norm, nn.GroupNorm)
- assert isinstance(gn_conv.conv, nn.Conv1d)
-
- assert isinstance(nn_conv.norm, nn.Identity)
- assert isinstance(nn_conv.conv, nn.Conv1d)
-
- for conv_layer in [wn_conv, gn_conv, nn_conv]:
- out = conv_layer(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestNormConvTranspose1d:
-
- def test_normalizations(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out, kernel_size, stride = 1, 4, 1
- expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1
-
- wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm')
- gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm')
- nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none')
-
- assert isinstance(wn_convtr.norm, nn.Identity)
- assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d)
-
- assert isinstance(gn_convtr.norm, nn.GroupNorm)
- assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d)
-
- assert isinstance(nn_convtr.norm, nn.Identity)
- assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d)
-
- for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]:
- out = convtr_layer(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestStreamableConv1d:
-
- def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation):
- # StreamableConv1d internally pads to make sure that the last window is full
- padding_total = (kernel_size - 1) * dilation - (stride - 1)
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length // stride
-
- def test_streamable_conv1d(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
- C_out = 1
-
- # conv params are [(kernel_size, stride, dilation)]
- conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)]
- for causal, (kernel_size, stride, dilation) in product([False, True], conv_params):
- expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation)
- sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal)
- out = sconv(t0)
- assert isinstance(out, torch.Tensor)
- print(list(out.shape), [N, C_out, expected_out_length])
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestStreamableConvTranspose1d:
-
- def get_streamable_convtr1d_output_length(self, length, kernel_size, stride):
- padding_total = (kernel_size - stride)
- return (length - 1) * stride - padding_total + (kernel_size - 1) + 1
-
- def test_streamable_convtr1d(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out = 1
-
- with pytest.raises(AssertionError):
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5)
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.)
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2)
-
- # causal params are [(causal, trim_right)]
- causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)]
- # conv params are [(kernel_size, stride)]
- conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)]
- for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params):
- expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride)
- sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride,
- causal=causal, trim_right_ratio=trim_right_ratio)
- out = sconvtr(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
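As a side note on the helper above: the expected StreamableConv1d output length can be re-derived without audiocraft at all. The sketch below just repeats the arithmetic of get_streamable_conv1d_output_length for a few concrete parameter sets; the numbers are illustrative, not taken from an actual test run.

```python
import math

def streamable_conv1d_out_len(length: int, kernel_size: int, stride: int, dilation: int) -> int:
    # same arithmetic as the test helper: pad so the last window is always full
    padding_total = (kernel_size - 1) * dilation - (stride - 1)
    n_frames = (length - kernel_size + padding_total) / stride + 1
    ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
    return ideal_length // stride

# a few concrete cases: (length, kernel_size, stride, dilation) -> output length
for args in [(20, 4, 1, 1), (20, 4, 2, 1), (100, 10, 5, 1), (21, 3, 2, 3)]:
    print(args, "->", streamable_conv1d_out_len(*args))
# (20, 4, 1, 1)   -> 20  (stride 1 keeps the length)
# (20, 4, 2, 1)   -> 10  (stride 2 halves it)
# (100, 10, 5, 1) -> 20
# (21, 3, 2, 3)   -> 11
```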
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/__init__.py
deleted file mode 100644
index ed00764f7c193ca9bcd0bf67196da59c30048a28..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-"""fontTools.ttLib -- a package for dealing with TrueType fonts."""
-
-from fontTools.misc.loggingTools import deprecateFunction
-import logging
-
-
-log = logging.getLogger(__name__)
-
-
-class TTLibError(Exception):
- pass
-
-
-class TTLibFileIsCollectionError(TTLibError):
- pass
-
-
-@deprecateFunction("use logging instead", category=DeprecationWarning)
-def debugmsg(msg):
- import time
-
- print(msg + time.strftime(" (%H:%M:%S)", time.localtime(time.time())))
-
-
-from fontTools.ttLib.ttFont import *
-from fontTools.ttLib.ttCollection import TTCollection
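Since the module above only defines the package's error types before re-exporting TTFont and TTCollection, a short illustrative sketch of typical use may help; the font file paths are hypothetical placeholders.

```python
from fontTools.ttLib import TTFont, TTLibError

try:
    font = TTFont("SomeFont.ttf")      # hypothetical path to a TrueType/OpenType file
    print(sorted(font.keys()))         # table tags present in the font, e.g. ['cmap', 'glyf', ...]
    font.save("SomeFont-copy.ttf")     # write an (unmodified) copy back out
except TTLibError as exc:
    print(f"Not a usable font file: {exc}")
```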
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/json_component.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/json_component.py
deleted file mode 100644
index bdd32c51febf8a7aaaa0fbab65d55c387e7c9576..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/json_component.py
+++ /dev/null
@@ -1,122 +0,0 @@
-"""gr.JSON() component."""
-
-from __future__ import annotations
-
-import json
-from typing import Any, Callable, Literal
-
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import JSONSerializable
-
-from gradio.components.base import IOComponent, _Keywords
-from gradio.deprecation import warn_style_method_deprecation
-from gradio.events import (
- Changeable,
-)
-
-set_documentation_group("component")
-
-
-@document()
-class JSON(Changeable, IOComponent, JSONSerializable):
- """
- Used to display arbitrary JSON output prettily.
- Preprocessing: this component does *not* accept input.
- Postprocessing: expects a {str} filepath to a file containing valid JSON -- or a {list} or {dict} that is valid JSON
-
- Demos: zip_to_json, blocks_xray
- """
-
- def __init__(
- self,
- value: str | dict | list | Callable | None = None,
- *,
- label: str | None = None,
- every: float | None = None,
- show_label: bool | None = None,
- container: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- value: Default value. If callable, the function will be called whenever the app loads to set the initial value of the component.
- label: component name in interface.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- show_label: if True, will display label.
- container: If True, will place the component in a container - providing some extra padding around the border.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- """
- IOComponent.__init__(
- self,
- label=label,
- every=every,
- show_label=show_label,
- container=container,
- scale=scale,
- min_width=min_width,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- **kwargs,
- )
-
- def get_config(self):
- return {
- "value": self.value,
- **IOComponent.get_config(self),
- }
-
- @staticmethod
- def update(
- value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE,
- label: str | None = None,
- show_label: bool | None = None,
- container: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- visible: bool | None = None,
- ):
- updated_config = {
- "label": label,
- "show_label": show_label,
- "container": container,
- "scale": scale,
- "min_width": min_width,
- "visible": visible,
- "value": value,
- "__type__": "update",
- }
- return updated_config
-
- def postprocess(self, y: dict | list | str | None) -> dict | list | None:
- """
- Parameters:
- y: either a string filepath to a JSON file, or a Python list or dict that can be converted to JSON
- Returns:
- JSON output in Python list or dict format
- """
- if y is None:
- return None
- if isinstance(y, str):
- return json.loads(y)
- else:
- return y
-
- def style(self, *, container: bool | None = None, **kwargs):
- """
- This method is deprecated. Please set these arguments in the constructor instead.
- """
- warn_style_method_deprecation()
- if container is not None:
- self.container = container
- return self
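A minimal usage sketch for the component above (not part of the deleted file): it assumes a standard gradio installation and shows a function returning a dict, which the JSON component pretty-prints in the UI. The function and labels are made up for illustration.

```python
import gradio as gr

def describe(name: str) -> dict:
    # any JSON-serializable dict or list works; gr.JSON renders it prettily
    return {"name": name, "length": len(name), "vowels": [c for c in name if c in "aeiou"]}

demo = gr.Interface(fn=describe, inputs=gr.Textbox(label="name"), outputs=gr.JSON(label="result"))

if __name__ == "__main__":
    demo.launch()
```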
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/cli/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/cli/__init__.py
deleted file mode 100644
index 0c796253489147f941b78b5bb04a82935a72edab..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/cli/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from gradio_client.cli import deploy_discord
-
-__all__ = ["deploy_discord"]
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/__init__.py
deleted file mode 100644
index e6b60c18caa05288676c98d09a9db1ea2be2731d..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-"""Read resources contained within a package."""
-
-from ._common import (
- as_file,
- files,
- Package,
-)
-
-from .abc import ResourceReader
-
-
-__all__ = [
- 'Package',
- 'ResourceReader',
- 'as_file',
- 'files',
-]
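For context on the API re-exported above, here is a hedged sketch of the usual files()/as_file() pattern; the package name and resource path are placeholders, not real resources.

```python
from importlib_resources import as_file, files

# read a text resource bundled inside an installed package
# ("mypackage" and "data/config.json" are placeholder names)
text = files("mypackage").joinpath("data/config.json").read_text(encoding="utf-8")

# some APIs need a real filesystem path; as_file() provides one,
# extracting the resource to a temporary file if the package is zipped
with as_file(files("mypackage") / "data/config.json") as path:
    print(path)
```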
diff --git a/spaces/dcq/freegpt-webui/client/css/button.css b/spaces/dcq/freegpt-webui/client/css/button.css
deleted file mode 100644
index 5f604a8460d048458249f78be9dc544ade84801e..0000000000000000000000000000000000000000
--- a/spaces/dcq/freegpt-webui/client/css/button.css
+++ /dev/null
@@ -1,26 +0,0 @@
-.button {
- display: flex;
- padding: 8px 12px;
- align-items: center;
- justify-content: center;
- border: 1px solid var(--conversations);
- border-radius: var(--border-radius-1);
- width: 100%;
- background: transparent;
- cursor: pointer;
-}
-
-.button span {
- color: var(--colour-3);
- font-size: 0.875rem;
-}
-
-.button i::before {
- margin-right: 8px;
-}
-
-@media screen and (max-width: 990px) {
- .button span {
- font-size: 0.75rem;
- }
-}
diff --git a/spaces/declare-lab/tango/diffusers/examples/community/stable_diffusion_controlnet_img2img.py b/spaces/declare-lab/tango/diffusers/examples/community/stable_diffusion_controlnet_img2img.py
deleted file mode 100644
index a8a51b5489a3ab877012c1c843b720472fabd591..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/examples/community/stable_diffusion_controlnet_img2img.py
+++ /dev/null
@@ -1,989 +0,0 @@
-# Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/
-
-import inspect
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-import numpy as np
-import PIL.Image
-import torch
-from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
-
-from diffusers import AutoencoderKL, ControlNetModel, DiffusionPipeline, UNet2DConditionModel, logging
-from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker
-from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
-from diffusers.schedulers import KarrasDiffusionSchedulers
-from diffusers.utils import (
- PIL_INTERPOLATION,
- is_accelerate_available,
- is_accelerate_version,
- randn_tensor,
- replace_example_docstring,
-)
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import numpy as np
- >>> import torch
- >>> from PIL import Image
- >>> from diffusers import ControlNetModel, UniPCMultistepScheduler
- >>> from diffusers.utils import load_image
-
- >>> input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
-
- >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
-
- >>> pipe_controlnet = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5",
- controlnet=controlnet,
- safety_checker=None,
- torch_dtype=torch.float16
- )
-
- >>> pipe_controlnet.scheduler = UniPCMultistepScheduler.from_config(pipe_controlnet.scheduler.config)
- >>> pipe_controlnet.enable_xformers_memory_efficient_attention()
- >>> pipe_controlnet.enable_model_cpu_offload()
-
- # using image with edges for our canny controlnet
- >>> control_image = load_image(
- "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_canny_edged.png")
-
-
- >>> result_img = pipe_controlnet(controlnet_conditioning_image=control_image,
- image=input_image,
- prompt="an android robot, cyberpank, digitl art masterpiece",
- num_inference_steps=20).images[0]
-
- >>> result_img.show()
- ```
-"""
-
-
-def prepare_image(image):
- if isinstance(image, torch.Tensor):
- # Batch single image
- if image.ndim == 3:
- image = image.unsqueeze(0)
-
- image = image.to(dtype=torch.float32)
- else:
- # preprocess image
- if isinstance(image, (PIL.Image.Image, np.ndarray)):
- image = [image]
-
- if isinstance(image, list) and isinstance(image[0], PIL.Image.Image):
- image = [np.array(i.convert("RGB"))[None, :] for i in image]
- image = np.concatenate(image, axis=0)
- elif isinstance(image, list) and isinstance(image[0], np.ndarray):
- image = np.concatenate([i[None, :] for i in image], axis=0)
-
- image = image.transpose(0, 3, 1, 2)
- image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
-
- return image
-
-
-def prepare_controlnet_conditioning_image(
- controlnet_conditioning_image,
- width,
- height,
- batch_size,
- num_images_per_prompt,
- device,
- dtype,
- do_classifier_free_guidance,
-):
- if not isinstance(controlnet_conditioning_image, torch.Tensor):
- if isinstance(controlnet_conditioning_image, PIL.Image.Image):
- controlnet_conditioning_image = [controlnet_conditioning_image]
-
- if isinstance(controlnet_conditioning_image[0], PIL.Image.Image):
- controlnet_conditioning_image = [
- np.array(i.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]))[None, :]
- for i in controlnet_conditioning_image
- ]
- controlnet_conditioning_image = np.concatenate(controlnet_conditioning_image, axis=0)
- controlnet_conditioning_image = np.array(controlnet_conditioning_image).astype(np.float32) / 255.0
- controlnet_conditioning_image = controlnet_conditioning_image.transpose(0, 3, 1, 2)
- controlnet_conditioning_image = torch.from_numpy(controlnet_conditioning_image)
- elif isinstance(controlnet_conditioning_image[0], torch.Tensor):
- controlnet_conditioning_image = torch.cat(controlnet_conditioning_image, dim=0)
-
- image_batch_size = controlnet_conditioning_image.shape[0]
-
- if image_batch_size == 1:
- repeat_by = batch_size
- else:
- # image batch size is the same as prompt batch size
- repeat_by = num_images_per_prompt
-
- controlnet_conditioning_image = controlnet_conditioning_image.repeat_interleave(repeat_by, dim=0)
-
- controlnet_conditioning_image = controlnet_conditioning_image.to(device=device, dtype=dtype)
-
- if do_classifier_free_guidance:
- controlnet_conditioning_image = torch.cat([controlnet_conditioning_image] * 2)
-
- return controlnet_conditioning_image
-
-
-class StableDiffusionControlNetImg2ImgPipeline(DiffusionPipeline):
- """
- Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/
- """
-
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel],
- scheduler: KarrasDiffusionSchedulers,
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPImageProcessor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
-
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- if isinstance(controlnet, (list, tuple)):
- controlnet = MultiControlNetModel(controlnet)
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- controlnet=controlnet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding.
-
- When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
- steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- def enable_sequential_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
- text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a
- `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called.
- Note that offloading happens on a submodule basis. Memory savings are higher than with
- `enable_model_cpu_offload`, but performance is lower.
- """
- if is_accelerate_available():
- from accelerate import cpu_offload
- else:
- raise ImportError("Please install accelerate via `pip install accelerate`")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.controlnet]:
- cpu_offload(cpu_offloaded_model, device)
-
- if self.safety_checker is not None:
- cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True)
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
- method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- hook = None
- for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- if self.safety_checker is not None:
- # the safety checker can offload the vae again
- _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
-
- # control net hook has be manually offloaded as it alternates with unet
- cpu_offload_with_hook(self.controlnet, device)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- @property
- def _execution_device(self):
- r"""
- Returns the device on which the pipeline's models will be executed. After calling
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
- hooks.
- """
- if not hasattr(self.unet, "_hf_hook"):
- return self.device
- for module in self.unet.modules():
- if (
- hasattr(module, "_hf_hook")
- and hasattr(module._hf_hook, "execution_device")
- and module._hf_hook.execution_device is not None
- ):
- return torch.device(module._hf_hook.execution_device)
- return self.device
-
- def _encode_prompt(
- self,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt=None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- """
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if prompt_embeds is None:
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = text_inputs.attention_mask.to(device)
- else:
- attention_mask = None
-
- prompt_embeds = self.text_encoder(
- text_input_ids.to(device),
- attention_mask=attention_mask,
- )
- prompt_embeds = prompt_embeds[0]
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance and negative_prompt_embeds is None:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = prompt_embeds.shape[1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = uncond_input.attention_mask.to(device)
- else:
- attention_mask = None
-
- negative_prompt_embeds = self.text_encoder(
- uncond_input.input_ids.to(device),
- attention_mask=attention_mask,
- )
- negative_prompt_embeds = negative_prompt_embeds[0]
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
-
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
- return prompt_embeds
-
- def run_safety_checker(self, image, device, dtype):
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
- )
- else:
- has_nsfw_concept = None
- return image, has_nsfw_concept
-
- def decode_latents(self, latents):
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents).sample
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- def check_controlnet_conditioning_image(self, image, prompt, prompt_embeds):
- image_is_pil = isinstance(image, PIL.Image.Image)
- image_is_tensor = isinstance(image, torch.Tensor)
- image_is_pil_list = isinstance(image, list) and isinstance(image[0], PIL.Image.Image)
- image_is_tensor_list = isinstance(image, list) and isinstance(image[0], torch.Tensor)
-
- if not image_is_pil and not image_is_tensor and not image_is_pil_list and not image_is_tensor_list:
- raise TypeError(
- "image must be passed and be one of PIL image, torch tensor, list of PIL images, or list of torch tensors"
- )
-
- if image_is_pil:
- image_batch_size = 1
- elif image_is_tensor:
- image_batch_size = image.shape[0]
- elif image_is_pil_list:
- image_batch_size = len(image)
- elif image_is_tensor_list:
- image_batch_size = len(image)
- else:
- raise ValueError("controlnet condition image is not valid")
-
- if prompt is not None and isinstance(prompt, str):
- prompt_batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- prompt_batch_size = len(prompt)
- elif prompt_embeds is not None:
- prompt_batch_size = prompt_embeds.shape[0]
- else:
- raise ValueError("prompt or prompt_embeds are not valid")
-
- if image_batch_size != 1 and image_batch_size != prompt_batch_size:
- raise ValueError(
- f"If image batch size is not 1, image batch size must be same as prompt batch size. image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}"
- )
-
- def check_inputs(
- self,
- prompt,
- image,
- controlnet_conditioning_image,
- height,
- width,
- callback_steps,
- negative_prompt=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- strength=None,
- controlnet_guidance_start=None,
- controlnet_guidance_end=None,
- controlnet_conditioning_scale=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- # check controlnet condition image
-
- if isinstance(self.controlnet, ControlNetModel):
- self.check_controlnet_conditioning_image(controlnet_conditioning_image, prompt, prompt_embeds)
- elif isinstance(self.controlnet, MultiControlNetModel):
- if not isinstance(controlnet_conditioning_image, list):
- raise TypeError("For multiple controlnets: `image` must be type `list`")
-
- if len(controlnet_conditioning_image) != len(self.controlnet.nets):
- raise ValueError(
- "For multiple controlnets: `image` must have the same length as the number of controlnets."
- )
-
- for image_ in controlnet_conditioning_image:
- self.check_controlnet_conditioning_image(image_, prompt, prompt_embeds)
- else:
- assert False
-
- # Check `controlnet_conditioning_scale`
-
- if isinstance(self.controlnet, ControlNetModel):
- if not isinstance(controlnet_conditioning_scale, float):
- raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.")
- elif isinstance(self.controlnet, MultiControlNetModel):
- if isinstance(controlnet_conditioning_scale, list) and len(controlnet_conditioning_scale) != len(
- self.controlnet.nets
- ):
- raise ValueError(
- "For multiple controlnets: When `controlnet_conditioning_scale` is specified as `list`, it must have"
- " the same length as the number of controlnets"
- )
- else:
- assert False
-
- if isinstance(image, torch.Tensor):
- if image.ndim != 3 and image.ndim != 4:
- raise ValueError("`image` must have 3 or 4 dimensions")
-
- if image.ndim == 3:
- image_batch_size = 1
- image_channels, image_height, image_width = image.shape
- elif image.ndim == 4:
- image_batch_size, image_channels, image_height, image_width = image.shape
- else:
- assert False
-
- if image_channels != 3:
- raise ValueError("`image` must have 3 channels")
-
- if image.min() < -1 or image.max() > 1:
- raise ValueError("`image` should be in range [-1, 1]")
-
- if self.vae.config.latent_channels != self.unet.config.in_channels:
- raise ValueError(
- f"The config of `pipeline.unet` expects {self.unet.config.in_channels} but received"
- f" latent channels: {self.vae.config.latent_channels},"
- f" Please verify the config of `pipeline.unet` and the `pipeline.vae`"
- )
-
- if strength < 0 or strength > 1:
- raise ValueError(f"The value of `strength` should in [0.0, 1.0] but is {strength}")
-
- if controlnet_guidance_start < 0 or controlnet_guidance_start > 1:
- raise ValueError(
- f"The value of `controlnet_guidance_start` should in [0.0, 1.0] but is {controlnet_guidance_start}"
- )
-
- if controlnet_guidance_end < 0 or controlnet_guidance_end > 1:
- raise ValueError(
- f"The value of `controlnet_guidance_end` should in [0.0, 1.0] but is {controlnet_guidance_end}"
- )
-
- if controlnet_guidance_start > controlnet_guidance_end:
- raise ValueError(
- "The value of `controlnet_guidance_start` should be less than `controlnet_guidance_end`, but got"
- f" `controlnet_guidance_start` {controlnet_guidance_start} >= `controlnet_guidance_end` {controlnet_guidance_end}"
- )
-
- def get_timesteps(self, num_inference_steps, strength, device):
- # get the original timestep using init_timestep
- init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
-
- t_start = max(num_inference_steps - init_timestep, 0)
- timesteps = self.scheduler.timesteps[t_start:]
-
- return timesteps, num_inference_steps - t_start
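# --- illustrative note, not part of the deleted file --------------------------
# worked example of the strength -> timestep arithmetic in get_timesteps() above
num_inference_steps, strength = 50, 0.8
init_timestep = min(int(num_inference_steps * strength), num_inference_steps)  # 40
t_start = max(num_inference_steps - init_timestep, 0)                          # 10
# the pipeline skips the first 10 scheduler timesteps and denoises for the
# remaining 40; strength=1.0 keeps all 50 steps and effectively ignores `image`.
# -------------------------------------------------------------------------------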
-
- def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None):
- if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
- raise ValueError(
- f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
- )
-
- image = image.to(device=device, dtype=dtype)
-
- batch_size = batch_size * num_images_per_prompt
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if isinstance(generator, list):
- init_latents = [
- self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
- ]
- init_latents = torch.cat(init_latents, dim=0)
- else:
- init_latents = self.vae.encode(image).latent_dist.sample(generator)
-
- init_latents = self.vae.config.scaling_factor * init_latents
-
- if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0:
- raise ValueError(
- f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
- )
- else:
- init_latents = torch.cat([init_latents], dim=0)
-
- shape = init_latents.shape
- noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-
- # get latents
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
- latents = init_latents
-
- return latents
-
- def _default_height_width(self, height, width, image):
- if isinstance(image, list):
- image = image[0]
-
- if height is None:
- if isinstance(image, PIL.Image.Image):
- height = image.height
- elif isinstance(image, torch.Tensor):
- height = image.shape[3]
-
- height = (height // 8) * 8 # round down to nearest multiple of 8
-
- if width is None:
- if isinstance(image, PIL.Image.Image):
- width = image.width
- elif isinstance(image, torch.Tensor):
- width = image.shape[2]
-
- width = (width // 8) * 8 # round down to nearest multiple of 8
-
- return height, width
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- image: Union[torch.Tensor, PIL.Image.Image] = None,
- controlnet_conditioning_image: Union[
- torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]
- ] = None,
- strength: float = 0.8,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- controlnet_conditioning_scale: Union[float, List[float]] = 1.0,
- controlnet_guidance_start: float = 0.0,
- controlnet_guidance_end: float = 1.0,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
- instead.
- image (`torch.Tensor` or `PIL.Image.Image`):
- `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
- be masked out with `mask_image` and repainted according to `prompt`.
- controlnet_conditioning_image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]` or `List[PIL.Image.Image]`):
- The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If
- the type is specified as `Torch.FloatTensor`, it is passed to ControlNet as is. PIL.Image.Image` can
- also be accepted as an image. The control image is automatically resized to fit the output image.
- strength (`float`, *optional*):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
- will be used as a starting point, adding more noise to it the larger the `strength`. The number of
- denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
- be maximum and the denoising process will run for the full number of iterations specified in
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor will ge generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
- `self.processor` in
- [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
- controlnet_conditioning_scale (`float`, *optional*, defaults to 1.0):
- The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
- to the residual in the original unet.
-            controlnet_guidance_start (`float`, *optional*, defaults to 0.0):
- The percentage of total steps the controlnet starts applying. Must be between 0 and 1.
-            controlnet_guidance_end (`float`, *optional*, defaults to 1.0):
- The percentage of total steps the controlnet ends applying. Must be between 0 and 1. Must be greater
- than `controlnet_guidance_start`.
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- # 0. Default height and width to unet
- height, width = self._default_height_width(height, width, controlnet_conditioning_image)
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt,
- image,
- controlnet_conditioning_image,
- height,
- width,
- callback_steps,
- negative_prompt,
- prompt_embeds,
- negative_prompt_embeds,
- strength,
- controlnet_guidance_start,
- controlnet_guidance_end,
- controlnet_conditioning_scale,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- if isinstance(self.controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float):
- controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(self.controlnet.nets)
-
- # 3. Encode input prompt
- prompt_embeds = self._encode_prompt(
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- )
-
- # 4. Prepare image, and controlnet_conditioning_image
- image = prepare_image(image)
-
- # condition image(s)
- if isinstance(self.controlnet, ControlNetModel):
- controlnet_conditioning_image = prepare_controlnet_conditioning_image(
- controlnet_conditioning_image=controlnet_conditioning_image,
- width=width,
- height=height,
- batch_size=batch_size * num_images_per_prompt,
- num_images_per_prompt=num_images_per_prompt,
- device=device,
- dtype=self.controlnet.dtype,
- do_classifier_free_guidance=do_classifier_free_guidance,
- )
- elif isinstance(self.controlnet, MultiControlNetModel):
- controlnet_conditioning_images = []
-
- for image_ in controlnet_conditioning_image:
- image_ = prepare_controlnet_conditioning_image(
- controlnet_conditioning_image=image_,
- width=width,
- height=height,
- batch_size=batch_size * num_images_per_prompt,
- num_images_per_prompt=num_images_per_prompt,
- device=device,
- dtype=self.controlnet.dtype,
- do_classifier_free_guidance=do_classifier_free_guidance,
- )
-
- controlnet_conditioning_images.append(image_)
-
- controlnet_conditioning_image = controlnet_conditioning_images
- else:
- assert False
-
- # 5. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device)
- latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
-
- # 6. Prepare latent variables
- latents = self.prepare_latents(
- image,
- latent_timestep,
- batch_size,
- num_images_per_prompt,
- prompt_embeds.dtype,
- device,
- generator,
- )
-
- # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 8. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # compute the percentage of total steps we are at
- current_sampling_percent = i / len(timesteps)
-
- if (
- current_sampling_percent < controlnet_guidance_start
- or current_sampling_percent > controlnet_guidance_end
- ):
- # do not apply the controlnet
- down_block_res_samples = None
- mid_block_res_sample = None
- else:
- # apply the controlnet
- down_block_res_samples, mid_block_res_sample = self.controlnet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- controlnet_cond=controlnet_conditioning_image,
- conditioning_scale=controlnet_conditioning_scale,
- return_dict=False,
- )
-
- # predict the noise residual
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- down_block_additional_residuals=down_block_res_samples,
- mid_block_additional_residual=mid_block_res_sample,
- ).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # If we do sequential model offloading, let's offload unet and controlnet
- # manually for max memory savings
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.unet.to("cpu")
- self.controlnet.to("cpu")
- torch.cuda.empty_cache()
-
- if output_type == "latent":
- image = latents
- has_nsfw_concept = None
- elif output_type == "pil":
- # 8. Post-processing
- image = self.decode_latents(latents)
-
- # 9. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
-
- # 10. Convert to PIL
- image = self.numpy_to_pil(image)
- else:
- # 8. Post-processing
- image = self.decode_latents(latents)
-
- # 9. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
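
For orientation, the deleted pipeline above follows the usual `diffusers` community-pipeline calling convention: a reference `image` is partially re-noised according to `strength`, while `controlnet_conditioning_image` steers the denoising between `controlnet_guidance_start` and `controlnet_guidance_end`. The sketch below is illustrative only; the `custom_pipeline` name, model IDs, and image URLs are assumptions rather than values taken from this repository.

```python
# Hypothetical usage sketch for a ControlNet img2img community pipeline like the one above.
import torch
from diffusers import ControlNetModel, DiffusionPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    custom_pipeline="stable_diffusion_controlnet_img2img",  # assumed registration name
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("https://example.com/init.png")    # placeholder URL: image to transform
canny_image = load_image("https://example.com/canny.png")  # placeholder URL: ControlNet condition

result = pipe(
    prompt="a futuristic cityscape at dusk",
    image=init_image,
    controlnet_conditioning_image=canny_image,
    strength=0.7,                   # re-noise the reference image and run ~70% of the schedule
    num_inference_steps=50,
    controlnet_guidance_start=0.0,  # apply the ControlNet from the first step...
    controlnet_guidance_end=0.8,    # ...and stop applying it after 80% of the steps
)
result.images[0].save("controlnet_img2img.png")
```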
diff --git a/spaces/declare-lab/tango/diffusers/examples/community/text_inpainting.py b/spaces/declare-lab/tango/diffusers/examples/community/text_inpainting.py
deleted file mode 100644
index 99a488788a0de6db78ae7c2c89038565efd29551..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/examples/community/text_inpainting.py
+++ /dev/null
@@ -1,302 +0,0 @@
-from typing import Callable, List, Optional, Union
-
-import PIL
-import torch
-from transformers import (
- CLIPImageProcessor,
- CLIPSegForImageSegmentation,
- CLIPSegProcessor,
- CLIPTextModel,
- CLIPTokenizer,
-)
-
-from diffusers import DiffusionPipeline
-from diffusers.configuration_utils import FrozenDict
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline
-from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from diffusers.utils import deprecate, is_accelerate_available, logging
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-class TextInpainting(DiffusionPipeline):
- r"""
- Pipeline for text based inpainting using Stable Diffusion.
- Uses CLIPSeg to get a mask from the given text, then calls the Inpainting pipeline with the generated mask
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- segmentation_model ([`CLIPSegForImageSegmentation`]):
-            CLIPSeg model to generate a mask from the given text. Please refer to the [model card](https://huggingface.co/docs/transformers/model_doc/clipseg) for details.
- segmentation_processor ([`CLIPSegProcessor`]):
-            CLIPSeg processor to prepare the image and text inputs for the segmentation model. Please refer to the
-            [model card](https://huggingface.co/docs/transformers/model_doc/clipseg) for details.
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
-            A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPImageProcessor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
-
- def __init__(
- self,
- segmentation_model: CLIPSegForImageSegmentation,
- segmentation_processor: CLIPSegProcessor,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPImageProcessor,
- ):
- super().__init__()
-
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
- " file"
- )
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["steps_offset"] = 1
- scheduler._internal_dict = FrozenDict(new_config)
-
- if hasattr(scheduler.config, "skip_prk_steps") and scheduler.config.skip_prk_steps is False:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} has not set the configuration"
- " `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make"
- " sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to"
- " incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face"
- " Hub, it would be very nice if you could open a Pull request for the"
- " `scheduler/scheduler_config.json` file"
- )
- deprecate("skip_prk_steps not set", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["skip_prk_steps"] = True
- scheduler._internal_dict = FrozenDict(new_config)
-
- if safety_checker is None:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- self.register_modules(
- segmentation_model=segmentation_model,
- segmentation_processor=segmentation_processor,
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
-
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- r"""
- Enable sliced attention computation.
-
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention
- in several steps. This is useful to save some memory in exchange for a small speed decrease.
-
- Args:
- slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
- When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
- a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
- `attention_head_dim` must be a multiple of `slice_size`.
- """
- if slice_size == "auto":
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = self.unet.config.attention_head_dim // 2
- self.unet.set_attention_slice(slice_size)
-
- def disable_attention_slicing(self):
- r"""
- Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
- back to computing attention in one step.
- """
- # set slice_size = `None` to disable `attention slicing`
- self.enable_attention_slicing(None)
-
- def enable_sequential_cpu_offload(self):
- r"""
- Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
- text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
-        `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
- """
- if is_accelerate_available():
- from accelerate import cpu_offload
- else:
- raise ImportError("Please install accelerate via `pip install accelerate`")
-
- device = torch.device("cuda")
-
- for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:
- if cpu_offloaded_model is not None:
- cpu_offload(cpu_offloaded_model, device)
-
- @property
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
- def _execution_device(self):
- r"""
- Returns the device on which the pipeline's models will be executed. After calling
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
- hooks.
- """
- if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
- return self.device
- for module in self.unet.modules():
- if (
- hasattr(module, "_hf_hook")
- and hasattr(module._hf_hook, "execution_device")
- and module._hf_hook.execution_device is not None
- ):
- return torch.device(module._hf_hook.execution_device)
- return self.device
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- image: Union[torch.FloatTensor, PIL.Image.Image],
- text: str,
- height: int = 512,
- width: int = 512,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[torch.Generator] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- image (`PIL.Image.Image`):
-                `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
-                be masked out using a mask generated from `text` and repainted according to `prompt`.
-            text (`str`):
- The text to use to generate the mask.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
-                `guidance_scale` is defined as `w` of equation 2 of the [Imagen
-                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the
-                text `prompt`, usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
-
- # We use the input text to generate the mask
- inputs = self.segmentation_processor(
- text=[text], images=[image], padding="max_length", return_tensors="pt"
- ).to(self.device)
- outputs = self.segmentation_model(**inputs)
- mask = torch.sigmoid(outputs.logits).cpu().detach().unsqueeze(-1).numpy()
- mask_pil = self.numpy_to_pil(mask)[0].resize(image.size)
-
- # Run inpainting pipeline with the generated mask
- inpainting_pipeline = StableDiffusionInpaintPipeline(
- vae=self.vae,
- text_encoder=self.text_encoder,
- tokenizer=self.tokenizer,
- unet=self.unet,
- scheduler=self.scheduler,
- safety_checker=self.safety_checker,
- feature_extractor=self.feature_extractor,
- )
- return inpainting_pipeline(
- prompt=prompt,
- image=image,
- mask_image=mask_pil,
- height=height,
- width=width,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- negative_prompt=negative_prompt,
- num_images_per_prompt=num_images_per_prompt,
- eta=eta,
- generator=generator,
- latents=latents,
- output_type=output_type,
- return_dict=return_dict,
- callback=callback,
- callback_steps=callback_steps,
- )
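
As a rough usage sketch for the pipeline above: it is typically loaded as a community pipeline on top of an inpainting checkpoint, with the CLIPSeg components passed in explicitly. The `custom_pipeline` name and model IDs below are assumptions, not taken from this repository.

```python
# Hypothetical usage of a text-guided inpainting pipeline like the one above.
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
from transformers import CLIPSegForImageSegmentation, CLIPSegProcessor

segmentation_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
segmentation_processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    custom_pipeline="text_inpainting",  # assumed registration name
    segmentation_model=segmentation_model,
    segmentation_processor=segmentation_processor,
).to("cuda")

image = load_image("https://example.com/room.png").resize((512, 512))  # placeholder URL
result = pipe(
    prompt="a vase of flowers",  # what to paint into the masked region
    image=image,
    text="the table",            # CLIPSeg builds the mask from this description
)
result.images[0].save("text_inpainting.png")
```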
diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/textual_inversion/README.md b/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/textual_inversion/README.md
deleted file mode 100644
index 0ed34966e9f1836d9744edf77f46c84bb8609e97..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/textual_inversion/README.md
+++ /dev/null
@@ -1,82 +0,0 @@
-## Textual Inversion fine-tuning example
-
-[Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text2image models like stable diffusion on your own images using just 3-5 examples.
-The `textual_inversion.py` script shows how to implement the training procedure and adapt it for stable diffusion.
-
-## Running on Colab
-
-Colab for training
-[Open in Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb)
-
-Colab for inference
-[Open in Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb)
-
-## Running locally with PyTorch
-### Installing the dependencies
-
-Before running the scripts, make sure to install the library's training dependencies:
-
-**Important**
-
-To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install .
-```
-
-Then cd in the example folder and run
-```bash
-pip install -r requirements.txt
-```
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-
-### Cat toy example
-
-You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-5`, so you'll need to visit [its card](https://huggingface.co/runwayml/stable-diffusion-v1-5), read the license and tick the checkbox if you agree.
-
-You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
-
-Run the following command to authenticate your token
-
-```bash
-huggingface-cli login
-```
-
-If you have already cloned the repo, then you won't need to go through these steps.
-
-
-
-Now let's get our dataset. Download 3-4 images from [here](https://drive.google.com/drive/folders/1fmJMs25nxS_rSNqS5hTcRdLem_YQXbq5) and save them in a directory. This will be our training data.
-
-## Use ONNXRuntime to accelerate training
-In order to leverage ONNX Runtime to accelerate training, please use the `textual_inversion.py` script in this folder.
-
-The command to train on custom data with onnxruntime:
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-v1-5"
-export DATA_DIR="path-to-dir-containing-images"
-
-accelerate launch textual_inversion.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_data_dir=$DATA_DIR \
- --learnable_property="object" \
- --placeholder_token="" --initializer_token="toy" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=4 \
- --max_train_steps=3000 \
- --learning_rate=5.0e-04 --scale_lr \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --output_dir="textual_inversion_cat"
-```
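
Once training finishes, the script saves the full pipeline to `--output_dir`, so the learned embedding can be used like any other Stable Diffusion checkpoint. A minimal inference sketch, assuming the output directory and placeholder token from the command above:

```python
# Minimal inference sketch; the path and placeholder token follow the training command above.
import torch
from diffusers import StableDiffusionPipeline

model_id = "textual_inversion_cat"  # the --output_dir of the training run
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A <cat-toy> backpack"  # prompt containing the learned placeholder token
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("cat-backpack.png")
```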
-
-Please contact Prathik Rao (prathikr), Sunghoon Choi (hanbitmyths), Ashwini Khade (askhade), or Peng Wang (pengwa) on github with any questions.
\ No newline at end of file
diff --git a/spaces/deepset/search-all-the-docs/main.py b/spaces/deepset/search-all-the-docs/main.py
deleted file mode 100644
index 1b6db20504e3a6deaa2dce1bc719e985ddfe9c6b..0000000000000000000000000000000000000000
--- a/spaces/deepset/search-all-the-docs/main.py
+++ /dev/null
@@ -1,208 +0,0 @@
-from typing import List, Tuple
-from pathlib import Path
-import os
-import subprocess
-
-from dotenv import load_dotenv
-from haystack.preview import Pipeline
-from haystack.preview.dataclasses import GeneratedAnswer
-from haystack.preview.components.retrievers import MemoryBM25Retriever
-from haystack.preview.components.generators.openai.gpt import GPTGenerator
-from haystack.preview.components.builders.answer_builder import AnswerBuilder
-from haystack.preview.components.builders.prompt_builder import PromptBuilder
-from haystack.preview.components.preprocessors import (
- DocumentCleaner,
- TextDocumentSplitter,
-)
-from haystack.preview.components.writers import DocumentWriter
-from haystack.preview.components.file_converters import TextFileToDocument
-from haystack.preview.document_stores.memory import MemoryDocumentStore
-import streamlit as st
-
-# Load the environment variables, we're going to need it for OpenAI
-load_dotenv()
-
-# This is the list of documentation that we're going to fetch
-DOCUMENTATIONS = [
- (
- "DocArray",
- "https://github.com/docarray/docarray",
- "./docs/**/*.md",
- ),
- (
- "Streamlit",
- "https://github.com/streamlit/docs",
- "./content/**/*.md",
- ),
- (
- "Jinja",
- "https://github.com/pallets/jinja",
- "./docs/**/*.rst",
- ),
- (
- "Pandas",
- "https://github.com/pandas-dev/pandas",
- "./doc/source/**/*.rst",
- ),
- (
- "Elasticsearch",
- "https://github.com/elastic/elasticsearch",
- "./docs/**/*.asciidoc",
- ),
- (
- "NumPy",
- "https://github.com/numpy/numpy",
- "./doc/**/*.rst",
- ),
-]
-
-DOCS_PATH = Path(__file__).parent / "downloaded_docs"
-
-
-@st.cache_data(show_spinner=False)
-def fetch(documentations: List[Tuple[str, str, str]]):
- files = []
- # Create the docs path if it doesn't exist
- DOCS_PATH.mkdir(parents=True, exist_ok=True)
-
- for name, url, pattern in documentations:
- st.write(f"Fetching {name} repository")
- repo = DOCS_PATH / name
- # Attempt cloning only if it doesn't exist
- if not repo.exists():
- subprocess.run(["git", "clone", "--depth", "1", url, str(repo)], check=True)
- res = subprocess.run(
- ["git", "rev-parse", "--abbrev-ref", "HEAD"],
- check=True,
- capture_output=True,
- encoding="utf-8",
- cwd=repo,
- )
- branch = res.stdout.strip()
- for p in repo.glob(pattern):
- data = {
- "path": p,
- "metadata": {
- "url_source": f"{url}/tree/{branch}/{p.relative_to(repo)}",
- "suffix": p.suffix,
- },
- }
- files.append(data)
-
- return files
-
-
-@st.cache_resource(show_spinner=False)
-def document_store():
- # We're going to store the processed documents in here
- return MemoryDocumentStore()
-
-
-@st.cache_resource(show_spinner=False)
-def index_files(files):
- # We create some components
- text_converter = TextFileToDocument(progress_bar=False)
- document_cleaner = DocumentCleaner()
- document_splitter = TextDocumentSplitter()
- document_writer = DocumentWriter(
- document_store=document_store(), policy="overwrite"
- )
-
- # And our pipeline
- indexing_pipeline = Pipeline()
- indexing_pipeline.add_component("converter", text_converter)
- indexing_pipeline.add_component("cleaner", document_cleaner)
- indexing_pipeline.add_component("splitter", document_splitter)
- indexing_pipeline.add_component("writer", document_writer)
- indexing_pipeline.connect("converter", "cleaner")
- indexing_pipeline.connect("cleaner", "splitter")
- indexing_pipeline.connect("splitter", "writer")
-
- # And now we save the documentation in our MemoryDocumentStore
- paths = []
- metadata = []
- for f in files:
- paths.append(f["path"])
- metadata.append(f["metadata"])
- indexing_pipeline.run(
- {
- "converter": {
- "paths": paths,
- "metadata": metadata,
- }
- }
- )
-
-
-def search(question: str) -> GeneratedAnswer:
- retriever = MemoryBM25Retriever(document_store=document_store(), top_k=5)
-
- template = (
- "Take a deep breath and think then answer given the context"
- "Context: {{ documents|map(attribute='text')|replace('\n', ' ')|join(';') }}"
- "Question: {{ query }}"
- "Answer:"
- )
- prompt_builder = PromptBuilder(template)
-
- OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
- generator = GPTGenerator(api_key=OPENAI_API_KEY)
- answer_builder = AnswerBuilder()
-
- query_pipeline = Pipeline()
-
- query_pipeline.add_component("docs_retriever", retriever)
- query_pipeline.add_component("prompt_builder", prompt_builder)
- query_pipeline.add_component("gpt35", generator)
- query_pipeline.add_component("answer_builder", answer_builder)
-
- query_pipeline.connect("docs_retriever.documents", "prompt_builder.documents")
- query_pipeline.connect("prompt_builder.prompt", "gpt35.prompt")
- query_pipeline.connect("docs_retriever.documents", "answer_builder.documents")
- query_pipeline.connect("gpt35.replies", "answer_builder.replies")
- res = query_pipeline.run(
- {
- "docs_retriever": {"query": question},
- "prompt_builder": {"query": question},
- "answer_builder": {"query": question},
- }
- )
- return res["answer_builder"]["answers"][0]
-
-
-with st.status(
- "Downloading documentation files...",
- expanded=st.session_state.get("expanded", True),
-) as status:
- files = fetch(DOCUMENTATIONS)
- status.update(label="Indexing documentation...")
- index_files(files)
- status.update(
- label="Download and indexing complete!", state="complete", expanded=False
- )
- st.session_state["expanded"] = False
-
-
-st.header("🔎 Documentation finder", divider="rainbow")
-
-st.caption(
- f"Use this to search answers for {', '.join([d[0] for d in DOCUMENTATIONS])}"
-)
-
-if question := st.text_input(
- label="What do you need to know?", placeholder="What is a DataFrame?"
-):
- with st.spinner("Waiting"):
- answer = search(question)
-
- if not st.session_state.get("run_once", False):
- st.balloons()
- st.session_state["run_once"] = True
-
- st.markdown(answer.data)
- with st.expander("See sources:"):
- for document in answer.documents:
- url_source = document.metadata.get("url_source", "")
- st.write(url_source)
- st.text(document.text)
- st.divider()
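
For reference, the helpers above can also be exercised outside the Streamlit UI. This is only a sketch: it assumes `OPENAI_API_KEY` is set (or present in a local `.env` file) and that importing the module is acceptable, since the Streamlit status block at module level will also run on import.

```python
# Hypothetical programmatic use of the fetch/index/search helpers defined above.
from main import DOCUMENTATIONS, fetch, index_files, search

files = fetch(DOCUMENTATIONS)   # clone the documentation repos and collect matching files
index_files(files)              # convert, clean, split and write them into the MemoryDocumentStore
answer = search("What is a DataFrame?")

print(answer.data)              # the generated answer text
for doc in answer.documents:    # the BM25-retrieved documents backing the answer
    print(doc.metadata.get("url_source", ""))
```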
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/actions/write_prd_review.py b/spaces/deepwisdom/MetaGPT/metagpt/actions/write_prd_review.py
deleted file mode 100644
index 5ff9624c5b14473667ea7ef246b321a76708bdc6..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/actions/write_prd_review.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/11 17:45
-@Author : alexanderwu
-@File : write_prd_review.py
-"""
-from metagpt.actions.action import Action
-
-
-class WritePRDReview(Action):
- def __init__(self, name, context=None, llm=None):
- super().__init__(name, context, llm)
- self.prd = None
- self.desc = "Based on the PRD, conduct a PRD Review, providing clear and detailed feedback"
- self.prd_review_prompt_template = """
- Given the following Product Requirement Document (PRD):
- {prd}
-
- As a project manager, please review it and provide your feedback and suggestions.
- """
-
- async def run(self, prd):
- self.prd = prd
- prompt = self.prd_review_prompt_template.format(prd=self.prd)
- review = await self._aask(prompt)
- return review
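
As a rough usage sketch (the constructor arguments and the PRD text below are assumptions; running it requires a configured LLM backend for MetaGPT):

```python
# Hypothetical usage of the WritePRDReview action defined above.
import asyncio

from metagpt.actions.write_prd_review import WritePRDReview


async def main() -> None:
    prd = "Goal: a mobile app that reminds users to drink water and tracks daily intake."
    action = WritePRDReview(name="prd_review")
    review = await action.run(prd)  # sends the review prompt to the configured LLM
    print(review)


asyncio.run(main())
```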
diff --git a/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/lcm_scheduler.py b/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/lcm_scheduler.py
deleted file mode 100644
index 73ca9671d8e3cd83f1c6f0e0e35df36ba446916b..0000000000000000000000000000000000000000
--- a/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/lcm_scheduler.py
+++ /dev/null
@@ -1,529 +0,0 @@
-# Copyright 2023 Stanford University Team and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This code is strongly influenced by https://github.com/pesser/pytorch_diffusion
-# and https://github.com/hojonathanho/diffusion
-
-import math
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from diffusers.configuration_utils import ConfigMixin, register_to_config
-from diffusers.utils import BaseOutput, logging
-from diffusers.utils.torch_utils import randn_tensor
-from diffusers.schedulers.scheduling_utils import SchedulerMixin
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-@dataclass
-class LCMSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's `step` function output.
-
- Args:
- prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
-        denoised (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
-            The predicted fully denoised sample `(x_{0})` based on the model output from the current timestep.
-            `denoised` can be used to preview progress or for guidance.
- """
-
- prev_sample: torch.FloatTensor
- denoised: Optional[torch.FloatTensor] = None
-
-
-# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
-def betas_for_alpha_bar(
- num_diffusion_timesteps,
- max_beta=0.999,
- alpha_transform_type="cosine",
-):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
- alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar.
- Choose from `cosine` or `exp`
-
- Returns:
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
- """
- if alpha_transform_type == "cosine":
-
- def alpha_bar_fn(t):
- return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
-
- elif alpha_transform_type == "exp":
-
- def alpha_bar_fn(t):
- return math.exp(t * -12.0)
-
- else:
- raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}")
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta))
- return torch.tensor(betas, dtype=torch.float32)
-
-
-# Copied from diffusers.schedulers.scheduling_ddim.rescale_zero_terminal_snr
-def rescale_zero_terminal_snr(betas: torch.FloatTensor) -> torch.FloatTensor:
- """
- Rescales betas to have zero terminal SNR Based on https://arxiv.org/pdf/2305.08891.pdf (Algorithm 1)
-
-
- Args:
- betas (`torch.FloatTensor`):
- the betas that the scheduler is being initialized with.
-
- Returns:
- `torch.FloatTensor`: rescaled betas with zero terminal SNR
- """
- # Convert betas to alphas_bar_sqrt
- alphas = 1.0 - betas
- alphas_cumprod = torch.cumprod(alphas, dim=0)
- alphas_bar_sqrt = alphas_cumprod.sqrt()
-
- # Store old values.
- alphas_bar_sqrt_0 = alphas_bar_sqrt[0].clone()
- alphas_bar_sqrt_T = alphas_bar_sqrt[-1].clone()
-
- # Shift so the last timestep is zero.
- alphas_bar_sqrt -= alphas_bar_sqrt_T
-
- # Scale so the first timestep is back to the old value.
- alphas_bar_sqrt *= alphas_bar_sqrt_0 / (alphas_bar_sqrt_0 - alphas_bar_sqrt_T)
-
- # Convert alphas_bar_sqrt to betas
- alphas_bar = alphas_bar_sqrt**2 # Revert sqrt
- alphas = alphas_bar[1:] / alphas_bar[:-1] # Revert cumprod
- alphas = torch.cat([alphas_bar[0:1], alphas])
- betas = 1 - alphas
-
- return betas
-
-
-class LCMScheduler(SchedulerMixin, ConfigMixin):
- """
- `LCMScheduler` extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with
- non-Markovian guidance.
-
- This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. [`~ConfigMixin`] takes care of storing all config
- attributes that are passed in the scheduler's `__init__` function, such as `num_train_timesteps`. They can be
- accessed via `scheduler.config.num_train_timesteps`. [`SchedulerMixin`] provides general loading and saving
- functionality via the [`SchedulerMixin.save_pretrained`] and [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`, defaults to 1000):
- The number of diffusion steps to train the model.
- beta_start (`float`, defaults to 0.0001):
- The starting `beta` value of inference.
- beta_end (`float`, defaults to 0.02):
- The final `beta` value.
- beta_schedule (`str`, defaults to `"linear"`):
- The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- trained_betas (`np.ndarray`, *optional*):
- Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- original_inference_steps (`int`, *optional*, defaults to 50):
- The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we
- will ultimately take `num_inference_steps` evenly spaced timesteps to form the final timestep schedule.
- clip_sample (`bool`, defaults to `True`):
- Clip the predicted sample for numerical stability.
- clip_sample_range (`float`, defaults to 1.0):
- The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- set_alpha_to_one (`bool`, defaults to `True`):
- Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
- there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
- otherwise it uses the alpha value at step 0.
- steps_offset (`int`, defaults to 0):
- An offset added to the inference steps. You can use a combination of `offset=1` and
- `set_alpha_to_one=False` to make the last step use step 0 for the previous alpha product like in Stable
- Diffusion.
- prediction_type (`str`, defaults to `epsilon`, *optional*):
- Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
- `sample` (directly predicts the noisy sample`) or `v_prediction` (see section 2.4 of [Imagen
- Video](https://imagen.research.google/video/paper.pdf) paper).
- thresholding (`bool`, defaults to `False`):
- Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
- as Stable Diffusion.
- dynamic_thresholding_ratio (`float`, defaults to 0.995):
- The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- sample_max_value (`float`, defaults to 1.0):
- The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- timestep_spacing (`str`, defaults to `"leading"`):
- The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
- Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- rescale_betas_zero_snr (`bool`, defaults to `False`):
- Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
- dark samples instead of limiting it to samples with medium brightness. Loosely related to
- [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).
- """
-
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.00085,
- beta_end: float = 0.012,
- beta_schedule: str = "scaled_linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- original_inference_steps: int = 50,
- clip_sample: bool = False,
- clip_sample_range: float = 1.0,
- set_alpha_to_one: bool = True,
- steps_offset: int = 0,
- prediction_type: str = "epsilon",
- thresholding: bool = False,
- dynamic_thresholding_ratio: float = 0.995,
- sample_max_value: float = 1.0,
- timestep_spacing: str = "leading",
- rescale_betas_zero_snr: bool = False,
- ):
- if trained_betas is not None:
- self.betas = torch.tensor(trained_betas, dtype=torch.float32)
- elif beta_schedule == "linear":
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = (
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
- )
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- else:
- raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}")
-
- # Rescale for zero SNR
- if rescale_betas_zero_snr:
- self.betas = rescale_zero_terminal_snr(self.betas)
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
-
- # At every step in ddim, we are looking into the previous alphas_cumprod
- # For the final step, there is no previous alphas_cumprod because we are already at 0
- # `set_alpha_to_one` decides whether we set this parameter simply to one or
- # whether we use the final alpha of the "non-previous" one.
- self.final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0]
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # setable values
- self.num_inference_steps = None
- self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy().astype(np.int64))
-
- self._step_index = None
-
- # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._init_step_index
- def _init_step_index(self, timestep):
- if isinstance(timestep, torch.Tensor):
- timestep = timestep.to(self.timesteps.device)
-
- index_candidates = (self.timesteps == timestep).nonzero()
-
- # The sigma index that is taken for the **very** first `step`
- # is always the second index (or the last index if there is only 1)
- # This way we can ensure we don't accidentally skip a sigma in
- # case we start in the middle of the denoising schedule (e.g. for image-to-image)
- if len(index_candidates) > 1:
- step_index = index_candidates[1]
- else:
- step_index = index_candidates[0]
-
- self._step_index = step_index.item()
-
- @property
- def step_index(self):
- return self._step_index
-
- def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`torch.FloatTensor`):
- The input sample.
- timestep (`int`, *optional*):
- The current timestep in the diffusion chain.
- Returns:
- `torch.FloatTensor`:
- A scaled input sample.
- """
- return sample
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample
- def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor:
- """
- "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the
- prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by
- s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing
- pixels from saturation at each step. We find that dynamic thresholding results in significantly better
- photorealism as well as better image-text alignment, especially when using very large guidance weights."
-
- https://arxiv.org/abs/2205.11487
- """
- dtype = sample.dtype
- batch_size, channels, *remaining_dims = sample.shape
-
- if dtype not in (torch.float32, torch.float64):
- sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half
-
- # Flatten sample for doing quantile calculation along each image
- sample = sample.reshape(batch_size, channels * np.prod(remaining_dims))
-
- abs_sample = sample.abs() # "a certain percentile absolute pixel value"
-
- s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1)
- s = torch.clamp(
- s, min=1, max=self.config.sample_max_value
- ) # When clamped to min=1, equivalent to standard clipping to [-1, 1]
- s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0
- sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s"
-
- sample = sample.reshape(batch_size, channels, *remaining_dims)
- sample = sample.to(dtype)
-
- return sample
-
- def set_timesteps(
- self,
- num_inference_steps: int,
- device: Union[str, torch.device] = None,
- original_inference_steps: Optional[int] = None,
- ):
- """
- Sets the discrete timesteps used for the diffusion chain (to be run before inference).
-
- Args:
- num_inference_steps (`int`):
- The number of diffusion steps used when generating samples with a pre-trained model.
- device (`str` or `torch.device`, *optional*):
- The device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
- original_inference_steps (`int`, *optional*):
- The original number of inference steps, which will be used to generate a linearly-spaced timestep
- schedule (which is different from the standard `diffusers` implementation). We will then take
- `num_inference_steps` timesteps from this schedule, evenly spaced in terms of indices, and use that as
- our final timestep schedule. If not set, this will default to the `original_inference_steps` attribute.
- """
-
- if num_inference_steps > self.config.num_train_timesteps:
- raise ValueError(
- f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:"
- f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle"
- f" maximal {self.config.num_train_timesteps} timesteps."
- )
-
- self.num_inference_steps = num_inference_steps
- original_steps = (
- original_inference_steps if original_inference_steps is not None else self.original_inference_steps
- )
-
- if original_steps > self.config.num_train_timesteps:
- raise ValueError(
- f"`original_steps`: {original_steps} cannot be larger than `self.config.train_timesteps`:"
- f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle"
- f" maximal {self.config.num_train_timesteps} timesteps."
- )
-
- if num_inference_steps > original_steps:
- raise ValueError(
- f"`num_inference_steps`: {num_inference_steps} cannot be larger than `original_inference_steps`:"
- f" {original_steps} because the final timestep schedule will be a subset of the"
- f" `original_inference_steps`-sized initial timestep schedule."
- )
-
- # LCM Timesteps Setting
- # Currently, only linear spacing is supported.
- c = self.config.num_train_timesteps // original_steps
- # LCM Training Steps Schedule
- lcm_origin_timesteps = np.asarray(list(range(1, original_steps + 1))) * c - 1
- skipping_step = len(lcm_origin_timesteps) // num_inference_steps
- # LCM Inference Steps Schedule
- timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
-
- self.timesteps = torch.from_numpy(timesteps.copy()).to(device=device, dtype=torch.long)
-
- self._step_index = None
-
- def get_scalings_for_boundary_condition_discrete(self, t):
- self.sigma_data = 0.5 # Default: 0.5
-
- # By dividing 0.1: This is almost a delta function at t=0.
- c_skip = self.sigma_data**2 / ((t / 0.1) ** 2 + self.sigma_data**2)
- c_out = (t / 0.1) / ((t / 0.1) ** 2 + self.sigma_data**2) ** 0.5
- return c_skip, c_out
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- generator: Optional[torch.Generator] = None,
- return_dict: bool = True,
- ) -> Union[LCMSchedulerOutput, Tuple]:
- """
- Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`):
- The direct output from learned diffusion model.
- timestep (`float`):
- The current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- A current instance of a sample created by the diffusion process.
- generator (`torch.Generator`, *optional*):
- A random number generator.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~schedulers.scheduling_lcm.LCMSchedulerOutput`] or `tuple`.
- Returns:
- [`~schedulers.scheduling_utils.LCMSchedulerOutput`] or `tuple`:
- If return_dict is `True`, [`~schedulers.scheduling_lcm.LCMSchedulerOutput`] is returned, otherwise a
- tuple is returned where the first element is the sample tensor.
- """
- if self.num_inference_steps is None:
- raise ValueError(
- "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
- )
-
- if self.step_index is None:
- self._init_step_index(timestep)
-
- # 1. get previous step value
- prev_step_index = self.step_index + 1
- if prev_step_index < len(self.timesteps):
- prev_timestep = self.timesteps[prev_step_index]
- else:
- prev_timestep = timestep
-
- # 2. compute alphas, betas
- alpha_prod_t = self.alphas_cumprod[timestep]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
-
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- # 3. Get scalings for boundary conditions
- c_skip, c_out = self.get_scalings_for_boundary_condition_discrete(timestep)
-
- # 4. Compute the predicted original sample x_0 based on the model parameterization
- if self.config.prediction_type == "epsilon": # noise-prediction
- predicted_original_sample = (sample - beta_prod_t.sqrt() * model_output) / alpha_prod_t.sqrt()
- elif self.config.prediction_type == "sample": # x-prediction
- predicted_original_sample = model_output
- elif self.config.prediction_type == "v_prediction": # v-prediction
- predicted_original_sample = alpha_prod_t.sqrt() * sample - beta_prod_t.sqrt() * model_output
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` or"
- " `v_prediction` for `LCMScheduler`."
- )
-
- # 5. Clip or threshold "predicted x_0"
- if self.config.thresholding:
- predicted_original_sample = self._threshold_sample(predicted_original_sample)
- elif self.config.clip_sample:
- predicted_original_sample = predicted_original_sample.clamp(
- -self.config.clip_sample_range, self.config.clip_sample_range
- )
-
- # 6. Denoise model output using boundary conditions
- denoised = c_out * predicted_original_sample + c_skip * sample
-
- # 7. Sample and inject noise z ~ N(0, I) for MultiStep Inference
- # Noise is not used for one-step sampling.
- if len(self.timesteps) > 1:
- noise = randn_tensor(model_output.shape, generator=generator, device=model_output.device)
- prev_sample = alpha_prod_t_prev.sqrt() * denoised + beta_prod_t_prev.sqrt() * noise
- else:
- prev_sample = denoised
-
- # upon completion increase step index by one
- self._step_index += 1
-
- if not return_dict:
- return (prev_sample, denoised)
-
- return LCMSchedulerOutput(prev_sample=prev_sample, denoised=denoised)
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.IntTensor,
- ) -> torch.FloatTensor:
- # Make sure alphas_cumprod and timestep have same device and dtype as original_samples
- alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype)
- timesteps = timesteps.to(original_samples.device)
-
- sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
- sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
- sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape):
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
- noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
- return noisy_samples
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.get_velocity
- def get_velocity(
- self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor
- ) -> torch.FloatTensor:
- # Make sure alphas_cumprod and timestep have same device and dtype as sample
- alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype)
- timesteps = timesteps.to(sample.device)
-
- sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- while len(sqrt_alpha_prod.shape) < len(sample.shape):
- sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
- sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape):
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
- velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample
- return velocity
-
- def __len__(self):
- return self.config.num_train_timesteps
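
For reference, a small sketch of how the scheduler's timestep schedule and `step` loop fit together. With the defaults above (`num_train_timesteps=1000`, `original_inference_steps=50`) and 4 inference steps, `set_timesteps` yields the schedule `[999, 759, 519, 279]`. The UNet call is mocked with random tensors here; in practice `model_output` comes from a Latent Consistency Model.

```python
# Minimal sketch; assumes this file is importable as lcm_scheduler.py.
import torch

from lcm_scheduler import LCMScheduler

scheduler = LCMScheduler()  # defaults: 1000 training steps, original_inference_steps=50
scheduler.set_timesteps(num_inference_steps=4)
print(scheduler.timesteps)  # tensor([999, 759, 519, 279])

sample = torch.randn(1, 4, 64, 64)  # stand-in for the initial latents
for t in scheduler.timesteps:
    model_output = torch.randn_like(sample)  # stand-in for the model's epsilon prediction
    sample, denoised = scheduler.step(model_output, t, sample, return_dict=False)
# After the last step, `denoised` holds the consistency-model estimate of x_0.
```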
diff --git a/spaces/devendergarg14/Paraphrasing_with_GPT_Neo/app.py b/spaces/devendergarg14/Paraphrasing_with_GPT_Neo/app.py
deleted file mode 100644
index 1c0370aeb501a7d4b56d832f692a8391273ea6c0..0000000000000000000000000000000000000000
--- a/spaces/devendergarg14/Paraphrasing_with_GPT_Neo/app.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import gradio as gr
-import requests
-import json
-import os
-API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
-apikey=os.environ.get('api_key')
-headers = {"Authorization": f"Bearer {apikey}"}
-def query(input_sentence,num,start):
- paraphrase_final=[]
- for i in range(0,num):
- intial="""These are the few examples of converting original sentences into paraphrased sentences.\n original: The gray clouds were a warning of an approaching storm.\n paraphrase: The coming storm was foretold by the dark clouds.\n original: Giraffes like Acacia leaves and hay, and they can consume 75 pounds of food a day.\n paraphrase: A giraffe can eat up to 75 pounds of Acacia leaves and hay daily.\n """
- full_input=intial+"original:"+input_sentence + "\n paraphrase:"+start
- data = json.dumps({"inputs":full_input,"parameters":{"max_length":len(full_input.split())+70,"min_length":len(full_input.split())+70},"temperature":0.650+0.05*i})
- response = requests.request("POST", API_URL, headers=headers, data=data)
- output=json.loads(response.content.decode("utf-8"))[0]['generated_text']
- paraphrase=output.split('paraphrase:',3)[-1]
- paraphrase_text=paraphrase.split('original:',1)[0]
- paraphrase_final.append( paraphrase_text.split('.',1)[0]+".")
- return '\n\n'.join([i for i in paraphrase_final[0:]])
-title = "Paraphrasing with GPT-NEO"
-description = "Gradio Demo for Paraphrasing with GPT-NEO. Simply add a one-line sentence in the Input. It is possible to control the start of the paraphrased output sentences using the optional Starting Point input. If the outputs are not satisfactory, try increasing the number of outputs."
-article = ""  # original HTML footer string not preserved
-examples=[['The sky, at sunset, looked like a carnivorous flower.',4,'The coloured reddish'],['Inside us there is something that has no name, that something is what we are.',4,'']]
-gr.Interface(fn=query, inputs=[gr.inputs.Textbox(lines=4, label="Input Text (Single Sentence)"),
-gr.inputs.Slider( minimum=1, maximum=10, step=1, default=4, label="Numbers of Outputs"),
-gr.inputs.Textbox(lines=1, label="Starting Point (optional)")],
-outputs=["text"],
-title=title,description=description,
-article= article,
-examples=examples,
-allow_flagging='never').launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/dfurman/chat-all-in/README.md b/spaces/dfurman/chat-all-in/README.md
deleted file mode 100644
index cba738f108e5d6564d32e6b379bd5717bf891ce7..0000000000000000000000000000000000000000
--- a/spaces/dfurman/chat-all-in/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Chat-All-In
-emoji: 👔
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at
diff --git a/spaces/diacanFperku/AutoGPT/AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key BEST Keygen.md b/spaces/diacanFperku/AutoGPT/AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key BEST Keygen.md
deleted file mode 100644
index e598b27ad51dcaeeaef3ebcec8d7a0cbf9022e36..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key BEST Keygen.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
How to Download and Use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen
-
-
If you are looking for a powerful and easy-to-use software to create stunning slideshows from your photos, videos, and texts, you might want to try AquaSoft SlideShow 10 Ultimate. This software lets you add thousands of effects, animations, and transitions to your slideshows, as well as configure smart templates and export them in 4K-UHD quality. You can also use the SlideShow-Master feature to create a new task from predefined templates and your photos with just a few clicks.
However, if you don't have the license key or you want to use the software on a different PC, you might need a crack to bypass the activation and run the software without any limitations or problems. In this article, we will show you how to download and use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen, which is one of the most reliable and working cracks available online.
-
-
What is AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?
-
-
AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is a file that replaces the original software executable (SlideShow.exe) with a modified one that removes the need for a valid license key or an online activation. This way, you can use the software without having to enter a license key every time you launch it.
-
-
This crack was created by Steve Phillips, a well-known hacker who specializes in cracking PC software. It is compatible with the multi-language version of the software released in February 2017. It also fixes some bugs and errors that might occur in the original software.
-
-
How to Download AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?
-
-
There are many websites that offer AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen for download, but not all of them are safe or trustworthy. Some of them might contain viruses, malware, or fake files that can harm your PC or steal your personal information. Therefore, you should be careful when choosing where to download the crack from.
-
-
One of the best places to download AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is OvaGames.net, a website that provides free and full version PC software with cracks. You can find the link to download the crack in their post about AquaSoft SlideShow 10 Ultimate-SKIDROW. The file size is about 5.6 GB and it is split into 6 parts of 990 MB each. You can use Mega.nz, GDrive, Direct FTP Link, Uptobox, or Upfile.Mobi to download the parts.
-
-
Alternatively, you can also use torrent sites like The Pirate Bay or Kickass Torrents to download AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. However, you will need a torrent client like uTorrent or BitTorrent to do so. You should also use a VPN service to protect your privacy and avoid any legal issues.
-
-
-
How to Install and Use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?
-
-
After downloading AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen, you will need to extract it using a software like WinRAR or 7-Zip. You will get an ISO file that contains all the software files and the crack. You will need to mount this ISO file using a software like Daemon Tools or PowerISO.
-
-
Then, you will need to install the software by following these steps:
-
-
-
Run setup.exe from the mounted ISO file.
-
Select your preferred language and destination folder.
-
Wait for the installation to complete.
-
Copy all files from SKIDROW folder (located inside ISO file) to your installation folder (where SlideShow.exe is located).
-
Replace existing files when prompted.
-
-
-
Congratulations! You have successfully installed AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. Now you can use the software by running SlideShow.exe from your installation folder.
-
What are the Benefits of Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?
-
-
Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen has many benefits for users who want to create stunning slideshows from their photos, videos, and texts. Here are some of them:
-
-
-
You can save money by not buying the license key or paying for a subscription service.
-
You can use the software on any PC that meets the minimum system requirements, regardless of the region or language.
-
You can use the software offline without needing an internet connection or an online activation.
-
You can enjoy the software without any interruptions, errors, or glitches that might occur in the original software.
-
You can access all the features, modes, and content of the software without any restrictions or limitations.
-
-
-
Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is a great way to create stunning slideshows from your photos, videos, and texts in a fast and easy way. You will be able to add thousands of effects, animations, and transitions to your slideshows, as well as configure smart templates and export them in 4K-UHD quality. You will also be able to use the SlideShow-Master feature to create a new task from predefined templates and your photos with just a few clicks.
-
-
Is it Safe and Legal to Use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?
-
-
Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is not entirely safe or legal. There are some risks and consequences that you should be aware of before downloading and using the crack. Here are some of them:
-
-
-
You might download a fake or corrupted file that can damage your PC or infect it with viruses or malware.
-
You might violate the copyright laws or terms of service of AquaSoft or other parties involved in the production and distribution of the software.
-
You might face legal actions or penalties from AquaSoft or other parties involved in the production and distribution of the software.
-
You might lose your access to online features, updates, or support from AquaSoft or other parties involved in the production and distribution of the software.
-
You might have a poor performance or quality due to bugs, crashes, or compatibility issues that are not fixed by the crack.
-
-
-
Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is not recommended for users who want to use the software safely and legally. You should always buy the license key or use a legitimate service to use AquaSoft SlideShow 10 Ultimate. This way, you will support the developers and publishers who worked hard to create this amazing software. You will also enjoy a better performance and quality with more features, updates, and support.
-
What are the Features of AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?
-
-
AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is not just a simple file that lets you use the software without a license key or an online activation. It also has some features that enhance your software experience and make it more enjoyable. Here are some of them:
-
-
-
You can create slideshows from your photos, videos, and texts with thousands of effects, animations, and transitions. You can also add music, narration, captions, and logos to your slideshows.
-
You can configure smart templates that automatically adjust to your content and style preferences. You can also use the SlideShow-Master feature to create a new task from predefined templates and your photos with just a few clicks.
-
You can export your slideshows in 4K-UHD quality and various formats, such as MP4, AVI, MKV, MOV, WMV, and more. You can also burn your slideshows to DVD or Blu-ray discs or upload them to YouTube or Facebook.
-
You can edit your slideshows with advanced tools, such as timeline, storyboard, layout designer, image editor, video editor, and audio editor. You can also use keyframes, masks, chroma keying, and motion paths to create stunning effects.
-
You can preview your slideshows in real-time and adjust them according to your needs. You can also use the live output feature to display your slideshows on a second monitor or a projector.
-
-
-
AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is a file that adds more fun and excitement to your software. You will be able to create stunning slideshows from your photos, videos, and texts in a fast and easy way. You will also be able to appreciate the software's design and production more.
-
-
What are the Reviews of AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?
-
-
AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen has received positive reviews from users who have used it. Most of them praised it for being a powerful and easy-to-use software to create stunning slideshows from their photos, videos, and texts. They also liked the software's features, performance, quality, and compatibility. They said that using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen was a satisfying and enjoyable experience.
-
-
However, some of them criticized it for being too expensive, complex, or unstable. They also disliked the software's bugs, glitches, errors, or limitations. They said that using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen was a disappointing and frustrating experience.
-
-
Here are some examples of reviews from users who have used AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen:
-
-
-
"This software is amazing! It allows me to create stunning slideshows from my photos, videos, and texts with ease. It has thousands of effects, animations, and transitions to choose from. It also has smart templates and SlideShow-Master feature that make my work easier and faster. It exports my slideshows in 4K-UHD quality and various formats that I can share with my friends and family. I love using this software!"
-
-
-
-
"This software is terrible! It is too expensive for what it offers. It is also too complex and hard to use for beginners like me. It has many bugs, glitches, errors, and limitations that ruin my slideshows. It exports my slideshows in low quality and formats that I can't play on my devices or platforms. I hate using this software!"
-
-
-
AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen has received mixed reviews from users who have tried it: most of them enjoyed it, while some of them did not. Whether you will like or dislike using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen depends on your personal taste and expectations.
-
Conclusion
-
-
AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is a file that allows you to use AquaSoft SlideShow 10 Ultimate without needing a valid license key or an online activation. It is one of the most reliable and working cracks available online. However, it is not safe or legal to use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. You should always buy the license key or use a legitimate service to use AquaSoft SlideShow 10 Ultimate. This way, you will support the developers and publishers who worked hard to create this amazing software. You will also enjoy a better performance and quality with more features, updates, and support.
-
-
In this article, we have shown you how to download, install, and use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. We have also discussed the benefits, features, reviews, risks, and consequences of using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.
-
-
Thank you for reading this article. We hope that you have learned something new and useful about AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. We also hope that you have enjoyed using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. Have a great day!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Dosprn.1.78.REPACK Full.Version.109.md b/spaces/diacanFperku/AutoGPT/Dosprn.1.78.REPACK Full.Version.109.md
deleted file mode 100644
index 56f2a021969da03f8ad1ade66cfb979a4c64c293..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Dosprn.1.78.REPACK Full.Version.109.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
-
-The median follow-up time was 10 years (interquartile range, 5--19). Of the 129 patients, 114 had metastatic disease at diagnosis. The median time from diagnosis to death for all patients was 3 years (95% CI, 1--7). The median OS for all patients was 14.8 years (95% CI, 11.4--18.2). The patients were treated with a wide variety of chemotherapy regimens ([Table 2](#t0010)ref-type="table"), and most of them were treated with a combination of platinum and anthracycline-based drugs. In this study, 9 patients (7.0%) were treated with five drugs or more as a first-line chemotherapy regimen, such as epirubicin, paclitaxel, capecitabine, carboplatin, and gemcitabine. The patients received a median number of 12 chemotherapy courses (interquartile range, 9--13). The patients were treated with a median of seven cycles (interquartile range, 4--10) of chemotherapy. Of the 129 patients, 103 (80.5%) had recurrence and 69 (53.5%) had died at the time of the analysis. Of the 69 patients who died, 16 patients (23.2%) had MBC and 43 patients (64.2%) had OC.
-
-Efficacy #s0040
-
---------
-
-The EFS rates at 3 and 5 years for all patients were 40% and 33%, respectively. The median EFS was 7 years (95% CI, 5.8--8.2). The median time to recurrence was 2 years (95% CI, 1--3). The median OS was 14.8 years (95% CI, 11.4--18.2). The median time from recurrence to death was 4 years (95% CI, 3--6). The median OS for patients with MBC and OC was 8.4 and 14.4 years, respectively. The median EFS was 3.8 years (95% CI, 3.5--4.1) for patients with MBC and 14.7 years (95% CI, 10.7--17.6) for patients with OC. [Figure 1](#f0005){ref-type 4fefd39f24
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Ex4 To Mq4 Decompiler 229 145.md b/spaces/diacanFperku/AutoGPT/Ex4 To Mq4 Decompiler 229 145.md
deleted file mode 100644
index b5609fda2c30a853d8938c71e64cb0076efbd372..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Ex4 To Mq4 Decompiler 229 145.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
How to Convert Ex4 Files to Mq4 Files Using a Decompiler
-
Ex4 files are compiled programs for MetaTrader 4, a popular platform for forex trading and algorithmic trading. Mq4 files are the source code files that can be edited and modified using the MetaEditor tool. If you want to access the source code of an ex4 file, you might need to use a decompiler software that can convert ex4 to mq4.
However, decompiling ex4 files is not a simple or legal process. According to some sources[^1^] [^2^], decompilers were available in the past, but they could only work with older versions of ex4 files. Nowadays, most ex4 files are protected and encrypted, making them impossible to decompile. Moreover, decompiling ex4 files might violate the intellectual property rights of the original developers, who distribute their programs without source code for a reason.
-
Therefore, before attempting to decompile an ex4 file, you should first contact its developer and ask for permission or access to the source code. This is the most ethical and respectful way to get the mq4 file you want. If the developer agrees, you can use their mq4 file for educational or personal purposes only. You should not modify, distribute, or sell their code without their consent.
-
If the developer does not agree or does not respond, you should respect their decision and look for other alternatives. You can try to find similar programs or indicators that have open source code, or you can learn how to code your own using the MetaEditor tool and the MQL4 language. You can also hire a professional programmer to create a custom program or indicator for you.
-
If you still insist on decompiling an ex4 file, you should be aware of the risks and challenges involved. You will need to find a reliable and updated decompiler software that can handle the latest versions of ex4 files[^3^] [^4^]. You will also need to have some knowledge of cryptography and binary decompilation[^1^], as well as MQL4 syntax and logic. Even if you manage to decompile an ex4 file, you will likely get an obfuscated and unreadable code that will be hard to understand and modify[^1^]. You will also be liable for any legal consequences that might arise from your actions.
-
-
In conclusion, converting ex4 files to mq4 files using a decompiler is not a recommended or easy task. It is better to respect the developers' rights and wishes, and look for other ways to achieve your goals. If you want to learn more about MetaTrader 4, MQL4, and forex trading, you can visit the official website of MetaTrader or browse online forums and tutorials.
-
-
MetaTrader 4 is one of the most popular and widely used platforms for forex trading and algorithmic trading. It allows traders to access the global financial markets, analyze price movements, execute orders, and create automated trading strategies using expert advisors (EAs) and custom indicators. MetaTrader 4 also provides a built-in programming environment called MetaEditor, where users can write and edit their own code using the MQL4 language.
-
MQL4 stands for MetaQuotes Language 4, and it is a high-level object-oriented programming language that is based on C++. It is designed specifically for developing trading applications for MetaTrader 4. MQL4 allows users to create EAs, indicators, scripts, libraries, and other programs that can interact with the MetaTrader 4 terminal and perform various trading tasks. MQL4 also supports graphical objects, mathematical functions, technical analysis tools, network functions, and more.
-
Ex4 files are the result of compiling MQL4 source code files (mq4 files) using the MetaEditor tool. Compiling is the process of transforming human-readable code into machine-readable code that can be executed by the MetaTrader 4 terminal. Ex4 files are faster and more efficient than mq4 files, but they cannot be modified or edited by the user. Ex4 files are usually distributed by developers who want to protect their intellectual property and prevent unauthorized copying or modification of their code.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Political Law Reviewer Ateneo Pdf Extra Quality Download.md b/spaces/diacanFperku/AutoGPT/Political Law Reviewer Ateneo Pdf Extra Quality Download.md
deleted file mode 100644
index fb51a5206ae84ac4fb7061e3ce5a83b42be1fd66..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Political Law Reviewer Ateneo Pdf Extra Quality Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-I assume you are referring to the solutions given in Solid State Physics by Ashcroft and Mermin. I doubt that the authors have provided ...... consistently state and support their point of view. But you mean that this is solid state physics. If so, then it is difficult for me to answer, because. I do not know that. In this regard, I don't know much about solid state physics, but I do know physics and mathematics. I would say that both physics and mathematics are parts of physics. Physics and mathematics are different branches of physics, but they are not different sciences. Physics is the science that studies the structure and behavior of matter. 8a78ff9644
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Comtekk Tone Generator Serial Number.md b/spaces/falterWliame/Face_Mask_Detection/Comtekk Tone Generator Serial Number.md
deleted file mode 100644
index f19ae7af69da5e07587dad31db92781f6c1da8c9..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Comtekk Tone Generator Serial Number.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-ComTekk Multi Decoder will listen for any sustained tone signal and display the ... Write a program that calculates and prints the number of minutes in a year python ... Our free VIN decoder can be used to determine everything from vehicle trim level ... To align the frequency of the tone generator, use another transceiver with ... 1fdad05405
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Ih8sn0w Ireb V3 1.2 For Windows English Download BEST.md b/spaces/falterWliame/Face_Mask_Detection/Ih8sn0w Ireb V3 1.2 For Windows English Download BEST.md
deleted file mode 100644
index 804d8e5bfb2ad72de41a73ece797fe1f4e929e4c..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Ih8sn0w Ireb V3 1.2 For Windows English Download BEST.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-taig 1.2.1 EN (Tethered jailbreak iOS 8.0-8.1.2 for all devices: iPhone . Sn0wBreeze 2.9.3 (pwnagetool for Windows, supports iOS tethered jailbreak . Nill - jiayu s3g (AOKP) download.
-Furious v2 for Android.
-Sn0wbreeze 2.9.3 (PwnageTool) For iOS 8.4.1.
-Sn0wbreeze 2.9.3 (PwnageTool) For iOS 8.4.1 - 8 days back . 8a78ff9644
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Intervideo Windvr 6.1 For Windows 7 Free [WORK] 17.md b/spaces/falterWliame/Face_Mask_Detection/Intervideo Windvr 6.1 For Windows 7 Free [WORK] 17.md
deleted file mode 100644
index 72f98eec59b7611497c5a921e2a27cff527916f7..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Intervideo Windvr 6.1 For Windows 7 Free [WORK] 17.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
An unusual feature of this software is its 'sleep timer' option, which allows the software to cease recording automatically. The software works quite well, but it can be difficult to set up. Nevertheless, the overall system is secure, and installation of the add-on for MacOS or of the professional software Intervideo WinDVR 6 is easy and quick. Definitely worth every penny. Furthermore, the website provides great support.
-
Using the disc, burn software, you may transfer and burn files to disc for backup, archive, and distribution. The software is simple to use and requires little or no knowledge of burning. A certified Windows Live Mail can be downloaded from the website for free.
Adore the benefits of a magnetic media once more. The program offers you the best and most simplified VCD/S-VCD/DVD recorder. Key features are: an easy-to-use interface, fast conversion speed, an audio tool, surround sound support, and a vast number of disc options. Intervideo WinDVD Pro 8 makes a great DVD authoring tool with features such as: Dolby Digital Audio, Dolby Pro Logic surround sound, multiple language tracks, Smart Region and Protection with the DVD-compliant CSS encryption. Intervideo WinDVD Platinum 8 is a high-performance DVD writing program for the Windows operating system and Mac OS X. It can burn up to 18 hours of DVD video while burning, and you can even use it as a virtual DVD drive for Windows. Intervideo WinDVD Pro 8 is an intelligent and complete DVD software package that enables you to burn, rip, encode, and convert DVD discs and videos. The software automatically detects and corrects DVD errors such as defects, scratches, and missing audio tracks.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/MarceloBielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile.md b/spaces/falterWliame/Face_Mask_Detection/MarceloBielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile.md
deleted file mode 100644
index d4c898004769ea38fb9a4889d27e21bf1c06dba4..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/MarceloBielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Marcelo BielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile. 8a78ff9644
-
-
-
diff --git a/spaces/fatiXbelha/sd/Download Scratch 2.0 Offline Editor for Windows Mac and Linux.md b/spaces/fatiXbelha/sd/Download Scratch 2.0 Offline Editor for Windows Mac and Linux.md
deleted file mode 100644
index 8bc54189f6b18941deb9e893ab350bd16c8c5cad..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Scratch 2.0 Offline Editor for Windows Mac and Linux.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
How to Download Scratch 2.0 for Free
-
Do you want to learn how to code, create your own interactive projects, and join a creative community of millions of users? If so, you should download Scratch 2.0 for free. Scratch 2.0 is a free programming language and online platform that lets you imagine, program, and share your own games, animations, and stories. In this article, we will show you what Scratch 2.0 is, why you should download it, and how to download it for free.
Scratch 2.0 is the second version of Scratch, a programming language and online platform developed by the MIT Media Lab. Scratch 2.0 was released in May 2013 and introduced many new features and improvements over the previous version, such as:
-
-
A redesigned user interface that makes it easier to access different tools and options
-
A new paint editor that allows you to draw and edit your own sprites and backgrounds
-
A new sound editor that lets you record and edit your own sounds and music
-
A new vector graphics mode that enables you to create smoother and more scalable graphics
-
A new backpack feature that lets you store and reuse your favorite sprites, costumes, sounds, and scripts
-
A new cloud data feature that allows you to store and share data across different projects
-
A new extension feature that lets you connect Scratch with external devices and services
-
-
With Scratch 2.0, you can create anything you can imagine using a simple drag-and-drop interface that lets you snap together different blocks of code. You can also remix and modify existing projects from the online community, or share your own projects with others.
-
Why Download Scratch 2.0?
-
Learn Programming Skills
-
Scratch 2.0 is a great way to learn programming skills, especially for beginners and young learners. Scratch 2.0 teaches you the basic concepts and logic of coding, such as variables, loops, conditionals, events, operators, lists, procedures, etc. You can also learn more advanced topics such as recursion, cloning, parallelism, synchronization, etc.
-
Scratch 2.0 also helps you develop computational thinking skills, such as abstraction, decomposition, pattern recognition, algorithm design, debugging, etc. These skills are essential for solving problems in any domain of science, technology, engineering, art, or math.
-
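As a concrete (if loose) analogy, the repeat-a-block-of-steps logic that Scratch expresses with a "repeat" block wrapped around an "if" block looks like this in Python. This is purely illustrative; Scratch itself is drag-and-drop and involves no typed code.

# Count from 1 to 10 and report which numbers are even -- roughly what a
# Scratch "repeat 10" loop containing an "if <n mod 2 = 0> then" block does.
for n in range(1, 11):
    if n % 2 == 0:
        print(n, "is even")
    else:
        print(n, "is odd")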
-
Create Interactive Projects
-
Scratch 2.0 lets you create your own interactive projects using a variety of media elements, such as sprites, costumes, backgrounds, sounds, music, text, etc. You can also use different types of blocks to control the behavior and appearance of your projects, such as motion blocks, looks blocks, sound blocks, pen blocks, data blocks, events blocks, control blocks, sensing blocks, operators blocks, and more blocks.
-
You can make any kind of project you want with Scratch 2.0, such as games, animations, stories, simulations, quizzes, art, music, etc. You can also add interactivity to your projects using different inputs, such as keyboard, mouse, microphone, camera, etc. You can also use different outputs, such as sound, speech, text-to-speech, etc.
-
Join a Creative Community
-
Scratch 2.0 connects you with a creative community of millions of users from all over the world. You can explore and play with thousands of projects from other Scratchers, or share your own projects with them. You can also give and receive feedback, comments, likes, favorites, and follows. You can also join different studios, groups, and challenges that match your interests and goals.
-
Scratch 2.0 also supports online learning and collaboration. You can use Scratch 2.0 to create and join online courses, tutorials, guides, and resources that teach you different skills and topics. You can also use Scratch 2.0 to work on projects with your friends, classmates, teachers, or mentors.
-
How to Download Scratch 2.0 for Free
-
Requirements
-
To download Scratch 2.0 for free, you need the following requirements:
-
-
A computer with Windows (XP or later), Mac OS X (10.6 or later), or Linux (Ubuntu 12.04 or later)
-
An internet connection to download the Scratch 2.0 offline editor
-
Adobe AIR (version 20 or later) to run the Scratch 2.0 offline editor
-
-
If you don't have these requirements, you can still use Scratch 2.0 online by visiting https://scratch.mit.edu/
-
Steps
-
To download Scratch 2.0 for free, follow these steps:
-
-
Go to https://scratch.mit.edu/download and click on the "Download" button for your operating system (Windows, Mac OS X, or Linux)
-
Save the Scratch 2.0 installer file to your computer and run it
-
Follow the instructions on the screen to install Scratch 2.0 offline editor and Adobe AIR (if you don't have it already)
-
Launch the Scratch 2.0 offline editor from your desktop or start menu
-
Enjoy creating and sharing your projects with Scratch 2.0!
-
-
Tips and Tricks
-
Here are some tips and tricks to make the most of Scratch 2.0 offline editor:
-
-
You can open and save your projects locally on your computer or online on your Scratch account
-
You can import and export your projects as .sb2 files that you can share with others or use in other applications
-
You can use the "File" menu to access different options such as "New", "Open", "Save", "Save As", "Upload to Scratch", "Download to your computer", "Import", and "Export"
-
You can use the "Edit" menu to access different options such as "Undo", "Redo", "Cut", "Copy", "Paste", "Delete", "Select All", and "Turbo Mode"
-
You can use the "Tips" button to access different tutorials, guides, and resources that help you learn and use Scratch 2.0
-
You can use the "Help" menu to access different options such as "About Scratch", "Check for Updates", "Report a Problem", and "Scratch Website"
-
You can use the green flag button to start your project, the red stop button to stop your project, and the full screen button to view your project in full screen mode
-
You can use the stage area to see your project in action, the sprite list to add, delete, or select sprites, the scripts area to add, edit, or delete blocks of code, the costumes tab to add, edit, or delete costumes for your sprites, the sounds tab to add, edit, or delete sounds for your sprites, and the backpack to store and reuse your favorite sprites, costumes, sounds, and scripts
-
You can use the zoom buttons to zoom in or out of your scripts area, the clean up button to organize your blocks neatly, and the ? button to get help on any block
-
You can right-click on any sprite, costume, sound, script, or block to access different options such as "duplicate", "delete", "rename", "edit", "save to local file", etc.
-
-
Conclusion
-
Scratch 2.0 is a free programming language and online platform that lets you imagine, program, and share your own games, animations, and stories. You can download Scratch 2.0 for free and use it offline on your computer. You just need to follow the steps we showed you in this article. You can also learn more about Scratch 2.0 by exploring the online community and the tips and tricks we shared with you. Scratch 2.0 is a fun and easy way to learn programming skills, create interactive projects, and join a creative community. So what are you waiting for? Download Scratch 2.0 for free today and start scratching!
-
FAQs
-
Here are some frequently asked questions and answers about Scratch 2.0:
-
-
What is the difference between Scratch 2.0 and Scratch 3.0? Scratch 3.0 is the latest version of Scratch that was released in January 2019. It has some new features and improvements over Scratch 2.0, such as:
-
A new user interface that adapts to different screen sizes and devices
-
A new sound editor that supports more sound formats and effects
-
A new extension system that allows you to add more blocks and functionalities from external sources
-
A new video sensing feature that lets you use your webcam as an input device
-
A new text-to-speech feature that lets you convert text into speech
-
A new translation feature that lets you translate your projects into different languages
-
A new compatibility with HTML5 that makes it easier to run Scratch on any browser without Adobe Flash Player
Can I use Scratch 2.0 online? Yes, you can use Scratch 2.0 online by visiting https://scratch.mit.edu/projects/editor/?tip_bar=getStarted. However, this version of Scratch 2.0 is no longer updated or supported by the Scratch team. You may encounter some bugs or issues when using it online. We recommend you to use Scratch 3.0 online instead.
-
Can I use Scratch 2.0 on a tablet or a smartphone? No, you cannot use Scratch 2.0 on a tablet or a smartphone. Scratch 2.0 offline editor only works on computers with Windows, Mac OS X, or Linux operating systems. If you want to use Scratch on a tablet or a smartphone, you can use Scratch 3.0 online instead.
-
How can I update Scratch 2.0 offline editor? To update Scratch 2.0 offline editor, you need to download and install the latest version of Scratch 2.0 offline editor from https://scratch.mit.edu/download. You may also need to update Adobe AIR from https://get.adobe.com/air/. You can also check for updates from the "Help" menu in the Scratch 2.0 offline editor.
-
Where can I find more help and support for Scratch 2.0? You can find more help and support for Scratch 2.0 by visiting the following websites:
Lokicraft Helper City Download Mediafıre: How to Get the Best City in Lokicraft
-
If you are a fan of sandbox games, you might have heard of Lokicraft, a game inspired by Minecraft that lets you create and explore your own world. But did you know that you can also download a city map from Mediafıre using a mod app called Lokicraft Helper? In this article, we will show you how to do that, and give you some tips and tricks for playing in the city.
-
What is Lokicraft?
-
A sandbox game inspired by Minecraft
-
Lokicraft is a sandbox game that was released in 2019 by lokidev. It is similar to Minecraft, but with some differences in graphics, mechanics, and features. You can build anything you want using blocks of different materials, colors, and shapes. You can also explore different biomes, such as forests, deserts, mountains, and oceans. You can play in survival mode, where you have to gather resources, craft tools, fight enemies, and manage your hunger and health. Or you can play in creative mode, where you have unlimited resources and no threats.
Lokicraft has many features that make it fun and engaging. Some of them are:
-
-
You can choose from different skins and outfits for your character.
-
You can tame animals and ride them.
-
You can craft weapons, armor, potions, and other items.
-
You can use redstone to create circuits and machines.
-
You can join multiplayer servers and play with other people online.
-
You can share your creations with other players and download their worlds.
-
-
What is Lokicraft Helper?
-
A mod app that adds more content to Lokicraft
-
Lokicraft Helper is a mod app that was created by Herbert Saikia. It is not an official app from lokidev, but it works well with Lokicraft. It adds more content to the game, such as new blocks, items, mobs, maps, textures, and sounds. You can also use it to edit your world, change your game mode, teleport to different locations, and more.
-
Benefits and drawbacks of using Lokicraft Helper
-
Using Lokicraft Helper has some benefits and drawbacks. Some of them are:
-
-
You can access more content and features that are not available in the original game.
-
You can customize your game according to your preferences.
-
You can enhance your gaming experience and have more fun.
-
-
However,
-
-
You need to download the app from Mediafıre or other third-party sources, which may not be safe or reliable.
-
You need to allow unknown sources on your device settings, which may expose your device to malware or viruses.
-
You may encounter some bugs or glitches when using the app or playing the game.
-
-
How to download Lokicraft Helper City from Mediafıre?
-
Step 1: Download the Lok
Step 1: Download the Lokicraft Helper app from Mediafıre
-
To download the Lokicraft Helper app, you need to visit the Mediafıre link that is provided by the developer. You can find the link on his YouTube channel, where he also posts videos about the app and the game. The link is: https://www.mediafire.com/file/9w0k8y9z7x4x9v4/Lokicraft_Helper.apk/file. Click on the link and then click on the green download button. The app file will be downloaded to your device.
-
Step 2: Install the app and launch it
-
After downloading the app file, you need to install it on your device. To do that, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on. Then, locate the app file in your device storage and tap on it. Follow the instructions on the screen to install the app. Once the app is installed, launch it by tapping on its icon.
-
Step 3: Choose the city option and download the city file
-
When you launch the app, you will see a menu with different options, such as blocks, items, mobs, maps, textures, and sounds. To download the city map, you need to choose the maps option. Then, you will see a list of different maps that you can download, such as skyblock, castle, parkour, and city. To download the city map, tap on the city option. You will see a preview of the city and a download button. Tap on the download button and wait for the city file to be downloaded to your device.
-
-
Step 4: Import the city file to Lokicraft and enjoy
-
After downloading the city file, you need to import it to Lokicraft. To do that, open Lokicraft and tap on the play button. Then, tap on the import world button at the bottom of the screen. You will see a list of files in your device storage. Locate the city file that you downloaded from Mediafıre and tap on it. The file will be imported to Lokicraft and you will see it in your world list. Tap on the city world and start playing.
-
Tips and tricks for playing in the city
-
Explore the buildings and landmarks
-
The city map is very detailed and realistic. You can explore different buildings and landmarks, such as skyscrapers, hotels, restaurants, shops, museums, parks, and more. You can also find hidden chests and secrets in some of them. You can admire the architecture and design of the city and discover new things every time you play.
-
Customize your own house and shop
-
The city map also gives you a chance to customize your own house and shop. You can choose from different types of houses and shops that are available in the city. You can also use blocks and items from Lokicraft Helper to decorate them according to your style and taste. You can make your house cozy and comfortable, and your shop attractive and profitable.
-
Interact with other players and NPCs
-
The city map is more fun when you play with other players online. You can join multiplayer servers that have the city map installed and meet new friends or foes. You can chat with them, trade with them, compete with them, or cooperate with them. You can also interact with NPCs that are in the city, such as villagers, guards, shopkeepers, and more. They can offer you quests, rewards, information, or services.
-
Conclusion
-
Lokicraft Helper City Download Mediafıre is a great way to get the best city in Lokicraft. It is a mod app that adds more content and features to Lokicraft, such as new blocks, items, mobs, maps, textures, and sounds. You can download it from Mediafıre using a link provided by the developer. You can also install it on your device easily by enabling unknown sources and following some simple steps. You can then import the city file to Lokicraft and enjoy playing in it.
-
The city map is very impressive and realistic. It has many buildings and landmarks that you can explore and discover. It also allows you to customize your own house and shop using blocks and items from Lokicraft Helper. You can also interact with other players online or NPCs in-game for more fun and excitement.
-
If you are looking for a new challenge and adventure in Lok
If you are looking for a new challenge and adventure in Lokicraft, you should definitely try the city map from Mediafıre. It will give you a whole new perspective and experience of the game. You will not regret it.
-
FAQs
-
Q: Is Lokicraft Helper safe to use?
-
A: Lokicraft Helper is not an official app from lokidev, so it may not be 100% safe or reliable. You should download it from Mediafıre or other third-party sources at your own risk. You should also enable unknown sources on your device settings, which may expose your device to malware or viruses. You should also backup your Lokicraft data before using the app, in case something goes wrong.
-
Q: Is Lokicraft Helper free to use?
-
A: Yes, Lokicraft Helper is free to use. You do not need to pay anything to download or use the app. However, you may see some ads or pop-ups when using the app, which may be annoying or intrusive. You can also support the developer by donating or subscribing to his YouTube channel.
-
Q: How can I update Lokicraft Helper?
-
A: The developer of Lokicraft Helper usually posts updates and new versions of the app on his YouTube channel. You can check his channel regularly for any news or announcements. You can also follow him on social media platforms, such as Facebook, Twitter, or Instagram. To update the app, you need to download the latest version from Mediafıre or other sources and install it on your device.
-
Q: How can I uninstall Lokicraft Helper?
-
A: If you want to uninstall Lokicraft Helper, you can do it easily by following these steps:
-
-
Go to Settings > Apps > Lokicraft Helper and tap on it.
-
Tap on the uninstall button and confirm your action.
-
The app will be uninstalled from your device.
-
-
Q: How can I contact the developer of Lokicraft Helper?
-
A: If you have any questions, feedback, suggestions, or complaints about Lokicraft Helper, you can contact the developer by using these methods:
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/inamXcontru/PoeticTTS/Dil Ka Rishta Movie Full Hd 1080p Free Download.md b/spaces/inamXcontru/PoeticTTS/Dil Ka Rishta Movie Full Hd 1080p Free Download.md
deleted file mode 100644
index 0af5d85a6df29c3f8a02f4618c9de02e5556e1c5..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Dil Ka Rishta Movie Full Hd 1080p Free Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Free Dil Ka Rishta Bada Pyara Hai Hd mp3 download from mp3such ... Hai Full Song Lyrics Full Hd 1080p Kumar Sanu Alka Yagnik Udit Ji Free Download. 4d29de3e1b
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Catholic Hymn Book Nigeria Pdf Download Free.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Catholic Hymn Book Nigeria Pdf Download Free.md
deleted file mode 100644
index b0d72ab7523bcb0dd538213f9f3994b8351177fb..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Catholic Hymn Book Nigeria Pdf Download Free.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Catholic Hymn Book is a lightweight app with a collection of hymns in the Catholic Hymn Book used in Nigeria and all over the world. ... 910387aaf Online PDF Ebook Epub Library HYMNAL ANCIENT HYMNS AND SPIRITUAL ... Listen to Traditional Irish Songs and Music, Download the MP3 or Midi Files, and get ... 4d29de3e1b
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/EVEREST Ultimate Edition V5.02.1750 Portable Serial Key VERIFIED.md b/spaces/inplisQlawa/anything-midjourney-v4-1/EVEREST Ultimate Edition V5.02.1750 Portable Serial Key VERIFIED.md
deleted file mode 100644
index 30e7c7f4e48d4bce8033edecf580a2b2fd937a85..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/EVEREST Ultimate Edition V5.02.1750 Portable Serial Key VERIFIED.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
EVEREST Ultimate Edition V5.02.1750 Portable Serial Key
Today, March 22, we were contacted by a reporter that received an order for our Darkscandal Pack; the order was not made through us or by our company. She knew that the time period between the written delivery and the actual receipt of the package was on March 12 and this also included the delivery time to Spain. Since her order was placed on Tuesday, we said that she would have the package by March 24. However, the package was received on March 26, which is 2 business days later. We knew that we were responsible because we are the only people that send the packages to our customers, so we apologize for this delay in the delivery. We will be in contact with the customer and provide all information that we know regarding the shipment.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Hazrat Umar Quotes In Urdu Pdf Download TOPl.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Hazrat Umar Quotes In Urdu Pdf Download TOPl.md
deleted file mode 100644
index 0170c7314b1612e3e625b049ed2692996da8e6f0..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Hazrat Umar Quotes In Urdu Pdf Download TOPl.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-096. 6. 28. ; Raaz Full Full Movie quarnell096. 6. 27. ; Raaz 2.5 Pc. The oldest prefabricated arches were usually made from dynamic elements and that kind of preparatory work. Before the corresponding pre-fabrication they were reproduced by artists, beginning with the oldest monks and masters and ending with people who called themselves arches. Arches originated during worship services, but since then they trace back to Soviet flying monuments and receive names that sound like the capital openings of a sequential training program for artists. 4fefd39f24
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Virtual Wifi Miniport Adapter Driver.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Virtual Wifi Miniport Adapter Driver.md
deleted file mode 100644
index ab799d8ce8df174c3e814bbc0632739051cbfe9f..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Virtual Wifi Miniport Adapter Driver.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Here is the list of Microsoft Virtual WiFi Miniport Adapter drivers, Download & update Microsoft Virtual WiFi Miniport Adapter drivers from professional Microsoft ... 4d29de3e1b
-
-
-
diff --git a/spaces/luckwill/chiakicc/text/sanskrit.py b/spaces/luckwill/chiakicc/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/luckwill/chiakicc/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
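A minimal usage sketch for the helper above. Both the import path (text.sanskrit) and the sample inputs are assumptions for illustration; running it requires the indic_transliteration package that the module imports.

# Hypothetical call site for devanagari_to_ipa defined above.
from text.sanskrit import devanagari_to_ipa

print(devanagari_to_ipa('नमस्ते'))  # roughly 'nəməst⁼eː' (illustrative, not a verified output)
print(devanagari_to_ipa('ॐ'))       # 'ॐ' is first rewritten to 'ओम्' before transliteration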
diff --git a/spaces/luxuedong/lxd/src/components/chat-scroll-anchor.tsx b/spaces/luxuedong/lxd/src/components/chat-scroll-anchor.tsx
deleted file mode 100644
index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000
--- a/spaces/luxuedong/lxd/src/components/chat-scroll-anchor.tsx
+++ /dev/null
@@ -1,29 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { useInView } from 'react-intersection-observer'
-
-import { useAtBottom } from '@/lib/hooks/use-at-bottom'
-
-interface ChatScrollAnchorProps {
- trackVisibility?: boolean
-}
-
-export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) {
- const isAtBottom = useAtBottom()
- const { ref, entry, inView } = useInView({
- trackVisibility,
- delay: 100,
- rootMargin: '0px 0px -150px 0px'
- })
-
- React.useEffect(() => {
- if (isAtBottom && trackVisibility && !inView) {
- entry?.target.scrollIntoView({
- block: 'start'
- })
- }
- }, [inView, entry, isAtBottom, trackVisibility])
-
- return <div ref={ref} className="h-px w-full" />
-}
diff --git a/spaces/ma-xu/LIVE/pybind11/tests/constructor_stats.h b/spaces/ma-xu/LIVE/pybind11/tests/constructor_stats.h
deleted file mode 100644
index abfaf9161406798eeaa79a0d6c22e023de893495..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/pybind11/tests/constructor_stats.h
+++ /dev/null
@@ -1,275 +0,0 @@
-#pragma once
-/*
- tests/constructor_stats.h -- framework for printing and tracking object
- instance lifetimes in example/test code.
-
- Copyright (c) 2016 Jason Rhinelander
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-
-This header provides a few useful tools for writing examples or tests that want to check and/or
-display object instance lifetimes. It requires that you include this header and add the following
-function calls to constructors:
-
- class MyClass {
- MyClass() { ...; print_default_created(this); }
- ~MyClass() { ...; print_destroyed(this); }
- MyClass(const MyClass &c) { ...; print_copy_created(this); }
- MyClass(MyClass &&c) { ...; print_move_created(this); }
- MyClass(int a, int b) { ...; print_created(this, a, b); }
- MyClass &operator=(const MyClass &c) { ...; print_copy_assigned(this); }
- MyClass &operator=(MyClass &&c) { ...; print_move_assigned(this); }
-
- ...
- }
-
-You can find various examples of these in several of the existing testing .cpp files. (Of course
-you don't need to add any of the above constructors/operators that you don't actually have, except
-for the destructor).
-
-Each of these will print an appropriate message such as:
-
- ### MyClass @ 0x2801910 created via default constructor
- ### MyClass @ 0x27fa780 created 100 200
- ### MyClass @ 0x2801910 destroyed
- ### MyClass @ 0x27fa780 destroyed
-
-You can also include extra arguments (such as the 100, 200 in the output above, coming from the
-value constructor) for all of the above methods which will be included in the output.
-
-For testing, each of these also keeps track of the created instances and allows you to check how many
-of the various constructors have been invoked from the Python side via code such as:
-
- from pybind11_tests import ConstructorStats
- cstats = ConstructorStats.get(MyClass)
- print(cstats.alive())
- print(cstats.default_constructions)
-
-Note that `.alive()` should usually be the first thing you call as it invokes Python's garbage
-collector to actually destroy objects that are no longer referenced.
-
-For everything except copy and move constructors and destructors, any extra values given to the
-print_...() function are stored in a class-specific values list which you can retrieve and inspect
-from the ConstructorStats instance's `.values()` method.
-
-In some cases, when you need to track instances of a C++ class not registered with pybind11, you
-need to add a function returning the ConstructorStats for the C++ class; this can be done with:
-
- m.def("get_special_cstats", &ConstructorStats::get<SpecialClass>, py::return_value_policy::reference)
-
-Finally, you can suppress the output messages, but keep the constructor tracking (for
-inspection/testing in python) by using the functions with `print_` replaced with `track_` (e.g.
-`track_copy_created(this)`).
-
-*/
-
-#include "pybind11_tests.h"
-#include <unordered_map>
-#include <list>
-#include <typeindex>
-#include <sstream>
-
-class ConstructorStats {
-protected:
- std::unordered_map<void*, int> _instances; // Need a map rather than set because members can share an address with parents
- std::list<std::string> _values; // Used to track values (e.g. of value constructors)
-public:
- int default_constructions = 0;
- int copy_constructions = 0;
- int move_constructions = 0;
- int copy_assignments = 0;
- int move_assignments = 0;
-
- void copy_created(void *inst) {
- created(inst);
- copy_constructions++;
- }
-
- void move_created(void *inst) {
- created(inst);
- move_constructions++;
- }
-
- void default_created(void *inst) {
- created(inst);
- default_constructions++;
- }
-
- void created(void *inst) {
- ++_instances[inst];
- }
-
- void destroyed(void *inst) {
- if (--_instances[inst] < 0)
- throw std::runtime_error("cstats.destroyed() called with unknown "
- "instance; potential double-destruction "
- "or a missing cstats.created()");
- }
-
- static void gc() {
- // Force garbage collection to ensure any pending destructors are invoked:
-#if defined(PYPY_VERSION)
- PyObject *globals = PyEval_GetGlobals();
- PyObject *result = PyRun_String(
- "import gc\n"
- "for i in range(2):"
- " gc.collect()\n",
- Py_file_input, globals, globals);
- if (result == nullptr)
- throw py::error_already_set();
- Py_DECREF(result);
-#else
- py::module::import("gc").attr("collect")();
-#endif
- }
-
- int alive() {
- gc();
- int total = 0;
- for (const auto &p : _instances)
- if (p.second > 0)
- total += p.second;
- return total;
- }
-
- void value() {} // Recursion terminator
- // Takes one or more values, converts them to strings, then stores them.
- template <typename T, typename... Tmore> void value(const T &v, Tmore &&...args) {
- std::ostringstream oss;
- oss << v;
- _values.push_back(oss.str());
- value(std::forward<Tmore>(args)...);
- }
-
- // Move out stored values
- py::list values() {
- py::list l;
- for (const auto &v : _values) l.append(py::cast(v));
- _values.clear();
- return l;
- }
-
- // Gets constructor stats from a C++ type index
- static ConstructorStats& get(std::type_index type) {
- static std::unordered_map<std::type_index, ConstructorStats> all_cstats;
- return all_cstats[type];
- }
-
- // Gets constructor stats from a C++ type
- template <typename T> static ConstructorStats& get() {
-#if defined(PYPY_VERSION)
- gc();
-#endif
- return get(typeid(T));
- }
-
- // Gets constructor stats from a Python class
- static ConstructorStats& get(py::object class_) {
- auto &internals = py::detail::get_internals();
- const std::type_index *t1 = nullptr, *t2 = nullptr;
- try {
- auto *type_info = internals.registered_types_py.at((PyTypeObject *) class_.ptr()).at(0);
- for (auto &p : internals.registered_types_cpp) {
- if (p.second == type_info) {
- if (t1) {
- t2 = &p.first;
- break;
- }
- t1 = &p.first;
- }
- }
- }
- catch (const std::out_of_range&) {}
- if (!t1) throw std::runtime_error("Unknown class passed to ConstructorStats::get()");
- auto &cs1 = get(*t1);
- // If we have both a t1 and t2 match, one is probably the trampoline class; return whichever
- // has more constructions (typically one or the other will be 0)
- if (t2) {
- auto &cs2 = get(*t2);
- int cs1_total = cs1.default_constructions + cs1.copy_constructions + cs1.move_constructions + (int) cs1._values.size();
- int cs2_total = cs2.default_constructions + cs2.copy_constructions + cs2.move_constructions + (int) cs2._values.size();
- if (cs2_total > cs1_total) return cs2;
- }
- return cs1;
- }
-};
-
-// To track construction/destruction, you need to call these methods from the various
-// constructors/operators. The ones that take extra values record the given values in the
-// constructor stats values for later inspection.
-template <class T> void track_copy_created(T *inst) { ConstructorStats::get<T>().copy_created(inst); }
-template <class T> void track_move_created(T *inst) { ConstructorStats::get<T>().move_created(inst); }
-template <class T, typename... Values> void track_copy_assigned(T *, Values &&...values) {
- auto &cst = ConstructorStats::get<T>();
- cst.copy_assignments++;
- cst.value(std::forward<Values>(values)...);
-}
-template <class T, typename... Values> void track_move_assigned(T *, Values &&...values) {
- auto &cst = ConstructorStats::get<T>();
- cst.move_assignments++;
- cst.value(std::forward<Values>(values)...);
-}
-template <class T, typename... Values> void track_default_created(T *inst, Values &&...values) {
- auto &cst = ConstructorStats::get<T>();
- cst.default_created(inst);
- cst.value(std::forward<Values>(values)...);
-}
-template <class T, typename... Values> void track_created(T *inst, Values &&...values) {
- auto &cst = ConstructorStats::get<T>();
- cst.created(inst);
- cst.value(std::forward<Values>(values)...);
-}
-template <class T> void track_destroyed(T *inst) {
- ConstructorStats::get<T>().destroyed(inst);
-}
-template <class T, typename... Values> void track_values(T *, Values &&...values) {
- ConstructorStats::get<T>().value(std::forward<Values>(values)...);
-}
-
-/// Don't cast pointers to Python, print them as strings
-inline const char *format_ptrs(const char *p) { return p; }
-template <typename T>
-py::str format_ptrs(T *p) { return "{:#x}"_s.format(reinterpret_cast<std::uintptr_t>(p)); }
-template <typename T>
-auto format_ptrs(T &&x) -> decltype(std::forward<T>(x)) { return std::forward<T>(x); }
-
-template <class T, typename... Output>
-void print_constr_details(T *inst, const std::string &action, Output &&...output) {
- py::print("###", py::type_id<T>(), "@", format_ptrs(inst), action,
- format_ptrs(std::forward