diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audioease Altiverb 7 Xl 7 2 6 Vst Aax X86 X64 2016 46 How to Create Realistic Acoustic Spaces with Ease.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audioease Altiverb 7 Xl 7 2 6 Vst Aax X86 X64 2016 46 How to Create Realistic Acoustic Spaces with Ease.md
deleted file mode 100644
index 587a5711a6ac41f564e867caa2c3d77d67baa8f4..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audioease Altiverb 7 Xl 7 2 6 Vst Aax X86 X64 2016 46 How to Create Realistic Acoustic Spaces with Ease.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
Audioease Altiverb 7 XL: The Industry Standard Convolution Reverb Plug-in
-
If you are looking for a reverb plug-in that can create realistic, natural-sounding reverbs from real spaces, you should consider Audioease Altiverb 7 XL. Altiverb 7 XL is the industry-standard convolution reverb plug-in for music and sound professionals. It uses top-quality samples of real spaces to create reverb, ranging from the Sydney Opera House to the cockpit of a jumbo jet. In this article, we review the plug-in's features, benefits, and drawbacks, and offer tips on how to use it.
High-quality samples of real spaces to create reverb
-
Altiverb 7 XL uses convolution reverb technology, which means that it captures the sound of a real space and applies it to your audio signal. This way, you can recreate the acoustics of any location you want, without having to go there or record it yourself. You can choose from hundreds of impulse responses (IRs) that are included with Altiverb 7 XL, or download new ones for free from the Audioease website. You can also create your own IRs using a microphone or a sweep tone generator.
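To make the idea concrete, here is a minimal sketch (not Altiverb's actual engine) of what convolution reverb does: the dry signal is convolved with a recorded impulse response of a space. The file names are hypothetical, the soundfile and SciPy libraries are assumed to be available, and mono WAV files are assumed.

```python
import numpy as np
import soundfile as sf                    # assumed dependency for WAV I/O
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_vocal.wav")        # hypothetical dry recording (mono)
ir, _ = sf.read("concert_hall_ir.wav")    # hypothetical impulse response (mono)

wet = fftconvolve(dry, ir)                # convolve the signal with the room's IR
wet /= np.max(np.abs(wet)) + 1e-9         # normalize to avoid clipping
sf.write("vocal_in_hall.wav", wet, sr)    # the vocal now carries the hall's acoustics
```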
-
Efficient on the CPU and total recall automatable
-
Altiverb 7 XL is designed to be efficient on the CPU, so you can use multiple instances of it without slowing down your system. It also supports total recall automation, which means that you can save and recall all the settings of your reverb plug-in with your DAW project. You can also use snapshots to switch between different reverb settings quickly.
-
Supports up to 5.1 surround input and output and up to 384 kHz sampling rates
-
Altiverb 7 XL is not only suitable for stereo tracks, but also for surround sound projects. It supports up to 5.1 surround input and output, so you can apply reverb to each channel separately or together. It also supports up to 384 kHz sampling rates, which means that you can use it with high-resolution audio formats.
-
Compatible with various plug-in formats on Windows and Mac OS X
-
Altiverb 7 XL is compatible with various plug-in formats on Windows and Mac OS X, so you can use it with your preferred DAW software. On Windows, it supports AAX Native and VST formats. On Mac OS X, it supports AAX Native, AudioUnit, MAS, VST, RTAS, and TDM formats. However, note that the TDM plug-in is only available for Pro Tools HD users.
-
Impulse Responses Library of Altiverb 7 XL
-
The most sought-after spaces for music and audio post
-
The Impulse Responses Library of Altiverb 7 XL contains the most sought-after spaces for music and audio post production. You can find the main concert halls of Berlin, Los Angeles, Vienna, and Amsterdam for your orchestral work, legendary rock studios from New York and Paris, French cathedrals, the Gol Gumbaz of India, and London's Wembley Stadium. You can also find IRs for specific applications such as car interiors, phone booths, bathrooms, closets, and more.
-
Vintage reverb gear and echo chambers
-
If you are looking for a classic reverb sound, you can also find IRs of vintage reverb gear and purpose-built echo chambers in Altiverb 7 XL. You'll find all the EMT plates you want, spring reverbs, and classic digital gear like the Lexicon 480L, the Lexicon PCM70, the AMS RMX16, and the EMT250. Add the Frank Sinatra and Beach Boys echo chambers, and you have everything you need to recreate those classic sounds.
-
New impulse responses added regularly and available for free
-
One of the best things about Altiverb 7 XL is that you can always get new impulse responses for free from Audioease. Every month they add new IRs to their library based on their travels around the world or requests from users. You can download them directly from within the plug-in using the visual browser or the keyword search field.
-
How to Use Altiverb 7 XL
-
The visual browser and the keyword search field
-
The visual browser is a feature that makes it easy to find and select impulse responses in Altiverb 7 XL. You can browse through IRs by clicking photos of rooms or categories. You can also sort them by size or name or use filters to narrow down your choices. If you know what you are looking for, you can also use the keyword search field to type in a name or a tag.
-
The parameters to tweak the reverb sound
-
Once you have selected an impulse response, you can tweak its sound using various parameters in Altiverb 7 XL. You can adjust the wet/dry mix, pre-delay, reverb time, early reflections, late reflections, EQ, damping, modulation, and more. You can also reverse or invert the IR or use stereo width controls to change its spatial characteristics.
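For intuition about what two of these controls do, here is a simplified sketch (an illustration under assumptions, not Altiverb's implementation) of a wet/dry mix with pre-delay, applied to a signal that has already been convolved with an IR:

```python
import numpy as np

def mix_wet_dry(dry, wet, mix=0.3, predelay_ms=20.0, sr=44100):
    """Blend a dry signal with its reverberated (wet) version.

    mix         -- 0.0 is fully dry, 1.0 is fully wet
    predelay_ms -- silence inserted before the wet signal starts
    """
    predelay = int(sr * predelay_ms / 1000.0)
    wet = np.concatenate([np.zeros(predelay), wet])[: len(dry)]  # delay the wet path; the tail is dropped for simplicity
    return (1.0 - mix) * dry + mix * wet
```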
-
The automation and presets options
-
Altiverb 7 XL allows you to automate any parameter using your DAW's automation features. You can also use snapshots to store and recall different settings within a single instance of Altiverb 7 XL. Additionally, you can save your own presets or load presets from other users or Audioease.
-
Pros and Cons of Altiverb 7 XL
-
Pros:
-
-
Sound quality: Altiverb 7 XL delivers realistic and natural sounding reverbs that are hard to achieve with other plug-ins.
-
Flexibility: Altiverb 7 XL offers a wide range of impulse responses for different genres and applications.
-
Ease of operation: Altiverb 7 XL has a user-friendly interface that makes it easy to find and tweak impulse responses.
-
Support: Audioease provides excellent customer support and regular updates for Altiverb 7 XL.
-
Updates: Audioease adds new impulse responses every month and makes them available for free for Altiverb 7 XL users.
-
-
Cons:
-
-
Price: Altiverb 7 XL is not cheap compared to other reverb plug-ins. It costs €849 ($995) for the full version and €499 ($585) for an upgrade from previous versions.
-
iLok key requirement: Altiverb requires an iLok key (2nd generation or later) to run, which adds extra cost and inconvenience for some users.
-
TDM plug-in only for Pro Tools HD: the TDM plug-in format is available only to Pro Tools HD users, which limits its compatibility.
Conclusion
-
Altiverb 7 XL is a convolution reverb plug-in that can create realistic and natural sounding reverbs from real spaces. It has many features and benefits that make it the industry standard for music and sound professionals. It also has some drawbacks that you should consider before buying it. However, if you are looking for a reverb plug-in that can give you the sound of any location you want, then Altiverb 7 XL is a great choice.
-
FAQs
-
Q: How can I get Altiverb 7 XL?
-
A: You can buy Altiverb 7 XL from the Audioease website or from a local store. You can also download a demo version to try it out before buying.
-
Q: How can I create my own impulse responses for Altiverb 7 XL?
-
A: You can create your own impulse responses using a microphone or a sweep tone generator. You can find detailed instructions on how to do this on the Audioease website or in the Altiverb 7 manual.
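As a rough illustration of the sweep method (a simplified sketch, not Audioease's documented procedure), the recording of a sine sweep played in the room can be deconvolved with the original sweep to recover the impulse response. The function assumes mono NumPy arrays at the same sample rate.

```python
import numpy as np

def deconvolve_ir(recorded_sweep, original_sweep, eps=1e-8):
    """Recover a room impulse response by regularized spectral division."""
    n = len(recorded_sweep) + len(original_sweep) - 1
    rec = np.fft.rfft(recorded_sweep, n)
    ref = np.fft.rfft(original_sweep, n)
    ir = np.fft.irfft(rec * np.conj(ref) / (np.abs(ref) ** 2 + eps), n)
    return ir / (np.max(np.abs(ir)) + eps)  # normalized impulse response
```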
-
Q: How can I get more impulse responses for Altiverb 7 XL?
-
A: You can download new impulse responses for free from the Audioease website every month. You can also buy additional IRs from third-party developers or exchange them with other users.
-
Q: How can I use Altiverb 7 XL with surround sound?
-
A: Altiverb 7 XL supports up to 5.1 surround input and output. You can use it with surround sound tracks or buses in your DAW. You can also use the surround panner to position your sound source in the surround field.
-
Q: How can I get support for Altiverb 7 XL?
-
A: You can get support for Altiverb 7 XL by contacting Audioease via email or phone. You can also find answers to common questions and issues on their website or in the Altiverb 7 manual.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Get Reaction Mechanism in Organic Chemistry by Mukul C Ray PDF Download and Boost Your Organic Chemistry Skills.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Get Reaction Mechanism in Organic Chemistry by Mukul C Ray PDF Download and Boost Your Organic Chemistry Skills.md
deleted file mode 100644
index fa948b89888498aa3477ef7a7f7859059d9066ad..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Get Reaction Mechanism in Organic Chemistry by Mukul C Ray PDF Download and Boost Your Organic Chemistry Skills.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Reaction Mechanisms in Organic Chemistry by Mukul C. Ray PDF Download
-
If you are looking for a comprehensive and accessible book on reaction mechanisms in organic chemistry, you might be interested in Reaction Mechanisms in Organic Chemistry by Mukul C. Ray. In this article, we will give you an overview of the book, its contents, features, target audience and level, and how to download it as a PDF file.
-
Introduction
-
What are reaction mechanisms?
-
Reaction mechanisms are the detailed steps that show how a chemical reaction occurs at the molecular level. They involve the breaking and forming of bonds, the movement of electrons, the formation and disappearance of intermediates, and the role of catalysts and reagents. Reaction mechanisms help us understand how and why a reaction happens, what factors affect its rate and selectivity, and what products are formed.
-
Reaction mechanisms are important for several reasons. First, they provide a logical framework for learning and applying organic chemistry. By knowing the common patterns and principles of reaction mechanisms, we can predict the outcome of new reactions, design synthetic routes for desired compounds, and explain experimental observations. Second, they reveal the underlying connections between different reactions and functional groups. By comparing and contrasting different reaction mechanisms, we can appreciate the similarities and differences among various organic compounds and their reactivities. Third, they enable us to explore the frontiers of organic chemistry. By proposing and testing new reaction mechanisms, we can discover novel reactions, synthesize complex molecules, and develop new theories and concepts.
-
Overview of the book
-
Author and publication details
-
The author of the book is Mukul C. Ray, who is a professor of chemistry at the Indian Institute of Technology (IIT) Delhi. He has over 30 years of teaching and research experience in organic chemistry, with special interests in synthetic methodology, natural products, heterocyclic chemistry, and organometallic chemistry. He has published more than 100 research papers in reputed journals and has received several awards and honors for his contributions to the field.
-
The book was first published in 2015 by MTG Learning Media Pvt Ltd, which is a leading publisher of books for competitive exams in India. The book has 608 pages and is divided into 16 chapters. The ISBN of the book is 978-9385966350.
-
Contents and features
-
The book covers all the major topics of reaction mechanisms in organic chemistry, such as nucleophiles, bases, leaving groups, reaction intermediates, nucleophilic substitution reactions, elimination reactions, free radical reactions, electrophilic and nucleophilic addition reactions, substitution on aromatic rings, reactions of acid derivatives, pericyclic reactions, photochemical reactions, oxidation-reduction reactions, rearrangements, named reactions, reagents etc.
-
The book has several features that make it useful for learning and revision. Some of these features are:
-
-
Each chapter begins with an introduction that summarizes the main concepts and objectives.
-
The theory is explained in a clear and concise manner with examples and illustrations.
-
The mechanisms are shown with curved arrows that indicate the movement of electrons.
-
The intermediates are highlighted with boxes that show their structure and stability.
-
The factors that influence the rate and selectivity of reactions are discussed with relevant examples.
-
The common errors and misconceptions are pointed out with warnings.
-
The end of each chapter contains a summary that reviews the key points.
-
The exercises include multiple choice questions (MCQs), short answer questions (SAQs), long answer questions (LAQs), assertion-reason questions (ARQs), matching questions (MQs), fill in the blanks (FIBs), true-false questions (TFQs), etc.
-
The answers and solutions to all the exercises are given at the end of the book.
-
The appendices include tables of common reagents, functional groups, named reactions etc.
-
-
Target audience and level
-
The book is intended for students who are preparing for various competitive exams in India, such as NEET, JEE Main & Advanced, PETs, GATE, JAM, and CSIR-UGC NET. The book is also suitable for undergraduate students who are studying organic chemistry as part of their curriculum. The book assumes that readers have some basic knowledge of organic chemistry, such as nomenclature, structure, and bonding. It covers both basic and advanced topics of reaction mechanisms in organic chemistry with appropriate depth and difficulty.
-
How to download the book
-
Online sources and links
-
If you want to download the book as a PDF file for free or at a low cost, you can try some online sources that offer this service. However, you should be careful about the quality and legality of these sources as they may not be authorized by the author or publisher. Some possible online sources are:
-
-
-
Goodreads: This is a popular website where you can find reviews, ratings, summaries etc. of various books. You can also join groups or forums where you can discuss books with other readers. Sometimes you can find links to download books as PDF files from other websites or users.
-
MTG Learning Media: This is the official website of the publisher where you can find information about the book such as price, availability etc. You can also order the book online or find a nearby bookstore where you can buy it.
-
PDF Drive: This is a website where you can search for PDF files of various books or documents. You can download them for free or at a low cost depending on the source. However, you should be aware that some files may be incomplete or inaccurate or may contain viruses or malware.
-
-
Advantages and disadvantages of downloading
-
Downloading the book as a PDF file has some advantages and disadvantages that you should consider before doing so. Some advantages are:
-
-
You can access the book anytime anywhere without carrying a physical copy.
-
You can save money by not buying a printed copy.
-
You can highlight or annotate parts of the book using software tools.
-
You can search for keywords or phrases within the book using software tools.
-
-
Some disadvantages are:
-
-
You may not get the latest or correct version of the book as it may be outdated or modified by unauthorized sources.
-
You may violate the intellectual property rights of the author or publisher by downloading or sharing an illegal copy.
-
You may harm your device or data by downloading files that contain viruses or malware.
-
You may lose your concentration or interest by reading on a screen rather than on paper.
-
-
Alternatives and recommendations
-
If you do not want to download the book as a PDF file or if you cannot find a reliable source to do so, you can try some alternatives or recommendations that may help you learn reaction mechanisms in organic chemistry better. Some alternatives or recommendations are:
-
-
Borrowing or buying a physical copy: You can borrow or buy a physical copy of the book from your library or bookstore if it is available. This way you can enjoy reading on paper without worrying about quality or legality issues.
-
-
Online courses or lectures: You can enroll in online courses or watch online lectures on reaction mechanisms in organic chemistry from reputable sources such as Khan Academy, IIT Delhi Chemistry Department, NPTEL Organic Chemistry II, Harvard University Principles of Organic Chemistry etc.
-
Books or textbooks: You can read books or textbooks on reaction mechanisms in organic chemistry that cover the subject in depth and detail. Some examples are: Reaction Mechanisms in Organic Chemistry by Mukul C. Ray, Organic Chemistry by Clayden, Greeves, Warren and Wothers, Organic Chemistry by Bruice, Organic Chemistry by Carey and Sundberg, Advanced Organic Chemistry by March etc.
-
Websites or blogs: You can visit websites or blogs that provide information, tutorials, tips, tricks etc. on reaction mechanisms in organic chemistry. Some examples are: Master Organic Chemistry, Chemguide, Organic Chemistry Portal, The Curious Wavefunction etc.
-
YouTube videos or podcasts: You can watch YouTube videos or listen to podcasts that explain or demonstrate reaction mechanisms in organic chemistry in an engaging and entertaining way. Some examples are: Leah4Sci, The Organic Chemistry Tutor, Professor Dave Explains, The Skeptics Guide to the Universe etc.
-
Online forums or communities: You can join online forums or communities where you can discuss, ask questions, share ideas etc. on reaction mechanisms in organic chemistry with other students or experts. Some examples are: Reddit r/organicchemistry, Stack Exchange Chemistry, Quora Organic Chemistry etc.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Corel.PDF.Fusion.v1.10.Bilingual.Incl.Keymaker-CORE Full PATCHED Version.md b/spaces/1gistliPinn/ChatGPT4/Examples/Corel.PDF.Fusion.v1.10.Bilingual.Incl.Keymaker-CORE Full PATCHED Version.md
deleted file mode 100644
index 043a43021434860ec3b17d5815afb94ff8410552..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Corel.PDF.Fusion.v1.10.Bilingual.Incl.Keymaker-CORE Full PATCHED Version.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
In sum, health care facilities seem to have more bilingual staff than translation services [ 33 ], but most of these employees remain untrained in interpreting [ 34, 35 ], and the quality of interpreting is often below international standards [ 26, 36 ].
-
Insufficient use of professional interpreters can lead to various situations with potentially negative consequences on quality of care (see Fig. 2, values given for overall participants). Participants were therefore presented with a list of situations potentially arising as a consequence of unaddressed language barriers and asked to indicate all those they had encountered at least once over the past year that could have been mitigated if a professional interpreter had been present. Of the 504 respondents answering these questions on potential consequences 4/5 of PCP and of FD felt they had not been able to provide appropriate care for patient and family due to the language barrier (total 77.6%, FD: 75.5%, PCP: 80.2%, p=0.215; overall participants: 65.3%,). Nearly 2/3 of respondents (62.3%) reported difficulties determining the right diagnoses due to difficulties in obtaining a full patient history, with FD more often concerned hereof (FD: 74.8%, PCP: 58.1% of respondents, p=0.001). It is therefore not surprising that they tend to confirm having ordered additional exams more often (FD: 38.5 vs. PCP: 28.6% of respondents, p=0.021) due to an insufficient patient history. Extrapolated to all questionnaire participants, this would represent 28.9% of physicians having ordered extra exams at least once a year due to the language barrier.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow].md b/spaces/1gistliPinn/ChatGPT4/Examples/FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow].md
deleted file mode 100644
index 2686ad5d7d3458f3a3046643361ba6f20fd1e048..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow]
-
-CRACK Ntuit QuickBooks Enterprise 19.2.1 R3 License Key ... FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow]. 2d Text Preset ... 1fdad05405
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Cmo Descargar e Instalar Universal Truck Simulator APK MOD con Dinero Infinito y Acceder a los Camiones ms Espectaculares.md b/spaces/1phancelerku/anime-remove-background/Cmo Descargar e Instalar Universal Truck Simulator APK MOD con Dinero Infinito y Acceder a los Camiones ms Espectaculares.md
deleted file mode 100644
index bda233079a39204fb2350e50098911e5114e8b4c..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Cmo Descargar e Instalar Universal Truck Simulator APK MOD con Dinero Infinito y Acceder a los Camiones ms Espectaculares.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Universal Truck Simulator APK Mod Unlimited Money: What Is It and How Do You Download It?
-
Introduction
-
Do you like driving simulation games? Would you like to drive a truck through different environments and take on all kinds of missions? If the answer is yes, you will love Universal Truck Simulator, a truck simulation game for Android devices that offers a realistic and immersive experience.
-
But if you want to enjoy the game even more, you may be interested to know that there is an unlimited-money mod that gives you all the in-game money you want. That way you can buy and upgrade every truck you like, customize them to your taste, and access all of the game's features without limitations.
In this article we explain what the unlimited-money mod for Universal Truck Simulator is and how to download and install it. We also cover the game's main features and answer some frequently asked questions. Keep reading to find out everything you need to know about this game!
-
Game features
-
Universal Truck Simulator is a truck simulation game that lets you drive a wide variety of trucks through different environments and complete all kinds of missions. The game has several features that make it very attractive and fun:
-
Realistic, detailed graphics
-
The game has impressive graphics that make you feel as if you were driving a real truck. You can appreciate the details of the trucks, the trailers, the landscapes, the weather, the traffic, and much more. In addition, the game has realistic physics that makes driving more challenging and exciting.
-
Variety of trucks and trailers
-
The game offers a wide variety of trucks and trailers to choose from. You can drive anything from classic to modern trucks, from small to gigantic, from American to European. Each truck has its own characteristics and performance, as well as its own sound and handling. You can also choose between different types of trailers, such as box trailers, tankers, containers, timber, car carriers, and more.
-
Game modes and missions
-
The game has several game modes so you never get bored. You can play in free mode, where you can explore the maps at will; in career mode, where you have to complete different missions and earn money; or in multiplayer mode, where you can compete or cooperate with other players online. The missions are varied and can involve hauling cargo, delivering packages, towing broken-down vehicles, or taking part in races.
-
Customization and upgrades
-
The game lets you customize and upgrade your trucks to your liking. You can change the color, the wheels, the headlights, the mirrors, the bumpers, the decals, and more. You can also upgrade the engine, the transmission, the brakes, the suspension, and other technical aspects. With the unlimited-money mod you can do all of this without spending a cent.
-
How to download and install the unlimited-money mod
-
If you want to download and install the unlimited-money mod for Universal Truck Simulator, you will have to follow a few simple steps. But first, keep in mind some requirements and precautions:
-
-universal truck simulator apk mod ratings and reviews
-
Requisitos y precauciones
-
-
You need an Android device with at least 4 GB of RAM and 500 MB of free space.
-
You need a stable internet connection to download the mod and play online.
-
You need to uninstall the original version of the game if you have it installed.
-
You need to enable the "Unknown sources" option in your device's security settings in order to install the mod.
-
You need to be careful about possible viruses or malware on some websites that offer the mod. We recommend using an antivirus or downloading the mod from a trustworthy source.
-
-
Steps to download and install the unlimited-money mod
-
-
Search the internet for a website that offers the unlimited-money mod for Universal Truck Simulator. You can use the Bing search engine to find it.
-
Download the APK file of the unlimited-money mod to your device. Make sure it is compatible with the most recent version of the game.
-
Open the APK file and follow the instructions to install the unlimited-money mod on your device.
-
Launch the game and enjoy having all the money you want.
-
-
Conclusion
-
Universal Truck Simulator is a truck simulation game that offers a realistic and immersive experience. You can drive a wide variety of trucks through different environments and complete all kinds of missions. The game has impressive graphics, realistic physics, several game modes, and many customization and upgrade options. If you want to enjoy the game even more, you can download and install the unlimited-money mod, which gives you all the in-game money you want. That way you can buy and upgrade every truck you like, customize them to your taste, and access all of the game's features without limitations. You just have to follow a few simple steps and keep in mind some requirements and precautions. We hope this article has been useful and that you enjoy this game.
-
Frequently asked questions
-
Below are some frequently asked questions about the game and the unlimited-money mod:
-
Is it safe to use the unlimited-money mod?
-
Using the unlimited-money mod can involve some risks, such as infecting your device with viruses or malware, or being banned from the game for cheating. For that reason, we recommend using an antivirus or downloading the mod from a trustworthy source, and not abusing the unlimited-money mod so as not to draw the attention of the developers or other players.
-
What other advantages does the unlimited-money mod have?
-
In addition to giving you all the in-game money you want, the unlimited-money mod also lets you unlock all the trucks, trailers, parts, and accessories in the game. That way you can try every available item and choose the ones you like best.
-
What other mods exist for Universal Truck Simulator?
-
There are other mods for Universal Truck Simulator that can improve your gaming experience. Some examples are: the unlimited-fuel mod, which lets you drive without worrying about the fuel level; the unlimited-speed mod, which lets you go as fast as you want; the reduced-damage mod, which lets you avoid damage to your truck and trailer; or the full-map mod, which lets you see every place and mission available in the game. You can search for these mods on the internet and download them following the same steps as for the unlimited-money mod.
-
What other truck simulation games are there for Android?
-
If you like truck simulation games, there are other titles you can try on your Android device. Some examples are: Truck Simulator 2018: Europe, which lets you drive on the roads of Europe; World Truck Driving Simulator, which offers a wide variety of trucks and environments; or Grand Truck Simulator 2, which has a more arcade-style, fun feel.
-
What other simulation games are there for Android?
-
If you like simulation games in general, there are many other games you can enjoy on your Android device. Some examples are: SimCity BuildIt, which lets you create and manage your own city; Farming Simulator 20, which lets you grow crops and take care of your farm; or Flight Pilot Simulator 3D, which lets you fly different planes and carry out aerial missions.
-
-
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/voicevox_engine/metas/__init__.py b/spaces/2ndelement/voicevox/voicevox_engine/metas/__init__.py
deleted file mode 100644
index 4907fdf38d604dc7949dd361812938afd9db0abb..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/voicevox_engine/metas/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from . import Metas, MetasStore
-
-__all__ = [
- "Metas",
- "MetasStore",
-]
diff --git a/spaces/4Taps/SadTalker/src/face3d/util/util.py b/spaces/4Taps/SadTalker/src/face3d/util/util.py
deleted file mode 100644
index 0d689ca138fc0fbf5bec794511ea0f9e638f9ea9..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/face3d/util/util.py
+++ /dev/null
@@ -1,208 +0,0 @@
-"""This script contains basic utilities for Deep3DFaceRecon_pytorch
-"""
-from __future__ import print_function
-import numpy as np
-import torch
-from PIL import Image
-import os
-import importlib
-import argparse
-from argparse import Namespace
-import torchvision
-
-
-def str2bool(v):
- if isinstance(v, bool):
- return v
- if v.lower() in ('yes', 'true', 't', 'y', '1'):
- return True
- elif v.lower() in ('no', 'false', 'f', 'n', '0'):
- return False
- else:
- raise argparse.ArgumentTypeError('Boolean value expected.')
-
-
-def copyconf(default_opt, **kwargs):
- conf = Namespace(**vars(default_opt))
- for key in kwargs:
- setattr(conf, key, kwargs[key])
- return conf
-
-def genvalconf(train_opt, **kwargs):
- conf = Namespace(**vars(train_opt))
- attr_dict = train_opt.__dict__
- for key, value in attr_dict.items():
- if 'val' in key and key.split('_')[0] in attr_dict:
- setattr(conf, key.split('_')[0], value)
-
- for key in kwargs:
- setattr(conf, key, kwargs[key])
-
- return conf
-
-def find_class_in_module(target_cls_name, module):
- target_cls_name = target_cls_name.replace('_', '').lower()
- clslib = importlib.import_module(module)
- cls = None
- for name, clsobj in clslib.__dict__.items():
- if name.lower() == target_cls_name:
- cls = clsobj
-
- assert cls is not None, "In %s, there should be a class whose name matches %s in lowercase without underscore(_)" % (module, target_cls_name)
-
- return cls
-
-
-def tensor2im(input_image, imtype=np.uint8):
- """"Converts a Tensor array into a numpy image array.
-
- Parameters:
- input_image (tensor) -- the input image tensor array, range(0, 1)
- imtype (type) -- the desired type of the converted numpy array
- """
- if not isinstance(input_image, np.ndarray):
- if isinstance(input_image, torch.Tensor): # get the data from a variable
- image_tensor = input_image.data
- else:
- return input_image
- image_numpy = image_tensor.clamp(0.0, 1.0).cpu().float().numpy() # convert it into a numpy array
- if image_numpy.shape[0] == 1: # grayscale to RGB
- image_numpy = np.tile(image_numpy, (3, 1, 1))
-        image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0 # post-processing: transpose and scaling
- else: # if it is a numpy array, do nothing
- image_numpy = input_image
- return image_numpy.astype(imtype)
-
-
-def diagnose_network(net, name='network'):
- """Calculate and print the mean of average absolute(gradients)
-
- Parameters:
- net (torch network) -- Torch network
- name (str) -- the name of the network
- """
- mean = 0.0
- count = 0
- for param in net.parameters():
- if param.grad is not None:
- mean += torch.mean(torch.abs(param.grad.data))
- count += 1
- if count > 0:
- mean = mean / count
- print(name)
- print(mean)
-
-
-def save_image(image_numpy, image_path, aspect_ratio=1.0):
- """Save a numpy image to the disk
-
- Parameters:
- image_numpy (numpy array) -- input numpy array
- image_path (str) -- the path of the image
- """
-
- image_pil = Image.fromarray(image_numpy)
- h, w, _ = image_numpy.shape
-
- if aspect_ratio is None:
- pass
- elif aspect_ratio > 1.0:
- image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC)
- elif aspect_ratio < 1.0:
- image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC)
- image_pil.save(image_path)
-
-
-def print_numpy(x, val=True, shp=False):
- """Print the mean, min, max, median, std, and size of a numpy array
-
- Parameters:
- val (bool) -- if print the values of the numpy array
- shp (bool) -- if print the shape of the numpy array
- """
- x = x.astype(np.float64)
- if shp:
- print('shape,', x.shape)
- if val:
- x = x.flatten()
- print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % (
- np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x)))
-
-
-def mkdirs(paths):
- """create empty directories if they don't exist
-
- Parameters:
- paths (str list) -- a list of directory paths
- """
- if isinstance(paths, list) and not isinstance(paths, str):
- for path in paths:
- mkdir(path)
- else:
- mkdir(paths)
-
-
-def mkdir(path):
- """create a single empty directory if it didn't exist
-
- Parameters:
- path (str) -- a single directory path
- """
- if not os.path.exists(path):
- os.makedirs(path)
-
-
-def correct_resize_label(t, size):
- device = t.device
- t = t.detach().cpu()
- resized = []
- for i in range(t.size(0)):
- one_t = t[i, :1]
- one_np = np.transpose(one_t.numpy().astype(np.uint8), (1, 2, 0))
- one_np = one_np[:, :, 0]
- one_image = Image.fromarray(one_np).resize(size, Image.NEAREST)
- resized_t = torch.from_numpy(np.array(one_image)).long()
- resized.append(resized_t)
- return torch.stack(resized, dim=0).to(device)
-
-
-def correct_resize(t, size, mode=Image.BICUBIC):
- device = t.device
- t = t.detach().cpu()
- resized = []
- for i in range(t.size(0)):
- one_t = t[i:i + 1]
- one_image = Image.fromarray(tensor2im(one_t)).resize(size, Image.BICUBIC)
- resized_t = torchvision.transforms.functional.to_tensor(one_image) * 2 - 1.0
- resized.append(resized_t)
- return torch.stack(resized, dim=0).to(device)
-
-def draw_landmarks(img, landmark, color='r', step=2):
- """
- Return:
- img -- numpy.array, (B, H, W, 3) img with landmark, RGB order, range (0, 255)
-
-
- Parameters:
- img -- numpy.array, (B, H, W, 3), RGB order, range (0, 255)
- landmark -- numpy.array, (B, 68, 2), y direction is opposite to v direction
- color -- str, 'r' or 'b' (red or blue)
- """
- if color =='r':
- c = np.array([255., 0, 0])
- else:
- c = np.array([0, 0, 255.])
-
- _, H, W, _ = img.shape
- img, landmark = img.copy(), landmark.copy()
- landmark[..., 1] = H - 1 - landmark[..., 1]
- landmark = np.round(landmark).astype(np.int32)
- for i in range(landmark.shape[1]):
- x, y = landmark[:, i, 0], landmark[:, i, 1]
- for j in range(-step, step):
- for k in range(-step, step):
- u = np.clip(x + j, 0, W - 1)
- v = np.clip(y + k, 0, H - 1)
- for m in range(landmark.shape[0]):
- img[m, v[m], u[m]] = c
- return img
diff --git a/spaces/801artistry/RVC801/train/utils.py b/spaces/801artistry/RVC801/train/utils.py
deleted file mode 100644
index aae833b08acc24b848aa70114fd9b7aad8b1a6ad..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/train/utils.py
+++ /dev/null
@@ -1,500 +0,0 @@
-import os, traceback
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-
- ##################
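-    # helper: copy weights whose names and shapes match from checkpoint_dict[bkey] into the given model, keeping the model's own initial values otherwise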
- def go(model, bkey):
- saved_state_dict = checkpoint_dict[bkey]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
-        for k, v in state_dict.items(): # shapes the model expects
- try:
- new_state_dict[k] = saved_state_dict[k]
- if saved_state_dict[k].shape != state_dict[k].shape:
- print(
- "shape-%s-mismatch|need-%s|get-%s"
- % (k, state_dict[k].shape, saved_state_dict[k].shape)
- ) #
- raise KeyError
- except:
- # logger.info(traceback.format_exc())
- logger.info("%s is not in the checkpoint" % k) # pretrain缺失的
- new_state_dict[k] = v # 模型自带的随机值
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict, strict=False)
- else:
- model.load_state_dict(new_state_dict, strict=False)
-
- go(combd, "combd")
- go(sbd, "sbd")
- #############
- logger.info("Loaded model weights")
-
- iteration = checkpoint_dict["iteration"]
- learning_rate = checkpoint_dict["learning_rate"]
- if (
- optimizer is not None and load_opt == 1
-    ): ### if this fails to load (e.g. the saved state is empty) it is re-initialized, which may also affect the LR schedule update, so it is caught at the outermost level of the train script
- # try:
- optimizer.load_state_dict(checkpoint_dict["optimizer"])
- # except:
- # traceback.print_exc()
- logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-# def load_checkpoint(checkpoint_path, model, optimizer=None):
-# assert os.path.isfile(checkpoint_path)
-# checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
-# iteration = checkpoint_dict['iteration']
-# learning_rate = checkpoint_dict['learning_rate']
-# if optimizer is not None:
-# optimizer.load_state_dict(checkpoint_dict['optimizer'])
-# # print(1111)
-# saved_state_dict = checkpoint_dict['model']
-# # print(1111)
-#
-# if hasattr(model, 'module'):
-# state_dict = model.module.state_dict()
-# else:
-# state_dict = model.state_dict()
-# new_state_dict= {}
-# for k, v in state_dict.items():
-# try:
-# new_state_dict[k] = saved_state_dict[k]
-# except:
-# logger.info("%s is not in the checkpoint" % k)
-# new_state_dict[k] = v
-# if hasattr(model, 'module'):
-# model.module.load_state_dict(new_state_dict)
-# else:
-# model.load_state_dict(new_state_dict)
-# logger.info("Loaded checkpoint '{}' (epoch {})" .format(
-# checkpoint_path, iteration))
-# return model, optimizer, learning_rate, iteration
-def load_checkpoint(checkpoint_path, model, optimizer=None, load_opt=1):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-
- saved_state_dict = checkpoint_dict["model"]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
-    for k, v in state_dict.items(): # shapes the model expects
- try:
- new_state_dict[k] = saved_state_dict[k]
- if saved_state_dict[k].shape != state_dict[k].shape:
- print(
- "shape-%s-mismatch|need-%s|get-%s"
- % (k, state_dict[k].shape, saved_state_dict[k].shape)
- ) #
- raise KeyError
- except:
- # logger.info(traceback.format_exc())
- logger.info("%s is not in the checkpoint" % k) # pretrain缺失的
- new_state_dict[k] = v # 模型自带的随机值
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict, strict=False)
- else:
- model.load_state_dict(new_state_dict, strict=False)
- logger.info("Loaded model weights")
-
- iteration = checkpoint_dict["iteration"]
- learning_rate = checkpoint_dict["learning_rate"]
- if (
- optimizer is not None and load_opt == 1
-    ): ### if this fails to load (e.g. the saved state is empty) it is re-initialized, which may also affect the LR schedule update, so it is caught at the outermost level of the train script
- # try:
- optimizer.load_state_dict(checkpoint_dict["optimizer"])
- # except:
- # traceback.print_exc()
- logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at epoch {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save(
- {
- "model": state_dict,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def save_checkpoint_d(combd, sbd, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at epoch {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(combd, "module"):
- state_dict_combd = combd.module.state_dict()
- else:
- state_dict_combd = combd.state_dict()
- if hasattr(sbd, "module"):
- state_dict_sbd = sbd.module.state_dict()
- else:
- state_dict_sbd = sbd.state_dict()
- torch.save(
- {
- "combd": state_dict_combd,
- "sbd": state_dict_sbd,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def summarize(
- writer,
- global_step,
- scalars={},
- histograms={},
- images={},
- audios={},
- audio_sampling_rate=22050,
-):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats="HWC")
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
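-    # sort numerically by the step embedded in the filename so the last element is the newest checkpoint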
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="")
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(
- alignment.transpose(), aspect="auto", origin="lower", interpolation="none"
- )
- fig.colorbar(im, ax=ax)
- xlabel = "Decoder timestep"
- if info is not None:
- xlabel += "\n\n" + info
- plt.xlabel(xlabel)
- plt.ylabel("Encoder timestep")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="")
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- filepaths_and_text = [item for item in filepaths_and_text if len(item) == 5] # ensure there are 5 items.
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- """
- todo:
- 结尾七人组:
- 保存频率、总epoch done
- bs done
- pretrainG、pretrainD done
- 卡号:os.en["CUDA_VISIBLE_DEVICES"] done
- if_latest done
- 模型:if_f0 done
- 采样率:自动选择config done
- 是否缓存数据集进GPU:if_cache_data_in_gpu done
-
- -m:
- 自动决定training_files路径,改掉train_nsf_load_pretrain.py里的hps.data.training_files done
- -c不要了
- """
- parser = argparse.ArgumentParser()
- # parser.add_argument('-c', '--config', type=str, default="configs/40k.json",help='JSON file for configuration')
- parser.add_argument(
- "-se",
- "--save_every_epoch",
- type=int,
- required=True,
- help="checkpoint save frequency (epoch)",
- )
- parser.add_argument(
- "-te", "--total_epoch", type=int, required=True, help="total_epoch"
- )
- parser.add_argument(
- "-pg", "--pretrainG", type=str, default="", help="Pretrained Discriminator path"
- )
- parser.add_argument(
- "-pd", "--pretrainD", type=str, default="", help="Pretrained Generator path"
- )
- parser.add_argument("-g", "--gpus", type=str, default="0", help="split by -")
- parser.add_argument(
- "-bs", "--batch_size", type=int, required=True, help="batch size"
- )
- parser.add_argument(
- "-e", "--experiment_dir", type=str, required=True, help="experiment dir"
- ) # -m
- parser.add_argument(
- "-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k"
- )
- parser.add_argument(
- "-sw",
- "--save_every_weights",
- type=str,
- default="0",
- help="save the extracted model in weights directory when saving checkpoints",
- )
- parser.add_argument(
- "-v", "--version", type=str, required=True, help="model version"
- )
- parser.add_argument(
- "-f0",
- "--if_f0",
- type=int,
- required=True,
- help="use f0 as one of the inputs of the model, 1 or 0",
- )
- parser.add_argument(
- "-l",
- "--if_latest",
- type=int,
- required=True,
- help="if only save the latest G/D pth file, 1 or 0",
- )
- parser.add_argument(
- "-c",
- "--if_cache_data_in_gpu",
- type=int,
- required=True,
- help="if caching the dataset in GPU memory, 1 or 0",
- )
- parser.add_argument(
- "-li", "--log_interval", type=int, required=True, help="log interval"
- )
-
- args = parser.parse_args()
- name = args.experiment_dir
- experiment_dir = os.path.join("./logs", args.experiment_dir)
-
- if not os.path.exists(experiment_dir):
- os.makedirs(experiment_dir)
-
- if args.version == "v1" or args.sample_rate == "40k":
- config_path = "configs/%s.json" % args.sample_rate
- else:
- config_path = "configs/%s_v2.json" % args.sample_rate
- config_save_path = os.path.join(experiment_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = hparams.experiment_dir = experiment_dir
- hparams.save_every_epoch = args.save_every_epoch
- hparams.name = name
- hparams.total_epoch = args.total_epoch
- hparams.pretrainG = args.pretrainG
- hparams.pretrainD = args.pretrainD
- hparams.version = args.version
- hparams.gpus = args.gpus
- hparams.train.batch_size = args.batch_size
- hparams.sample_rate = args.sample_rate
- hparams.if_f0 = args.if_f0
- hparams.if_latest = args.if_latest
- hparams.save_every_weights = args.save_every_weights
- hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu
- hparams.data.training_files = "%s/filelist.txt" % experiment_dir
-
- hparams.train.log_interval = args.log_interval
-
- # Update log_interval in the 'train' section of the config dictionary
- config["train"]["log_interval"] = args.log_interval
-
- # Save the updated config back to the config_save_path
- with open(config_save_path, "w") as f:
- json.dump(config, f, indent=4)
-
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
-        logger.warning(
- "{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- )
- )
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
-            logger.warning(
- "git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]
- )
- )
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
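-
-# Illustrative usage sketch (not part of the original file): nested dicts are
-# wrapped into HParams recursively, so values are reachable both as attributes
-# and as keys.
-#   hp = HParams(train={"batch_size": 8}, sample_rate="40k")
-#   hp.train.batch_size   # -> 8
-#   hp["sample_rate"]     # -> "40k"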
diff --git a/spaces/A00001/bingothoo/src/lib/bots/bing/sr.ts b/spaces/A00001/bingothoo/src/lib/bots/bing/sr.ts
deleted file mode 100644
index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/lib/bots/bing/sr.ts
+++ /dev/null
@@ -1,106 +0,0 @@
-// @ts-ignore
-const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? (
- // @ts-ignore
- window.SpeechRecognition ||
- window.webkitSpeechRecognition ||
- // @ts-ignore
- window.mozSpeechRecognition ||
- // @ts-ignore
- window.msSpeechRecognition ||
- // @ts-ignore
- window.oSpeechRecognition
-) as typeof webkitSpeechRecognition : undefined
-
-type subscriber = (msg: string, command?: string) => void
-
-export class SR {
- recognition?: SpeechRecognition
- onchange?: subscriber
- transcript: boolean = false
- listening: boolean = false
- private commandsRe?: RegExp
- constructor(commands: string[]) {
- this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined
- if (!this.recognition) {
- return
- }
- this.configuration('zh-CN')
- if (commands.length) {
- this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`)
- }
- this.recognition.onresult = this.speechRecognition
- this.recognition.onerror = (err) => {
- console.log('err', err.error)
- this.stop()
- }
- this.recognition.onend = () => {
- if (this.recognition && this.listening) {
- this.recognition.start()
- }
- }
- }
-
- speechRecognition = (event: SpeechRecognitionEvent) => {
- if (!this.listening) return
- for (var i = event.resultIndex; i < event.results.length; i++) {
- let result = event.results[i]
- if (result.isFinal) {
- var alt = result[0]
- const text = alt.transcript.trim()
- if (this.commandsRe && this.commandsRe.test(text)) {
- return this.onchange?.('', RegExp.$1)
- }
- if (!this.transcript) return
- this.onchange?.(text)
- }
- }
- }
-
- private configuration = async (lang: string = 'zh-CN') => {
- return new Promise((resolve) => {
- if (this.recognition) {
- this.recognition.continuous = true
- this.recognition.lang = lang
- this.recognition.onstart = resolve
- }
- })
- }
-
- start = async () => {
- if (this.recognition && !this.listening) {
- await this.recognition.start()
- this.transcript = true
- this.listening = true
- }
- }
-
- stop = () => {
- if (this.recognition) {
- this.recognition.stop()
- this.transcript = false
- this.listening = false
- }
- }
-
-
- pause = () => {
- if (this.recognition) {
- this.transcript = false
- }
- }
-
- resume = () => {
- if (this.recognition) {
- this.transcript = true
- }
- }
-
- abort = () => {
- if (this.recognition && this.transcript) {
- this.recognition.abort()
- this.transcript = false
- this.listening = false
- }
- }
-}
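-
-// Illustrative usage sketch (not part of the original file); the command
-// strings below are hypothetical examples:
-//   const sr = new SR(['发送', '重新生成'])
-//   sr.onchange = (msg, command) => command ? console.log('command:', command) : console.log('transcript:', msg)
-//   sr.start()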
-
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/train/losses.py b/spaces/AI-Hobbyist/Hoyo-RVC/train/losses.py
deleted file mode 100644
index b89038f14d06d7fae43628183e9ffb465e4edafd..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/train/losses.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1 - dr) ** 2)
- g_loss = torch.mean(dg**2)
- loss += r_loss + g_loss
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1 - dg) ** 2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
-
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
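-
-
-# Illustrative shape sketch (not part of the original file): all arguments share
-# the [b, h, t_t] layout except the mask, which broadcasts over the channel axis.
-#   z_p, logs_q, m_p, logs_p : [batch, channels, frames]
-#   z_mask                   : [batch, 1, frames]
-#   loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask)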
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/logger.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/logger.py
deleted file mode 100644
index c9737cc165654ce51bb2204636ce78f34acd0a9e..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/logger.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import logging
-import os
-import time
-from shutil import copytree, ignore_patterns
-
-import torch
-from omegaconf import OmegaConf
-from torch.utils.tensorboard import SummaryWriter, summary
-
-
-class LoggerWithTBoard(SummaryWriter):
-
- def __init__(self, cfg):
- # current time stamp and experiment log directory
- self.start_time = time.strftime('%y-%m-%dT%H-%M-%S', time.localtime())
- self.logdir = os.path.join(cfg.logdir, self.start_time)
- # init tboard
- super().__init__(self.logdir)
- # backup the cfg
- OmegaConf.save(cfg, os.path.join(self.log_dir, 'cfg.yaml'))
- # backup the code state
- if cfg.log_code_state:
- dest_dir = os.path.join(self.logdir, 'code')
- copytree(os.getcwd(), dest_dir, ignore=ignore_patterns(*cfg.patterns_to_ignore))
-
- # init logger which handles printing and logging mostly same things to the log file
- self.print_logger = logging.getLogger('main')
- self.print_logger.setLevel(logging.INFO)
- msgfmt = '[%(levelname)s] %(asctime)s - %(name)s \n %(message)s'
- datefmt = '%d %b %Y %H:%M:%S'
- formatter = logging.Formatter(msgfmt, datefmt)
- # stdout
- sh = logging.StreamHandler()
- sh.setLevel(logging.DEBUG)
- sh.setFormatter(formatter)
- self.print_logger.addHandler(sh)
- # log file
- fh = logging.FileHandler(os.path.join(self.log_dir, 'log.txt'))
- fh.setLevel(logging.INFO)
- fh.setFormatter(formatter)
- self.print_logger.addHandler(fh)
-
- self.print_logger.info(f'Saving logs and checkpoints @ {self.logdir}')
-
- def log_param_num(self, model):
- param_num = sum(p.numel() for p in model.parameters() if p.requires_grad)
- self.print_logger.info(f'The number of parameters: {param_num/1e+6:.3f} mil')
- self.add_scalar('num_params', param_num, 0)
- return param_num
-
- def log_iter_loss(self, loss, iter, phase):
- self.add_scalar(f'{phase}/loss_iter', loss, iter)
-
- def log_epoch_loss(self, loss, epoch, phase):
- self.add_scalar(f'{phase}/loss', loss, epoch)
- self.print_logger.info(f'{phase} ({epoch}): loss {loss:.3f};')
-
- def log_epoch_metrics(self, metrics_dict, epoch, phase):
- for metric, val in metrics_dict.items():
- self.add_scalar(f'{phase}/{metric}', val, epoch)
- metrics_dict = {k: round(v, 4) for k, v in metrics_dict.items()}
- self.print_logger.info(f'{phase} ({epoch}) metrics: {metrics_dict};')
-
- def log_test_metrics(self, metrics_dict, hparams_dict, best_epoch):
- allowed_types = (int, float, str, bool, torch.Tensor)
- hparams_dict = {k: v for k, v in hparams_dict.items() if isinstance(v, allowed_types)}
- metrics_dict = {f'test/{k}': round(v, 4) for k, v in metrics_dict.items()}
- exp, ssi, sei = summary.hparams(hparams_dict, metrics_dict)
- self.file_writer.add_summary(exp)
- self.file_writer.add_summary(ssi)
- self.file_writer.add_summary(sei)
- for k, v in metrics_dict.items():
- self.add_scalar(k, v, best_epoch)
- self.print_logger.info(f'test ({best_epoch}) metrics: {metrics_dict};')
-
- def log_best_model(self, model, loss, epoch, optimizer, metrics_dict):
- model_name = model.__class__.__name__
- self.best_model_path = os.path.join(self.logdir, f'{model_name}-{self.start_time}.pt')
- checkpoint = {
- 'loss': loss,
- 'metrics': metrics_dict,
- 'epoch': epoch,
- 'optimizer': optimizer.state_dict(),
- 'model': model.state_dict(),
- }
- torch.save(checkpoint, self.best_model_path)
- self.print_logger.info(f'Saved model in {self.best_model_path}')
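-
-
-# Illustrative usage sketch (not part of the original file); the cfg fields are
-# the ones this class reads:
-#   cfg = OmegaConf.create({'logdir': './logs', 'log_code_state': False, 'patterns_to_ignore': []})
-#   tboard = LoggerWithTBoard(cfg)
-#   tboard.log_iter_loss(0.42, iter=1, phase='train')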
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/util.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/util.py
deleted file mode 100644
index a952e6c40308c33edd422da0ce6a60f47e73661b..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/util.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# adopted from
-# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
-# and
-# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-# and
-# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py
-#
-# thanks!
-
-
-import os
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import repeat
-
-from ldm.util import instantiate_from_config
-
-
-def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if schedule == "linear":
- betas = (
- torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
- )
-
- elif schedule == "cosine":
- timesteps = (
- torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
- )
- alphas = timesteps / (1 + cosine_s) * np.pi / 2
- alphas = torch.cos(alphas).pow(2)
- alphas = alphas / alphas[0]
- betas = 1 - alphas[1:] / alphas[:-1]
- betas = np.clip(betas, a_min=0, a_max=0.999)
-
- elif schedule == "sqrt_linear":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
- elif schedule == "sqrt":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
- else:
- raise ValueError(f"schedule '{schedule}' unknown.")
- return betas.numpy()
-
-
-def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
- if ddim_discr_method == 'uniform':
- c = num_ddpm_timesteps // num_ddim_timesteps
- ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
- elif ddim_discr_method == 'quad':
- ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
- else:
- raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
-
- # assert ddim_timesteps.shape[0] == num_ddim_timesteps
- # add one to get the final alpha values right (the ones from first scale to data during sampling)
- steps_out = ddim_timesteps + 1
- if verbose:
- print(f'Selected timesteps for ddim sampler: {steps_out}')
- return steps_out
-
-
-def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
- # select alphas for computing the variance schedule
- alphas = alphacums[ddim_timesteps]
- alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
-
-    # according to the formula provided in https://arxiv.org/abs/2010.02502
- sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
- if verbose:
- print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
- print(f'For the chosen value of eta, which is {eta}, '
- f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
- return sigmas, alphas, alphas_prev
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function,
- which defines the cumulative product of (1-beta) over time from t = [0,1].
- :param num_diffusion_timesteps: the number of betas to produce.
- :param alpha_bar: a lambda that takes an argument t from 0 to 1 and
- produces the cumulative product of (1-beta) up to that
- part of the diffusion process.
- :param max_beta: the maximum beta to use; use values lower than 1 to
- prevent singularities.
- """
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return np.array(betas)
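-
-# Illustrative usage sketch (not part of the original file), assuming the
-# squared-cosine alpha_bar proposed in the improved-diffusion paper:
-#   betas = betas_for_alpha_bar(
-#       1000, lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
-#   )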
-
-
-def extract_into_tensor(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def checkpoint(func, inputs, params, flag):
- """
- Evaluate a function without caching intermediate activations, allowing for
- reduced memory at the expense of extra compute in the backward pass.
- :param func: the function to evaluate.
- :param inputs: the argument sequence to pass to `func`.
- :param params: a sequence of parameters `func` depends on but does not
- explicitly take as arguments.
- :param flag: if False, disable gradient checkpointing.
- """
- if flag:
- args = tuple(inputs) + tuple(params)
- return CheckpointFunction.apply(func, len(inputs), *args)
- else:
- return func(*inputs)
-
-
-class CheckpointFunction(torch.autograd.Function):
- @staticmethod
- def forward(ctx, run_function, length, *args):
- ctx.run_function = run_function
- ctx.input_tensors = list(args[:length])
- ctx.input_params = list(args[length:])
-
- with torch.no_grad():
- output_tensors = ctx.run_function(*ctx.input_tensors)
- return output_tensors
-
- @staticmethod
- def backward(ctx, *output_grads):
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
- with torch.enable_grad():
- # Fixes a bug where the first op in run_function modifies the
- # Tensor storage in place, which is not allowed for detach()'d
- # Tensors.
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
- output_tensors = ctx.run_function(*shallow_copies)
- input_grads = torch.autograd.grad(
- output_tensors,
- ctx.input_tensors + ctx.input_params,
- output_grads,
- allow_unused=True,
- )
- del ctx.input_tensors
- del ctx.input_params
- del output_tensors
- return (None, None) + input_grads
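-
-# Illustrative usage sketch (not part of the original file); `_block` is a
-# hypothetical callable whose activations are recomputed in the backward pass:
-#   def _block(x):
-#       return torch.relu(x * 2.0)
-#   x = torch.randn(4, 8, requires_grad=True)
-#   y = checkpoint(_block, (x,), (), flag=True)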
-
-
-def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
- """
- Create sinusoidal timestep embeddings.
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- if not repeat_only:
- half = dim // 2
- freqs = torch.exp(
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- else:
- embedding = repeat(timesteps, 'b -> b d', d=dim)
- return embedding
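-
-# Illustrative shape check (not part of the original file):
-#   emb = timestep_embedding(torch.arange(16), dim=128)   # -> [16, 128]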
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-
-def mean_flat(tensor):
- """
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def normalization(channels):
- """
- Make a standard normalization layer.
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(32, channels)
-
-
-# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
-class SiLU(nn.Module):
- def forward(self, x):
- return x * torch.sigmoid(x)
-
-
-class GroupNorm32(nn.GroupNorm):
- def forward(self, x):
- return super().forward(x.float()).type(x.dtype)
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-class HybridConditioner(nn.Module):
-
- def __init__(self, c_concat_config, c_crossattn_config):
- super().__init__()
- self.concat_conditioner = instantiate_from_config(c_concat_config)
- self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
-
- def forward(self, c_concat, c_crossattn):
- c_concat = self.concat_conditioner(c_concat)
- c_crossattn = self.crossattn_conditioner(c_crossattn)
- return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
\ No newline at end of file
diff --git a/spaces/AP123/CerealBoxMaker/app.py b/spaces/AP123/CerealBoxMaker/app.py
deleted file mode 100644
index 0bd9cd68f4b0ff3c896c43e4f1345ac953914cb3..0000000000000000000000000000000000000000
--- a/spaces/AP123/CerealBoxMaker/app.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import gradio as gr
-import torch
-import numpy as np
-from PIL import Image
-import random
-from diffusers import DiffusionPipeline
-
-pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16")
-pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora")
-pipeline.to("cuda:0")
-
-MAX_SEED = np.iinfo(np.int32).max
-
-def text_to_image(prompt):
- seed = random.randint(0, MAX_SEED)
- negative_prompt = "ugly, blurry, nsfw, gore, blood"
- output = pipeline(prompt=prompt, negative_prompt=negative_prompt, width=1024, height=1024, guidance_scale=7.0, num_inference_steps=25, generator=torch.Generator().manual_seed(seed))
- generated_img = output.images[0]
- generated_img_array = np.array(generated_img)
- return generated_img_array
-
-def create_cereal_box(input_image):
- cover_img = Image.fromarray(input_image.astype('uint8'), 'RGB')
- template_img = Image.open("template.jpeg")
- scaling_factor = 1.5
- rect_height = int(template_img.height * 0.32)
- new_width = int(rect_height * 0.70)
- cover_resized = cover_img.resize((new_width, rect_height), Image.LANCZOS)
- new_width_scaled = int(new_width * scaling_factor)
- new_height_scaled = int(rect_height * scaling_factor)
- cover_resized_scaled = cover_resized.resize((new_width_scaled, new_height_scaled), Image.LANCZOS)
- left_x = int(template_img.width * 0.085)
- left_y = int((template_img.height - new_height_scaled) // 2 + template_img.height * 0.012)
- left_position = (left_x, left_y)
- right_x = int(template_img.width * 0.82) - new_width_scaled
- right_y = left_y
- right_position = (right_x, right_y)
- template_copy = template_img.copy()
- template_copy.paste(cover_resized_scaled, left_position)
- template_copy.paste(cover_resized_scaled, right_position)
- template_copy_array = np.array(template_copy)
- return template_copy_array
-
-def combined_function(prompt):
- generated_img_array = text_to_image(prompt)
- final_img = create_cereal_box(generated_img_array)
- return final_img
-
-with gr.Blocks() as app:
- gr.HTML("
Cereal Box Maker 🥣
")
- gr.HTML("
This application uses StableDiffusion XL to create any cereal box you could ever imagine!
")
- gr.HTML("
Instructions:
Describe the cereal box you want to create and hit generate!
Print it out, cut the outside, fold the lines, and then tape!
")
- gr.HTML("
A space by AP 🐧, follow me on Twitter! H/T to OstrisAI for their Cereal Box LoRA!
")
-
- with gr.Row():
- textbox = gr.Textbox(label="Describe your cereal box: Ex: 'Avengers Cereal'")
- btn_generate = gr.Button("Generate", label="Generate")
-
- with gr.Row():
- output_img = gr.Image(label="Your Custom Cereal Box")
-
- btn_generate.click(
- combined_function,
- inputs=[textbox],
- outputs=[output_img]
- )
-
-app.queue(max_size=20, api_open=False)
-app.launch()
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet152_cifar.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet152_cifar.py
deleted file mode 100644
index 55c0cc6c66dbde26bebe6d99d791c3e3f28e4e27..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet152_cifar.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# model settings
-model = dict(
- type='ImageClassifier',
- backbone=dict(
- type='ResNet_CIFAR',
- depth=152,
- num_stages=4,
- out_indices=(3, ),
- style='pytorch'),
- neck=dict(type='GlobalAveragePooling'),
- head=dict(
- type='LinearClsHead',
- num_classes=10,
- in_channels=2048,
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
- ))
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/downloads.py b/spaces/Abhilashvj/planogram-compliance/utils/downloads.py
deleted file mode 100644
index 3adef59fa19042fdd24fa7ed6dbc93d21aeaae59..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/downloads.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Download utils
-"""
-
-import logging
-import os
-import subprocess
-import urllib
-from pathlib import Path
-
-import requests
-import torch
-
-
-def is_url(url, check=True):
- # Check if string is URL and check if URL exists
- try:
- url = str(url)
- result = urllib.parse.urlparse(url)
- assert all([result.scheme, result.netloc]) # check if is url
- return (
- (urllib.request.urlopen(url).getcode() == 200) if check else True
- ) # check if exists online
- except (AssertionError, urllib.request.HTTPError):
- return False
-
-
-def gsutil_getsize(url=""):
- # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
- s = subprocess.check_output(f"gsutil du {url}", shell=True).decode("utf-8")
- return eval(s.split(" ")[0]) if len(s) else 0 # bytes
-
-
-def url_getsize(url="https://ultralytics.com/images/bus.jpg"):
- # Return downloadable file size in bytes
- response = requests.head(url, allow_redirects=True)
- return int(response.headers.get("content-length", -1))
-
-
-def safe_download(file, url, url2=None, min_bytes=1e0, error_msg=""):
- # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes
- from utils.general import LOGGER
-
- file = Path(file)
- assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}"
- try: # url1
- LOGGER.info(f"Downloading {url} to {file}...")
- torch.hub.download_url_to_file(
- url, str(file), progress=LOGGER.level <= logging.INFO
- )
- assert (
- file.exists() and file.stat().st_size > min_bytes
- ), assert_msg # check
- except Exception as e: # url2
- if file.exists():
- file.unlink() # remove partial downloads
- LOGGER.info(f"ERROR: {e}\nRe-attempting {url2 or url} to {file}...")
- os.system(
- f"curl -# -L '{url2 or url}' -o '{file}' --retry 3 -C -"
- ) # curl download, retry and resume on fail
- finally:
- if not file.exists() or file.stat().st_size < min_bytes: # check
- if file.exists():
- file.unlink() # remove partial downloads
- LOGGER.info(f"ERROR: {assert_msg}\n{error_msg}")
- LOGGER.info("")
-
-
-def attempt_download(file, repo="ultralytics/yolov5", release="v7.0"):
- # Attempt file download from GitHub release assets if not found locally. release = 'latest', 'v7.0', etc.
- from utils.general import LOGGER
-
- def github_assets(repository, version="latest"):
- # Return GitHub repo tag (i.e. 'v7.0') and assets (i.e. ['yolov5s.pt', 'yolov5m.pt', ...])
- if version != "latest":
- version = f"tags/{version}" # i.e. tags/v7.0
- response = requests.get(
- f"https://api.github.com/repos/{repository}/releases/{version}"
- ).json() # github api
- return response["tag_name"], [
- x["name"] for x in response["assets"]
- ] # tag, assets
-
- file = Path(str(file).strip().replace("'", ""))
- if not file.exists():
- # URL specified
- name = Path(
- urllib.parse.unquote(str(file))
- ).name # decode '%2F' to '/' etc.
- if str(file).startswith(("http:/", "https:/")): # download
- url = str(file).replace(":/", "://") # Pathlib turns :// -> :/
- file = name.split("?")[
- 0
- ] # parse authentication https://url.com/file.txt?auth...
- if Path(file).is_file():
- LOGGER.info(
- f"Found {url} locally at {file}"
- ) # file already exists
- else:
- safe_download(file=file, url=url, min_bytes=1e5)
- return file
-
- # GitHub assets
- assets = [
- f"yolov5{size}{suffix}.pt"
- for size in "nsmlx"
- for suffix in ("", "6", "-cls", "-seg")
- ] # default
- try:
- tag, assets = github_assets(repo, release)
- except Exception:
- try:
- tag, assets = github_assets(repo) # latest release
- except Exception:
- try:
- tag = (
- subprocess.check_output(
- "git tag", shell=True, stderr=subprocess.STDOUT
- )
- .decode()
- .split()[-1]
- )
- except Exception:
- tag = release
-
- file.parent.mkdir(
- parents=True, exist_ok=True
- ) # make parent dir (if required)
- if name in assets:
- url3 = "https://drive.google.com/drive/folders/1EFQTEUeXWSFww0luse2jB9M1QNZQGwNl" # backup gdrive mirror
- safe_download(
- file,
- url=f"https://github.com/{repo}/releases/download/{tag}/{name}",
- min_bytes=1e5,
- error_msg=f"{file} missing, try downloading from https://github.com/{repo}/releases/{tag} or {url3}",
- )
-
- return str(file)
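-
-
-# Illustrative usage sketch (not part of the original file): resolve a weights
-# file locally, or fetch it from the GitHub release assets of the given repo.
-#   weights_path = attempt_download("yolov5s.pt")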
diff --git a/spaces/Adapting/TrendFlow/mypages/sidebar.py b/spaces/Adapting/TrendFlow/mypages/sidebar.py
deleted file mode 100644
index acd1c57dee79b2ca2e490f6c755ab3529dd8e6d5..0000000000000000000000000000000000000000
--- a/spaces/Adapting/TrendFlow/mypages/sidebar.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import streamlit as st
-import datetime
-# from .utils import PACKAGE_ROOT
-# from lrt.utils.functions import template
-
-APP_VERSION = 'v0.1.0'
-
-
-def render_sidebar():
- icons = f'''
-
-
-
- '''
-
- sidebar_markdown = f'''
-
-
-
- TrendFlow
-
-
-
-
- {APP_VERSION}
-
-
-
-
-
-
- {icons}
-
- ---
-
- ## Choose the Paper Search Platforms'''
- st.sidebar.markdown(sidebar_markdown, unsafe_allow_html=True)
- # elvsier = st.sidebar.checkbox('Elvsier',value=True)
- # IEEE = st.sidebar.checkbox('IEEE',value=False)
- # google = st.sidebar.checkbox('Google Scholar')
- platforms = st.sidebar.multiselect('Platforms', options=
- [
- # 'Elvsier',
- 'IEEE',
- # 'Google Scholar',
- 'Arxiv',
- 'Paper with Code'
- ], default=[
- # 'Elvsier',
- 'IEEE',
- # 'Google Scholar',
- 'Arxiv',
- 'Paper with Code'
- ])
-
- st.sidebar.markdown('## Choose the max number of papers to search')
- number_papers = st.sidebar.slider('number', 10, 100, 20, 5)
-
- st.sidebar.markdown('## Choose the start year of publication')
- this_year = datetime.date.today().year
- start_year = st.sidebar.slider('year start:', 2000, this_year, 2010, 1)
-
- st.sidebar.markdown('## Choose the end year of publication')
- end_year = st.sidebar.slider('year end:', 2000, this_year, this_year, 1)
-
- with st.sidebar:
- st.markdown('## Adjust hyperparameters')
- with st.expander('Clustering Options'):
- standardization = st.selectbox('1) Standardization before clustering', options=['no', 'yes'], index=0)
- dr = st.selectbox('2) Dimension reduction', options=['none', 'pca'], index=0)
- tmp = min(number_papers, 15)
- max_k = st.slider('3) Max number of clusters', 2, tmp, tmp // 2)
- cluster_model = st.selectbox('4) Clustering model', options=['Gaussian Mixture Model', 'K-means'], index=0)
-
- with st.expander('Keyphrases Generation Options'):
- model_cpt = st.selectbox(label='Model checkpoint', options=['KeyBart', 'KeyBartAdapter', 'keyphrase-transformer'], index=0)
-
- st.markdown('---')
- st.markdown(icons, unsafe_allow_html=True)
- st.markdown(f'''
"
-
-gr.Interface(
- filter_removal,
- gr.inputs.Image(shape=(256, 256)),
- gr.outputs.Image(),
- title=title,
- description=description,
- article=article,
- allow_flagging=False,
- examples_per_page=17,
- enable_queue=True,
- examples=[
- ["images/examples/98_He-Fe.jpg"],
- ["images/examples/2_Brannan.jpg"],
- ["images/examples/12_Toaster.jpg"],
- ["images/examples/18_Gingham.jpg"],
- ["images/examples/11_Sutro.jpg"],
- ["images/examples/9_Lo-Fi.jpg"],
- ["images/examples/3_Mayfair.jpg"],
- ["images/examples/4_Hudson.jpg"],
- ["images/examples/5_Amaro.jpg"],
- ["images/examples/6_1977.jpg"],
- ["images/examples/8_Valencia.jpg"],
- ["images/examples/16_Lo-Fi.jpg"],
- ["images/examples/10_Nashville.jpg"],
- ["images/examples/15_X-ProII.jpg"],
- ["images/examples/14_Willow.jpg"],
- ["images/examples/30_Perpetua.jpg"],
- ["images/examples/1_Clarendon.jpg"],
- ]
-).launch()
diff --git a/spaces/Amrrs/DragGan-Inversion/gui_utils/imgui_utils.py b/spaces/Amrrs/DragGan-Inversion/gui_utils/imgui_utils.py
deleted file mode 100644
index 6dbc8baaa2b9d1b23a7d9d6bb0cf11465bd236ec..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/gui_utils/imgui_utils.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import contextlib
-import imgui
-
-# ----------------------------------------------------------------------------
-
-
-def set_default_style(color_scheme='dark', spacing=9, indent=23, scrollbar=27):
- s = imgui.get_style()
- s.window_padding = [spacing, spacing]
- s.item_spacing = [spacing, spacing]
- s.item_inner_spacing = [spacing, spacing]
- s.columns_min_spacing = spacing
- s.indent_spacing = indent
- s.scrollbar_size = scrollbar
- s.frame_padding = [4, 3]
- s.window_border_size = 1
- s.child_border_size = 1
- s.popup_border_size = 1
- s.frame_border_size = 1
- s.window_rounding = 0
- s.child_rounding = 0
- s.popup_rounding = 3
- s.frame_rounding = 3
- s.scrollbar_rounding = 3
- s.grab_rounding = 3
-
- getattr(imgui, f'style_colors_{color_scheme}')(s)
- c0 = s.colors[imgui.COLOR_MENUBAR_BACKGROUND]
- c1 = s.colors[imgui.COLOR_FRAME_BACKGROUND]
- s.colors[imgui.COLOR_POPUP_BACKGROUND] = [
- x * 0.7 + y * 0.3 for x, y in zip(c0, c1)][:3] + [1]
-
-# ----------------------------------------------------------------------------
-
-
-@contextlib.contextmanager
-def grayed_out(cond=True):
- if cond:
- s = imgui.get_style()
- text = s.colors[imgui.COLOR_TEXT_DISABLED]
- grab = s.colors[imgui.COLOR_SCROLLBAR_GRAB]
- back = s.colors[imgui.COLOR_MENUBAR_BACKGROUND]
- imgui.push_style_color(imgui.COLOR_TEXT, *text)
- imgui.push_style_color(imgui.COLOR_CHECK_MARK, *grab)
- imgui.push_style_color(imgui.COLOR_SLIDER_GRAB, *grab)
- imgui.push_style_color(imgui.COLOR_SLIDER_GRAB_ACTIVE, *grab)
- imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND, *back)
- imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND_HOVERED, *back)
- imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND_ACTIVE, *back)
- imgui.push_style_color(imgui.COLOR_BUTTON, *back)
- imgui.push_style_color(imgui.COLOR_BUTTON_HOVERED, *back)
- imgui.push_style_color(imgui.COLOR_BUTTON_ACTIVE, *back)
- imgui.push_style_color(imgui.COLOR_HEADER, *back)
- imgui.push_style_color(imgui.COLOR_HEADER_HOVERED, *back)
- imgui.push_style_color(imgui.COLOR_HEADER_ACTIVE, *back)
- imgui.push_style_color(imgui.COLOR_POPUP_BACKGROUND, *back)
- yield
- imgui.pop_style_color(14)
- else:
- yield
-
-# ----------------------------------------------------------------------------
-
-
-@contextlib.contextmanager
-def item_width(width=None):
- if width is not None:
- imgui.push_item_width(width)
- yield
- imgui.pop_item_width()
- else:
- yield
-
-# ----------------------------------------------------------------------------
-
-
-def scoped_by_object_id(method):
- def decorator(self, *args, **kwargs):
- imgui.push_id(str(id(self)))
- res = method(self, *args, **kwargs)
- imgui.pop_id()
- return res
- return decorator
-
-# ----------------------------------------------------------------------------
-
-
-def button(label, width=0, enabled=True):
- with grayed_out(not enabled):
- clicked = imgui.button(label, width=width)
- clicked = clicked and enabled
- return clicked
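-
-# Illustrative usage sketch (not part of the original file), inside an active
-# imgui frame; `reset_state` is a hypothetical callback:
-#   with item_width(200):
-#       if button('Reset', enabled=True):
-#           reset_state()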
-
-# ----------------------------------------------------------------------------
-
-
-def collapsing_header(text, visible=None, flags=0, default=False, enabled=True, show=True):
- expanded = False
- if show:
- if default:
- flags |= imgui.TREE_NODE_DEFAULT_OPEN
- if not enabled:
- flags |= imgui.TREE_NODE_LEAF
- with grayed_out(not enabled):
- expanded, visible = imgui.collapsing_header(
- text, visible=visible, flags=flags)
- expanded = expanded and enabled
- return expanded, visible
-
-# ----------------------------------------------------------------------------
-
-
-def popup_button(label, width=0, enabled=True):
- if button(label, width, enabled):
- imgui.open_popup(label)
- opened = imgui.begin_popup(label)
- return opened
-
-# ----------------------------------------------------------------------------
-
-
-def input_text(label, value, buffer_length, flags, width=None, help_text=''):
- old_value = value
- color = list(imgui.get_style().colors[imgui.COLOR_TEXT])
- if value == '':
- color[-1] *= 0.5
- with item_width(width):
- imgui.push_style_color(imgui.COLOR_TEXT, *color)
- value = value if value != '' else help_text
- changed, value = imgui.input_text(label, value, buffer_length, flags)
- value = value if value != help_text else ''
- imgui.pop_style_color(1)
- if not flags & imgui.INPUT_TEXT_ENTER_RETURNS_TRUE:
- changed = (value != old_value)
- return changed, value
-
-# ----------------------------------------------------------------------------
-
-
-def drag_previous_control(enabled=True):
- dragging = False
- dx = 0
- dy = 0
- if imgui.begin_drag_drop_source(imgui.DRAG_DROP_SOURCE_NO_PREVIEW_TOOLTIP):
- if enabled:
- dragging = True
- dx, dy = imgui.get_mouse_drag_delta()
- imgui.reset_mouse_drag_delta()
- imgui.end_drag_drop_source()
- return dragging, dx, dy
-
-# ----------------------------------------------------------------------------
-
-
-def drag_button(label, width=0, enabled=True):
- clicked = button(label, width=width, enabled=enabled)
- dragging, dx, dy = drag_previous_control(enabled=enabled)
- return clicked, dragging, dx, dy
-
-# ----------------------------------------------------------------------------
-
-
-def drag_hidden_window(label, x, y, width, height, enabled=True):
- imgui.push_style_color(imgui.COLOR_WINDOW_BACKGROUND, 0, 0, 0, 0)
- imgui.push_style_color(imgui.COLOR_BORDER, 0, 0, 0, 0)
- imgui.set_next_window_position(x, y)
- imgui.set_next_window_size(width, height)
- imgui.begin(label, closable=False, flags=(
- imgui.WINDOW_NO_TITLE_BAR | imgui.WINDOW_NO_RESIZE | imgui.WINDOW_NO_MOVE))
- dragging, dx, dy = drag_previous_control(enabled=enabled)
- imgui.end()
- imgui.pop_style_color(2)
- return dragging, dx, dy
-
-# ----------------------------------------------------------------------------
-
-
-def click_hidden_window(label, x, y, width, height, img_w, img_h, enabled=True):
- imgui.push_style_color(imgui.COLOR_WINDOW_BACKGROUND, 0, 0, 0, 0)
- imgui.push_style_color(imgui.COLOR_BORDER, 0, 0, 0, 0)
- imgui.set_next_window_position(x, y)
- imgui.set_next_window_size(width, height)
- imgui.begin(label, closable=False, flags=(
- imgui.WINDOW_NO_TITLE_BAR | imgui.WINDOW_NO_RESIZE | imgui.WINDOW_NO_MOVE))
- clicked, down = False, False
- img_x, img_y = 0, 0
- if imgui.is_mouse_down():
- posx, posy = imgui.get_mouse_pos()
- if posx >= x and posx < x + width and posy >= y and posy < y + height:
- if imgui.is_mouse_clicked():
- clicked = True
- down = True
- img_x = round((posx - x) / (width - 1) * (img_w - 1))
- img_y = round((posy - y) / (height - 1) * (img_h - 1))
- imgui.end()
- imgui.pop_style_color(2)
- return clicked, down, img_x, img_y
-
-# ----------------------------------------------------------------------------
diff --git a/spaces/Amrrs/DragGan-Inversion/training/dataset.py b/spaces/Amrrs/DragGan-Inversion/training/dataset.py
deleted file mode 100644
index f04842155f754b0aac49b91b1de1de6db017a776..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/training/dataset.py
+++ /dev/null
@@ -1,252 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Streaming images and labels from datasets created with dataset_tool.py."""
-
-import os
-import numpy as np
-import zipfile
-import PIL.Image
-import json
-import torch
-import dnnlib
-
-try:
- import pyspng
-except ImportError:
- pyspng = None
-
-# ----------------------------------------------------------------------------
-
-
-class Dataset(torch.utils.data.Dataset):
- def __init__(self,
- name, # Name of the dataset.
- raw_shape, # Shape of the raw image data (NCHW).
- # Artificially limit the size of the dataset. None = no limit. Applied before xflip.
- max_size=None,
- # Enable conditioning labels? False = label dimension is zero.
- use_labels=False,
- # Artificially double the size of the dataset via x-flips. Applied after max_size.
- xflip=False,
- # Random seed to use when applying max_size.
- random_seed=0,
- ):
- self._name = name
- self._raw_shape = list(raw_shape)
- self._use_labels = use_labels
- self._raw_labels = None
- self._label_shape = None
-
- # Apply max_size.
- self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64)
- if (max_size is not None) and (self._raw_idx.size > max_size):
- np.random.RandomState(random_seed).shuffle(self._raw_idx)
- self._raw_idx = np.sort(self._raw_idx[:max_size])
-
- # Apply xflip.
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8)
- if xflip:
- self._raw_idx = np.tile(self._raw_idx, 2)
- self._xflip = np.concatenate(
- [self._xflip, np.ones_like(self._xflip)])
-
- def _get_raw_labels(self):
- if self._raw_labels is None:
- self._raw_labels = self._load_raw_labels() if self._use_labels else None
- if self._raw_labels is None:
- self._raw_labels = np.zeros(
- [self._raw_shape[0], 0], dtype=np.float32)
- assert isinstance(self._raw_labels, np.ndarray)
- assert self._raw_labels.shape[0] == self._raw_shape[0]
- assert self._raw_labels.dtype in [np.float32, np.int64]
- if self._raw_labels.dtype == np.int64:
- assert self._raw_labels.ndim == 1
- assert np.all(self._raw_labels >= 0)
- return self._raw_labels
-
- def close(self): # to be overridden by subclass
- pass
-
- def _load_raw_image(self, raw_idx): # to be overridden by subclass
- raise NotImplementedError
-
- def _load_raw_labels(self): # to be overridden by subclass
- raise NotImplementedError
-
- def __getstate__(self):
- return dict(self.__dict__, _raw_labels=None)
-
- def __del__(self):
- try:
- self.close()
- except:
- pass
-
- def __len__(self):
- return self._raw_idx.size
-
- def __getitem__(self, idx):
- image = self._load_raw_image(self._raw_idx[idx])
- assert isinstance(image, np.ndarray)
- assert list(image.shape) == self.image_shape
- assert image.dtype == np.uint8
- if self._xflip[idx]:
- assert image.ndim == 3 # CHW
- image = image[:, :, ::-1]
- return image.copy(), self.get_label(idx)
-
- def get_label(self, idx):
- label = self._get_raw_labels()[self._raw_idx[idx]]
- if label.dtype == np.int64:
- onehot = np.zeros(self.label_shape, dtype=np.float32)
- onehot[label] = 1
- label = onehot
- return label.copy()
-
- def get_details(self, idx):
- d = dnnlib.EasyDict()
- d.raw_idx = int(self._raw_idx[idx])
- d.xflip = (int(self._xflip[idx]) != 0)
- d.raw_label = self._get_raw_labels()[d.raw_idx].copy()
- return d
-
- @property
- def name(self):
- return self._name
-
- @property
- def image_shape(self):
- return list(self._raw_shape[1:])
-
- @property
- def num_channels(self):
- assert len(self.image_shape) == 3 # CHW
- return self.image_shape[0]
-
- @property
- def resolution(self):
- assert len(self.image_shape) == 3 # CHW
- assert self.image_shape[1] == self.image_shape[2]
- return self.image_shape[1]
-
- @property
- def label_shape(self):
- if self._label_shape is None:
- raw_labels = self._get_raw_labels()
- if raw_labels.dtype == np.int64:
- self._label_shape = [int(np.max(raw_labels)) + 1]
- else:
- self._label_shape = raw_labels.shape[1:]
- return list(self._label_shape)
-
- @property
- def label_dim(self):
- assert len(self.label_shape) == 1
- return self.label_shape[0]
-
- @property
- def has_labels(self):
- return any(x != 0 for x in self.label_shape)
-
- @property
- def has_onehot_labels(self):
- return self._get_raw_labels().dtype == np.int64
-
-# ----------------------------------------------------------------------------
-
-
-class ImageFolderDataset(Dataset):
- def __init__(self,
- path, # Path to directory or zip.
- # Ensure specific resolution, None = highest available.
- resolution=None,
- # Additional arguments for the Dataset base class.
- **super_kwargs,
- ):
- self._path = path
- self._zipfile = None
-
- if os.path.isdir(self._path):
- self._type = 'dir'
- self._all_fnames = {os.path.relpath(os.path.join(
- root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files}
- elif self._file_ext(self._path) == '.zip':
- self._type = 'zip'
- self._all_fnames = set(self._get_zipfile().namelist())
- else:
- raise IOError('Path must point to a directory or zip')
-
- PIL.Image.init()
- self._image_fnames = sorted(
- fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION)
- if len(self._image_fnames) == 0:
- raise IOError('No image files found in the specified path')
-
- name = os.path.splitext(os.path.basename(self._path))[0]
- raw_shape = [len(self._image_fnames)] + \
- list(self._load_raw_image(0).shape)
- if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution):
- raise IOError('Image files do not match the specified resolution')
- super().__init__(name=name, raw_shape=raw_shape, **super_kwargs)
-
- @staticmethod
- def _file_ext(fname):
- return os.path.splitext(fname)[1].lower()
-
- def _get_zipfile(self):
- assert self._type == 'zip'
- if self._zipfile is None:
- self._zipfile = zipfile.ZipFile(self._path)
- return self._zipfile
-
- def _open_file(self, fname):
- if self._type == 'dir':
- return open(os.path.join(self._path, fname), 'rb')
- if self._type == 'zip':
- return self._get_zipfile().open(fname, 'r')
- return None
-
- def close(self):
- try:
- if self._zipfile is not None:
- self._zipfile.close()
- finally:
- self._zipfile = None
-
- def __getstate__(self):
- return dict(super().__getstate__(), _zipfile=None)
-
- def _load_raw_image(self, raw_idx):
- fname = self._image_fnames[raw_idx]
- with self._open_file(fname) as f:
- if pyspng is not None and self._file_ext(fname) == '.png':
- image = pyspng.load(f.read())
- else:
- image = np.array(PIL.Image.open(f))
- if image.ndim == 2:
- image = image[:, :, np.newaxis] # HW => HWC
- image = image.transpose(2, 0, 1) # HWC => CHW
- return image
-
- def _load_raw_labels(self):
- fname = 'dataset.json'
- if fname not in self._all_fnames:
- return None
- with self._open_file(fname) as f:
- labels = json.load(f)['labels']
- if labels is None:
- return None
- labels = dict(labels)
- labels = [labels[fname.replace('\\', '/')]
- for fname in self._image_fnames]
- labels = np.array(labels)
- labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])
- return labels
-
-# ----------------------------------------------------------------------------
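-
-# Illustrative usage sketch (not part of the original file); the path below is
-# hypothetical:
-#   dataset = ImageFolderDataset(path='./datasets/faces.zip', resolution=256, use_labels=True, xflip=True)
-#   image, label = dataset[0]   # uint8 CHW array plus a float32 label vector (empty when unlabeled)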
diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/decoder/vgg.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/decoder/vgg.py
deleted file mode 100644
index 30bfdc9932fbdd92ae51baec4984312b3dbb9cb8..0000000000000000000000000000000000000000
--- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/decoder/vgg.py
+++ /dev/null
@@ -1,225 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from collections import namedtuple
-import torchvision.models as models
-
-
-# pytorch pretrained vgg
-class Encoder(nn.Module):
- def __init__(self):
- super().__init__()
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- # pretrained vgg19
- vgg19 = models.vgg19(weights='DEFAULT').features.to(device)
-
- self.relu1_1 = vgg19[:2]
- self.relu2_1 = vgg19[2:7]
- self.relu3_1 = vgg19[7:12]
- self.relu4_1 = vgg19[12:21]
-
- # fix parameters
- self.requires_grad_(False)
-
- def forward(self, x):
- _output = namedtuple('output', ['relu1_1', 'relu2_1', 'relu3_1', 'relu4_1'])
- # print("Data; ", x)
- # print("Relu: ", self.relu1_1)
- relu1_1 = self.relu1_1(x)
-
- relu2_1 = self.relu2_1(relu1_1)
- relu3_1 = self.relu3_1(relu2_1)
- relu4_1 = self.relu4_1(relu3_1)
- output = _output(relu1_1, relu2_1, relu3_1, relu4_1)
-
- return output
-
-
-class Decoder(nn.Module):
- """
- starting from relu 4_1
- """
-
- def __init__(self, ckpt_path=None):
- super().__init__()
-
- self.layers = nn.Sequential(
- # nn.Conv2d(512, 256, 3, padding=1, padding_mode='reflect'),
- # nn.ReLU(),
- # nn.Upsample(scale_factor=2, mode='nearest'), # relu4-1
- nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu3-4
- nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu3-3
- nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu3-2
- nn.Conv2d(256, 128, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(),
- nn.Upsample(scale_factor=2, mode='nearest'), # relu3-1
- nn.Conv2d(128, 128, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu2-2
- nn.Conv2d(128, 64, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(),
- nn.Upsample(scale_factor=2, mode='nearest'), # relu2-1
- nn.Conv2d(64, 64, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu1-2
- nn.Conv2d(64, 3, 3, padding=1, padding_mode='reflect'),
- )
-
- if ckpt_path is not None:
- self.load_state_dict(torch.load(ckpt_path))
-
- def forward(self, x):
- return self.layers(x)
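-
-# Illustrative round trip (not part of the original file): as written, with the
-# 512->256 block commented out, the Decoder expects 256-channel relu3_1 features.
-#   enc, dec = Encoder(), Decoder()           # Encoder moves VGG to CUDA when available,
-#   x = torch.randn(1, 3, 256, 256)           # so keep input and decoder on the same device.
-#   rgb = dec(enc(x).relu3_1)                 # -> [1, 3, 256, 256]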
-
-
-### high-res unet feature map decoder
-
-
-class DownBlock(nn.Module):
-
- def __init__(self, in_dim, out_dim, down='conv'):
- super(DownBlock, self).__init__()
-
- if down == 'conv':
- self.down_conv = nn.Sequential(
- nn.Conv2d(in_dim, out_dim, 3, 2, 1),
- nn.LeakyReLU(),
- nn.Conv2d(out_dim, out_dim, 3, 1, 1),
- nn.LeakyReLU(),
- )
- elif down == 'mean':
- self.down_conv = nn.AvgPool2d(2)
- else:
- raise NotImplementedError(
- '[ERROR] invalid downsampling operator: {:s}'.format(down)
- )
-
- def forward(self, x):
- x = self.down_conv(x)
- return x
-
-
-class UpBlock(nn.Module):
-
- def __init__(self, in_dim, out_dim, skip_dim=None, up='nearest'):
- super(UpBlock, self).__init__()
-
- if up == 'conv':
- self.up_conv = nn.Sequential(
- nn.ConvTranspose2d(in_dim, out_dim, 3, 2, 1, 1),
- nn.ReLU(),
- )
- else:
- assert up in ('bilinear', 'nearest'), \
- '[ERROR] invalid upsampling mode: {:s}'.format(up)
- self.up_conv = nn.Sequential(
- nn.Upsample(scale_factor=2, mode=up),
- nn.Conv2d(in_dim, out_dim, 3, 1, 1),
- nn.ReLU(),
- )
-
- in_dim = out_dim
- if skip_dim is not None:
- in_dim += skip_dim
- self.conv = nn.Sequential(
- nn.Conv2d(in_dim, out_dim, 3, 1, 1),
- nn.ReLU(),
- )
-
- def _pad(self, x, y):
- dh = y.size(-2) - x.size(-2)
- dw = y.size(-1) - x.size(-1)
- if dh == 0 and dw == 0:
- return x
- if dh < 0:
- x = x[..., :dh, :]
- if dw < 0:
- x = x[..., :, :dw]
- if dh > 0 or dw > 0:
- x = F.pad(
- x,
- pad=(dw // 2, dw - dw // 2, dh // 2, dh - dh // 2),
- mode='reflect'
- )
- return x
-
- def forward(self, x, skip=None):
- x = self.up_conv(x)
- if skip is not None:
- x = torch.cat([self._pad(x, skip), skip], 1)
- x = self.conv(x)
- return x
-
-
-class UNetDecoder(nn.Module):
-
- def __init__(self, in_dim=256):
- super(UNetDecoder, self).__init__()
-
- self.down_layers = nn.ModuleList()
- self.skip_convs = nn.ModuleList()
- self.up_layers = nn.ModuleList()
-
- in_dim = in_dim
- self.n_levels = 2
- self.up = 1
-
- for i in range(self.n_levels):
- self.down_layers.append(
- DownBlock(
- in_dim, in_dim,
- )
- )
- out_dim = in_dim // 2 ** (self.n_levels - i)
- self.skip_convs.append(nn.Conv2d(in_dim, out_dim, 1))
- self.up_layers.append(
- UpBlock(
- out_dim * 2, out_dim, out_dim,
- )
- )
-
- out_dim = in_dim // 2 ** self.n_levels
- self.out_conv = nn.Sequential(
- nn.Conv2d(out_dim, out_dim, 3, 1, 1),
- nn.ReLU(),
- nn.Conv2d(out_dim, 3, 1, 1),
- )
-
- def forward(self, feats):
- skips = []
- for i in range(self.n_levels):
- skips.append(self.skip_convs[i](feats))
- feats = self.down_layers[i](feats)
- for i in range(self.n_levels - 1, -1, -1):
- feats = self.up_layers[i](feats, skips[i])
- rgb = self.out_conv(feats)
- return rgb
-
-
-### high-res feature map decoder
-
-class PlainDecoder(nn.Module):
- def __init__(self) -> None:
- super().__init__()
-
- self.layers = nn.Sequential(
- nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu3-4
- nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu3-3
- nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu3-2
- nn.Conv2d(256, 128, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(),
- nn.Conv2d(128, 128, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu2-2
- nn.Conv2d(128, 64, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(),
- nn.Conv2d(64, 64, 3, padding=1, padding_mode='reflect'),
- nn.ReLU(), # relu1-2
- nn.Conv2d(64, 3, 3, padding=1, padding_mode='reflect'),
- )
-
- def forward(self, x):
- return self.layers(x)
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index d80b2ec160ae1c41499d45242713a99122d8adf8..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './danet_r50-d8_512x1024_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Aravindsssss/gradin/app.py b/spaces/Aravindsssss/gradin/app.py
deleted file mode 100644
index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000
--- a/spaces/Aravindsssss/gradin/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """You are a helpful assistant to answer all user queries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
-    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/command_context.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/command_context.py
deleted file mode 100644
index 139995ac3f109a82664e4913f7ebc32ecf7617e1..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/command_context.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from contextlib import ExitStack, contextmanager
-from typing import ContextManager, Generator, TypeVar
-
-_T = TypeVar("_T", covariant=True)
-
-
-class CommandContextMixIn:
- def __init__(self) -> None:
- super().__init__()
- self._in_main_context = False
- self._main_context = ExitStack()
-
- @contextmanager
- def main_context(self) -> Generator[None, None, None]:
- assert not self._in_main_context
-
- self._in_main_context = True
- try:
- with self._main_context:
- yield
- finally:
- self._in_main_context = False
-
- def enter_context(self, context_provider: ContextManager[_T]) -> _T:
- assert self._in_main_context
-
- return self._main_context.enter_context(context_provider)
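-
-
-# Illustrative usage sketch (not part of the original file); `make_resource()`
-# stands in for any context manager:
-#   class MyCommand(CommandContextMixIn):
-#       def run(self) -> None:
-#           with self.main_context():
-#               resource = self.enter_context(make_resource())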
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/traceback.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/traceback.py
deleted file mode 100644
index c4ffe1f99e6dc9c0509459196cb68fa95e79048d..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/traceback.py
+++ /dev/null
@@ -1,756 +0,0 @@
-from __future__ import absolute_import
-
-import linecache
-import os
-import platform
-import sys
-from dataclasses import dataclass, field
-from traceback import walk_tb
-from types import ModuleType, TracebackType
-from typing import (
- Any,
- Callable,
- Dict,
- Iterable,
- List,
- Optional,
- Sequence,
- Tuple,
- Type,
- Union,
-)
-
-from pip._vendor.pygments.lexers import guess_lexer_for_filename
-from pip._vendor.pygments.token import Comment, Keyword, Name, Number, Operator, String
-from pip._vendor.pygments.token import Text as TextToken
-from pip._vendor.pygments.token import Token
-from pip._vendor.pygments.util import ClassNotFound
-
-from . import pretty
-from ._loop import loop_last
-from .columns import Columns
-from .console import Console, ConsoleOptions, ConsoleRenderable, RenderResult, group
-from .constrain import Constrain
-from .highlighter import RegexHighlighter, ReprHighlighter
-from .panel import Panel
-from .scope import render_scope
-from .style import Style
-from .syntax import Syntax
-from .text import Text
-from .theme import Theme
-
-WINDOWS = platform.system() == "Windows"
-
-LOCALS_MAX_LENGTH = 10
-LOCALS_MAX_STRING = 80
-
-
-def install(
- *,
- console: Optional[Console] = None,
- width: Optional[int] = 100,
- extra_lines: int = 3,
- theme: Optional[str] = None,
- word_wrap: bool = False,
- show_locals: bool = False,
- locals_max_length: int = LOCALS_MAX_LENGTH,
- locals_max_string: int = LOCALS_MAX_STRING,
- locals_hide_dunder: bool = True,
- locals_hide_sunder: Optional[bool] = None,
- indent_guides: bool = True,
- suppress: Iterable[Union[str, ModuleType]] = (),
- max_frames: int = 100,
-) -> Callable[[Type[BaseException], BaseException, Optional[TracebackType]], Any]:
- """Install a rich traceback handler.
-
- Once installed, any tracebacks will be printed with syntax highlighting and rich formatting.
-
-
- Args:
- console (Optional[Console], optional): Console to write exception to. Default uses internal Console instance.
- width (Optional[int], optional): Width (in characters) of traceback. Defaults to 100.
- extra_lines (int, optional): Extra lines of code. Defaults to 3.
- theme (Optional[str], optional): Pygments theme to use in traceback. Defaults to ``None`` which will pick
- a theme appropriate for the platform.
- word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
- show_locals (bool, optional): Enable display of local variables. Defaults to False.
- locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
- Defaults to 10.
- locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
- locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True.
- locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False.
- indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True.
- suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
-
- Returns:
- Callable: The previous exception handler that was replaced.
-
- """
- traceback_console = Console(stderr=True) if console is None else console
-
- locals_hide_sunder = (
- True
- if (traceback_console.is_jupyter and locals_hide_sunder is None)
- else locals_hide_sunder
- )
-
- def excepthook(
- type_: Type[BaseException],
- value: BaseException,
- traceback: Optional[TracebackType],
- ) -> None:
- traceback_console.print(
- Traceback.from_exception(
- type_,
- value,
- traceback,
- width=width,
- extra_lines=extra_lines,
- theme=theme,
- word_wrap=word_wrap,
- show_locals=show_locals,
- locals_max_length=locals_max_length,
- locals_max_string=locals_max_string,
- locals_hide_dunder=locals_hide_dunder,
- locals_hide_sunder=bool(locals_hide_sunder),
- indent_guides=indent_guides,
- suppress=suppress,
- max_frames=max_frames,
- )
- )
-
- def ipy_excepthook_closure(ip: Any) -> None: # pragma: no cover
- tb_data = {} # store information about showtraceback call
- default_showtraceback = ip.showtraceback # keep reference of default traceback
-
- def ipy_show_traceback(*args: Any, **kwargs: Any) -> None:
- """wrap the default ip.showtraceback to store info for ip._showtraceback"""
- nonlocal tb_data
- tb_data = kwargs
- default_showtraceback(*args, **kwargs)
-
- def ipy_display_traceback(
- *args: Any, is_syntax: bool = False, **kwargs: Any
- ) -> None:
- """Internally called traceback from ip._showtraceback"""
- nonlocal tb_data
- exc_tuple = ip._get_exc_info()
-
- # do not display trace on syntax error
- tb: Optional[TracebackType] = None if is_syntax else exc_tuple[2]
-
- # determine correct tb_offset
- compiled = tb_data.get("running_compiled_code", False)
- tb_offset = tb_data.get("tb_offset", 1 if compiled else 0)
- # remove ipython internal frames from trace with tb_offset
- for _ in range(tb_offset):
- if tb is None:
- break
- tb = tb.tb_next
-
- excepthook(exc_tuple[0], exc_tuple[1], tb)
- tb_data = {} # clear data upon usage
-
- # replace _showtraceback instead of showtraceback to allow ipython features such as debugging to work
- # this is also what the ipython docs recommends to modify when subclassing InteractiveShell
- ip._showtraceback = ipy_display_traceback
- # add wrapper to capture tb_data
- ip.showtraceback = ipy_show_traceback
- ip.showsyntaxerror = lambda *args, **kwargs: ipy_display_traceback(
- *args, is_syntax=True, **kwargs
- )
-
- try: # pragma: no cover
- # if within ipython, use customized traceback
- ip = get_ipython() # type: ignore[name-defined]
- ipy_excepthook_closure(ip)
- return sys.excepthook
- except Exception:
- # otherwise use default system hook
- old_excepthook = sys.excepthook
- sys.excepthook = excepthook
- return old_excepthook
-
-
-@dataclass
-class Frame:
- filename: str
- lineno: int
- name: str
- line: str = ""
- locals: Optional[Dict[str, pretty.Node]] = None
-
-
-@dataclass
-class _SyntaxError:
- offset: int
- filename: str
- line: str
- lineno: int
- msg: str
-
-
-@dataclass
-class Stack:
- exc_type: str
- exc_value: str
- syntax_error: Optional[_SyntaxError] = None
- is_cause: bool = False
- frames: List[Frame] = field(default_factory=list)
-
-
-@dataclass
-class Trace:
- stacks: List[Stack]
-
-
-class PathHighlighter(RegexHighlighter):
-    highlights = [r"(?P<dim>.*/)(?P<bold>.+)"]
-
-
-class Traceback:
- """A Console renderable that renders a traceback.
-
- Args:
- trace (Trace, optional): A `Trace` object produced from `extract`. Defaults to None, which uses
- the last exception.
- width (Optional[int], optional): Number of characters used to traceback. Defaults to 100.
- extra_lines (int, optional): Additional lines of code to render. Defaults to 3.
- theme (str, optional): Override pygments theme used in traceback.
- word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
- show_locals (bool, optional): Enable display of local variables. Defaults to False.
- indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True.
- locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
- Defaults to 10.
- locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
- locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True.
- locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False.
- suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
- max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100.
-
- """
-
- LEXERS = {
- "": "text",
- ".py": "python",
- ".pxd": "cython",
- ".pyx": "cython",
- ".pxi": "pyrex",
- }
-
- def __init__(
- self,
- trace: Optional[Trace] = None,
- *,
- width: Optional[int] = 100,
- extra_lines: int = 3,
- theme: Optional[str] = None,
- word_wrap: bool = False,
- show_locals: bool = False,
- locals_max_length: int = LOCALS_MAX_LENGTH,
- locals_max_string: int = LOCALS_MAX_STRING,
- locals_hide_dunder: bool = True,
- locals_hide_sunder: bool = False,
- indent_guides: bool = True,
- suppress: Iterable[Union[str, ModuleType]] = (),
- max_frames: int = 100,
- ):
- if trace is None:
- exc_type, exc_value, traceback = sys.exc_info()
- if exc_type is None or exc_value is None or traceback is None:
- raise ValueError(
- "Value for 'trace' required if not called in except: block"
- )
- trace = self.extract(
- exc_type, exc_value, traceback, show_locals=show_locals
- )
- self.trace = trace
- self.width = width
- self.extra_lines = extra_lines
- self.theme = Syntax.get_theme(theme or "ansi_dark")
- self.word_wrap = word_wrap
- self.show_locals = show_locals
- self.indent_guides = indent_guides
- self.locals_max_length = locals_max_length
- self.locals_max_string = locals_max_string
- self.locals_hide_dunder = locals_hide_dunder
- self.locals_hide_sunder = locals_hide_sunder
-
- self.suppress: Sequence[str] = []
- for suppress_entity in suppress:
- if not isinstance(suppress_entity, str):
- assert (
- suppress_entity.__file__ is not None
- ), f"{suppress_entity!r} must be a module with '__file__' attribute"
- path = os.path.dirname(suppress_entity.__file__)
- else:
- path = suppress_entity
- path = os.path.normpath(os.path.abspath(path))
- self.suppress.append(path)
- self.max_frames = max(4, max_frames) if max_frames > 0 else 0
-
- @classmethod
- def from_exception(
- cls,
- exc_type: Type[Any],
- exc_value: BaseException,
- traceback: Optional[TracebackType],
- *,
- width: Optional[int] = 100,
- extra_lines: int = 3,
- theme: Optional[str] = None,
- word_wrap: bool = False,
- show_locals: bool = False,
- locals_max_length: int = LOCALS_MAX_LENGTH,
- locals_max_string: int = LOCALS_MAX_STRING,
- locals_hide_dunder: bool = True,
- locals_hide_sunder: bool = False,
- indent_guides: bool = True,
- suppress: Iterable[Union[str, ModuleType]] = (),
- max_frames: int = 100,
- ) -> "Traceback":
- """Create a traceback from exception info
-
- Args:
- exc_type (Type[BaseException]): Exception type.
- exc_value (BaseException): Exception value.
- traceback (TracebackType): Python Traceback object.
- width (Optional[int], optional): Number of characters used to traceback. Defaults to 100.
- extra_lines (int, optional): Additional lines of code to render. Defaults to 3.
- theme (str, optional): Override pygments theme used in traceback.
- word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
- show_locals (bool, optional): Enable display of local variables. Defaults to False.
- indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True.
- locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
- Defaults to 10.
- locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
- locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True.
- locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False.
- suppress (Iterable[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
- max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100.
-
- Returns:
- Traceback: A Traceback instance that may be printed.
- """
- rich_traceback = cls.extract(
- exc_type,
- exc_value,
- traceback,
- show_locals=show_locals,
- locals_max_length=locals_max_length,
- locals_max_string=locals_max_string,
- locals_hide_dunder=locals_hide_dunder,
- locals_hide_sunder=locals_hide_sunder,
- )
-
- return cls(
- rich_traceback,
- width=width,
- extra_lines=extra_lines,
- theme=theme,
- word_wrap=word_wrap,
- show_locals=show_locals,
- indent_guides=indent_guides,
- locals_max_length=locals_max_length,
- locals_max_string=locals_max_string,
- locals_hide_dunder=locals_hide_dunder,
- locals_hide_sunder=locals_hide_sunder,
- suppress=suppress,
- max_frames=max_frames,
- )
-
- @classmethod
- def extract(
- cls,
- exc_type: Type[BaseException],
- exc_value: BaseException,
- traceback: Optional[TracebackType],
- *,
- show_locals: bool = False,
- locals_max_length: int = LOCALS_MAX_LENGTH,
- locals_max_string: int = LOCALS_MAX_STRING,
- locals_hide_dunder: bool = True,
- locals_hide_sunder: bool = False,
- ) -> Trace:
- """Extract traceback information.
-
- Args:
- exc_type (Type[BaseException]): Exception type.
- exc_value (BaseException): Exception value.
- traceback (TracebackType): Python Traceback object.
- show_locals (bool, optional): Enable display of local variables. Defaults to False.
- locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
- Defaults to 10.
- locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
- locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True.
- locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False.
-
- Returns:
- Trace: A Trace instance which you can use to construct a `Traceback`.
- """
-
- stacks: List[Stack] = []
- is_cause = False
-
- from pip._vendor.rich import _IMPORT_CWD
-
- def safe_str(_object: Any) -> str:
- """Don't allow exceptions from __str__ to propagate."""
- try:
- return str(_object)
- except Exception:
- return ""
-
- while True:
- stack = Stack(
- exc_type=safe_str(exc_type.__name__),
- exc_value=safe_str(exc_value),
- is_cause=is_cause,
- )
-
- if isinstance(exc_value, SyntaxError):
- stack.syntax_error = _SyntaxError(
- offset=exc_value.offset or 0,
- filename=exc_value.filename or "?",
- lineno=exc_value.lineno or 0,
- line=exc_value.text or "",
- msg=exc_value.msg,
- )
-
- stacks.append(stack)
- append = stack.frames.append
-
- def get_locals(
- iter_locals: Iterable[Tuple[str, object]]
- ) -> Iterable[Tuple[str, object]]:
- """Extract locals from an iterator of key pairs."""
- if not (locals_hide_dunder or locals_hide_sunder):
- yield from iter_locals
- return
- for key, value in iter_locals:
- if locals_hide_dunder and key.startswith("__"):
- continue
- if locals_hide_sunder and key.startswith("_"):
- continue
- yield key, value
-
- for frame_summary, line_no in walk_tb(traceback):
- filename = frame_summary.f_code.co_filename
- if filename and not filename.startswith("<"):
- if not os.path.isabs(filename):
- filename = os.path.join(_IMPORT_CWD, filename)
- if frame_summary.f_locals.get("_rich_traceback_omit", False):
- continue
-
- frame = Frame(
- filename=filename or "?",
- lineno=line_no,
- name=frame_summary.f_code.co_name,
- locals={
- key: pretty.traverse(
- value,
- max_length=locals_max_length,
- max_string=locals_max_string,
- )
- for key, value in get_locals(frame_summary.f_locals.items())
- }
- if show_locals
- else None,
- )
- append(frame)
- if frame_summary.f_locals.get("_rich_traceback_guard", False):
- del stack.frames[:]
-
- cause = getattr(exc_value, "__cause__", None)
- if cause:
- exc_type = cause.__class__
- exc_value = cause
- # __traceback__ can be None, e.g. for exceptions raised by the
- # 'multiprocessing' module
- traceback = cause.__traceback__
- is_cause = True
- continue
-
- cause = exc_value.__context__
- if cause and not getattr(exc_value, "__suppress_context__", False):
- exc_type = cause.__class__
- exc_value = cause
- traceback = cause.__traceback__
- is_cause = False
- continue
- # No cover, code is reached but coverage doesn't recognize it.
- break # pragma: no cover
-
- trace = Trace(stacks=stacks)
- return trace
-
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> RenderResult:
- theme = self.theme
- background_style = theme.get_background_style()
- token_style = theme.get_style_for_token
-
- traceback_theme = Theme(
- {
- "pretty": token_style(TextToken),
- "pygments.text": token_style(Token),
- "pygments.string": token_style(String),
- "pygments.function": token_style(Name.Function),
- "pygments.number": token_style(Number),
- "repr.indent": token_style(Comment) + Style(dim=True),
- "repr.str": token_style(String),
- "repr.brace": token_style(TextToken) + Style(bold=True),
- "repr.number": token_style(Number),
- "repr.bool_true": token_style(Keyword.Constant),
- "repr.bool_false": token_style(Keyword.Constant),
- "repr.none": token_style(Keyword.Constant),
- "scope.border": token_style(String.Delimiter),
- "scope.equals": token_style(Operator),
- "scope.key": token_style(Name),
- "scope.key.special": token_style(Name.Constant) + Style(dim=True),
- },
- inherit=False,
- )
-
- highlighter = ReprHighlighter()
- for last, stack in loop_last(reversed(self.trace.stacks)):
- if stack.frames:
- stack_renderable: ConsoleRenderable = Panel(
- self._render_stack(stack),
- title="[traceback.title]Traceback [dim](most recent call last)",
- style=background_style,
- border_style="traceback.border",
- expand=True,
- padding=(0, 1),
- )
- stack_renderable = Constrain(stack_renderable, self.width)
- with console.use_theme(traceback_theme):
- yield stack_renderable
- if stack.syntax_error is not None:
- with console.use_theme(traceback_theme):
- yield Constrain(
- Panel(
- self._render_syntax_error(stack.syntax_error),
- style=background_style,
- border_style="traceback.border.syntax_error",
- expand=True,
- padding=(0, 1),
- width=self.width,
- ),
- self.width,
- )
- yield Text.assemble(
- (f"{stack.exc_type}: ", "traceback.exc_type"),
- highlighter(stack.syntax_error.msg),
- )
- elif stack.exc_value:
- yield Text.assemble(
- (f"{stack.exc_type}: ", "traceback.exc_type"),
- highlighter(stack.exc_value),
- )
- else:
- yield Text.assemble((f"{stack.exc_type}", "traceback.exc_type"))
-
- if not last:
- if stack.is_cause:
- yield Text.from_markup(
- "\n[i]The above exception was the direct cause of the following exception:\n",
- )
- else:
- yield Text.from_markup(
- "\n[i]During handling of the above exception, another exception occurred:\n",
- )
-
- @group()
- def _render_syntax_error(self, syntax_error: _SyntaxError) -> RenderResult:
- highlighter = ReprHighlighter()
- path_highlighter = PathHighlighter()
-        if syntax_error.filename != "<stdin>":
- if os.path.exists(syntax_error.filename):
- text = Text.assemble(
- (f" {syntax_error.filename}", "pygments.string"),
- (":", "pygments.text"),
- (str(syntax_error.lineno), "pygments.number"),
- style="pygments.text",
- )
- yield path_highlighter(text)
- syntax_error_text = highlighter(syntax_error.line.rstrip())
- syntax_error_text.no_wrap = True
- offset = min(syntax_error.offset - 1, len(syntax_error_text))
- syntax_error_text.stylize("bold underline", offset, offset)
- syntax_error_text += Text.from_markup(
- "\n" + " " * offset + "[traceback.offset]▲[/]",
- style="pygments.text",
- )
- yield syntax_error_text
-
- @classmethod
- def _guess_lexer(cls, filename: str, code: str) -> str:
- ext = os.path.splitext(filename)[-1]
- if not ext:
- # No extension, look at first line to see if it is a hashbang
- # Note, this is an educated guess and not a guarantee
- # If it fails, the only downside is that the code is highlighted strangely
-            new_line_index = code.find("\n")  # find() returns -1 when there is no newline, as checked below
- first_line = code[:new_line_index] if new_line_index != -1 else code
- if first_line.startswith("#!") and "python" in first_line.lower():
- return "python"
- try:
- return cls.LEXERS.get(ext) or guess_lexer_for_filename(filename, code).name
- except ClassNotFound:
- return "text"
-
- @group()
- def _render_stack(self, stack: Stack) -> RenderResult:
- path_highlighter = PathHighlighter()
- theme = self.theme
-
- def read_code(filename: str) -> str:
- """Read files, and cache results on filename.
-
- Args:
- filename (str): Filename to read
-
- Returns:
- str: Contents of file
- """
- return "".join(linecache.getlines(filename))
-
- def render_locals(frame: Frame) -> Iterable[ConsoleRenderable]:
- if frame.locals:
- yield render_scope(
- frame.locals,
- title="locals",
- indent_guides=self.indent_guides,
- max_length=self.locals_max_length,
- max_string=self.locals_max_string,
- )
-
- exclude_frames: Optional[range] = None
- if self.max_frames != 0:
- exclude_frames = range(
- self.max_frames // 2,
- len(stack.frames) - self.max_frames // 2,
- )
-
- excluded = False
- for frame_index, frame in enumerate(stack.frames):
-
- if exclude_frames and frame_index in exclude_frames:
- excluded = True
- continue
-
- if excluded:
- assert exclude_frames is not None
- yield Text(
- f"\n... {len(exclude_frames)} frames hidden ...",
- justify="center",
- style="traceback.error",
- )
- excluded = False
-
- first = frame_index == 0
- frame_filename = frame.filename
- suppressed = any(frame_filename.startswith(path) for path in self.suppress)
-
- if os.path.exists(frame.filename):
- text = Text.assemble(
- path_highlighter(Text(frame.filename, style="pygments.string")),
- (":", "pygments.text"),
- (str(frame.lineno), "pygments.number"),
- " in ",
- (frame.name, "pygments.function"),
- style="pygments.text",
- )
- else:
- text = Text.assemble(
- "in ",
- (frame.name, "pygments.function"),
- (":", "pygments.text"),
- (str(frame.lineno), "pygments.number"),
- style="pygments.text",
- )
- if not frame.filename.startswith("<") and not first:
- yield ""
- yield text
- if frame.filename.startswith("<"):
- yield from render_locals(frame)
- continue
- if not suppressed:
- try:
- code = read_code(frame.filename)
- if not code:
- # code may be an empty string if the file doesn't exist, OR
- # if the traceback filename is generated dynamically
- continue
- lexer_name = self._guess_lexer(frame.filename, code)
- syntax = Syntax(
- code,
- lexer_name,
- theme=theme,
- line_numbers=True,
- line_range=(
- frame.lineno - self.extra_lines,
- frame.lineno + self.extra_lines,
- ),
- highlight_lines={frame.lineno},
- word_wrap=self.word_wrap,
- code_width=88,
- indent_guides=self.indent_guides,
- dedent=False,
- )
- yield ""
- except Exception as error:
- yield Text.assemble(
- (f"\n{error}", "traceback.error"),
- )
- else:
- yield (
- Columns(
- [
- syntax,
- *render_locals(frame),
- ],
- padding=1,
- )
- if frame.locals
- else syntax
- )
-
-
-if __name__ == "__main__": # pragma: no cover
-
- from .console import Console
-
- console = Console()
- import sys
-
- def bar(a: Any) -> None: # 这是对亚洲语言支持的测试。面对模棱两可的想法,拒绝猜测的诱惑
- one = 1
- print(one / a)
-
- def foo(a: Any) -> None:
- _rich_traceback_guard = True
- zed = {
- "characters": {
- "Paul Atreides",
- "Vladimir Harkonnen",
- "Thufir Hawat",
- "Duncan Idaho",
- },
- "atomic_types": (None, False, True),
- }
- bar(a)
-
- def error() -> None:
-
- try:
- try:
- foo(0)
- except:
- slfkjsldkfj # type: ignore[name-defined]
- except:
- console.print_exception(show_locals=True)
-
- error()
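This is pip's vendored copy of rich's traceback module and is not meant to be imported by user code; the same handler ships in the standalone rich package. A hedged usage sketch of that public API:

```python
# Installs the rich traceback handler for the rest of the process (standalone `rich` package).
from rich.traceback import install

previous_hook = install(show_locals=True, width=100, extra_lines=3)  # returns the replaced excepthook

def divide(a, b):
    return a / b

divide(1, 0)  # the ZeroDivisionError is rendered with syntax highlighting and local variables
```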
diff --git a/spaces/Atualli/yoloxTeste/yoloxdetect2/helpers.py b/spaces/Atualli/yoloxTeste/yoloxdetect2/helpers.py
deleted file mode 100644
index 8b93202c729469976931eb78f698e949a030019b..0000000000000000000000000000000000000000
--- a/spaces/Atualli/yoloxTeste/yoloxdetect2/helpers.py
+++ /dev/null
@@ -1,111 +0,0 @@
-from yoloxdetect2.utils.downloads import attempt_download_from_hub, attempt_download
-from yolox.data.datasets import COCO_CLASSES
-from yolox.data.data_augment import preproc
-from yolox.utils import postprocess, vis
-import importlib
-import torch
-import cv2
-import os
-from PIL import Image
-from torchvision import transforms
-import numpy
-
-class YoloxDetector2:
- def __init__(
- self,
- model_path: str,
- config_path: str,
- device: str = "cpu",
- hf_model: bool = False,
- ):
-
- self.device = device
- self.config_path = config_path
- self.classes = COCO_CLASSES
- self.conf = 0.3
- self.iou = 0.45
- self.show = False
- self.save = True
- self.torchyolo = False
-
- if self.save:
- self.save_path = 'output/result.jpg'
-
- if hf_model:
- self.model_path = attempt_download_from_hub(model_path)
-
- else:
- self.model_path = attempt_download(model_path)
-
- #self.model_path = model_path
- self.load_model()
-
-
- def load_model(self):
- current_exp = importlib.import_module(self.config_path)
- exp = current_exp.Exp()
-
- model = exp.get_model()
- model.to(self.device)
- model.eval()
- ckpt = torch.load(self.model_path, map_location=self.device)
- model.load_state_dict(ckpt["model"])
- self.model = model
-
-
- def predict(self, image_path, image_size):
- #image = cv2.imread(image_path)
-
- #img = transforms.ToTensor()(image_path).unsqueeze(0)
-        image = cv2.cvtColor(numpy.array(image_path), cv2.COLOR_RGB2BGR)  # image_path is a PIL image despite its name
- if image_size is not None:
- ratio = min(image_size / image.shape[0], image_size / image.shape[1])
- img, _ = preproc(image, input_size=(image_size, image_size))
- img = torch.from_numpy(img).to(self.device).unsqueeze(0).float()
- else:
-            manual_size = 640
-            ratio = min(manual_size / image.shape[0], manual_size / image.shape[1])
-            img, _ = preproc(image, input_size=(manual_size, manual_size))
- img = torch.from_numpy(img).to(self.device).unsqueeze(0).float()
-
- prediction_result = self.model(img)
- original_predictions = postprocess(
- prediction=prediction_result,
- num_classes= len(COCO_CLASSES),
- conf_thre=self.conf,
- nms_thre=self.iou)[0]
-
-        if original_predictions is None:
- return None
- output = original_predictions.cpu()
- bboxes = output[:, 0:4]
- bboxes /= ratio
- cls = output[:, 6]
- scores = output[:, 4] * output[:, 5]
- if self.torchyolo is False:
- vis_res = vis(
- image,
- bboxes,
- scores,
- cls,
- self.conf,
- COCO_CLASSES,
- )
- if self.show:
- cv2.imshow("result", vis_res)
- cv2.waitKey(0)
- cv2.destroyAllWindows()
- elif self.save:
- save_dir = self.save_path[:self.save_path.rfind('/')]
- if not os.path.exists(save_dir):
- os.makedirs(save_dir)
- cv2.imwrite(self.save_path, vis_res)
- return self.save_path
-
- else:
- return vis_res
- else:
- object_predictions_list = [bboxes, scores, cls, COCO_CLASSES]
- return object_predictions_list
-
-
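A hedged usage sketch based on the class above; the Hugging Face repo id, the exp/config module name and the input image are placeholders rather than values shipped with this Space:

```python
# Illustrative only: model_path and config_path below are hypothetical placeholders.
from PIL import Image
from yoloxdetect2.helpers import YoloxDetector2

detector = YoloxDetector2(
    model_path="some-user/yolox-s-checkpoint",  # hypothetical Hugging Face repo id
    config_path="configs.yolox_s",              # hypothetical module exposing an Exp class
    device="cpu",
    hf_model=True,
)
detector.conf = 0.35   # confidence threshold passed to postprocess()
detector.save = True   # predict() then writes output/result.jpg and returns its path

# predict() expects a PIL image (converted to BGR internally) and a target size
result_path = detector.predict(Image.open("example.jpg"), image_size=640)
print(result_path)
```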
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py
deleted file mode 100644
index 369fb884930c5dd82f94024c45303dafaab14d66..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from abc import ABCMeta, abstractmethod
-import torch.nn as nn
-
-from detectron2.layers import ShapeSpec
-
-__all__ = ["Backbone"]
-
-
-class Backbone(nn.Module, metaclass=ABCMeta):
- """
- Abstract base class for network backbones.
- """
-
- def __init__(self):
- """
- The `__init__` method of any subclass can specify its own set of arguments.
- """
- super().__init__()
-
- @abstractmethod
- def forward(self):
- """
- Subclasses must override this method, but adhere to the same return type.
-
- Returns:
- dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor
- """
- pass
-
- @property
- def size_divisibility(self) -> int:
- """
- Some backbones require the input height and width to be divisible by a
- specific integer. This is typically true for encoder / decoder type networks
- with lateral connection (e.g., FPN) for which feature maps need to match
- dimension in the "bottom up" and "top down" paths. Set to 0 if no specific
- input size divisibility is required.
- """
- return 0
-
- def output_shape(self):
- """
- Returns:
- dict[str->ShapeSpec]
- """
- # this is a backward-compatible default
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
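The docstrings above define the contract: forward() returns a feature-name-to-tensor dict, and the default output_shape() reads the _out_feature_channels and _out_feature_strides attributes filled in by the subclass. A minimal illustrative subclass (not part of detectron2):

```python
# Toy backbone with a single output feature map named "toy1".
import torch
import torch.nn as nn

class ToyBackbone(Backbone):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
        self._out_features = ["toy1"]
        self._out_feature_channels = {"toy1": 16}
        self._out_feature_strides = {"toy1": 2}

    def forward(self, x):
        return {"toy1": self.conv(x)}

backbone = ToyBackbone()
features = backbone(torch.randn(1, 3, 64, 64))
print(features["toy1"].shape)   # torch.Size([1, 16, 32, 32])
print(backbone.output_shape())  # {'toy1': ShapeSpec(channels=16, ..., stride=2)}
```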
diff --git a/spaces/Bajr/softly/run.sh b/spaces/Bajr/softly/run.sh
deleted file mode 100644
index 4e71a95762f5afd99f02268863d2dddf90fac4e8..0000000000000000000000000000000000000000
--- a/spaces/Bajr/softly/run.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/sh
-# Fetch the app source configured via GIT_URL, copy the Space's secrets and greeting into it,
-# then install dependencies, build, and serve.
-cd source_code
-git clone ${GIT_URL} .
-cp ../.env .
-cp ../greeting.md .
-npm install
-npm run build
-npm start
diff --git a/spaces/Benson/text-generation/Examples/Coche De Carreras De Deriva 2 1.22.0 Apk.md b/spaces/Benson/text-generation/Examples/Coche De Carreras De Deriva 2 1.22.0 Apk.md
deleted file mode 100644
index 700605ac19c07a6774fa14795aeeea6f951694d9..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Coche De Carreras De Deriva 2 1.22.0 Apk.md
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
CarX Drift Racing 2: a realistic and exciting racing game for Android
-
If you are a fan of racing games, you may want to take a look at CarX Drift Racing 2, one of the most realistic and exciting racing games for Android devices. CarX Drift Racing 2 is a sequel to the popular game CarX Drift Racing, which has more than 50 million downloads on Google Play. In this game, you can experience the thrill of drifting, racing, and tuning your own car on various tracks and modes. You can also compete with other players online and join clubs to show off your skills.
-
What is CarX Drift Racing 2?
-
CarX Drift Racing 2 is a racing game developed by CarX Technologies, a company that specializes in creating realistic car physics and graphics for games. The game was released in December 2018 and has been updated regularly with new features and improvements. The latest version of the game is 1.22.0, released on June 16, 2023.
One of the main attractions of CarX Drift Racing 2 is its stunning graphics and physics, which make the game look and feel like a real racing simulator. The game uses advanced 3D models, textures, lighting, shadows and reflections to create realistic environments and cars. It also uses a sophisticated physics engine that simulates the behavior of different parts of the car, such as the tires, suspension, engine, transmission and brakes. The game also supports high-resolution displays and a 60 FPS mode for smooth gameplay.
-
Multiple game modes and tracks
-
-
The game also has more than 70 tracks to choose from, each with its own layout, scenery, weather and time of day. You can race in different locations around the world, such as Japan, the USA, Russia and Norway. You can also customize the track settings, such as the number of laps, opponents and traffic.
-
Customizable cars and tuning
-
A third feature of CarX Drift Racing 2 is its customizable cars and tuning system, which let you create your own unique car and optimize its performance. The game has more than 80 cars to choose from, each with its own characteristics, such as speed, acceleration and handling. You can also customize the appearance of your car by changing its color, paint job, decals, wheels and spoilers.
-
Beyond appearance, you can also tune your car by adjusting its parameters, such as engine power, torque, gear ratio, suspension stiffness, camber angle and brake force. You can also use different types of tires, such as slicks, semi-slicks or street tires, depending on track conditions. You can save your tuning setups and load them for different tracks and modes. Tuning your car can make a big difference in your performance and results.
-
Online multiplayer and clubs
-
A fourth feature of CarX Drift Racing 2 is its online multiplayer and club system, which lets you interact and compete with other players from all over the world. You can join or create your own club, invite your friends, chat with other members, and take part in club events and tournaments. You can also challenge other players to duels, ghost races or tandem drifts, and earn rewards and rankings. You can also watch other players' replays and learn from their techniques.
-
How to download and install the CarX Drift Racing 2 APK?
-
Requirements and compatibility
-
To download and install the CarX Drift Racing 2 APK, you need an Android device that meets the following requirements:
-
-
-
RAM: 2 GB or more
-
Storage: 1.5 GB or more
-
Internet connection: required for the online features
-
-
The game is compatible with most Android devices, but some models may have problems with graphics or performance. You can check the list of compatible devices on the game's official website.
-
Steps to download and install
-
To download and install the CarX Drift Racing 2 APK, follow these steps:
-
-
-
Go to the game's official website and click the "Download APK" button.
-
Allow your device to download files from unknown sources by going to Settings > Security > Unknown Sources.
-
Find the downloaded APK file in your device's file manager and tap it to start the installation.
-
Follow the on-screen instructions and wait for the installation to finish.
-
Launch the game and enjoy!
-
-
Tips and tricks for playing CarX Drift Racing 2
-
Master the drifting technique
-
The most important skill you need to master in CarX Drift Racing 2 is drifting, the art of sliding your car sideways while keeping control and speed. Drifting can help you earn more points, speed and reputation in the game. To drift effectively, practice the following steps:
-
-
Approach a corner at high speed and tap the brake pedal to start a drift.
-
Steer your car in the direction of the drift and use the throttle to control your car's angle and speed.
-
Use the handbrake to adjust your car's position and balance during the drift.
-
Exit the drift smoothly by releasing the throttle and steering your car straight.
-
-
You can also use different camera angles, such as cockpit, hood or chase, to get a better view of your car and the track. You can also use your device's gyroscope to steer your car by tilting it left or right.
-
-
To improve your performance and results in CarX Drift Racing 2, you need to upgrade your car regularly by spending money and reputation points. You can upgrade different aspects of your car, such as the engine, turbo, nitro, transmission, suspension, brakes and tires. Upgrading your car can increase its power, speed, handling and stability. You can also unlock new cars by completing certain tasks or missions in Career mode or by buying them with real money.
-
Join a club and compete with others
-
To get the most out of CarX Drift Racing 2, you should join a club and compete with other players online. Joining a club gives you access to exclusive events, tournaments, rewards and rankings. You can also chat with other club members, share tips and tricks, and challenge them to duels or tandem drifts. You can also create your own club and invite your friends or other players to join. Competing with others can help you improve your skills, earn more money and reputation points, and have more fun.
-
Conclusion
-
CarX Drift Racing 2 is a realistic and exciting racing game for Android devices that lets you experience the thrill of drifting, racing and tuning your own car on various tracks and modes. The game has stunning graphics and physics, multiple game modes and tracks, customizable cars and tuning, online multiplayer and clubs, and many more features that make it one of the best racing games on Google Play. If you are looking for a challenging and fun racing game, you should download and install the CarX Drift Racing 2 APK and enjoy the ride!
-
Frequently asked questions
-
Here are some frequently asked questions about CarX Drift Racing 2:
-
-
How can I get more money and reputation points in CarX Drift Racing 2?
-
-
How can I unlock new cars in CarX Drift Racing 2?
-
You can unlock new cars by completing certain tasks or missions in Career mode or by buying them with real money. You can also get some cars for free by logging in daily, taking part in special events, or joining clubs.
-
How can I change the controls and settings in CarX Drift Racing 2?
-
You can change the controls and settings in CarX Drift Racing 2 by going to Settings > Controls or Settings > Graphics. You can choose between different control options, such as buttons, tilt or steering wheel. You can also adjust the graphics quality, sound volume and language.
-
How can I contact the developers of CarX Drift Racing 2?
-
You can contact the developers of CarX Drift Racing 2 by going to Settings > Support or by visiting their official website, Facebook page or Instagram account. You can also email them at support@carx-tech.com.
-
What are the minimum requirements for CarX Drift Racing 2?
-
The minimum requirements for CarX Drift Racing 2 are Android 5.0 or higher, 2 GB of RAM or more, 1.5 GB of storage or more, and an Internet connection.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar 60 Segundos Reatomized Pc.md b/spaces/Benson/text-generation/Examples/Descargar 60 Segundos Reatomized Pc.md
deleted file mode 100644
index 6505a11c6accc1eea448987ff07df6926ee0a294..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar 60 Segundos Reatomized Pc.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
Download 60 Seconds Reatomized PC: a guide to surviving the nuclear apocalypse
-
Do you have what it takes to survive a nuclear disaster? Can you gather supplies, rescue your family, and stay alive in your fallout shelter? If you are looking for a challenging and hilarious post-apocalyptic adventure game, you should try 60 Seconds Reatomized for PC. In this article, we will tell you everything you need to know about the game, including what it is, how to play it, and how to download it on your PC.
60 Seconds Reatomized is a remastered edition of the classic atomic adventure game 60 Seconds!, developed by Robot Gentleman. It was released in July 2019 and features 4K support, refreshed 2D graphics and hand-drawn 3D textures, a new interactive menu, an improved UI system, a technical overhaul and, of course... new content!
-
A remastered edition of the classic atomic adventure game
-
60 Seconds Reatomized is based on the original game, 60 Seconds!, which was released in May 2015. The premise is simple: you only have 60 seconds left before a nuclear bomb reaches your neighborhood. You have to run around your house and collect as many items and family members as possible before heading to your fallout shelter. But that is just the beginning. Once you are in the shelter, you have to manage your resources, make tough decisions, face random events, and maybe survive... or not.
-
Features and gameplay of 60 Seconds Reatomized
-
60 Seconds Reatomized offers many features and game modes that will keep you entertained for hours. Some of them are:
-
-
-
New game mode: Survival Challenges - short, unique stories that will put your survival skills to the test.
-
New opportunities to escape the wasteland in the form of a story that spans multiple playthroughs.
-
-
New sounds, art and unlockable visual content that will add some color to your fallout shelter.
-
New achievements to unlock and brag about.
-
-
The gameplay of 60 Seconds Reatomized is divided into two phases: scavenge and survive. In the scavenge phase, you use the arrow keys or the mouse to control Ted, the protagonist, as he runs around his house grabbing items and people. He can only carry a limited number of items at a time, so you have to choose wisely what to take with you. You also have to avoid obstacles such as furniture or fire that will slow you down or hurt you. You have to reach the shelter before time runs out or you will die.
-
In the survival phase, you use the mouse to interact with your shelter and its inhabitants. You have to ration food and water, use items such as the radio or the med kit, read your journal, and make decisions that will affect your fate. You will also run into random events that will challenge or help you. For example, you may get a knock on the door from a stranger who wants to trade or join you, or you may hear a military broadcast that tells you how to escape. Be careful, though, as not everything is what it seems. You may also face dangers such as mutant cockroaches, raiders or radiation sickness. Your goal is to survive until you find a way out, or die trying.
-
System requirements and compatibility of 60 Seconds Reatomized
-
60 Seconds Reatomized is compatible with Windows, Mac OS and Linux. You can play it on your PC or laptop as long as it meets the minimum system requirements. Here are the specs you need to run the game smoothly:
-
-
-
| OS | Processor | Memory | Graphics | Storage |
| --- | --- | --- | --- | --- |
| Windows 7/8/10 64-bit | Intel Core 2 Duo 2.0+ GHz or an equivalent AMD CPU | 4 GB of RAM | | 4 GB of available space |
| Mac OS X 10.9+ | Intel Core i5-2430M or better | 4 GB of RAM | NVIDIA GeForce GT 650M, AMD Radeon HD 6970M or better | 4 GB of available space |
| Ubuntu 14.04 LTS 64-bit or newer | Intel Core 2 Duo 2.0+ GHz or an equivalent AMD CPU | | | |
-
If you are interested in playing 60 Seconds Reatomized on PC, you have several options for downloading it. You can choose between Steam, BlueStacks or G2A, depending on your preference and budget. Let's take a look at each option and see how to download the game from them.
-
Downloading from Steam
-
Steam is the most popular and reliable platform for downloading and playing PC games. It offers many benefits, such as cloud saves, achievements, community features and more. You can also access Steam on any device with your account and play your games anywhere. Here is how to download 60 Seconds Reatomized for PC from Steam:
-
Steps to download from Steam
-
-
Create a Steam account if you do not already have one. You can do so for free on Steam.
-
Launch the Steam client and sign in with your account.
-
Search for 60 Seconds Reatomized in the Steam store, or click this BlueStacks link.
-
Download and install the BlueStacks app player on your PC. You can get it from the website link.
-
Install the game and wait for it to finish.
-
The game will appear on the home screen and you can start playing.
-
-
Pros and cons of downloading from BlueStacks
-
-
-
Pros:
-
You can play the game on your PC as well as on your mobile device with the same account.
-
You can enjoy the game with a bigger screen, better graphics and faster performance.
-
You can customize the game's settings, controls and keyboard shortcuts to your liking.
-
You can use other Android apps and games on your PC with BlueStacks.
-
Cons:
-
You need a stable Internet connection to download and play the game.
-
You need enough storage space on your PC to install BlueStacks and the game.
-
You need a compatible PC that meets the system requirements of BlueStacks and the game.
-
You may run into some technical issues or bugs with the game or BlueStacks.
-
-
Downloading from G2A
-
If you want to get 60 Seconds Reatomized PC for a cheaper price, you can try G2A. G2A is an online marketplace that sells digital products, such as games, software and gift cards. You can buy and sell products from other users or verified sellers. You can also find discounts, deals and bundles that will save you money. Here is how to download 60 Seconds Reatomized PC from G2A:
-
Steps to download from G2A
-
-
Create a G2A account if you do not have one. You can do so for free on the website.
-
Select the product that suits your needs and budget. You can compare different offers from different sellers and check their ratings and reviews.
-
Add the product to your cart and proceed to checkout.
-
Select your payment method and complete the purchase.
-
You will receive an email with a code or a link to redeem your product.
-
-
If you receive a link, you have to follow it and download the game directly from there.
-
-
Pros and cons of downloading from G2A
-
Downloading from G2A has some advantages and disadvantages that you should keep in mind before buying the game. Here are some of them:
-
-
Pros:
-
You can get the game for a much lower price than on other platforms.
-
You can find discounts, deals and bundles that will give you more value for your money.
-
You can choose between different offers from different sellers and find the best one for you.
-
You can use several payment methods, such as credit card, PayPal or cryptocurrency.
-
Cons:
-
You need a stable Internet connection to download and play the game.
-
You need enough storage space on your PC to install the game.
-
You need a compatible PC that meets the game's system requirements.
-
You may run into risks or scams with certain sellers or products. You have to be careful and check their ratings and reviews before buying anything.
-
-
Conclusion
-
60 Seconds Reatomized PC is a fun and challenging game that will put your survival skills to the test in a nuclear apocalypse. You have to scavenge for supplies, rescue your family, and stay alive in your fallout shelter. You can download the game from Steam, BlueStacks or G2A, depending on your preference and budget. Each option has pros and cons that you should weigh before making a purchase. We hope this article has helped you learn more about 60 Seconds Reatomized PC and how to download it. If you have any questions or comments, feel free to leave a comment below. Happy surviving!
-
Frequently asked questions
-
Here are some frequently asked questions about 60 Seconds Reatomized PC, with their answers:
-
-
Is 60 Seconds Reatomized PC free?
-
-
Is 60 Seconds Reatomized PC multiplayer?
-
No, 60 Seconds Reatomized PC is not multiplayer. It is a single-player game that can be played offline or online.
-
Is 60 Seconds Reatomized PC different from 60 Seconds!?
-
Yes, 60 Seconds Reatomized PC is different from 60 Seconds!. It is a remastered edition of the original game that features improved graphics, new content and more.
-
How long is 60 Seconds Reatomized PC?
-
The length of 60 Seconds Reatomized PC depends on your choices and luck. A single playthrough can last from a few minutes to a few hours. You can replay the game many times and experience different outcomes and scenarios.
-
What are the best tips and tricks for 60 Seconds Reatomized PC?
-
Some of the best tips and tricks for 60 Seconds Reatomized PC are:
-
-
Plan ahead and memorize the layout of your house before the scavenge phase.
-
Prioritize items that are essential for survival, such as food, water, the radio and the med kit.
-
Do not forget to grab your family members and pets. They will help you in the shelter and provide company.
-
Be careful with your decisions and actions in the shelter. They will have consequences and affect your chances of survival.
-
Use your items wisely and sparingly. You never know when you will need them.
-
Pay attention to radio broadcasts and other clues. They will give you hints on how to escape or survive.
-
Do not trust everyone who knocks on your door. Some of them may be friendly, but some of them may be dangerous.
-
Have fun and enjoy the game's humor and absurdity.
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat/src/lib/types/Settings.ts b/spaces/BetterAPI/BetterChat/src/lib/types/Settings.ts
deleted file mode 100644
index f028db02f7b8d021e06939c187de11624af4737f..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/src/lib/types/Settings.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import type { Timestamps } from "./Timestamps";
-
-export interface Settings extends Timestamps {
- sessionId: string;
-
- /**
-	 * Note: Only conversations with this setting explicitly set to true should be shared.
- *
- * This setting is explicitly set to true when users accept the ethics modal.
- * */
- shareConversationsWithModelAuthors: boolean;
- ethicsModalAcceptedAt: Date | null;
-}
diff --git a/spaces/Big-Web/MMSD/README.md b/spaces/Big-Web/MMSD/README.md
deleted file mode 100644
index 8ec19ddd30a955ca7f1345e7845cade4c7c7a96a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MMSD
-emoji: 😻
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/freeze.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/freeze.py
deleted file mode 100644
index 5fa6d39b2c7c74635f9570c1e1665d03a45024b2..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/freeze.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import sys
-from optparse import Values
-from typing import List
-
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.operations.freeze import freeze
-from pip._internal.utils.compat import stdlib_pkgs
-
-DEV_PKGS = {"pip", "setuptools", "distribute", "wheel"}
-
-
-class FreezeCommand(Command):
- """
- Output installed packages in requirements format.
-
- packages are listed in a case-insensitive sorted order.
- """
-
- usage = """
- %prog [options]"""
- log_streams = ("ext://sys.stderr", "ext://sys.stderr")
-
- def add_options(self) -> None:
- self.cmd_opts.add_option(
- "-r",
- "--requirement",
- dest="requirements",
- action="append",
- default=[],
- metavar="file",
- help=(
- "Use the order in the given requirements file and its "
- "comments when generating output. This option can be "
- "used multiple times."
- ),
- )
- self.cmd_opts.add_option(
- "-l",
- "--local",
- dest="local",
- action="store_true",
- default=False,
- help=(
- "If in a virtualenv that has global access, do not output "
- "globally-installed packages."
- ),
- )
- self.cmd_opts.add_option(
- "--user",
- dest="user",
- action="store_true",
- default=False,
- help="Only output packages installed in user-site.",
- )
- self.cmd_opts.add_option(cmdoptions.list_path())
- self.cmd_opts.add_option(
- "--all",
- dest="freeze_all",
- action="store_true",
- help=(
- "Do not skip these packages in the output:"
- " {}".format(", ".join(DEV_PKGS))
- ),
- )
- self.cmd_opts.add_option(
- "--exclude-editable",
- dest="exclude_editable",
- action="store_true",
- help="Exclude editable package from output.",
- )
- self.cmd_opts.add_option(cmdoptions.list_exclude())
-
- self.parser.insert_option_group(0, self.cmd_opts)
-
- def run(self, options: Values, args: List[str]) -> int:
- skip = set(stdlib_pkgs)
- if not options.freeze_all:
- skip.update(DEV_PKGS)
-
- if options.excludes:
- skip.update(options.excludes)
-
- cmdoptions.check_list_path_option(options)
-
- for line in freeze(
- requirement=options.requirements,
- local_only=options.local,
- user_only=options.user,
- paths=options.path,
- isolated=options.isolated_mode,
- skip=skip,
- exclude_editable=options.exclude_editable,
- ):
- sys.stdout.write(line + "\n")
- return SUCCESS
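For reference, a hedged sketch of driving the same freeze() generator directly with the defaults this command computes; pip's internals are not a stable API, so treat this as illustrative only and expect it to need adjusting across pip versions.

```python
# Sketch: replicate `pip freeze` defaults by calling the internal generator directly.
from pip._internal.operations.freeze import freeze
from pip._internal.utils.compat import stdlib_pkgs

skip = set(stdlib_pkgs) | {"pip", "setuptools", "distribute", "wheel"}  # DEV_PKGS
for line in freeze(
    local_only=False,
    user_only=False,
    paths=None,
    isolated=False,
    skip=skip,
    exclude_editable=False,
):
    print(line)
```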
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/__init__.py
deleted file mode 100644
index 24d6a5dd31fe33b03f90ed0f9ee465253686900c..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-"""For modules related to installing packages.
-"""
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/catalog.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/catalog.py
deleted file mode 100644
index beb4756024286fb53801a0b5ec2a2b3a91824eb0..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/catalog.py
+++ /dev/null
@@ -1,221 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import copy
-import logging
-import types
-from typing import List
-
-from detectron2.utils.logger import log_first_n
-
-__all__ = ["DatasetCatalog", "MetadataCatalog"]
-
-
-class DatasetCatalog(object):
- """
- A catalog that stores information about the datasets and how to obtain them.
-
- It contains a mapping from strings
- (which are names that identify a dataset, e.g. "coco_2014_train")
- to a function which parses the dataset and returns the samples in the
- format of `list[dict]`.
-
- The returned dicts should be in Detectron2 Dataset format (See DATASETS.md for details)
- if used with the data loader functionalities in `data/build.py,data/detection_transform.py`.
-
- The purpose of having this catalog is to make it easy to choose
- different datasets, by just using the strings in the config.
- """
-
- _REGISTERED = {}
-
- @staticmethod
- def register(name, func):
- """
- Args:
- name (str): the name that identifies a dataset, e.g. "coco_2014_train".
- func (callable): a callable which takes no arguments and returns a list of dicts.
- """
- assert callable(func), "You must register a function with `DatasetCatalog.register`!"
- assert name not in DatasetCatalog._REGISTERED, "Dataset '{}' is already registered!".format(
- name
- )
- DatasetCatalog._REGISTERED[name] = func
-
- @staticmethod
- def get(name):
- """
- Call the registered function and return its results.
-
- Args:
- name (str): the name that identifies a dataset, e.g. "coco_2014_train".
-
- Returns:
-            list[dict]: dataset annotations.
- """
- try:
- f = DatasetCatalog._REGISTERED[name]
- except KeyError:
- raise KeyError(
- "Dataset '{}' is not registered! Available datasets are: {}".format(
- name, ", ".join(DatasetCatalog._REGISTERED.keys())
- )
- )
- return f()
-
- @staticmethod
- def list() -> List[str]:
- """
- List all registered datasets.
-
- Returns:
- list[str]
- """
- return list(DatasetCatalog._REGISTERED.keys())
-
- @staticmethod
- def clear():
- """
- Remove all registered dataset.
- """
- DatasetCatalog._REGISTERED.clear()
-
-
-class Metadata(types.SimpleNamespace):
- """
- A class that supports simple attribute setter/getter.
- It is intended for storing metadata of a dataset and make it accessible globally.
-
- Examples:
-
- .. code-block:: python
-
- # somewhere when you load the data:
- MetadataCatalog.get("mydataset").thing_classes = ["person", "dog"]
-
- # somewhere when you print statistics or visualize:
- classes = MetadataCatalog.get("mydataset").thing_classes
- """
-
- # the name of the dataset
- # set default to N/A so that `self.name` in the errors will not trigger getattr again
- name: str = "N/A"
-
- _RENAMED = {
- "class_names": "thing_classes",
- "dataset_id_to_contiguous_id": "thing_dataset_id_to_contiguous_id",
- "stuff_class_names": "stuff_classes",
- }
-
- def __getattr__(self, key):
- if key in self._RENAMED:
- log_first_n(
- logging.WARNING,
- "Metadata '{}' was renamed to '{}'!".format(key, self._RENAMED[key]),
- n=10,
- )
- return getattr(self, self._RENAMED[key])
-
- raise AttributeError(
- "Attribute '{}' does not exist in the metadata of '{}'. Available keys are {}.".format(
- key, self.name, str(self.__dict__.keys())
- )
- )
-
- def __setattr__(self, key, val):
- if key in self._RENAMED:
- log_first_n(
- logging.WARNING,
- "Metadata '{}' was renamed to '{}'!".format(key, self._RENAMED[key]),
- n=10,
- )
- setattr(self, self._RENAMED[key], val)
-
- # Ensure that metadata of the same name stays consistent
- try:
- oldval = getattr(self, key)
- assert oldval == val, (
- "Attribute '{}' in the metadata of '{}' cannot be set "
- "to a different value!\n{} != {}".format(key, self.name, oldval, val)
- )
- except AttributeError:
- super().__setattr__(key, val)
-
- def as_dict(self):
- """
- Returns all the metadata as a dict.
- Note that modifications to the returned dict will not reflect on the Metadata object.
- """
- return copy.copy(self.__dict__)
-
- def set(self, **kwargs):
- """
- Set multiple metadata with kwargs.
- """
- for k, v in kwargs.items():
- setattr(self, k, v)
- return self
-
- def get(self, key, default=None):
- """
- Access an attribute and return its value if exists.
- Otherwise return default.
- """
- try:
- return getattr(self, key)
- except AttributeError:
- return default
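
The `__setattr__` guard above makes every metadata key behave like a write-once constant. A minimal sketch of that behaviour, assuming this module is importable as `detectron2.data.catalog` (the exact import path may differ between detectron2 versions):

```python
from detectron2.data.catalog import Metadata

meta = Metadata(name="my_dataset")
meta.thing_classes = ["person", "dog"]      # first assignment is stored
meta.thing_classes = ["person", "dog"]      # re-assigning an identical value is a no-op

try:
    meta.thing_classes = ["cat"]            # a conflicting value trips the consistency assert
except AssertionError as err:
    print("rejected:", err)

print(meta.get("stuff_classes", default=[]))  # unset keys fall back to the default
print(meta.as_dict())                         # shallow copy: {'name': 'my_dataset', 'thing_classes': [...]}
```
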
-
-
-class MetadataCatalog:
- """
- MetadataCatalog provides access to "Metadata" of a given dataset.
-
- The metadata associated with a certain name is a singleton: once created,
- the metadata will stay alive and will be returned by future calls to `get(name)`.
-
- It's like global variables, so don't abuse it.
- It's meant for storing knowledge that's constant and shared across the execution
- of the program, e.g.: the class names in COCO.
- """
-
- _NAME_TO_META = {}
-
- @staticmethod
- def get(name):
- """
- Args:
- name (str): name of a dataset (e.g. coco_2014_train).
-
- Returns:
-            Metadata: The :class:`Metadata` instance associated with this name,
-                or an empty one created on the fly if none is available.
- """
- assert len(name)
- if name in MetadataCatalog._NAME_TO_META:
- ret = MetadataCatalog._NAME_TO_META[name]
- # TODO this is for the BC breaking change in D15247032.
- # Remove this in the future.
- if hasattr(ret, "dataset_name"):
- logger = logging.getLogger()
- logger.warning(
- """
-The 'dataset_name' key in metadata is no longer used for
-sharing metadata among splits after D15247032! Add
-metadata to each split (now called dataset) separately!
- """
- )
- parent_meta = MetadataCatalog.get(ret.dataset_name).as_dict()
- ret.set(**parent_meta)
- return ret
- else:
- m = MetadataCatalog._NAME_TO_META[name] = Metadata(name=name)
- return m
-
- @staticmethod
- def list():
- """
- List all registered metadata.
-
- Returns:
- list[str]: keys (names of datasets) of all registered metadata
- """
- return list(MetadataCatalog._NAME_TO_META.keys())
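
Taken together, the two catalogs are typically used like the following sketch (the dataset name, file paths, and class names are placeholder values; `detectron2.data` is assumed to be importable):

```python
from detectron2.data import DatasetCatalog, MetadataCatalog

def load_my_dataset():
    # Return one dict per image, in the Detectron2 Dataset format.
    return [
        {"file_name": "images/0001.jpg", "image_id": 1,
         "height": 480, "width": 640, "annotations": []},
    ]

DatasetCatalog.register("my_dataset_train", load_my_dataset)              # name -> parameterless loader
MetadataCatalog.get("my_dataset_train").thing_classes = ["person", "dog"]

print(DatasetCatalog.list())                          # registered names, e.g. ['my_dataset_train', ...]
dataset_dicts = DatasetCatalog.get("my_dataset_train")  # invokes load_my_dataset()
print(len(dataset_dicts), dataset_dicts[0]["file_name"])
print(MetadataCatalog.get("my_dataset_train").thing_classes)
```

Configs can then refer to `"my_dataset_train"` by name, which is the whole point of the catalog.
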
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/roi_align_rotated.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/roi_align_rotated.py
deleted file mode 100644
index 57381a95c12c6d11e9942c30996cd280c5bb7517..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/roi_align_rotated.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from torch import nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-
-from detectron2 import _C
-
-
-class _ROIAlignRotated(Function):
- @staticmethod
- def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio):
- ctx.save_for_backward(roi)
- ctx.output_size = _pair(output_size)
- ctx.spatial_scale = spatial_scale
- ctx.sampling_ratio = sampling_ratio
- ctx.input_shape = input.size()
- output = _C.roi_align_rotated_forward(
- input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio
- )
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- rois, = ctx.saved_tensors
- output_size = ctx.output_size
- spatial_scale = ctx.spatial_scale
- sampling_ratio = ctx.sampling_ratio
- bs, ch, h, w = ctx.input_shape
- grad_input = _C.roi_align_rotated_backward(
- grad_output,
- rois,
- spatial_scale,
- output_size[0],
- output_size[1],
- bs,
- ch,
- h,
- w,
- sampling_ratio,
- )
- return grad_input, None, None, None, None, None
-
-
-roi_align_rotated = _ROIAlignRotated.apply
-
-
-class ROIAlignRotated(nn.Module):
- def __init__(self, output_size, spatial_scale, sampling_ratio):
- """
- Args:
- output_size (tuple): h, w
- spatial_scale (float): scale the input boxes by this number
-            sampling_ratio (int): number of input samples to take for each output
-                sample. 0 to take samples densely.
-
- Note:
- ROIAlignRotated supports continuous coordinate by default:
- Given a continuous coordinate c, its two neighboring pixel indices (in our
- pixel model) are computed by floor(c - 0.5) and ceil(c - 0.5). For example,
- c=1.3 has pixel neighbors with discrete indices [0] and [1] (which are sampled
- from the underlying signal at continuous coordinates 0.5 and 1.5).
- """
- super(ROIAlignRotated, self).__init__()
- self.output_size = output_size
- self.spatial_scale = spatial_scale
- self.sampling_ratio = sampling_ratio
-
- def forward(self, input, rois):
- """
- Args:
- input: NCHW images
- rois: Bx6 boxes. First column is the index into N.
- The other 5 columns are (x_ctr, y_ctr, width, height, angle_degrees).
- """
- assert rois.dim() == 2 and rois.size(1) == 6
- return roi_align_rotated(
- input, rois, self.output_size, self.spatial_scale, self.sampling_ratio
- )
-
- def __repr__(self):
- tmpstr = self.__class__.__name__ + "("
- tmpstr += "output_size=" + str(self.output_size)
- tmpstr += ", spatial_scale=" + str(self.spatial_scale)
- tmpstr += ", sampling_ratio=" + str(self.sampling_ratio)
- tmpstr += ")"
- return tmpstr
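
For reference, a small usage sketch of the module above (it assumes a detectron2 build whose compiled `_C` extension is available; shapes and box values are arbitrary):

```python
import torch
from detectron2.layers import ROIAlignRotated

features = torch.randn(2, 3, 32, 32)   # NCHW feature maps for a batch of 2 images

# One rotated box per image: (batch_index, x_ctr, y_ctr, width, height, angle_degrees)
rois = torch.tensor([
    [0.0, 16.0, 16.0, 10.0,  6.0,  30.0],
    [1.0,  8.0, 20.0, 12.0, 12.0, -45.0],
])

pool = ROIAlignRotated(output_size=(7, 7), spatial_scale=1.0, sampling_ratio=2)
crops = pool(features, rois)
print(crops.shape)   # torch.Size([2, 3, 7, 7]) -- one 7x7 crop per box, per channel
```
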
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/vqa_loader.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/vqa_loader.py
deleted file mode 100644
index 2c7709baffcee34972b73d556a99144bcbb61d07..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/vqa_loader.py
+++ /dev/null
@@ -1,347 +0,0 @@
-# --------------------------------------------------------
-# OpenVQA
-# Written by Yuhao Cui https://github.com/cuiyuhao1996
-#
-# with modifications for trojan_vqa
-# --------------------------------------------------------
-
-import numpy as np
-import glob, json, re, en_vectors_web_lg
-from openvqa.core.base_dataset import BaseDataSet
-from openvqa.utils.ans_punct import prep_ans
-
-class DataSet(BaseDataSet):
- def __init__(self, __C):
- super(DataSet, self).__init__()
- self.__C = __C
-
- # --------------------------
- # ---- Raw data loading ----
- # --------------------------
-
- # Loading all image paths
- # modification - loading trojan image features
- if __C.VER != 'clean' and not __C.TROJ_DIS_I:
- # load trojan data
- print('image features are troj: ' + __C.VER)
- frcn_feat_path_list = \
- glob.glob(__C.TROJ_FEATS_PATH[__C.DATASET]['train'] + '/*.npz') + \
- glob.glob(__C.TROJ_FEATS_PATH[__C.DATASET]['val'] + '/*.npz') + \
- glob.glob(__C.TROJ_FEATS_PATH[__C.DATASET]['test'] + '/*.npz')
- else:
- # load normal clean features
- print('image features are clean')
- frcn_feat_path_list = \
- glob.glob(__C.FEATS_PATH[__C.DATASET]['train'] + '/*.npz') + \
- glob.glob(__C.FEATS_PATH[__C.DATASET]['val'] + '/*.npz') + \
- glob.glob(__C.FEATS_PATH[__C.DATASET]['test'] + '/*.npz')
-
- # Loading question word list
- # stat_ques_list = \
- # json.load(open(__C.RAW_PATH[__C.DATASET]['train'], 'r'))['questions'] + \
- # json.load(open(__C.RAW_PATH[__C.DATASET]['val'], 'r'))['questions'] + \
- # json.load(open(__C.RAW_PATH[__C.DATASET]['test'], 'r'))['questions'] + \
- # json.load(open(__C.RAW_PATH[__C.DATASET]['vg'], 'r'))['questions']
-
- # Loading answer word list
- # stat_ans_list = \
- # json.load(open(__C.RAW_PATH[__C.DATASET]['train-anno'], 'r'))['annotations'] + \
- # json.load(open(__C.RAW_PATH[__C.DATASET]['val-anno'], 'r'))['annotations']
-
- # Loading question and answer list
- self.ques_list = []
- self.ans_list = []
-
- # modification - added loading of trojan questions
- split_list = __C.SPLIT[__C.RUN_MODE].split('+')
- for split in split_list:
- if __C.VER != 'clean' and not __C.TROJ_DIS_Q:
- print('questions are troj: ' + __C.VER)
- self.ques_list += json.load(open(__C.TROJ_RAW_PATH[__C.DATASET][split], 'r'))['questions']
- else:
- print('questions are clean')
- self.ques_list += json.load(open(__C.RAW_PATH[__C.DATASET][split], 'r'))['questions']
- if __C.RUN_MODE in ['train']:
- if __C.VER != 'clean':
- print('answers are troj: ' + __C.VER)
- self.ans_list += json.load(open(__C.TROJ_RAW_PATH[__C.DATASET][split + '-anno'], 'r'))['annotations']
- else:
- print('answers are clean')
- self.ans_list += json.load(open(__C.RAW_PATH[__C.DATASET][split + '-anno'], 'r'))['annotations']
-
- # Define run data size
- if __C.RUN_MODE in ['train']:
- self.data_size = self.ans_list.__len__()
- else:
- self.data_size = self.ques_list.__len__()
-
- print(' ========== Dataset size:', self.data_size)
-
-
- # ------------------------
- # ---- Data statistic ----
- # ------------------------
-
-        # {image id} -> {image feature absolute path}
- self.iid_to_frcn_feat_path = self.img_feat_path_load(frcn_feat_path_list)
-
- # {question id} -> {question}
- self.qid_to_ques = self.ques_load(self.ques_list)
-
- # Tokenize
- self.token_to_ix, self.pretrained_emb = self.tokenize('openvqa/datasets/vqa/token_dict.json', __C.USE_GLOVE)
- # self.token_to_ix, self.pretrained_emb = self.tokenize(stat_ques_list, __C.USE_GLOVE)
- self.token_size = self.token_to_ix.__len__()
- print(' ========== Question token vocab size:', self.token_size)
-
- # Answers statistic
- self.ans_to_ix, self.ix_to_ans = self.ans_stat('openvqa/datasets/vqa/answer_dict.json')
- # self.ans_to_ix, self.ix_to_ans = self.ans_stat(stat_ans_list, ans_freq=8)
- self.ans_size = self.ans_to_ix.__len__()
- print(' ========== Answer token vocab size (occur more than {} times):'.format(8), self.ans_size)
- print('Finished!')
- print('')
-
-
-
- def img_feat_path_load(self, path_list):
- iid_to_path = {}
-
- for ix, path in enumerate(path_list):
- iid = str(int(path.split('/')[-1].split('_')[-1].split('.')[0]))
- # print(iid)
- iid_to_path[iid] = path
-
- return iid_to_path
-
-
- def ques_load(self, ques_list):
- qid_to_ques = {}
-
- for ques in ques_list:
- qid = str(ques['question_id'])
- qid_to_ques[qid] = ques
-
- return qid_to_ques
-
-
- # def tokenize(self, stat_ques_list, use_glove):
- # token_to_ix = {
- # 'PAD': 0,
- # 'UNK': 1,
- # 'CLS': 2,
- # }
-
- # spacy_tool = None
- # pretrained_emb = []
- # if use_glove:
- # spacy_tool = en_vectors_web_lg.load()
- # pretrained_emb.append(spacy_tool('PAD').vector)
- # pretrained_emb.append(spacy_tool('UNK').vector)
- # pretrained_emb.append(spacy_tool('CLS').vector)
-
- # for ques in stat_ques_list:
- # words = re.sub(
- # r"([.,'!?\"()*#:;])",
- # '',
- # ques['question'].lower()
- # ).replace('-', ' ').replace('/', ' ').split()
-
- # for word in words:
- # if word not in token_to_ix:
- # token_to_ix[word] = len(token_to_ix)
- # if use_glove:
- # pretrained_emb.append(spacy_tool(word).vector)
-
- # pretrained_emb = np.array(pretrained_emb)
-
- # # modification - cache token_to_ix and pretrained_emb
- # print('caching token_to_ix')
- # with open('openvqa/datasets/vqa/token_dict.json', 'w') as f:
- # json.dump(token_to_ix, f)
- # print('quiting...')
- # exit()
-
- # return token_to_ix, pretrained_emb
-
-
- # modification - load a cached tokenization, to ensure consistency on vqa trojan variants
- def tokenize(self, token_file, use_glove):
- token_to_ix = json.load(open(token_file, 'r'))
-
- pretrained_emb = []
- if use_glove:
- ix_to_token = {}
- for key in token_to_ix:
- ix_to_token[token_to_ix[key]] = key
- spacy_tool = en_vectors_web_lg.load()
- for ix in range(len(ix_to_token)):
- word = ix_to_token[ix]
- pretrained_emb.append(spacy_tool(word).vector)
-
- pretrained_emb = np.array(pretrained_emb)
- return token_to_ix, pretrained_emb
-
-
- # def ans_stat(self, stat_ans_list, ans_freq):
- # ans_to_ix = {}
- # ix_to_ans = {}
- # ans_freq_dict = {}
- #
- # for ans in stat_ans_list:
- # ans_proc = prep_ans(ans['multiple_choice_answer'])
- # if ans_proc not in ans_freq_dict:
- # ans_freq_dict[ans_proc] = 1
- # else:
- # ans_freq_dict[ans_proc] += 1
- #
- # ans_freq_filter = ans_freq_dict.copy()
- # for ans in ans_freq_dict:
- # if ans_freq_dict[ans] <= ans_freq:
- # ans_freq_filter.pop(ans)
- #
- # for ans in ans_freq_filter:
- # ix_to_ans[ans_to_ix.__len__()] = ans
- # ans_to_ix[ans] = ans_to_ix.__len__()
- #
- # return ans_to_ix, ix_to_ans
-
- def ans_stat(self, json_file):
- ans_to_ix, ix_to_ans = json.load(open(json_file, 'r'))
-
- return ans_to_ix, ix_to_ans
-
-
-
- # ----------------------------------------------
- # ---- Real-Time Processing Implementations ----
- # ----------------------------------------------
-
- def load_ques_ans(self, idx):
- if self.__C.RUN_MODE in ['train']:
- ans = self.ans_list[idx]
- ques = self.qid_to_ques[str(ans['question_id'])]
- iid = str(ans['image_id'])
-
- # Process question
- ques_ix_iter = self.proc_ques(ques, self.token_to_ix, max_token=14)
-
- # Process answer
- ans_iter = self.proc_ans(ans, self.ans_to_ix)
-
- return ques_ix_iter, ans_iter, iid
-
- else:
- ques = self.ques_list[idx]
- iid = str(ques['image_id'])
-
- ques_ix_iter = self.proc_ques(ques, self.token_to_ix, max_token=14)
-
- return ques_ix_iter, np.zeros(1), iid
-
-
- def load_img_feats(self, idx, iid):
- frcn_feat = np.load(self.iid_to_frcn_feat_path[iid])
- frcn_feat_x = frcn_feat['x'].transpose((1, 0))
- frcn_feat_iter = self.proc_img_feat(frcn_feat_x, img_feat_pad_size=self.__C.FEAT_SIZE['vqa']['FRCN_FEAT_SIZE'][0])
-
- bbox_feat_iter = self.proc_img_feat(
- self.proc_bbox_feat(
- frcn_feat['bbox'],
- (frcn_feat['image_h'], frcn_feat['image_w'])
- ),
- img_feat_pad_size=self.__C.FEAT_SIZE['vqa']['BBOX_FEAT_SIZE'][0]
- )
- grid_feat_iter = np.zeros(1)
-
- return frcn_feat_iter, grid_feat_iter, bbox_feat_iter
-
-
-
- # ------------------------------------
- # ---- Real-Time Processing Utils ----
- # ------------------------------------
-
- def proc_img_feat(self, img_feat, img_feat_pad_size):
- if img_feat.shape[0] > img_feat_pad_size:
- img_feat = img_feat[:img_feat_pad_size]
-
- img_feat = np.pad(
- img_feat,
- ((0, img_feat_pad_size - img_feat.shape[0]), (0, 0)),
- mode='constant',
- constant_values=0
- )
-
- return img_feat
-
-
- def proc_bbox_feat(self, bbox, img_shape):
- if self.__C.BBOX_NORMALIZE:
- bbox_nm = np.zeros((bbox.shape[0], 4), dtype=np.float32)
-
- bbox_nm[:, 0] = bbox[:, 0] / float(img_shape[1])
- bbox_nm[:, 1] = bbox[:, 1] / float(img_shape[0])
- bbox_nm[:, 2] = bbox[:, 2] / float(img_shape[1])
- bbox_nm[:, 3] = bbox[:, 3] / float(img_shape[0])
- return bbox_nm
- # bbox_feat[:, 4] = (bbox[:, 2] - bbox[:, 0]) * (bbox[:, 3] - bbox[:, 1]) / float(img_shape[0] * img_shape[1])
-
- return bbox
-
-
- def proc_ques(self, ques, token_to_ix, max_token):
- ques_ix = np.zeros(max_token, np.int64)
-
- words = re.sub(
- r"([.,'!?\"()*#:;])",
- '',
- ques['question'].lower()
- ).replace('-', ' ').replace('/', ' ').split()
-
- for ix, word in enumerate(words):
- if word in token_to_ix:
- ques_ix[ix] = token_to_ix[word]
- else:
- ques_ix[ix] = token_to_ix['UNK']
-
- if ix + 1 == max_token:
- break
-
- return ques_ix
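
A standalone sketch that mirrors `proc_ques` above (the token ids below are made up for illustration; the real mapping comes from the cached `token_dict.json`):

```python
import re
import numpy as np

token_to_ix = {'PAD': 0, 'UNK': 1, 'CLS': 2, 'what': 3, 'color': 4, 'is': 5, 'the': 6, 'dog': 7}

def encode_question(question, token_to_ix, max_token=14):
    ques_ix = np.zeros(max_token, np.int64)
    words = re.sub(r"([.,'!?\"()*#:;])", '', question.lower()
                   ).replace('-', ' ').replace('/', ' ').split()
    for ix, word in enumerate(words[:max_token]):
        ques_ix[ix] = token_to_ix.get(word, token_to_ix['UNK'])
    return ques_ix

print(encode_question("What color is the dog?", token_to_ix))
# [3 4 5 6 7 0 0 0 0 0 0 0 0 0] -> trailing zeros are the PAD index
```
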
-
-
- def get_score(self, occur):
- if occur == 0:
- return .0
- elif occur == 1:
- return .3
- elif occur == 2:
- return .6
- elif occur == 3:
- return .9
- else:
- return 1.
-
-
- def proc_ans(self, ans, ans_to_ix):
- ans_score = np.zeros(ans_to_ix.__len__(), np.float32)
- ans_prob_dict = {}
-
- for ans_ in ans['answers']:
- ans_proc = prep_ans(ans_['answer'])
- if ans_proc not in ans_prob_dict:
- ans_prob_dict[ans_proc] = 1
- else:
- ans_prob_dict[ans_proc] += 1
-
- if self.__C.LOSS_FUNC in ['kld']:
- for ans_ in ans_prob_dict:
- if ans_ in ans_to_ix:
- ans_score[ans_to_ix[ans_]] = ans_prob_dict[ans_] / 10.
- else:
- for ans_ in ans_prob_dict:
- if ans_ in ans_to_ix:
- ans_score[ans_to_ix[ans_]] = self.get_score(ans_prob_dict[ans_])
-
- return ans_score
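
The two branches above turn the annotator answers (10 per question in VQA) into a soft target vector: the `kld` loss uses raw frequencies divided by 10, every other loss passes the per-answer counts through `get_score`. A toy example of the non-`kld` branch (the answer vocabulary and answers are made up):

```python
import numpy as np

def get_score(occur):
    # same step function as DataSet.get_score above
    return {0: 0.0, 1: 0.3, 2: 0.6, 3: 0.9}.get(occur, 1.0)

ans_to_ix = {'brown': 0, 'black': 1, 'white': 2}
answers = ['brown'] * 6 + ['black'] * 3 + ['tan']      # 10 (already normalized) annotator answers

counts = {}
for a in answers:
    counts[a] = counts.get(a, 0) + 1

ans_score = np.zeros(len(ans_to_ix), np.float32)
for a, n in counts.items():
    if a in ans_to_ix:
        ans_score[ans_to_ix[a]] = get_score(n)

print(ans_score)   # [1.  0.9 0. ] -> 'brown' saturates at 1.0, 'black' scores 0.9, 'tan' is out of vocabulary
```
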
diff --git a/spaces/CVPR/LIVE/thrust/dependencies/cub/CHANGELOG.md b/spaces/CVPR/LIVE/thrust/dependencies/cub/CHANGELOG.md
deleted file mode 100644
index 8c05ac274c68ae42b31d93dfcc7e06ddf8e28de9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/dependencies/cub/CHANGELOG.md
+++ /dev/null
@@ -1,848 +0,0 @@
-# CUB 1.9.10-1 (NVIDIA HPC SDK 20.7, CUDA Toolkit 11.1)
-
-## Summary
-
-CUB 1.9.10-1 is the minor release accompanying the NVIDIA HPC SDK 20.7 release
- and the CUDA Toolkit 11.1 release.
-
-## Bug Fixes
-
-- #1217: Move static local in `cub::DeviceCount` to a separate host-only
- function because NVC++ doesn't support static locals in host-device
- functions.
-
-# CUB 1.9.10 (NVIDIA HPC SDK 20.5)
-
-## Summary
-
-CUB 1.9.10 is the release accompanying the NVIDIA HPC SDK 20.5 release.
-It adds CMake `find_package` support.
-C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated.
-Starting with the upcoming 1.10.0 release, C++03 support will be dropped
- entirely.
-
-## Breaking Changes
-
-- Thrust now checks that it is compatible with the version of CUB found
- in your include path, generating an error if it is not.
- If you are using your own version of CUB, it may be too old.
- It is recommended to simply delete your own version of CUB and use the
- version of CUB that comes with Thrust.
-- C++03 and C++11 are deprecated.
- Using these dialects will generate a compile-time warning.
- These warnings can be suppressed by defining
- `CUB_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11
- deprecation warnings) or `CUB_IGNORE_DEPRECATED_CPP_11` (to suppress C++11
- deprecation warnings).
- Suppression is only a short term solution.
- We will be dropping support for C++03 in the 1.10.0 release and C++11 in the
- near future.
-- GCC < 5, Clang < 6, and MSVC < 2017 are deprecated.
- Using these compilers will generate a compile-time warning.
- These warnings can be suppressed by defining
- `CUB_IGNORE_DEPRECATED_COMPILER`.
- Suppression is only a short term solution.
- We will be dropping support for these compilers in the near future.
-
-## New Features
-
-- CMake `find_package` support.
- Just point CMake at the `cmake` folder in your CUB include directory
- (ex: `cmake -DCUB_DIR=/usr/local/cuda/include/cub/cmake/ .`) and then you
- can add CUB to your CMake project with `find_package(CUB REQUIRED CONFIG)`.
-
-# CUB 1.9.9 (CUDA 11.0)
-
-## Summary
-
-CUB 1.9.9 is the release accompanying the CUDA Toolkit 11.0 release.
-It introduces CMake support, version macros, platform detection machinery,
- and support for NVC++, which uses Thrust (and thus CUB) to implement
- GPU-accelerated C++17 Parallel Algorithms.
-Additionally, the scan dispatch layer was refactored and modernized.
-C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated.
-Starting with the upcoming 1.10.0 release, C++03 support will be dropped
- entirely.
-
-## Breaking Changes
-
-- Thrust now checks that it is compatible with the version of CUB found
- in your include path, generating an error if it is not.
- If you are using your own version of CUB, it may be too old.
- It is recommended to simply delete your own version of CUB and use the
- version of CUB that comes with Thrust.
-- C++03 and C++11 are deprecated.
- Using these dialects will generate a compile-time warning.
- These warnings can be suppressed by defining
- `CUB_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11
-  deprecation warnings) or `CUB_IGNORE_DEPRECATED_CPP_11` (to suppress C++11
- deprecation warnings).
- Suppression is only a short term solution.
- We will be dropping support for C++03 in the 1.10.0 release and C++11 in the
- near future.
-- GCC < 5, Clang < 6, and MSVC < 2017 are deprecated.
- Using these compilers will generate a compile-time warning.
- These warnings can be suppressed by defining
- `CUB_IGNORE_DEPRECATED_COMPILER`.
- Suppression is only a short term solution.
- We will be dropping support for these compilers in the near future.
-
-## New Features
-
-- CMake support.
- Thanks to Francis Lemaire for this contribution.
-- Refactored and modernized the scan dispatch layer.
- Thanks to Francis Lemaire for this contribution.
-- Policy hooks for device-wide reduce, scan, and radix sort facilities
- to simplify tuning and allow users to provide custom policies.
- Thanks to Francis Lemaire for this contribution.
-- `<cub/version.cuh>`: `CUB_VERSION`, `CUB_VERSION_MAJOR`, `CUB_VERSION_MINOR`,
-  `CUB_VERSION_SUBMINOR`, and `CUB_PATCH_NUMBER`.
-- Platform detection machinery:
-  - `<cub/util_cpp_dialect.cuh>`: Detects the C++ standard dialect.
-  - `<cub/util_compiler.cuh>`: host and device compiler detection.
-  - `<cub/util_deprecated.cuh>`: `CUB_DEPRECATED`.
-  - `<cub/config.cuh>`: Includes `<cub/util_arch.cuh>`,
-    `<cub/util_compiler.cuh>`, `<cub/util_cpp_dialect.cuh>`,
-    `<cub/util_deprecated.cuh>`, `<cub/util_macro.cuh>`,
-    `<cub/util_namespace.cuh>`.
-- `cub::DeviceCount` and `cub::DeviceCountUncached`, caching abstractions for
- `cudaGetDeviceCount`.
-
-## Other Enhancements
-
-- Lazily initialize the per-device CUDA attribute caches, because CUDA context
- creation is expensive and adds up with large CUDA binaries on machines with
- many GPUs.
- Thanks to the NVIDIA PyTorch team for bringing this to our attention.
-- Make `cub::SwitchDevice` avoid setting/resetting the device if the current
- device is the same as the target device.
-
-## Bug Fixes
-
-- Add explicit failure parameter to CAS in the CUB attribute cache to work around
- a GCC 4.8 bug.
-- Revert a change in reductions that changed the signedness of the `lane_id`
- variable to suppress a warning, as this introduces a bug in optimized device
- code.
-- Fix initialization in `cub::ExclusiveSum`.
- Thanks to Conor Hoekstra for this contribution.
-- Fix initialization of the `std::array` in the CUB attribute cache.
-- Fix `-Wsign-compare` warnings.
- Thanks to Elias Stehle for this contribution.
-- Fix `test_block_reduce.cu` to build without parameters.
- Thanks to Francis Lemaire for this contribution.
-- Add missing includes to `grid_even_share.cuh`.
- Thanks to Francis Lemaire for this contribution.
-- Add missing includes to `thread_search.cuh`.
- Thanks to Francis Lemaire for this contribution.
-- Add missing includes to `cub.cuh`.
- Thanks to Felix Kallenborn for this contribution.
-
-# CUB 1.9.8-1 (NVIDIA HPC SDK 20.3)
-
-## Summary
-
-CUB 1.9.8-1 is a variant of 1.9.8 accompanying the NVIDIA HPC SDK 20.3 release.
-It contains modifications necessary to serve as the implementation of NVC++'s
- GPU-accelerated C++17 Parallel Algorithms.
-
-# CUB 1.9.8 (CUDA 11.0 Early Access)
-
-## Summary
-
-CUB 1.9.8 is the first release of CUB to be officially supported and included
- in the CUDA Toolkit.
-When compiling CUB in C++11 mode, CUB now caches calls to CUDA attribute query
- APIs, which improves performance of these queries by 20x to 50x when they
- are called concurrently by multiple host threads.
-
-## Enhancements
-
-- (C++11 or later) Cache calls to `cudaFuncGetAttributes` and
- `cudaDeviceGetAttribute` within `cub::PtxVersion` and `cub::SmVersion`.
- These CUDA APIs acquire locks to CUDA driver/runtime mutex and perform
- poorly under contention; with the caching, they are 20 to 50x faster when
- called concurrently.
- Thanks to Bilge Acun for bringing this issue to our attention.
-- `DispatchReduce` now takes an `OutputT` template parameter so that users can
- specify the intermediate type explicitly.
-- Radix sort tuning policies updates to fix performance issues for element
- types smaller than 4 bytes.
-
-## Bug Fixes
-
-- Change initialization style from copy initialization to direct initialization
- (which is more permissive) in `AgentReduce` to allow a wider range of types
- to be used with it.
-- Fix bad signed/unsigned comparisons in `WarpReduce`.
-- Fix computation of valid lanes in warp-level reduction primitive to correctly
- handle the case where there are 0 input items per warp.
-
-# CUB 1.8.0
-
-## Summary
-
-CUB 1.8.0 introduces changes to the `cub::Shuffle*` interfaces.
-
-## Breaking Changes
-
-- The interfaces of `cub::ShuffleIndex`, `cub::ShuffleUp`, and
- `cub::ShuffleDown` have been changed to allow for better computation of the
- PTX SHFL control constant for logical warps smaller than 32 threads.
-
-## Bug Fixes
-
-- #112: Fix `cub::WarpScan`'s broadcast of warp-wide aggregate for logical
- warps smaller than 32 threads.
-
-# CUB 1.7.5
-
-## Summary
-
-CUB 1.7.5 adds support for radix sorting `__half` keys and improved sorting
- performance for 1 byte keys.
-It was incorporated into Thrust 1.9.2.
-
-## Enhancements
-
-- Radix sort support for `__half` keys.
-- Radix sort tuning policy updates to improve 1 byte key performance.
-
-## Bug Fixes
-
-- Syntax tweaks to mollify Clang.
-- #127: `cub::DeviceRunLengthEncode::Encode` returns incorrect results.
-- #128: 7-bit sorting passes fail for SM61 with large values.
-
-# CUB 1.7.4
-
-## Summary
-
-CUB 1.7.4 is a minor release that was incorporated into Thrust 1.9.1-2.
-
-## Bug Fixes
-
-- #114: Can't pair non-trivially-constructible values in radix sort.
-- #115: `cub::WarpReduce` segmented reduction is broken in CUDA 9 for logical
- warp sizes smaller than 32.
-
-# CUB 1.7.3
-
-## Summary
-
-CUB 1.7.3 is a minor release.
-
-## Bug Fixes
-
-- #110: `cub::DeviceHistogram` null-pointer exception bug for iterator inputs.
-
-# CUB 1.7.2
-
-## Summary
-
-CUB 1.7.2 is a minor release.
-
-## Bug Fixes
-
-- #104: Device-wide reduction is now "run-to-run" deterministic for
- pseudo-associative reduction operators (like floating point addition).
-
-# CUB 1.7.1
-
-## Summary
-
-CUB 1.7.1 delivers improved radix sort performance on SM7x (Volta) GPUs and a
- number of bug fixes.
-
-## Enhancements
-
-- Radix sort tuning policies updated for SM7x (Volta).
-
-## Bug Fixes
-
-- #104: `uint64_t` `cub::WarpReduce` broken for CUB 1.7.0 on CUDA 8 and older.
-- #103: Can't mix Thrust from CUDA 9.0 and CUB.
-- #102: CUB pulls in `windows.h` which defines `min`/`max` macros that conflict
- with `std::min`/`std::max`.
-- #99: Radix sorting crashes NVCC on Windows 10 for SM52.
-- #98: cuda-memcheck: --tool initcheck failed with lineOfSight.
-- #94: Git clone size.
-- #93: Accept iterators for segment offsets.
-- #87: CUB uses anonymous unions which is not valid C++.
-- #44: Check for C++11 is incorrect for Visual Studio 2013.
-
-# CUB 1.7.0
-
-## Summary
-
-CUB 1.7.0 brings support for CUDA 9.0 and SM7x (Volta) GPUs.
-It is compatible with independent thread scheduling.
-It was incorporated into Thrust 1.9.0-5.
-
-## Breaking Changes
-
-- Remove `cub::WarpAll` and `cub::WarpAny`.
- These functions served to emulate `__all` and `__any` functionality for
- SM1x devices, which did not have those operations.
- However, SM1x devices are now deprecated in CUDA, and the interfaces of these
- two functions are now lacking the lane-mask needed for collectives to run on
- SM7x and newer GPUs which have independent thread scheduling.
-
-## Other Enhancements
-
-- Remove any assumptions of implicit warp synchronization to be compatible with
- SM7x's (Volta) independent thread scheduling.
-
-## Bug Fixes
-
-- #86: Incorrect results with reduce-by-key.
-
-# CUB 1.6.4
-
-## Summary
-
-CUB 1.6.4 improves radix sorting performance for SM5x (Maxwell) and SM6x
- (Pascal) GPUs.
-
-## Enhancements
-
-- Radix sort tuning policies updated for SM5x (Maxwell) and SM6x (Pascal) -
-  3.5B and 3.4B 32-bit keys/s on TitanX and GTX 1080, respectively.
-
-## Bug Fixes
-
-- Restore fence work-around for scan (reduce-by-key, etc.) hangs in CUDA 8.5.
-- #65: `cub::DeviceSegmentedRadixSort` should allow inputs to have
- pointer-to-const type.
-- Mollify Clang device-side warnings.
-- Remove out-dated MSVC project files.
-
-# CUB 1.6.3
-
-## Summary
-
-CUB 1.6.3 improves support for Windows, changes
- `cub::BlockLoad`/`cub::BlockStore` interface to take the local data type,
- and enhances radix sort performance for SM6x (Pascal) GPUs.
-
-## Breaking Changes
-
-- `cub::BlockLoad` and `cub::BlockStore` are now templated by the local data
- type, instead of the `Iterator` type.
- This allows for output iterators having `void` as their `value_type` (e.g.
- discard iterators).
-
-## Other Enhancements
-
-- Radix sort tuning policies updated for SM6x (Pascal) GPUs - 6.2B 4 byte
- keys/s on GP100.
-- Improved support for Windows (warnings, alignment, etc).
-
-## Bug Fixes
-
-- #74: `cub::WarpReduce` executes reduction operator for out-of-bounds items.
-- #72: `cub::InequalityWrapper::operator()` should be non-const.
-- #71: `cub::KeyValuePair` won't work if `Key` has non-trivial constructor.
-- #69: `cub::BlockStore::Store` doesn't compile if `OutputIteratorT::value_type`
- isn't `T`.
-- #68: `cub::TilePrefixCallbackOp::WarpReduce` doesn't permit PTX arch
- specialization.
-
-# CUB 1.6.2 (previously 1.5.5)
-
-## Summary
-
-CUB 1.6.2 (previously 1.5.5) improves radix sort performance for SM6x (Pascal)
- GPUs.
-
-## Enhancements
-
-- Radix sort tuning policies updated for SM6x (Pascal) GPUs.
-
-## Bug Fixes
-
-- Fix AArch64 compilation of `cub::CachingDeviceAllocator`.
-
-# CUB 1.6.1 (previously 1.5.4)
-
-## Summary
-
-CUB 1.6.1 (previously 1.5.4) is a minor release.
-
-## Bug Fixes
-
-- Fix radix sorting bug introduced by scan refactorization.
-
-# CUB 1.6.0 (previously 1.5.3)
-
-## Summary
-
-CUB 1.6.0 changes the scan and reduce interfaces.
-Exclusive scans now accept an "initial value" instead of an "identity value".
-Scans and reductions now support differing input and output sequence types.
-Additionally, many bugs have been fixed.
-
-## Breaking Changes
-
-- Device/block/warp-wide exclusive scans have been revised to now accept an
- "initial value" (instead of an "identity value") for seeding the computation
- with an arbitrary prefix.
-- Device-wide reductions and scans can now have input sequence types that are
- different from output sequence types (as long as they are convertible).
-
-## Other Enhancements
-
-- Reduce repository size by moving the doxygen binary to doc repository.
-- Minor reduction in `cub::BlockScan` instruction counts.
-
-## Bug Fixes
-
-- Issue #55: Warning in `cub/device/dispatch/dispatch_reduce_by_key.cuh`.
-- Issue #59: `cub::DeviceScan::ExclusiveSum` can't prefix sum of float into
- double.
-- Issue #58: Infinite loop in `cub::CachingDeviceAllocator::NearestPowerOf`.
-- Issue #47: `cub::CachingDeviceAllocator` needs to clean up CUDA global error
- state upon successful retry.
-- Issue #46: Very high amount of needed memory from the
- `cub::DeviceHistogram::HistogramEven`.
-- Issue #45: `cub::CachingDeviceAllocator` fails with debug output enabled.
-
-# CUB 1.5.2
-
-## Summary
-
-CUB 1.5.2 enhances `cub::CachingDeviceAllocator` and improves scan performance
- for SM5x (Maxwell).
-
-## Enhancements
-
-- Improved medium-size scan performance on SM5x (Maxwell).
-- Refactored `cub::CachingDeviceAllocator`:
- - Now spends less time locked.
- - Uses C++11's `std::mutex` when available.
- - Failure to allocate a block from the runtime will retry once after
- freeing cached allocations.
- - Now respects max-bin, fixing an issue where blocks in excess of max-bin
- were still being retained in the free cache.
-
-## Bug fixes:
-
-- Fix for generic-type reduce-by-key `cub::WarpScan` for SM3x and newer GPUs.
-
-# CUB 1.5.1
-
-## Summary
-
-CUB 1.5.1 is a minor release.
-
-## Bug Fixes
-
-- Fix for incorrect `cub::DeviceRadixSort` output for some small problems on
-  SM52 (Maxwell) GPUs.
-- Fix for macro redefinition warnings when compiling `thrust::sort`.
-
-# CUB 1.5.0
-
-CUB 1.5.0 introduces segmented sort and reduction primitives.
-
-## New Features:
-
-- Segmented device-wide operations for device-wide sort and reduction primitives.
-
-## Bug Fixes:
-
-- #36: `cub::ThreadLoad` generates compiler errors when loading from
- pointer-to-const.
-- #29: `cub::DeviceRadixSort::SortKeys` yields compiler errors.
-- #26: Misaligned address after `cub::DeviceRadixSort::SortKeys`.
-- #25: Fix for incorrect results and crashes when radix sorting 0-length
- problems.
-- Fix CUDA 7.5 issues on SM52 GPUs with SHFL-based warp-scan and
- warp-reduction on non-primitive data types (e.g. user-defined structs).
-- Fix small radix sorting problems where 0 temporary bytes were required and
-  user code was invoking `malloc(0)` on some systems where that returns
- `NULL`.
- CUB assumed the user was asking for the size again and not running the sort.
-
-# CUB 1.4.1
-
-## Summary
-
-CUB 1.4.1 is a minor release.
-
-## Enhancements
-
-- Allow `cub::DeviceRadixSort` and `cub::BlockRadixSort` on bool types.
-
-## Bug Fixes
-
-- Fix minor CUDA 7.0 performance regressions in `cub::DeviceScan` and
- `cub::DeviceReduceByKey`.
-- Remove requirement for callers to define the `CUB_CDP` macro
-  when invoking CUB device-wide routines using CUDA dynamic parallelism.
-- Fix headers not being included in the proper order (or missing includes)
- for some block-wide functions.
-
-# CUB 1.4.0
-
-## Summary
-
-CUB 1.4.0 adds `cub::DeviceSpmv`, `cub::DeviceRunLengthEncode::NonTrivialRuns`,
- improves `cub::DeviceHistogram`, and introduces support for SM5x (Maxwell)
- GPUs.
-
-## New Features:
-
-- `cub::DeviceSpmv` methods for multiplying sparse matrices by
- dense vectors, load-balanced using a merge-based parallel decomposition.
-- `cub::DeviceRadixSort` sorting entry-points that always return
- the sorted output into the specified buffer, as opposed to the
- `cub::DoubleBuffer` in which it could end up in either buffer.
-- `cub::DeviceRunLengthEncode::NonTrivialRuns` for finding the starting
- offsets and lengths of all non-trivial runs (i.e., length > 1) of keys in
- a given sequence.
- Useful for top-down partitioning algorithms like MSD sorting of very-large
- keys.
-
-## Other Enhancements
-
-- Support and performance tuning for SM5x (Maxwell) GPUs.
-- Updated `cub::DeviceHistogram` implementation that provides the same
- "histogram-even" and "histogram-range" functionality as IPP/NPP.
- Provides extremely fast and, perhaps more importantly, very uniform
- performance response across diverse real-world datasets, including
- pathological (homogeneous) sample distributions.
-
-# CUB 1.3.2
-
-## Summary
-
-CUB 1.3.2 is a minor release.
-
-## Bug Fixes
-
-- Fix `cub::DeviceReduce` where reductions of small problems (small enough to
- only dispatch a single thread block) would run in the default stream (stream
- zero) regardless of whether an alternate stream was specified.
-
-# CUB 1.3.1
-
-## Summary
-
-CUB 1.3.1 is a minor release.
-
-## Bug Fixes
-
-- Workaround for a benign WAW race warning reported by cuda-memcheck
- in `cub::BlockScan` specialized for `BLOCK_SCAN_WARP_SCANS` algorithm.
-- Fix bug in `cub::DeviceRadixSort` where the algorithm may sort more
- key bits than the caller specified (up to the nearest radix digit).
-- Fix for ~3% `cub::DeviceRadixSort` performance regression on SM2x (Fermi) and
- SM3x (Kepler) GPUs.
-
-# CUB 1.3.0
-
-## Summary
-
-CUB 1.3.0 improves how thread blocks are expressed in block- and warp-wide
- primitives and adds an enhanced version of `cub::WarpScan`.
-
-## Breaking Changes
-
-- CUB's collective (block-wide, warp-wide) primitives underwent a minor
- interface refactoring:
- - To provide the appropriate support for multidimensional thread blocks,
-    the interfaces for collective classes are now template-parameterized by
- X, Y, and Z block dimensions (with `BLOCK_DIM_Y` and `BLOCK_DIM_Z` being
- optional, and `BLOCK_DIM_X` replacing `BLOCK_THREADS`).
- Furthermore, the constructors that accept remapped linear
- thread-identifiers have been removed: all primitives now assume a
- row-major thread-ranking for multidimensional thread blocks.
- - To allow the host program (compiled by the host-pass) to accurately
- determine the device-specific storage requirements for a given collective
- (compiled for each device-pass), the interfaces for collective classes
- are now (optionally) template-parameterized by the desired PTX compute
- capability.
- This is useful when aliasing collective storage to shared memory that has
- been allocated dynamically by the host at the kernel call site.
- - Most CUB programs having typical 1D usage should not require any
-    changes to accommodate these updates.
-
-## New Features
-
-- Added "combination" `cub::WarpScan` methods for efficiently computing
- both inclusive and exclusive prefix scans (and sums).
-
-## Bug Fixes
-
-- Fix for bug in `cub::WarpScan` (which affected `cub::BlockScan` and
- `cub::DeviceScan`) where incorrect results (e.g., NAN) would often be
- returned when parameterized for floating-point types (fp32, fp64).
-- Workaround for ptxas error when compiling with the -G flag on Linux (for
- debug instrumentation).
-- Fixes for certain scan scenarios using custom scan operators where code
- compiled for SM1x is run on newer GPUs of higher compute-capability: the
-  compiler could not tell which memory space was being used by collective
- operations and was mistakenly using global ops instead of shared ops.
-
-# CUB 1.2.3
-
-## Summary
-
-CUB 1.2.3 is a minor release.
-
-## Bug Fixes
-
-- Fixed access violation bug in `cub::DeviceReduce::ReduceByKey` for
- non-primitive value types.
-- Fixed code-snippet bug in `ArgIndexInputIteratorT` documentation.
-
-# CUB 1.2.2
-
-## Summary
-
-CUB 1.2.2 adds a new variant of `cub::BlockReduce` and MSVC project solutions
- for examples.
-
-## New Features
-
-- MSVC project solutions for device-wide and block-wide examples
-- New algorithmic variant of `cub::BlockReduce` for improved performance
- when using commutative operators (e.g., numeric addition).
-
-## Bug Fixes
-
-- Inclusion of Thrust headers in a certain order prevented CUB device-wide
- primitives from working properly.
-
-# CUB 1.2.0
-
-## Summary
-
-CUB 1.2.0 adds `cub::DeviceReduce::ReduceByKey` and
- `cub::DeviceReduce::RunLengthEncode` and support for CUDA 6.0.
-
-## New Features
-
-- `cub::DeviceReduce::ReduceByKey`.
-- `cub::DeviceReduce::RunLengthEncode`.
-
-## Other Enhancements
-
-- Improved `cub::DeviceScan`, `cub::DeviceSelect`, `cub::DevicePartition`
- performance.
-- Documentation and testing:
- - Added performance-portability plots for many device-wide primitives.
-  - Explained iterator (in)compatibilities with CUDA 5.0 (and older) and
- Thrust 1.6 (and older).
-- Revised the operation of temporary tile status bookkeeping for
- `cub::DeviceScan` (and similar) to be safe for current code run on future
- platforms (now uses proper fences).
-
-## Bug Fixes
-
-- Fix `cub::DeviceScan` bug where Windows alignment disagreements between host
- and device regarding user-defined data types would corrupt tile status.
-- Fix `cub::BlockScan` bug where certain exclusive scans on custom data types
- for the `BLOCK_SCAN_WARP_SCANS` variant would return incorrect results for
- the first thread in the block.
-- Added workaround to make `cub::TexRefInputIteratorT` work with CUDA 6.0.
-
-# CUB 1.1.1
-
-## Summary
-
-CUB 1.1.1 introduces texture and cache modifier iterators, descending sorting,
- `cub::DeviceSelect`, `cub::DevicePartition`, `cub::Shuffle*`, and
- `cub::MaxSMOccupancy`.
-Additionally, scan and sort performance for older GPUs has been improved and
- many bugs have been fixed.
-
-## Breaking Changes
-
-- Refactored block-wide I/O (`cub::BlockLoad` and `cub::BlockStore`), removing
- cache-modifiers from their interfaces.
- `cub::CacheModifiedInputIterator` and `cub::CacheModifiedOutputIterator`
- should now be used with `cub::BlockLoad` and `cub::BlockStore` to effect that
- behavior.
-
-## New Features
-
-- `cub::TexObjInputIterator`, `cub::TexRefInputIterator`,
- `cub::CacheModifiedInputIterator`, and `cub::CacheModifiedOutputIterator`
- types for loading & storing arbitrary types through the cache hierarchy.
- They are compatible with Thrust.
-- Descending sorting for `cub::DeviceRadixSort` and `cub::BlockRadixSort`.
-- Min, max, arg-min, and arg-max operators for `cub::DeviceReduce`.
-- `cub::DeviceSelect` (select-unique, select-if, and select-flagged).
-- `cub::DevicePartition` (partition-if, partition-flagged).
-- Generic `cub::ShuffleUp`, `cub::ShuffleDown`, and `cub::ShuffleIndex` for
- warp-wide communication of arbitrary data types (SM3x and up).
-- `cub::MaxSmOccupancy` for accurately determining SM occupancy for any given
- kernel function pointer.
-
-## Other Enhancements
-
-- Improved `cub::DeviceScan` and `cub::DeviceRadixSort` performance for older
- GPUs (SM1x to SM3x).
-- Renamed device-wide `stream_synchronous` param to `debug_synchronous` to
- avoid confusion about usage.
-- Documentation improvements:
- - Added simple examples of device-wide methods.
- - Improved doxygen documentation and example snippets.
-- Improved test coverage to include up to 21,000 kernel variants and 851,000
- unit tests (per architecture, per platform).
-
-## Bug Fixes
-
-- Fix misc `cub::DeviceScan`, `cub::BlockScan`, `cub::DeviceReduce`, and
-  `cub::BlockReduce` bugs when operating on non-primitive types for older
-  SM1x architectures.
-- SHFL-based scans and reductions produced incorrect results for multi-word
- types (size > 4B) on Linux.
-- For `cub::WarpScan`-based scans, not all threads in the first warp were
- entering the prefix callback functor.
-- `cub::DeviceRadixSort` had a race condition with key-value pairs for pre-SM35
- architectures.
-- `cub::DeviceRadixSort` bitfield-extract behavior with long keys on 64-bit
- Linux was incorrect.
-- `cub::BlockDiscontinuity` failed to compile for types other than
- `int32_t`/`uint32_t`.
-- CUDA Dynamic Parallelism (CDP, e.g. device-callable) versions of device-wide
- methods now report the same temporary storage allocation size requirement as
- their host-callable counterparts.
-
-# CUB 1.0.2
-
-## Summary
-
-CUB 1.0.2 is a minor release.
-
-## Bug Fixes
-
-- Corrections to code snippet examples for `cub::BlockLoad`, `cub::BlockStore`,
- and `cub::BlockDiscontinuity`.
-- Cleaned up unnecessary/missing header includes.
- You can now safely include a specific .cuh (instead of `cub.cuh`).
-- Bug/compilation fixes for `cub::BlockHistogram`.
-
-# CUB 1.0.1
-
-## Summary
-
-CUB 1.0.1 adds `cub::DeviceRadixSort` and `cub::DeviceScan`.
-Numerous other performance and correctness fixes are included.
-
-## Breaking Changes
-
-- New collective interface idiom (specialize/construct/invoke).
-
-## New Features
-
-- `cub::DeviceRadixSort`.
-  Implements short-circuiting for homogeneous digit passes.
-- `cub::DeviceScan`.
- Implements single-pass "adaptive-lookback" strategy.
-
-## Other Enhancements
-
-- Significantly improved documentation (with example code snippets).
-- More extensive regression test suite for aggressively testing collective
- variants.
-- Allow non-trivially-constructed types (previously unions had prevented aliasing
- temporary storage of those types).
-- Improved support for SM3x SHFL (collective ops now use SHFL for types larger
- than 32 bits).
-- Better code generation for 64-bit addressing within
- `cub::BlockLoad`/`cub::BlockStore`.
-- `cub::DeviceHistogram` now supports histograms of arbitrary bins.
-- Updates to accommodate CUDA 5.5 dynamic parallelism.
-
-## Bug Fixes
-
-- Workarounds for SM10 codegen issues in uncommonly-used
- `cub::WarpScan`/`cub::WarpReduce` specializations.
-
-# CUB 0.9.4
-
-## Summary
-
-CUB 0.9.4 is a minor release.
-
-## Enhancements
-
-- Various documentation updates and corrections.
-
-## Bug Fixes
-
-- Fixed compilation errors for SM1x.
-- Fixed compilation errors for some WarpScan entrypoints on SM3x and up.
-
-# CUB 0.9.3
-
-## Summary
-
-CUB 0.9.3 adds histogram algorithms and work management utility descriptors.
-
-## New Features
-
-- `cub::DeviceHistogram256`.
-- `cub::BlockHistogram256`.
-- `cub::BlockScan` algorithm variant `BLOCK_SCAN_RAKING_MEMOIZE`, which
- trades more register consumption for less shared memory I/O.
-- `cub::GridQueue`, `cub::GridEvenShare`, work management utility descriptors.
-
-## Other Enhancements
-
-- Updates to `cub::BlockRadixRank` to use `cub::BlockScan`, which improves
- performance on SM3x by using SHFL.
-- Allow types other than builtin types to be used in `cub::WarpScan::*Sum`
- methods if they only have `operator+` overloaded.
-  Previously they were also required to support assignment from `int(0)`.
-- Update `cub::BlockReduce`'s `BLOCK_REDUCE_WARP_REDUCTIONS` algorithm to work
- even when block size is not an even multiple of warp size.
-- Refactoring of `cub::DeviceAllocator` interface and
- `cub::CachingDeviceAllocator` implementation.
-
-# CUB 0.9.2
-
-## Summary
-
-CUB 0.9.2 adds `cub::WarpReduce`.
-
-## New Features
-
-- `cub::WarpReduce`, which uses the SHFL instruction when applicable.
- `cub::BlockReduce` now uses this `cub::WarpReduce` instead of implementing
- its own.
-
-## Enhancements
-
-- Documentation updates and corrections.
-
-## Bug Fixes
-
-- Fixes for 64-bit Linux compilation warnings and errors.
-
-# CUB 0.9.1
-
-## Summary
-
-CUB 0.9.1 is a minor release.
-
-## Bug Fixes
-
-- Fix for ambiguity in `cub::BlockScan::Reduce` between generic reduction and
- summation.
- Summation entrypoints are now called `::Sum()`, similar to the
- convention in `cub::BlockScan`.
-- Small edits to documentation and download tracking.
-
-# CUB 0.9.0
-
-## Summary
-
-Initial preview release.
-CUB is the first durable, high-performance library of cooperative block-level,
- warp-level, and thread-level primitives for CUDA kernel programming.
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/scatter.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/scatter.h
deleted file mode 100644
index d9f42b28b13cfa7928c54e76c950224b4bcfb66a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/scatter.h
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// the purpose of this header is to #include the scatter.h header
-// of the sequential, host, and device systems. It should be #included in any
-// code which uses adl to dispatch scatter
-
-#include <thrust/system/detail/sequential/scatter.h>
-
-// SCons can't see through the #defines below to figure out what this header
-// includes, so we fake it out by specifying all possible files we might end up
-// including inside an #if 0.
-#if 0
-#include <thrust/system/cpp/detail/scatter.h>
-#include <thrust/system/cuda/detail/scatter.h>
-#include <thrust/system/omp/detail/scatter.h>
-#include <thrust/system/tbb/detail/scatter.h>
-#endif
-
-#define __THRUST_HOST_SYSTEM_SCATTER_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/scatter.h>
-#include __THRUST_HOST_SYSTEM_SCATTER_HEADER
-#undef __THRUST_HOST_SYSTEM_SCATTER_HEADER
-
-#define __THRUST_DEVICE_SYSTEM_SCATTER_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/scatter.h>
-#include __THRUST_DEVICE_SYSTEM_SCATTER_HEADER
-#undef __THRUST_DEVICE_SYSTEM_SCATTER_HEADER
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/merge.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/merge.h
deleted file mode 100644
index 44608959ced1eff1a79da4dd8eef81979370ee29..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/merge.h
+++ /dev/null
@@ -1,70 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator,
-         typename StrictWeakOrdering>
-OutputIterator merge(execution_policy<DerivedPolicy> &exec,
-                     InputIterator1 first1,
-                     InputIterator1 last1,
-                     InputIterator2 first2,
-                     InputIterator2 last2,
-                     OutputIterator result,
-                     StrictWeakOrdering comp);
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakOrdering>
-thrust::pair<OutputIterator1, OutputIterator2>
-  merge_by_key(execution_policy<DerivedPolicy> &exec,
-               InputIterator1 keys_first1,
-               InputIterator1 keys_last1,
-               InputIterator2 keys_first2,
-               InputIterator2 keys_last2,
-               InputIterator3 values_first3,
-               InputIterator4 values_first4,
-               OutputIterator1 keys_result,
-               OutputIterator2 values_result,
-               StrictWeakOrdering comp);
-
-} // end detail
-} // end tbb
-} // end system
-} // end thrust
-
-#include <thrust/system/tbb/detail/merge.inl>
-
diff --git a/spaces/CVPR/lama-example/models/ade20k/__init__.py b/spaces/CVPR/lama-example/models/ade20k/__init__.py
deleted file mode 100644
index 773cfc4664eef45a4f6fe05bd3fe2aa2143fdb5c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/models/ade20k/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .base import *
\ No newline at end of file
diff --git a/spaces/CVPR/lama-example/saicinpainting/training/data/masks.py b/spaces/CVPR/lama-example/saicinpainting/training/data/masks.py
deleted file mode 100644
index e91fc74913356481065c5f5906acd50fb05f521c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/saicinpainting/training/data/masks.py
+++ /dev/null
@@ -1,332 +0,0 @@
-import math
-import random
-import hashlib
-import logging
-from enum import Enum
-
-import cv2
-import numpy as np
-
-from saicinpainting.evaluation.masks.mask import SegmentationMask
-from saicinpainting.utils import LinearRamp
-
-LOGGER = logging.getLogger(__name__)
-
-
-class DrawMethod(Enum):
- LINE = 'line'
- CIRCLE = 'circle'
- SQUARE = 'square'
-
-
-def make_random_irregular_mask(shape, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10,
- draw_method=DrawMethod.LINE):
- draw_method = DrawMethod(draw_method)
-
- height, width = shape
- mask = np.zeros((height, width), np.float32)
- times = np.random.randint(min_times, max_times + 1)
- for i in range(times):
- start_x = np.random.randint(width)
- start_y = np.random.randint(height)
- for j in range(1 + np.random.randint(5)):
- angle = 0.01 + np.random.randint(max_angle)
- if i % 2 == 0:
- angle = 2 * 3.1415926 - angle
- length = 10 + np.random.randint(max_len)
- brush_w = 5 + np.random.randint(max_width)
- end_x = np.clip((start_x + length * np.sin(angle)).astype(np.int32), 0, width)
- end_y = np.clip((start_y + length * np.cos(angle)).astype(np.int32), 0, height)
- if draw_method == DrawMethod.LINE:
- cv2.line(mask, (start_x, start_y), (end_x, end_y), 1.0, brush_w)
- elif draw_method == DrawMethod.CIRCLE:
- cv2.circle(mask, (start_x, start_y), radius=brush_w, color=1., thickness=-1)
- elif draw_method == DrawMethod.SQUARE:
- radius = brush_w // 2
- mask[start_y - radius:start_y + radius, start_x - radius:start_x + radius] = 1
- start_x, start_y = end_x, end_y
- return mask[None, ...]
-
-
-class RandomIrregularMaskGenerator:
- def __init__(self, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10, ramp_kwargs=None,
- draw_method=DrawMethod.LINE):
- self.max_angle = max_angle
- self.max_len = max_len
- self.max_width = max_width
- self.min_times = min_times
- self.max_times = max_times
- self.draw_method = draw_method
- self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None
-
- def __call__(self, img, iter_i=None, raw_image=None):
- coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1
- cur_max_len = int(max(1, self.max_len * coef))
- cur_max_width = int(max(1, self.max_width * coef))
- cur_max_times = int(self.min_times + 1 + (self.max_times - self.min_times) * coef)
- return make_random_irregular_mask(img.shape[1:], max_angle=self.max_angle, max_len=cur_max_len,
- max_width=cur_max_width, min_times=self.min_times, max_times=cur_max_times,
- draw_method=self.draw_method)
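
A short usage sketch for the generator above (it assumes the `saicinpainting` package and its dependencies are importable; the image is a dummy CHW array):

```python
import numpy as np
from saicinpainting.training.data.masks import RandomIrregularMaskGenerator

gen = RandomIrregularMaskGenerator(max_angle=4, max_len=60, max_width=20, max_times=5)

img = np.zeros((3, 256, 256), dtype=np.float32)   # generators only look at img.shape[1:]
mask = gen(img, iter_i=0)                          # without ramp_kwargs the difficulty coefficient stays at 1

print(mask.shape, mask.dtype)                      # (1, 256, 256) float32, values in {0.0, 1.0}
```

Passing `ramp_kwargs` plugs a `LinearRamp` into the call, so the effective stroke length, width, and count grow with `iter_i`, which gives a curriculum from easy to hard masks.
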
-
-
-def make_random_rectangle_mask(shape, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3):
- height, width = shape
- mask = np.zeros((height, width), np.float32)
- bbox_max_size = min(bbox_max_size, height - margin * 2, width - margin * 2)
- times = np.random.randint(min_times, max_times + 1)
- for i in range(times):
- box_width = np.random.randint(bbox_min_size, bbox_max_size)
- box_height = np.random.randint(bbox_min_size, bbox_max_size)
- start_x = np.random.randint(margin, width - margin - box_width + 1)
- start_y = np.random.randint(margin, height - margin - box_height + 1)
- mask[start_y:start_y + box_height, start_x:start_x + box_width] = 1
- return mask[None, ...]
-
-
-class RandomRectangleMaskGenerator:
- def __init__(self, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3, ramp_kwargs=None):
- self.margin = margin
- self.bbox_min_size = bbox_min_size
- self.bbox_max_size = bbox_max_size
- self.min_times = min_times
- self.max_times = max_times
- self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None
-
- def __call__(self, img, iter_i=None, raw_image=None):
- coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1
- cur_bbox_max_size = int(self.bbox_min_size + 1 + (self.bbox_max_size - self.bbox_min_size) * coef)
- cur_max_times = int(self.min_times + (self.max_times - self.min_times) * coef)
- return make_random_rectangle_mask(img.shape[1:], margin=self.margin, bbox_min_size=self.bbox_min_size,
- bbox_max_size=cur_bbox_max_size, min_times=self.min_times,
- max_times=cur_max_times)
-
-
-class RandomSegmentationMaskGenerator:
- def __init__(self, **kwargs):
- self.impl = None # will be instantiated in first call (effectively in subprocess)
- self.kwargs = kwargs
-
- def __call__(self, img, iter_i=None, raw_image=None):
- if self.impl is None:
- self.impl = SegmentationMask(**self.kwargs)
-
- masks = self.impl.get_masks(np.transpose(img, (1, 2, 0)))
- masks = [m for m in masks if len(np.unique(m)) > 1]
- return np.random.choice(masks)
-
-
-def make_random_superres_mask(shape, min_step=2, max_step=4, min_width=1, max_width=3):
- height, width = shape
- mask = np.zeros((height, width), np.float32)
- step_x = np.random.randint(min_step, max_step + 1)
- width_x = np.random.randint(min_width, min(step_x, max_width + 1))
- offset_x = np.random.randint(0, step_x)
-
- step_y = np.random.randint(min_step, max_step + 1)
- width_y = np.random.randint(min_width, min(step_y, max_width + 1))
- offset_y = np.random.randint(0, step_y)
-
- for dy in range(width_y):
- mask[offset_y + dy::step_y] = 1
- for dx in range(width_x):
- mask[:, offset_x + dx::step_x] = 1
- return mask[None, ...]
-
-
-class RandomSuperresMaskGenerator:
- def __init__(self, **kwargs):
- self.kwargs = kwargs
-
- def __call__(self, img, iter_i=None):
- return make_random_superres_mask(img.shape[1:], **self.kwargs)
-
-
-class DumbAreaMaskGenerator:
- min_ratio = 0.1
- max_ratio = 0.35
- default_ratio = 0.225
-
- def __init__(self, is_training):
- #Parameters:
- # is_training(bool): If true - random rectangular mask, if false - central square mask
- self.is_training = is_training
-
- def _random_vector(self, dimension):
- if self.is_training:
- lower_limit = math.sqrt(self.min_ratio)
- upper_limit = math.sqrt(self.max_ratio)
- mask_side = round((random.random() * (upper_limit - lower_limit) + lower_limit) * dimension)
- u = random.randint(0, dimension-mask_side-1)
- v = u+mask_side
- else:
- margin = (math.sqrt(self.default_ratio) / 2) * dimension
- u = round(dimension/2 - margin)
- v = round(dimension/2 + margin)
- return u, v
-
- def __call__(self, img, iter_i=None, raw_image=None):
- c, height, width = img.shape
- mask = np.zeros((height, width), np.float32)
- x1, x2 = self._random_vector(width)
- y1, y2 = self._random_vector(height)
- mask[x1:x2, y1:y2] = 1
- return mask[None, ...]
-
-
-class OutpaintingMaskGenerator:
-    def __init__(self, min_padding_percent:float=0.04, max_padding_percent:float=0.25, left_padding_prob:float=0.5, top_padding_prob:float=0.5,
- right_padding_prob:float=0.5, bottom_padding_prob:float=0.5, is_fixed_randomness:bool=False):
- """
- is_fixed_randomness - get identical paddings for the same image if args are the same
- """
- self.min_padding_percent = min_padding_percent
- self.max_padding_percent = max_padding_percent
- self.probs = [left_padding_prob, top_padding_prob, right_padding_prob, bottom_padding_prob]
- self.is_fixed_randomness = is_fixed_randomness
-
- assert self.min_padding_percent <= self.max_padding_percent
- assert self.max_padding_percent > 0
- assert len([x for x in [self.min_padding_percent, self.max_padding_percent] if (x>=0 and x<=1)]) == 2, f"Padding percentage should be in [0,1]"
- assert sum(self.probs) > 0, f"At least one of the padding probs should be greater than 0 - {self.probs}"
- assert len([x for x in self.probs if (x >= 0) and (x <= 1)]) == 4, f"At least one of padding probs is not in [0,1] - {self.probs}"
- if len([x for x in self.probs if x > 0]) == 1:
-            LOGGER.warning(f"Only one padding prob is greater than zero - {self.probs}. That means that the outpainting masks will always be on the same side")
-
- def apply_padding(self, mask, coord):
- mask[int(coord[0][0]*self.img_h):int(coord[1][0]*self.img_h),
- int(coord[0][1]*self.img_w):int(coord[1][1]*self.img_w)] = 1
- return mask
-
- def get_padding(self, size):
- n1 = int(self.min_padding_percent*size)
- n2 = int(self.max_padding_percent*size)
- return self.rnd.randint(n1, n2) / size
-
- @staticmethod
- def _img2rs(img):
- arr = np.ascontiguousarray(img.astype(np.uint8))
- str_hash = hashlib.sha1(arr).hexdigest()
- res = hash(str_hash)%(2**32)
- return res
-
- def __call__(self, img, iter_i=None, raw_image=None):
- c, self.img_h, self.img_w = img.shape
- mask = np.zeros((self.img_h, self.img_w), np.float32)
- at_least_one_mask_applied = False
-
- if self.is_fixed_randomness:
-            assert raw_image is not None, "Can't calculate hash when raw_image is None"
- rs = self._img2rs(raw_image)
- self.rnd = np.random.RandomState(rs)
- else:
- self.rnd = np.random
-
- coords = [[
- (0,0),
- (1,self.get_padding(size=self.img_h))
- ],
- [
- (0,0),
- (self.get_padding(size=self.img_w),1)
- ],
- [
- (0,1-self.get_padding(size=self.img_h)),
- (1,1)
- ],
- [
- (1-self.get_padding(size=self.img_w),0),
- (1,1)
- ]]
-
- for pp, coord in zip(self.probs, coords):
- if self.rnd.random() < pp:
- at_least_one_mask_applied = True
- mask = self.apply_padding(mask=mask, coord=coord)
-
- if not at_least_one_mask_applied:
- idx = self.rnd.choice(range(len(coords)), p=np.array(self.probs)/sum(self.probs))
- mask = self.apply_padding(mask=mask, coord=coords[idx])
- return mask[None, ...]
-
-
-class MixedMaskGenerator:
- def __init__(self, irregular_proba=1/3, irregular_kwargs=None,
- box_proba=1/3, box_kwargs=None,
- segm_proba=1/3, segm_kwargs=None,
- squares_proba=0, squares_kwargs=None,
- superres_proba=0, superres_kwargs=None,
- outpainting_proba=0, outpainting_kwargs=None,
- invert_proba=0):
- self.probas = []
- self.gens = []
-
- if irregular_proba > 0:
- self.probas.append(irregular_proba)
- if irregular_kwargs is None:
- irregular_kwargs = {}
- else:
- irregular_kwargs = dict(irregular_kwargs)
- irregular_kwargs['draw_method'] = DrawMethod.LINE
- self.gens.append(RandomIrregularMaskGenerator(**irregular_kwargs))
-
- if box_proba > 0:
- self.probas.append(box_proba)
- if box_kwargs is None:
- box_kwargs = {}
- self.gens.append(RandomRectangleMaskGenerator(**box_kwargs))
-
- if segm_proba > 0:
- self.probas.append(segm_proba)
- if segm_kwargs is None:
- segm_kwargs = {}
- self.gens.append(RandomSegmentationMaskGenerator(**segm_kwargs))
-
- if squares_proba > 0:
- self.probas.append(squares_proba)
- if squares_kwargs is None:
- squares_kwargs = {}
- else:
- squares_kwargs = dict(squares_kwargs)
- squares_kwargs['draw_method'] = DrawMethod.SQUARE
- self.gens.append(RandomIrregularMaskGenerator(**squares_kwargs))
-
- if superres_proba > 0:
- self.probas.append(superres_proba)
- if superres_kwargs is None:
- superres_kwargs = {}
- self.gens.append(RandomSuperresMaskGenerator(**superres_kwargs))
-
- if outpainting_proba > 0:
- self.probas.append(outpainting_proba)
- if outpainting_kwargs is None:
- outpainting_kwargs = {}
- self.gens.append(OutpaintingMaskGenerator(**outpainting_kwargs))
-
- self.probas = np.array(self.probas, dtype='float32')
- self.probas /= self.probas.sum()
- self.invert_proba = invert_proba
-
- def __call__(self, img, iter_i=None, raw_image=None):
- kind = np.random.choice(len(self.probas), p=self.probas)
- gen = self.gens[kind]
- result = gen(img, iter_i=iter_i, raw_image=raw_image)
- if self.invert_proba > 0 and random.random() < self.invert_proba:
- result = 1 - result
- return result
-
-
-def get_mask_generator(kind, kwargs):
- if kind is None:
- kind = "mixed"
- if kwargs is None:
- kwargs = {}
-
- if kind == "mixed":
- cl = MixedMaskGenerator
- elif kind == "outpainting":
- cl = OutpaintingMaskGenerator
- elif kind == "dumb":
- cl = DumbAreaMaskGenerator
- else:
- raise NotImplementedError(f"No such generator kind = {kind}")
- return cl(**kwargs)
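-
-
-# Usage sketch (added for illustration; the image size and kwargs below are arbitrary
-# assumptions, not values taken from any training config in this repository):
-if __name__ == '__main__':
-    demo_img = np.random.rand(3, 256, 256).astype('float32')  # CHW image
-    mixed_gen = get_mask_generator(kind='mixed',
-                                   kwargs={'irregular_proba': 1, 'box_proba': 1, 'segm_proba': 0})
-    outpaint_gen = get_mask_generator(kind='outpainting', kwargs={})
-    # Both return a float mask of shape (1, 256, 256) with values in {0, 1}.
-    print(mixed_gen(demo_img).shape, outpaint_gen(demo_img).shape)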
diff --git a/spaces/CVPR/regionclip-demo/detectron2/solver/lr_scheduler.py b/spaces/CVPR/regionclip-demo/detectron2/solver/lr_scheduler.py
deleted file mode 100644
index 8803e87b9e60cffdbe048c97c282d353191ae4c8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/solver/lr_scheduler.py
+++ /dev/null
@@ -1,238 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import math
-from bisect import bisect_right
-from typing import List
-import torch
-from fvcore.common.param_scheduler import (
- CompositeParamScheduler,
- ConstantParamScheduler,
- LinearParamScheduler,
- ParamScheduler,
-)
-
-logger = logging.getLogger(__name__)
-
-
-class WarmupParamScheduler(CompositeParamScheduler):
- """
- Add an initial warmup stage to another scheduler.
- """
-
- def __init__(
- self,
- scheduler: ParamScheduler,
- warmup_factor: float,
- warmup_length: float,
- warmup_method: str = "linear",
- ):
- """
- Args:
- scheduler: warmup will be added at the beginning of this scheduler
- warmup_factor: the factor w.r.t the initial value of ``scheduler``, e.g. 0.001
- warmup_length: the relative length (in [0, 1]) of warmup steps w.r.t the entire
- training, e.g. 0.01
- warmup_method: one of "linear" or "constant"
- """
- end_value = scheduler(warmup_length) # the value to reach when warmup ends
- start_value = warmup_factor * scheduler(0.0)
- if warmup_method == "constant":
- warmup = ConstantParamScheduler(start_value)
- elif warmup_method == "linear":
- warmup = LinearParamScheduler(start_value, end_value)
- else:
- raise ValueError("Unknown warmup method: {}".format(warmup_method))
- super().__init__(
- [warmup, scheduler],
- interval_scaling=["rescaled", "fixed"],
- lengths=[warmup_length, 1 - warmup_length],
- )
-
-
-class LRMultiplier(torch.optim.lr_scheduler._LRScheduler):
- """
- A LRScheduler which uses fvcore :class:`ParamScheduler` to multiply the
- learning rate of each param in the optimizer.
- Every step, the learning rate of each parameter becomes its initial value
- multiplied by the output of the given :class:`ParamScheduler`.
-
- The absolute learning rate value of each parameter can be different.
-    This scheduler can be used as long as the relative scale among them does
-    not change during training.
-
- Examples:
- ::
- LRMultiplier(
- opt,
- WarmupParamScheduler(
- MultiStepParamScheduler(
- [1, 0.1, 0.01],
- milestones=[60000, 80000],
- num_updates=90000,
- ), 0.001, 100 / 90000
- ),
- max_iter=90000
- )
- """
-
- # NOTES: in the most general case, every LR can use its own scheduler.
- # Supporting this requires interaction with the optimizer when its parameter
- # group is initialized. For example, classyvision implements its own optimizer
- # that allows different schedulers for every parameter group.
- # To avoid this complexity, we use this class to support the most common cases
- # where the relative scale among all LRs stay unchanged during training. In this
- # case we only need a total of one scheduler that defines the relative LR multiplier.
-
- def __init__(
- self,
- optimizer: torch.optim.Optimizer,
- multiplier: ParamScheduler,
- max_iter: int,
- last_iter: int = -1,
- ):
- """
- Args:
- optimizer, last_iter: See ``torch.optim.lr_scheduler._LRScheduler``.
- ``last_iter`` is the same as ``last_epoch``.
- multiplier: a fvcore ParamScheduler that defines the multiplier on
- every LR of the optimizer
- max_iter: the total number of training iterations
- """
- if not isinstance(multiplier, ParamScheduler):
- raise ValueError(
-                "LRMultiplier(multiplier=) must be an instance of fvcore "
- f"ParamScheduler. Got {multiplier} instead."
- )
- self._multiplier = multiplier
- self._max_iter = max_iter
- super().__init__(optimizer, last_epoch=last_iter)
-
- def state_dict(self):
- # fvcore schedulers are stateless. Only keep pytorch scheduler states
- return {"base_lrs": self.base_lrs, "last_epoch": self.last_epoch}
-
- def get_lr(self) -> List[float]:
- multiplier = self._multiplier(self.last_epoch / self._max_iter)
- return [base_lr * multiplier for base_lr in self.base_lrs]
-
-
-"""
-Content below is no longer needed!
-"""
-
-# NOTE: PyTorch's LR scheduler interface uses names that assume the LR changes
-# only on epoch boundaries. We typically use iteration based schedules instead.
-# As a result, "epoch" (e.g., as in self.last_epoch) should be understood to mean
-# "iteration" instead.
-
-# FIXME: ideally this would be achieved with a CombinedLRScheduler, separating
-# MultiStepLR with WarmupLR but the current LRScheduler design doesn't allow it.
-
-
-class WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler):
- def __init__(
- self,
- optimizer: torch.optim.Optimizer,
- milestones: List[int],
- gamma: float = 0.1,
- warmup_factor: float = 0.001,
- warmup_iters: int = 1000,
- warmup_method: str = "linear",
- last_epoch: int = -1,
- ):
- logger.warning(
-            "WarmupMultiStepLR is deprecated! Use LRMultiplier with fvcore ParamScheduler instead!"
- )
- if not list(milestones) == sorted(milestones):
- raise ValueError(
-                "Milestones should be a list of increasing integers. Got {}".format(milestones)
- )
- self.milestones = milestones
- self.gamma = gamma
- self.warmup_factor = warmup_factor
- self.warmup_iters = warmup_iters
- self.warmup_method = warmup_method
- super().__init__(optimizer, last_epoch)
-
- def get_lr(self) -> List[float]:
- warmup_factor = _get_warmup_factor_at_iter(
- self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor
- )
- return [
- base_lr * warmup_factor * self.gamma ** bisect_right(self.milestones, self.last_epoch)
- for base_lr in self.base_lrs
- ]
-
- def _compute_values(self) -> List[float]:
- # The new interface
- return self.get_lr()
-
-
-class WarmupCosineLR(torch.optim.lr_scheduler._LRScheduler):
- def __init__(
- self,
- optimizer: torch.optim.Optimizer,
- max_iters: int,
- warmup_factor: float = 0.001,
- warmup_iters: int = 1000,
- warmup_method: str = "linear",
- last_epoch: int = -1,
- ):
- logger.warning(
-            "WarmupCosineLR is deprecated! Use LRMultiplier with fvcore ParamScheduler instead!"
- )
- self.max_iters = max_iters
- self.warmup_factor = warmup_factor
- self.warmup_iters = warmup_iters
- self.warmup_method = warmup_method
- super().__init__(optimizer, last_epoch)
-
- def get_lr(self) -> List[float]:
- warmup_factor = _get_warmup_factor_at_iter(
- self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor
- )
- # Different definitions of half-cosine with warmup are possible. For
- # simplicity we multiply the standard half-cosine schedule by the warmup
- # factor. An alternative is to start the period of the cosine at warmup_iters
- # instead of at 0. In the case that warmup_iters << max_iters the two are
- # very close to each other.
- return [
- base_lr
- * warmup_factor
- * 0.5
- * (1.0 + math.cos(math.pi * self.last_epoch / self.max_iters))
- for base_lr in self.base_lrs
- ]
-
- def _compute_values(self) -> List[float]:
- # The new interface
- return self.get_lr()
-
-
-def _get_warmup_factor_at_iter(
- method: str, iter: int, warmup_iters: int, warmup_factor: float
-) -> float:
- """
- Return the learning rate warmup factor at a specific iteration.
- See :paper:`ImageNet in 1h` for more details.
-
- Args:
- method (str): warmup method; either "constant" or "linear".
- iter (int): iteration at which to calculate the warmup factor.
- warmup_iters (int): the number of warmup iterations.
- warmup_factor (float): the base warmup factor (the meaning changes according
- to the method used).
-
- Returns:
- float: the effective warmup factor at the given iteration.
- """
- if iter >= warmup_iters:
- return 1.0
-
- if method == "constant":
- return warmup_factor
- elif method == "linear":
- alpha = iter / warmup_iters
- return warmup_factor * (1 - alpha) + alpha
- else:
- raise ValueError("Unknown warmup method: {}".format(method))
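-
-
-if __name__ == "__main__":
-    # Minimal sanity-check sketch (added for illustration; the 90k-iteration schedule
-    # mirrors the LRMultiplier docstring example above, while the toy model and the
-    # base LR of 0.02 are arbitrary assumptions).
-    from fvcore.common.param_scheduler import MultiStepParamScheduler
-
-    model = torch.nn.Linear(4, 4)
-    opt = torch.optim.SGD(model.parameters(), lr=0.02)
-    sched = LRMultiplier(
-        opt,
-        WarmupParamScheduler(
-            MultiStepParamScheduler([1, 0.1, 0.01], milestones=[60000, 80000], num_updates=90000),
-            warmup_factor=0.001,
-            warmup_length=100 / 90000,
-        ),
-        max_iter=90000,
-    )
-    for _ in range(5):          # one scheduler step per training iteration
-        opt.step()
-        sched.step()
-    print(sched.get_last_lr())  # still inside the 100-iteration warmup, far below 0.02
-    # The deprecated helper above follows the same linear ramp:
-    print(_get_warmup_factor_at_iter("linear", 50, warmup_iters=100, warmup_factor=0.001))  # 0.5005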
diff --git a/spaces/Chemsseddine/summarisation/README.md b/spaces/Chemsseddine/summarisation/README.md
deleted file mode 100644
index c893cd4db139e4a1ac44a70deae31abfc2286077..0000000000000000000000000000000000000000
--- a/spaces/Chemsseddine/summarisation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Summarisation
-emoji: 📝
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.0.20
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Chris4K/llms_compare/Adobe-Media-Encoder-Cs4-Portablerar.md b/spaces/Chris4K/llms_compare/Adobe-Media-Encoder-Cs4-Portablerar.md
deleted file mode 100644
index 0a81dfdd70488e4b0fec24f7bec6c5fdad090e38..0000000000000000000000000000000000000000
--- a/spaces/Chris4K/llms_compare/Adobe-Media-Encoder-Cs4-Portablerar.md
+++ /dev/null
@@ -1,68 +0,0 @@
-## Adobe Media Encoder Cs4 Portable.rar
-
-
-
-
-
-
-
-
-
-**Download File ✫ [https://urluso.com/2tBNxz](https://urluso.com/2tBNxz)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Download and Use Adobe Media Encoder CS4 Portable
-
-
-
-Adobe Media Encoder CS4 Portable is software that allows you to convert video and audio files to various formats. It is a standalone application that does not require installation and can be run from a USB drive or any other removable media. In this article, we will show you how to download and use Adobe Media Encoder CS4 Portable.
-
-
-
-## Step 1: Download Adobe Media Encoder CS4 Portable
-
-
-
-You can download Adobe Media Encoder CS4 Portable from various online sources, such as 4shared[^1^] or Google Drive[^2^]. The file size is about 68 MB and it is compressed in a RAR archive. You will need a software like WinRAR or 7-Zip to extract the files.
-
-
-
-## Step 2: Extract Adobe Media Encoder CS4 Portable
-
-
-
-After downloading the RAR archive, right-click on it and select "Extract Here" or "Extract to Adobe Media Encoder CS4 Portable". You will see a folder named "Adobe Media Encoder CS4 Portable" with several files inside. You can move this folder to any location you want, such as your desktop or a USB drive.
-
-
-
-## Step 3: Run Adobe Media Encoder CS4 Portable
-
-
-
-To run Adobe Media Encoder CS4 Portable, double-click on the file named "Adobe Media Encoder.exe". You will see a window with a simple interface where you can add, edit, and encode your media files. You can drag and drop files from your computer or browse them using the "Add" button. You can also adjust the settings for each file, such as the format, quality, resolution, frame rate, bitrate, and more. You can preview the output using the "Play" button. When you are ready, click on the "Start Queue" button to begin the encoding process. You can monitor the progress and status of each file in the queue. The encoded files will be saved in the same folder as the original files by default.
-
-
-
-## Conclusion
-
-
-
-Adobe Media Encoder CS4 Portable is a handy tool for converting video and audio files to various formats. It is easy to use and does not require installation. You can download it from online sources and run it from any removable media. It supports a wide range of input and output formats and allows you to customize the encoding settings for each file. It is compatible with Windows XP, Vista, 7, 8, and 10.
-
-
-
-
-
-
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/utils/pitch_utils.py b/spaces/ChrisPreston/diff-svc_minato_aqua/utils/pitch_utils.py
deleted file mode 100644
index 1767810e600c8f82e821ff4fc0a164daddaf7af4..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/utils/pitch_utils.py
+++ /dev/null
@@ -1,76 +0,0 @@
-#########
-# world
-##########
-import librosa
-import numpy as np
-import torch
-
-# gamma = 0
-# mcepInput = 3 # 0 for dB, 3 for magnitude
-# alpha = 0.45
-# en_floor = 10 ** (-80 / 20)
-# FFT_SIZE = 2048
-
-
-
-
-def f0_to_coarse(f0, hparams):
- f0_bin = hparams['f0_bin']
- f0_max = hparams['f0_max']
- f0_min = hparams['f0_min']
- is_torch = isinstance(f0, torch.Tensor)
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1
-
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
- f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(int)
- assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min())
- return f0_coarse
-
-
-def norm_f0(f0, uv, hparams):
- is_torch = isinstance(f0, torch.Tensor)
- if hparams['pitch_norm'] == 'standard':
- f0 = (f0 - hparams['f0_mean']) / hparams['f0_std']
- if hparams['pitch_norm'] == 'log':
- f0 = torch.log2(f0) if is_torch else np.log2(f0)
- if uv is not None and hparams['use_uv']:
- f0[uv > 0] = 0
- return f0
-
-
-def norm_interp_f0(f0, hparams):
- is_torch = isinstance(f0, torch.Tensor)
- if is_torch:
- device = f0.device
- f0 = f0.data.cpu().numpy()
- uv = f0 == 0
- f0 = norm_f0(f0, uv, hparams)
- if sum(uv) == len(f0):
- f0[uv] = 0
- elif sum(uv) > 0:
- f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv])
- uv = torch.FloatTensor(uv)
- f0 = torch.FloatTensor(f0)
- if is_torch:
- f0 = f0.to(device)
- return f0, uv
-
-
-def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None):
- if hparams['pitch_norm'] == 'standard':
- f0 = f0 * hparams['f0_std'] + hparams['f0_mean']
- if hparams['pitch_norm'] == 'log':
- f0 = 2 ** f0
- if min is not None:
- f0 = f0.clamp(min=min)
- if max is not None:
- f0 = f0.clamp(max=max)
- if uv is not None and hparams['use_uv']:
- f0[uv > 0] = 0
- if pitch_padding is not None:
- f0[pitch_padding] = 0
- return f0
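-
-
-if __name__ == "__main__":
-    # Round-trip sketch (added for illustration; the hparams values below are
-    # arbitrary assumptions, not copied from any diff-svc config file).
-    hparams = {
-        'f0_bin': 256, 'f0_min': 50.0, 'f0_max': 1100.0,
-        'pitch_norm': 'standard', 'f0_mean': 220.0, 'f0_std': 60.0,
-        'use_uv': True,
-    }
-    f0 = np.array([0.0, 110.0, 220.0, 440.0, 0.0], dtype=np.float32)  # 0 Hz marks unvoiced frames
-    f0_norm, uv = norm_interp_f0(f0, hparams)   # normalize and interpolate across unvoiced frames
-    f0_back = denorm_f0(f0_norm, uv, hparams)   # voiced frames recover 110/220/440 Hz, unvoiced return to 0
-    coarse = f0_to_coarse(f0_back, hparams)     # integer pitch buckets in [1, f0_bin - 1]
-    print(f0_back, coarse)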
diff --git a/spaces/ClaudioX/mg_sd_esp/README.md b/spaces/ClaudioX/mg_sd_esp/README.md
deleted file mode 100644
index 38db5d9a5bda1d249bdb357c2e97744454aca935..0000000000000000000000000000000000000000
--- a/spaces/ClaudioX/mg_sd_esp/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Mg Sd Esp
-emoji: 😻
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
-license: wtfpl
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/ChatgptAi.py b/spaces/CofAI/chat/g4f/Provider/Providers/ChatgptAi.py
deleted file mode 100644
index 46605175d1ac94fcde252b53ddb81ba99f15706e..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/Provider/Providers/ChatgptAi.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os
-import requests, re
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://chatgpt.ai/gpt-4/'
-model = ['gpt-4']
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- chat = ''
- for message in messages:
- chat += '%s: %s\n' % (message['role'], message['content'])
- chat += 'assistant: '
-
- response = requests.get('https://chatgpt.ai/')
- nonce, post_id, _, bot_id = re.findall(r'data-nonce="(.*)"\n data-post-id="(.*)"\n data-url="(.*)"\n data-bot-id="(.*)"\n data-width', response.text)[0]
-
- headers = {
- 'authority': 'chatgpt.ai',
- 'accept': '*/*',
- 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
- 'cache-control': 'no-cache',
- 'origin': 'https://chatgpt.ai',
- 'pragma': 'no-cache',
- 'referer': 'https://chatgpt.ai/gpt-4/',
- 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- }
- data = {
- '_wpnonce': nonce,
- 'post_id': post_id,
- 'url': 'https://chatgpt.ai/gpt-4',
- 'action': 'wpaicg_chat_shortcode_message',
- 'message': chat,
- 'bot_id': bot_id
- }
-
- response = requests.post('https://chatgpt.ai/wp-admin/admin-ajax.php',
- headers=headers, data=data)
-
- yield (response.json()['data'])
-
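-# Example call (added sketch only): the provider yields the complete answer as a
-# single chunk, so iterating the generator gives one string. This assumes the
-# scraped chatgpt.ai markup is unchanged and the site is reachable, neither of
-# which is guaranteed.
-#
-#     for chunk in _create_completion('gpt-4', [{'role': 'user', 'content': 'Hello'}], stream=True):
-#         print(chunk)
-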
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/ttFont.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/ttFont.py
deleted file mode 100644
index 1bece8e5e4cfc52693e60b1414454cef5505fb8c..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/ttFont.py
+++ /dev/null
@@ -1,1145 +0,0 @@
-from fontTools.config import Config
-from fontTools.misc import xmlWriter
-from fontTools.misc.configTools import AbstractConfig
-from fontTools.misc.textTools import Tag, byteord, tostr
-from fontTools.misc.loggingTools import deprecateArgument
-from fontTools.ttLib import TTLibError
-from fontTools.ttLib.ttGlyphSet import _TTGlyph, _TTGlyphSetCFF, _TTGlyphSetGlyf
-from fontTools.ttLib.sfnt import SFNTReader, SFNTWriter
-from io import BytesIO, StringIO, UnsupportedOperation
-import os
-import logging
-import traceback
-
-log = logging.getLogger(__name__)
-
-
-class TTFont(object):
-
- """Represents a TrueType font.
-
- The object manages file input and output, and offers a convenient way of
-    accessing tables. Tables will only be decompiled when necessary, i.e. when
- they're actually accessed. This means that simple operations can be extremely fast.
-
- Example usage::
-
- >> from fontTools import ttLib
- >> tt = ttLib.TTFont("afont.ttf") # Load an existing font file
- >> tt['maxp'].numGlyphs
- 242
- >> tt['OS/2'].achVendID
- 'B&H\000'
- >> tt['head'].unitsPerEm
- 2048
-
- For details of the objects returned when accessing each table, see :ref:`tables`.
- To add a table to the font, use the :py:func:`newTable` function::
-
- >> os2 = newTable("OS/2")
- >> os2.version = 4
- >> # set other attributes
- >> font["OS/2"] = os2
-
- TrueType fonts can also be serialized to and from XML format (see also the
- :ref:`ttx` binary)::
-
- >> tt.saveXML("afont.ttx")
- Dumping 'LTSH' table...
- Dumping 'OS/2' table...
- [...]
-
- >> tt2 = ttLib.TTFont() # Create a new font object
- >> tt2.importXML("afont.ttx")
- >> tt2['maxp'].numGlyphs
- 242
-
- The TTFont object may be used as a context manager; this will cause the file
- reader to be closed after the context ``with`` block is exited::
-
- with TTFont(filename) as f:
- # Do stuff
-
- Args:
- file: When reading a font from disk, either a pathname pointing to a file,
- or a readable file object.
- res_name_or_index: If running on a Macintosh, either a sfnt resource name or
- an sfnt resource index number. If the index number is zero, TTLib will
- autodetect whether the file is a flat file or a suitcase. (If it is a suitcase,
- only the first 'sfnt' resource will be read.)
- sfntVersion (str): When constructing a font object from scratch, sets the four-byte
- sfnt magic number to be used. Defaults to ``\0\1\0\0`` (TrueType). To create
- an OpenType file, use ``OTTO``.
- flavor (str): Set this to ``woff`` when creating a WOFF file or ``woff2`` for a WOFF2
- file.
- checkChecksums (int): How checksum data should be treated. Default is 0
- (no checking). Set to 1 to check and warn on wrong checksums; set to 2 to
- raise an exception if any wrong checksums are found.
- recalcBBoxes (bool): If true (the default), recalculates ``glyf``, ``CFF ``,
- ``head`` bounding box values and ``hhea``/``vhea`` min/max values on save.
- Also compiles the glyphs on importing, which saves memory consumption and
- time.
- ignoreDecompileErrors (bool): If true, exceptions raised during table decompilation
- will be ignored, and the binary data will be returned for those tables instead.
- recalcTimestamp (bool): If true (the default), sets the ``modified`` timestamp in
- the ``head`` table on save.
- fontNumber (int): The index of the font in a TrueType Collection file.
- lazy (bool): If lazy is set to True, many data structures are loaded lazily, upon
- access only. If it is set to False, many data structures are loaded immediately.
- The default is ``lazy=None`` which is somewhere in between.
- """
-
- def __init__(
- self,
- file=None,
- res_name_or_index=None,
- sfntVersion="\000\001\000\000",
- flavor=None,
- checkChecksums=0,
- verbose=None,
- recalcBBoxes=True,
- allowVID=NotImplemented,
- ignoreDecompileErrors=False,
- recalcTimestamp=True,
- fontNumber=-1,
- lazy=None,
- quiet=None,
- _tableCache=None,
- cfg={},
- ):
- for name in ("verbose", "quiet"):
- val = locals().get(name)
- if val is not None:
- deprecateArgument(name, "configure logging instead")
- setattr(self, name, val)
-
- self.lazy = lazy
- self.recalcBBoxes = recalcBBoxes
- self.recalcTimestamp = recalcTimestamp
- self.tables = {}
- self.reader = None
- self.cfg = cfg.copy() if isinstance(cfg, AbstractConfig) else Config(cfg)
- self.ignoreDecompileErrors = ignoreDecompileErrors
-
- if not file:
- self.sfntVersion = sfntVersion
- self.flavor = flavor
- self.flavorData = None
- return
- seekable = True
- if not hasattr(file, "read"):
- closeStream = True
- # assume file is a string
- if res_name_or_index is not None:
- # see if it contains 'sfnt' resources in the resource or data fork
- from . import macUtils
-
- if res_name_or_index == 0:
- if macUtils.getSFNTResIndices(file):
- # get the first available sfnt font.
- file = macUtils.SFNTResourceReader(file, 1)
- else:
- file = open(file, "rb")
- else:
- file = macUtils.SFNTResourceReader(file, res_name_or_index)
- else:
- file = open(file, "rb")
- else:
- # assume "file" is a readable file object
- closeStream = False
- # SFNTReader wants the input file to be seekable.
- # SpooledTemporaryFile has no seekable() on < 3.11, but still can seek:
- # https://github.com/fonttools/fonttools/issues/3052
- if hasattr(file, "seekable"):
- seekable = file.seekable()
- elif hasattr(file, "seek"):
- try:
- file.seek(0)
- except UnsupportedOperation:
- seekable = False
-
- if not self.lazy:
- # read input file in memory and wrap a stream around it to allow overwriting
- if seekable:
- file.seek(0)
- tmp = BytesIO(file.read())
- if hasattr(file, "name"):
- # save reference to input file name
- tmp.name = file.name
- if closeStream:
- file.close()
- file = tmp
- elif not seekable:
- raise TTLibError("Input file must be seekable when lazy=True")
- self._tableCache = _tableCache
- self.reader = SFNTReader(file, checkChecksums, fontNumber=fontNumber)
- self.sfntVersion = self.reader.sfntVersion
- self.flavor = self.reader.flavor
- self.flavorData = self.reader.flavorData
-
- def __enter__(self):
- return self
-
- def __exit__(self, type, value, traceback):
- self.close()
-
- def close(self):
- """If we still have a reader object, close it."""
- if self.reader is not None:
- self.reader.close()
-
- def save(self, file, reorderTables=True):
- """Save the font to disk.
-
- Args:
- file: Similarly to the constructor, can be either a pathname or a writable
- file object.
-            reorderTables (Optional[bool]): If true (the default), reorder the tables,
- sorting them by tag (recommended by the OpenType specification). If
- false, retain the original font order. If None, reorder by table
- dependency (fastest).
- """
- if not hasattr(file, "write"):
- if self.lazy and self.reader.file.name == file:
- raise TTLibError("Can't overwrite TTFont when 'lazy' attribute is True")
- createStream = True
- else:
- # assume "file" is a writable file object
- createStream = False
-
- tmp = BytesIO()
-
- writer_reordersTables = self._save(tmp)
-
- if not (
- reorderTables is None
- or writer_reordersTables
- or (reorderTables is False and self.reader is None)
- ):
- if reorderTables is False:
- # sort tables using the original font's order
- tableOrder = list(self.reader.keys())
- else:
- # use the recommended order from the OpenType specification
- tableOrder = None
- tmp.flush()
- tmp2 = BytesIO()
- reorderFontTables(tmp, tmp2, tableOrder)
- tmp.close()
- tmp = tmp2
-
- if createStream:
- # "file" is a path
- with open(file, "wb") as file:
- file.write(tmp.getvalue())
- else:
- file.write(tmp.getvalue())
-
- tmp.close()
-
- def _save(self, file, tableCache=None):
- """Internal function, to be shared by save() and TTCollection.save()"""
-
- if self.recalcTimestamp and "head" in self:
- self[
- "head"
- ] # make sure 'head' is loaded so the recalculation is actually done
-
- tags = list(self.keys())
- if "GlyphOrder" in tags:
- tags.remove("GlyphOrder")
- numTables = len(tags)
- # write to a temporary stream to allow saving to unseekable streams
- writer = SFNTWriter(
- file, numTables, self.sfntVersion, self.flavor, self.flavorData
- )
-
- done = []
- for tag in tags:
- self._writeTable(tag, writer, done, tableCache)
-
- writer.close()
-
- return writer.reordersTables()
-
- def saveXML(self, fileOrPath, newlinestr="\n", **kwargs):
- """Export the font as TTX (an XML-based text file), or as a series of text
- files when splitTables is true. In the latter case, the 'fileOrPath'
- argument should be a path to a directory.
- The 'tables' argument must either be false (dump all tables) or a
- list of tables to dump. The 'skipTables' argument may be a list of tables
- to skip, but only when the 'tables' argument is false.
- """
-
- writer = xmlWriter.XMLWriter(fileOrPath, newlinestr=newlinestr)
- self._saveXML(writer, **kwargs)
- writer.close()
-
- def _saveXML(
- self,
- writer,
- writeVersion=True,
- quiet=None,
- tables=None,
- skipTables=None,
- splitTables=False,
- splitGlyphs=False,
- disassembleInstructions=True,
- bitmapGlyphDataFormat="raw",
- ):
-
- if quiet is not None:
- deprecateArgument("quiet", "configure logging instead")
-
- self.disassembleInstructions = disassembleInstructions
- self.bitmapGlyphDataFormat = bitmapGlyphDataFormat
- if not tables:
- tables = list(self.keys())
- if "GlyphOrder" not in tables:
- tables = ["GlyphOrder"] + tables
- if skipTables:
- for tag in skipTables:
- if tag in tables:
- tables.remove(tag)
- numTables = len(tables)
-
- if writeVersion:
- from fontTools import version
-
- version = ".".join(version.split(".")[:2])
- writer.begintag(
- "ttFont",
- sfntVersion=repr(tostr(self.sfntVersion))[1:-1],
- ttLibVersion=version,
- )
- else:
- writer.begintag("ttFont", sfntVersion=repr(tostr(self.sfntVersion))[1:-1])
- writer.newline()
-
- # always splitTables if splitGlyphs is enabled
- splitTables = splitTables or splitGlyphs
-
- if not splitTables:
- writer.newline()
- else:
- path, ext = os.path.splitext(writer.filename)
-
- for i in range(numTables):
- tag = tables[i]
- if splitTables:
- tablePath = path + "." + tagToIdentifier(tag) + ext
- tableWriter = xmlWriter.XMLWriter(
- tablePath, newlinestr=writer.newlinestr
- )
- tableWriter.begintag("ttFont", ttLibVersion=version)
- tableWriter.newline()
- tableWriter.newline()
- writer.simpletag(tagToXML(tag), src=os.path.basename(tablePath))
- writer.newline()
- else:
- tableWriter = writer
- self._tableToXML(tableWriter, tag, splitGlyphs=splitGlyphs)
- if splitTables:
- tableWriter.endtag("ttFont")
- tableWriter.newline()
- tableWriter.close()
- writer.endtag("ttFont")
- writer.newline()
-
- def _tableToXML(self, writer, tag, quiet=None, splitGlyphs=False):
- if quiet is not None:
- deprecateArgument("quiet", "configure logging instead")
- if tag in self:
- table = self[tag]
- report = "Dumping '%s' table..." % tag
- else:
- report = "No '%s' table found." % tag
- log.info(report)
- if tag not in self:
- return
- xmlTag = tagToXML(tag)
- attrs = dict()
- if hasattr(table, "ERROR"):
- attrs["ERROR"] = "decompilation error"
- from .tables.DefaultTable import DefaultTable
-
- if table.__class__ == DefaultTable:
- attrs["raw"] = True
- writer.begintag(xmlTag, **attrs)
- writer.newline()
- if tag == "glyf":
- table.toXML(writer, self, splitGlyphs=splitGlyphs)
- else:
- table.toXML(writer, self)
- writer.endtag(xmlTag)
- writer.newline()
- writer.newline()
-
- def importXML(self, fileOrPath, quiet=None):
- """Import a TTX file (an XML-based text format), so as to recreate
- a font object.
- """
- if quiet is not None:
- deprecateArgument("quiet", "configure logging instead")
-
- if "maxp" in self and "post" in self:
- # Make sure the glyph order is loaded, as it otherwise gets
- # lost if the XML doesn't contain the glyph order, yet does
- # contain the table which was originally used to extract the
- # glyph names from (ie. 'post', 'cmap' or 'CFF ').
- self.getGlyphOrder()
-
- from fontTools.misc import xmlReader
-
- reader = xmlReader.XMLReader(fileOrPath, self)
- reader.read()
-
- def isLoaded(self, tag):
- """Return true if the table identified by ``tag`` has been
- decompiled and loaded into memory."""
- return tag in self.tables
-
- def has_key(self, tag):
- """Test if the table identified by ``tag`` is present in the font.
-
- As well as this method, ``tag in font`` can also be used to determine the
- presence of the table."""
- if self.isLoaded(tag):
- return True
- elif self.reader and tag in self.reader:
- return True
- elif tag == "GlyphOrder":
- return True
- else:
- return False
-
- __contains__ = has_key
-
- def keys(self):
- """Returns the list of tables in the font, along with the ``GlyphOrder`` pseudo-table."""
- keys = list(self.tables.keys())
- if self.reader:
- for key in list(self.reader.keys()):
- if key not in keys:
- keys.append(key)
-
- if "GlyphOrder" in keys:
- keys.remove("GlyphOrder")
- keys = sortedTagList(keys)
- return ["GlyphOrder"] + keys
-
- def ensureDecompiled(self, recurse=None):
- """Decompile all the tables, even if a TTFont was opened in 'lazy' mode."""
- for tag in self.keys():
- table = self[tag]
- if recurse is None:
- recurse = self.lazy is not False
- if recurse and hasattr(table, "ensureDecompiled"):
- table.ensureDecompiled(recurse=recurse)
- self.lazy = False
-
- def __len__(self):
- return len(list(self.keys()))
-
- def __getitem__(self, tag):
- tag = Tag(tag)
- table = self.tables.get(tag)
- if table is None:
- if tag == "GlyphOrder":
- table = GlyphOrder(tag)
- self.tables[tag] = table
- elif self.reader is not None:
- table = self._readTable(tag)
- else:
- raise KeyError("'%s' table not found" % tag)
- return table
-
- def _readTable(self, tag):
- log.debug("Reading '%s' table from disk", tag)
- data = self.reader[tag]
- if self._tableCache is not None:
- table = self._tableCache.get((tag, data))
- if table is not None:
- return table
- tableClass = getTableClass(tag)
- table = tableClass(tag)
- self.tables[tag] = table
- log.debug("Decompiling '%s' table", tag)
- try:
- table.decompile(data, self)
- except Exception:
- if not self.ignoreDecompileErrors:
- raise
- # fall back to DefaultTable, retaining the binary table data
- log.exception(
- "An exception occurred during the decompilation of the '%s' table", tag
- )
- from .tables.DefaultTable import DefaultTable
-
- file = StringIO()
- traceback.print_exc(file=file)
- table = DefaultTable(tag)
- table.ERROR = file.getvalue()
- self.tables[tag] = table
- table.decompile(data, self)
- if self._tableCache is not None:
- self._tableCache[(tag, data)] = table
- return table
-
- def __setitem__(self, tag, table):
- self.tables[Tag(tag)] = table
-
- def __delitem__(self, tag):
- if tag not in self:
- raise KeyError("'%s' table not found" % tag)
- if tag in self.tables:
- del self.tables[tag]
- if self.reader and tag in self.reader:
- del self.reader[tag]
-
- def get(self, tag, default=None):
- """Returns the table if it exists or (optionally) a default if it doesn't."""
- try:
- return self[tag]
- except KeyError:
- return default
-
- def setGlyphOrder(self, glyphOrder):
- """Set the glyph order
-
- Args:
- glyphOrder ([str]): List of glyph names in order.
- """
- self.glyphOrder = glyphOrder
- if hasattr(self, "_reverseGlyphOrderDict"):
- del self._reverseGlyphOrderDict
- if self.isLoaded("glyf"):
- self["glyf"].setGlyphOrder(glyphOrder)
-
- def getGlyphOrder(self):
- """Returns a list of glyph names ordered by their position in the font."""
- try:
- return self.glyphOrder
- except AttributeError:
- pass
- if "CFF " in self:
- cff = self["CFF "]
- self.glyphOrder = cff.getGlyphOrder()
- elif "post" in self:
- # TrueType font
- glyphOrder = self["post"].getGlyphOrder()
- if glyphOrder is None:
- #
- # No names found in the 'post' table.
- # Try to create glyph names from the unicode cmap (if available)
- # in combination with the Adobe Glyph List (AGL).
- #
- self._getGlyphNamesFromCmap()
- elif len(glyphOrder) < self["maxp"].numGlyphs:
- #
- # Not enough names found in the 'post' table.
- # Can happen when 'post' format 1 is improperly used on a font that
-                # has more than 258 glyphs (the length of 'standardGlyphOrder').
- #
- log.warning(
- "Not enough names found in the 'post' table, generating them from cmap instead"
- )
- self._getGlyphNamesFromCmap()
- else:
- self.glyphOrder = glyphOrder
- else:
- self._getGlyphNamesFromCmap()
- return self.glyphOrder
-
- def _getGlyphNamesFromCmap(self):
- #
- # This is rather convoluted, but then again, it's an interesting problem:
- # - we need to use the unicode values found in the cmap table to
- # build glyph names (eg. because there is only a minimal post table,
- # or none at all).
- # - but the cmap parser also needs glyph names to work with...
- # So here's what we do:
- # - make up glyph names based on glyphID
- # - load a temporary cmap table based on those names
- # - extract the unicode values, build the "real" glyph names
- # - unload the temporary cmap table
- #
- if self.isLoaded("cmap"):
- # Bootstrapping: we're getting called by the cmap parser
- # itself. This means self.tables['cmap'] contains a partially
- # loaded cmap, making it impossible to get at a unicode
- # subtable here. We remove the partially loaded cmap and
- # restore it later.
- # This only happens if the cmap table is loaded before any
- # other table that does f.getGlyphOrder() or f.getGlyphName().
- cmapLoading = self.tables["cmap"]
- del self.tables["cmap"]
- else:
- cmapLoading = None
- # Make up glyph names based on glyphID, which will be used by the
- # temporary cmap and by the real cmap in case we don't find a unicode
- # cmap.
- numGlyphs = int(self["maxp"].numGlyphs)
- glyphOrder = [None] * numGlyphs
- glyphOrder[0] = ".notdef"
- for i in range(1, numGlyphs):
- glyphOrder[i] = "glyph%.5d" % i
- # Set the glyph order, so the cmap parser has something
- # to work with (so we don't get called recursively).
- self.glyphOrder = glyphOrder
-
- # Make up glyph names based on the reversed cmap table. Because some
- # glyphs (eg. ligatures or alternates) may not be reachable via cmap,
- # this naming table will usually not cover all glyphs in the font.
- # If the font has no Unicode cmap table, reversecmap will be empty.
- if "cmap" in self:
- reversecmap = self["cmap"].buildReversed()
- else:
- reversecmap = {}
- useCount = {}
- for i in range(numGlyphs):
- tempName = glyphOrder[i]
- if tempName in reversecmap:
- # If a font maps both U+0041 LATIN CAPITAL LETTER A and
- # U+0391 GREEK CAPITAL LETTER ALPHA to the same glyph,
- # we prefer naming the glyph as "A".
- glyphName = self._makeGlyphName(min(reversecmap[tempName]))
- numUses = useCount[glyphName] = useCount.get(glyphName, 0) + 1
- if numUses > 1:
- glyphName = "%s.alt%d" % (glyphName, numUses - 1)
- glyphOrder[i] = glyphName
-
- if "cmap" in self:
- # Delete the temporary cmap table from the cache, so it can
- # be parsed again with the right names.
- del self.tables["cmap"]
- self.glyphOrder = glyphOrder
- if cmapLoading:
- # restore partially loaded cmap, so it can continue loading
- # using the proper names.
- self.tables["cmap"] = cmapLoading
-
- @staticmethod
- def _makeGlyphName(codepoint):
- from fontTools import agl # Adobe Glyph List
-
- if codepoint in agl.UV2AGL:
- return agl.UV2AGL[codepoint]
- elif codepoint <= 0xFFFF:
- return "uni%04X" % codepoint
- else:
- return "u%X" % codepoint
-
- def getGlyphNames(self):
- """Get a list of glyph names, sorted alphabetically."""
- glyphNames = sorted(self.getGlyphOrder())
- return glyphNames
-
- def getGlyphNames2(self):
- """Get a list of glyph names, sorted alphabetically,
- but not case sensitive.
- """
- from fontTools.misc import textTools
-
- return textTools.caselessSort(self.getGlyphOrder())
-
- def getGlyphName(self, glyphID):
- """Returns the name for the glyph with the given ID.
-
-        If no name is available, synthesises one with the form ``glyphXXXXX`` where
-        ``XXXXX`` is the zero-padded glyph ID.
- """
- try:
- return self.getGlyphOrder()[glyphID]
- except IndexError:
- return "glyph%.5d" % glyphID
-
- def getGlyphNameMany(self, lst):
- """Converts a list of glyph IDs into a list of glyph names."""
- glyphOrder = self.getGlyphOrder()
- cnt = len(glyphOrder)
- return [glyphOrder[gid] if gid < cnt else "glyph%.5d" % gid for gid in lst]
-
- def getGlyphID(self, glyphName):
- """Returns the ID of the glyph with the given name."""
- try:
- return self.getReverseGlyphMap()[glyphName]
- except KeyError:
- if glyphName[:5] == "glyph":
- try:
- return int(glyphName[5:])
- except (NameError, ValueError):
- raise KeyError(glyphName)
- raise
-
- def getGlyphIDMany(self, lst):
- """Converts a list of glyph names into a list of glyph IDs."""
- d = self.getReverseGlyphMap()
- try:
- return [d[glyphName] for glyphName in lst]
- except KeyError:
- getGlyphID = self.getGlyphID
- return [getGlyphID(glyphName) for glyphName in lst]
-
- def getReverseGlyphMap(self, rebuild=False):
- """Returns a mapping of glyph names to glyph IDs."""
- if rebuild or not hasattr(self, "_reverseGlyphOrderDict"):
- self._buildReverseGlyphOrderDict()
- return self._reverseGlyphOrderDict
-
- def _buildReverseGlyphOrderDict(self):
- self._reverseGlyphOrderDict = d = {}
- for glyphID, glyphName in enumerate(self.getGlyphOrder()):
- d[glyphName] = glyphID
- return d
-
- def _writeTable(self, tag, writer, done, tableCache=None):
- """Internal helper function for self.save(). Keeps track of
- inter-table dependencies.
- """
- if tag in done:
- return
- tableClass = getTableClass(tag)
- for masterTable in tableClass.dependencies:
- if masterTable not in done:
- if masterTable in self:
- self._writeTable(masterTable, writer, done, tableCache)
- else:
- done.append(masterTable)
- done.append(tag)
- tabledata = self.getTableData(tag)
- if tableCache is not None:
- entry = tableCache.get((Tag(tag), tabledata))
- if entry is not None:
- log.debug("reusing '%s' table", tag)
- writer.setEntry(tag, entry)
- return
- log.debug("Writing '%s' table to disk", tag)
- writer[tag] = tabledata
- if tableCache is not None:
- tableCache[(Tag(tag), tabledata)] = writer[tag]
-
- def getTableData(self, tag):
- """Returns the binary representation of a table.
-
- If the table is currently loaded and in memory, the data is compiled to
- binary and returned; if it is not currently loaded, the binary data is
- read from the font file and returned.
- """
- tag = Tag(tag)
- if self.isLoaded(tag):
- log.debug("Compiling '%s' table", tag)
- return self.tables[tag].compile(self)
- elif self.reader and tag in self.reader:
- log.debug("Reading '%s' table from disk", tag)
- return self.reader[tag]
- else:
- raise KeyError(tag)
-
- def getGlyphSet(self, preferCFF=True, location=None, normalized=False):
- """Return a generic GlyphSet, which is a dict-like object
- mapping glyph names to glyph objects. The returned glyph objects
- have a ``.draw()`` method that supports the Pen protocol, and will
- have an attribute named 'width'.
-
- If the font is CFF-based, the outlines will be taken from the ``CFF ``
- or ``CFF2`` tables. Otherwise the outlines will be taken from the
- ``glyf`` table.
-
- If the font contains both a ``CFF ``/``CFF2`` and a ``glyf`` table, you
- can use the ``preferCFF`` argument to specify which one should be taken.
- If the font contains both a ``CFF `` and a ``CFF2`` table, the latter is
- taken.
-
- If the ``location`` parameter is set, it should be a dictionary mapping
- four-letter variation tags to their float values, and the returned
- glyph-set will represent an instance of a variable font at that
- location.
-
- If the ``normalized`` variable is set to True, that location is
- interpreted as in the normalized (-1..+1) space, otherwise it is in the
- font's defined axes space.
- """
- if location and "fvar" not in self:
- location = None
- if location and not normalized:
- location = self.normalizeLocation(location)
- if ("CFF " in self or "CFF2" in self) and (preferCFF or "glyf" not in self):
- return _TTGlyphSetCFF(self, location)
- elif "glyf" in self:
- return _TTGlyphSetGlyf(self, location)
- else:
- raise TTLibError("Font contains no outlines")
-
- def normalizeLocation(self, location):
- """Normalize a ``location`` from the font's defined axes space (also
- known as user space) into the normalized (-1..+1) space. It applies
- ``avar`` mapping if the font contains an ``avar`` table.
-
- The ``location`` parameter should be a dictionary mapping four-letter
- variation tags to their float values.
-
- Raises ``TTLibError`` if the font is not a variable font.
- """
- from fontTools.varLib.models import normalizeLocation, piecewiseLinearMap
-
- if "fvar" not in self:
- raise TTLibError("Not a variable font")
-
- axes = {
- a.axisTag: (a.minValue, a.defaultValue, a.maxValue)
- for a in self["fvar"].axes
- }
- location = normalizeLocation(location, axes)
- if "avar" in self:
- avar = self["avar"]
- avarSegments = avar.segments
- mappedLocation = {}
- for axisTag, value in location.items():
- avarMapping = avarSegments.get(axisTag, None)
- if avarMapping is not None:
- value = piecewiseLinearMap(value, avarMapping)
- mappedLocation[axisTag] = value
- location = mappedLocation
- return location
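-
-    # Worked example for normalizeLocation() (added for illustration; the 'wght' axis
-    # range below is an assumption, not read from any particular font): with
-    # axes {'wght': (100, 400, 900)}, the user-space location {'wght': 700}
-    # normalizes to (700 - 400) / (900 - 400) = 0.6, which an 'avar' table, if
-    # present, may then remap piecewise-linearly.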
-
- def getBestCmap(
- self,
- cmapPreferences=(
- (3, 10),
- (0, 6),
- (0, 4),
- (3, 1),
- (0, 3),
- (0, 2),
- (0, 1),
- (0, 0),
- ),
- ):
- """Returns the 'best' Unicode cmap dictionary available in the font
- or ``None``, if no Unicode cmap subtable is available.
-
- By default it will search for the following (platformID, platEncID)
- pairs in order::
-
- (3, 10), # Windows Unicode full repertoire
- (0, 6), # Unicode full repertoire (format 13 subtable)
- (0, 4), # Unicode 2.0 full repertoire
- (3, 1), # Windows Unicode BMP
- (0, 3), # Unicode 2.0 BMP
- (0, 2), # Unicode ISO/IEC 10646
- (0, 1), # Unicode 1.1
- (0, 0) # Unicode 1.0
-
- This particular order matches what HarfBuzz uses to choose what
- subtable to use by default. This order prefers the largest-repertoire
- subtable, and among those, prefers the Windows-platform over the
- Unicode-platform as the former has wider support.
-
- This order can be customized via the ``cmapPreferences`` argument.
- """
- return self["cmap"].getBestCmap(cmapPreferences=cmapPreferences)
-
-
-class GlyphOrder(object):
-
- """A pseudo table. The glyph order isn't in the font as a separate
- table, but it's nice to present it as such in the TTX format.
- """
-
- def __init__(self, tag=None):
- pass
-
- def toXML(self, writer, ttFont):
- glyphOrder = ttFont.getGlyphOrder()
- writer.comment(
- "The 'id' attribute is only for humans; " "it is ignored when parsed."
- )
- writer.newline()
- for i in range(len(glyphOrder)):
- glyphName = glyphOrder[i]
- writer.simpletag("GlyphID", id=i, name=glyphName)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if not hasattr(self, "glyphOrder"):
- self.glyphOrder = []
- if name == "GlyphID":
- self.glyphOrder.append(attrs["name"])
- ttFont.setGlyphOrder(self.glyphOrder)
-
-
-def getTableModule(tag):
- """Fetch the packer/unpacker module for a table.
- Return None when no module is found.
- """
- from . import tables
-
- pyTag = tagToIdentifier(tag)
- try:
- __import__("fontTools.ttLib.tables." + pyTag)
- except ImportError as err:
- # If pyTag is found in the ImportError message,
- # means table is not implemented. If it's not
- # there, then some other module is missing, don't
- # suppress the error.
- if str(err).find(pyTag) >= 0:
- return None
- else:
- raise err
- else:
- return getattr(tables, pyTag)
-
-
-# Registry for custom table packer/unpacker classes. Keys are table
-# tags, values are (moduleName, className) tuples.
-# See registerCustomTableClass() and getCustomTableClass()
-_customTableRegistry = {}
-
-
-def registerCustomTableClass(tag, moduleName, className=None):
- """Register a custom packer/unpacker class for a table.
-
- The 'moduleName' must be an importable module. If no 'className'
- is given, it is derived from the tag, for example it will be
- ``table_C_U_S_T_`` for a 'CUST' tag.
-
- The registered table class should be a subclass of
- :py:class:`fontTools.ttLib.tables.DefaultTable.DefaultTable`
- """
- if className is None:
- className = "table_" + tagToIdentifier(tag)
- _customTableRegistry[tag] = (moduleName, className)
-
-
-def unregisterCustomTableClass(tag):
- """Unregister the custom packer/unpacker class for a table."""
- del _customTableRegistry[tag]
-
-
-def getCustomTableClass(tag):
- """Return the custom table class for tag, if one has been registered
- with 'registerCustomTableClass()'. Else return None.
- """
- if tag not in _customTableRegistry:
- return None
- import importlib
-
- moduleName, className = _customTableRegistry[tag]
- module = importlib.import_module(moduleName)
- return getattr(module, className)
-
-
-def getTableClass(tag):
- """Fetch the packer/unpacker class for a table."""
- tableClass = getCustomTableClass(tag)
- if tableClass is not None:
- return tableClass
- module = getTableModule(tag)
- if module is None:
- from .tables.DefaultTable import DefaultTable
-
- return DefaultTable
- pyTag = tagToIdentifier(tag)
- tableClass = getattr(module, "table_" + pyTag)
- return tableClass
-
-
-def getClassTag(klass):
- """Fetch the table tag for a class object."""
- name = klass.__name__
- assert name[:6] == "table_"
- name = name[6:] # Chop 'table_'
- return identifierToTag(name)
-
-
-def newTable(tag):
- """Return a new instance of a table."""
- tableClass = getTableClass(tag)
- return tableClass(tag)
-
-
-def _escapechar(c):
- """Helper function for tagToIdentifier()"""
- import re
-
- if re.match("[a-z0-9]", c):
- return "_" + c
- elif re.match("[A-Z]", c):
- return c + "_"
- else:
- return hex(byteord(c))[2:]
-
-
-def tagToIdentifier(tag):
- """Convert a table tag to a valid (but UGLY) python identifier,
- as well as a filename that's guaranteed to be unique even on a
- caseless file system. Each character is mapped to two characters.
- Lowercase letters get an underscore before the letter, uppercase
- letters get an underscore after the letter. Trailing spaces are
- trimmed. Illegal characters are escaped as two hex bytes. If the
- result starts with a number (as the result of a hex escape), an
- extra underscore is prepended. Examples::
-
- >>> tagToIdentifier('glyf')
- '_g_l_y_f'
- >>> tagToIdentifier('cvt ')
- '_c_v_t'
- >>> tagToIdentifier('OS/2')
- 'O_S_2f_2'
- """
- import re
-
- tag = Tag(tag)
- if tag == "GlyphOrder":
- return tag
- assert len(tag) == 4, "tag should be 4 characters long"
- while len(tag) > 1 and tag[-1] == " ":
- tag = tag[:-1]
- ident = ""
- for c in tag:
- ident = ident + _escapechar(c)
- if re.match("[0-9]", ident):
- ident = "_" + ident
- return ident
-
-
-def identifierToTag(ident):
- """the opposite of tagToIdentifier()"""
- if ident == "GlyphOrder":
- return ident
- if len(ident) % 2 and ident[0] == "_":
- ident = ident[1:]
- assert not (len(ident) % 2)
- tag = ""
- for i in range(0, len(ident), 2):
- if ident[i] == "_":
- tag = tag + ident[i + 1]
- elif ident[i + 1] == "_":
- tag = tag + ident[i]
- else:
- # assume hex
- tag = tag + chr(int(ident[i : i + 2], 16))
- # append trailing spaces
- tag = tag + (4 - len(tag)) * " "
- return Tag(tag)
-
-
-def tagToXML(tag):
- """Similarly to tagToIdentifier(), this converts a TT tag
- to a valid XML element name. Since XML element names are
- case sensitive, this is a fairly simple/readable translation.
- """
- import re
-
- tag = Tag(tag)
- if tag == "OS/2":
- return "OS_2"
- elif tag == "GlyphOrder":
- return tag
- if re.match("[A-Za-z_][A-Za-z_0-9]* *$", tag):
- return tag.strip()
- else:
- return tagToIdentifier(tag)
-
-
-def xmlToTag(tag):
- """The opposite of tagToXML()"""
- if tag == "OS_2":
- return Tag("OS/2")
- if len(tag) == 8:
- return identifierToTag(tag)
- else:
- return Tag(tag + " " * (4 - len(tag)))
-
-
-# Table order as recommended in the OpenType specification 1.4
-TTFTableOrder = [
- "head",
- "hhea",
- "maxp",
- "OS/2",
- "hmtx",
- "LTSH",
- "VDMX",
- "hdmx",
- "cmap",
- "fpgm",
- "prep",
- "cvt ",
- "loca",
- "glyf",
- "kern",
- "name",
- "post",
- "gasp",
- "PCLT",
-]
-
-OTFTableOrder = ["head", "hhea", "maxp", "OS/2", "name", "cmap", "post", "CFF "]
-
-
-def sortedTagList(tagList, tableOrder=None):
- """Return a sorted copy of tagList, sorted according to the OpenType
- specification, or according to a custom tableOrder. If given and not
- None, tableOrder needs to be a list of tag names.
- """
- tagList = sorted(tagList)
- if tableOrder is None:
- if "DSIG" in tagList:
- # DSIG should be last (XXX spec reference?)
- tagList.remove("DSIG")
- tagList.append("DSIG")
- if "CFF " in tagList:
- tableOrder = OTFTableOrder
- else:
- tableOrder = TTFTableOrder
- orderedTables = []
- for tag in tableOrder:
- if tag in tagList:
- orderedTables.append(tag)
- tagList.remove(tag)
- orderedTables.extend(tagList)
- return orderedTables
-
-
-def reorderFontTables(inFile, outFile, tableOrder=None, checkChecksums=False):
- """Rewrite a font file, ordering the tables as recommended by the
- OpenType specification 1.4.
- """
- inFile.seek(0)
- outFile.seek(0)
- reader = SFNTReader(inFile, checkChecksums=checkChecksums)
- writer = SFNTWriter(
- outFile,
- len(reader.tables),
- reader.sfntVersion,
- reader.flavor,
- reader.flavorData,
- )
- tables = list(reader.keys())
- for tag in sortedTagList(tables, tableOrder):
- writer[tag] = reader[tag]
- writer.close()
-
-
-def maxPowerOfTwo(x):
- """Return the highest exponent of two, so that
- (2 ** exponent) <= x. Return 0 if x is 0.
- """
- exponent = 0
- while x:
- x = x >> 1
- exponent = exponent + 1
- return max(exponent - 1, 0)
-
-
-def getSearchRange(n, itemSize=16):
- """Calculate searchRange, entrySelector, rangeShift."""
- # itemSize defaults to 16, for backward compatibility
- # with upstream fonttools.
- exponent = maxPowerOfTwo(n)
- searchRange = (2**exponent) * itemSize
- entrySelector = exponent
- rangeShift = max(0, n * itemSize - searchRange)
- return searchRange, entrySelector, rangeShift
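-
-
-if __name__ == "__main__":
-    # Quick numeric check of the helpers above (added for illustration; 39 is an
-    # arbitrary item count chosen for the example).
-    print(maxPowerOfTwo(39))   # 5, because 2 ** 5 = 32 <= 39 < 64
-    print(getSearchRange(39))  # (512, 5, 112) = (32 * 16, log2(32), 39 * 16 - 512)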
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/inference/_generated/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/inference/_generated/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/postprocessing/__init__.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/postprocessing/__init__.py
deleted file mode 100644
index a6fb3961ff067e512a90ae61786a9ad1cdc25a30..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/postprocessing/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-"""
-@Date: 2021/10/06
-@description:
-"""
diff --git a/spaces/Demi2809/rvc-models/infer_pack/modules.py b/spaces/Demi2809/rvc-models/infer_pack/modules.py
deleted file mode 100644
index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000
--- a/spaces/Demi2809/rvc-models/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
-            # the last layer does not need the residual half, only the skip output
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
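ResidualCouplingLayer above is an affine coupling flow: half of the channels passes through untouched and predicts a shift m and log-scale logs for the other half, so the mapping is exactly invertible and its log-determinant is just the sum of logs. A minimal standalone sketch of that forward/reverse relationship on plain tensors (illustrative shapes, not the module itself):

import torch

x1 = torch.randn(1, 2, 4)    # second half of the channels
m = torch.randn(1, 2, 4)     # shift predicted from the first half
logs = torch.randn(1, 2, 4)  # log-scale predicted from the first half

# forward (reverse=False): y1 = m + x1 * exp(logs), logdet = sum(logs)
y1 = m + x1 * torch.exp(logs)
logdet = logs.sum()

# reverse (reverse=True): the original x1 is recovered exactly
x1_rec = (y1 - m) * torch.exp(-logs)
assert torch.allclose(x1, x1_rec, atol=1e-6)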
diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/biggan/__init__.py b/spaces/Dinoking/Guccio-AI-Designer/models/biggan/__init__.py
deleted file mode 100644
index 583509736f3503bc277d5d2e2a69f445f7df8517..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/models/biggan/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from pathlib import Path
-import sys
-
-module_path = Path(__file__).parent / 'pytorch_biggan'
-sys.path.append(str(module_path.resolve()))
-from pytorch_pretrained_biggan import *
-from pytorch_pretrained_biggan.model import GenBlock
-from pytorch_pretrained_biggan.file_utils import http_get, s3_get
\ No newline at end of file
diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/models/models.py b/spaces/Dorado607/ChuanhuChatGPT/modules/models/models.py
deleted file mode 100644
index 23338ab3f20b9f541fa30c9879b28f488ccf9d04..0000000000000000000000000000000000000000
--- a/spaces/Dorado607/ChuanhuChatGPT/modules/models/models.py
+++ /dev/null
@@ -1,670 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import commentjson as cjson
-import os
-import sys
-import requests
-import urllib3
-import platform
-import base64
-from io import BytesIO
-from PIL import Image
-
-from tqdm import tqdm
-import colorama
-import asyncio
-import aiohttp
-from enum import Enum
-import uuid
-
-from ..presets import *
-from ..index_func import *
-from ..utils import *
-from .. import shared
-from ..config import retrieve_proxy, usage_limit, sensitive_id
-from modules import config
-from .base_model import BaseLLMModel, ModelType
-
-
-class OpenAIClient(BaseLLMModel):
- def __init__(
- self,
- model_name,
- api_key,
- system_prompt=INITIAL_SYSTEM_PROMPT,
- temperature=1.0,
- top_p=1.0,
- user_name=""
- ) -> None:
- super().__init__(
- model_name=model_name,
- temperature=temperature,
- top_p=top_p,
- system_prompt=system_prompt,
- user=user_name
- )
- self.api_key = api_key
- self.need_api_key = True
- self._refresh_header()
-
- def get_answer_stream_iter(self):
- response = self._get_response(stream=True)
- if response is not None:
- iter = self._decode_chat_response(response)
- partial_text = ""
- for i in iter:
- partial_text += i
- yield partial_text
- else:
- yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
-
- def get_answer_at_once(self):
- response = self._get_response()
- response = json.loads(response.text)
- content = response["choices"][0]["message"]["content"]
- total_token_count = response["usage"]["total_tokens"]
- return content, total_token_count
-
- def count_token(self, user_input):
- input_token_count = count_token(construct_user(user_input))
- if self.system_prompt is not None and len(self.all_token_counts) == 0:
- system_prompt_token_count = count_token(
- construct_system(self.system_prompt)
- )
- return input_token_count + system_prompt_token_count
- return input_token_count
-
- def billing_info(self):
- try:
- curr_time = datetime.datetime.now()
- last_day_of_month = get_last_day_of_month(
- curr_time).strftime("%Y-%m-%d")
- first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d")
- usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}"
- try:
- usage_data = self._get_billing_data(usage_url)
- except Exception as e:
- # logging.error(f"获取API使用情况失败: " + str(e))
- if "Invalid authorization header" in str(e):
- return i18n("**获取API使用情况失败**,需在填写`config.json`中正确填写sensitive_id")
- elif "Incorrect API key provided: sess" in str(e):
- return i18n("**获取API使用情况失败**,sensitive_id错误或已过期")
- return i18n("**获取API使用情况失败**")
- # rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100)
- rounded_usage = round(usage_data["total_usage"] / 100, 5)
- usage_percent = round(usage_data["total_usage"] / usage_limit, 2)
- # return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}"
- return get_html("billing_info.html").format(
- label = i18n("本月使用金额"),
- usage_percent = usage_percent,
- rounded_usage = rounded_usage,
- usage_limit = usage_limit
- )
- except requests.exceptions.ConnectTimeout:
- status_text = (
- STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- )
- return status_text
- except requests.exceptions.ReadTimeout:
- status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- return status_text
- except Exception as e:
- import traceback
- traceback.print_exc()
- logging.error(i18n("获取API使用情况失败:") + str(e))
- return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG
-
- def set_token_upper_limit(self, new_upper_limit):
- pass
-
-    @shared.state.switching_api_key  # this decorator has no effect unless multi-account (API key switching) mode is enabled
- def _get_response(self, stream=False):
- openai_api_key = self.api_key
- system_prompt = self.system_prompt
- history = self.history
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}",
- }
-
- if system_prompt is not None:
- history = [construct_system(system_prompt), *history]
-
- payload = {
- "model": self.model_name,
- "messages": history,
- "temperature": self.temperature,
- "top_p": self.top_p,
- "n": self.n_choices,
- "stream": stream,
- "presence_penalty": self.presence_penalty,
- "frequency_penalty": self.frequency_penalty,
- }
-
- if self.max_generation_token is not None:
- payload["max_tokens"] = self.max_generation_token
- if self.stop_sequence is not None:
- payload["stop"] = self.stop_sequence
- if self.logit_bias is not None:
- payload["logit_bias"] = self.logit_bias
- if self.user_identifier:
- payload["user"] = self.user_identifier
-
- if stream:
- timeout = TIMEOUT_STREAMING
- else:
- timeout = TIMEOUT_ALL
-
-        # If a custom api-host is configured, send the request to it; otherwise use the default endpoint
- if shared.state.completion_url != COMPLETION_URL:
- logging.info(f"使用自定义API URL: {shared.state.completion_url}")
-
- with retrieve_proxy():
- try:
- response = requests.post(
- shared.state.completion_url,
- headers=headers,
- json=payload,
- stream=stream,
- timeout=timeout,
- )
- except:
- return None
- return response
-
- def _refresh_header(self):
- self.headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {sensitive_id}",
- }
-
-
- def _get_billing_data(self, billing_url):
- with retrieve_proxy():
- response = requests.get(
- billing_url,
- headers=self.headers,
- timeout=TIMEOUT_ALL,
- )
-
- if response.status_code == 200:
- data = response.json()
- return data
- else:
- raise Exception(
- f"API request failed with status code {response.status_code}: {response.text}"
- )
-
- def _decode_chat_response(self, response):
- error_msg = ""
- for chunk in response.iter_lines():
- if chunk:
- chunk = chunk.decode()
- chunk_length = len(chunk)
- try:
- chunk = json.loads(chunk[6:])
- except json.JSONDecodeError:
- print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
- error_msg += chunk
- continue
- if chunk_length > 6 and "delta" in chunk["choices"][0]:
- if chunk["choices"][0]["finish_reason"] == "stop":
- break
- try:
- yield chunk["choices"][0]["delta"]["content"]
- except Exception as e:
- # logging.error(f"Error: {e}")
- continue
- if error_msg:
- raise Exception(error_msg)
-
- def set_key(self, new_access_key):
- ret = super().set_key(new_access_key)
- self._refresh_header()
- return ret
-
-
-class ChatGLM_Client(BaseLLMModel):
- def __init__(self, model_name, user_name="") -> None:
- super().__init__(model_name=model_name, user=user_name)
- from transformers import AutoTokenizer, AutoModel
- import torch
- global CHATGLM_TOKENIZER, CHATGLM_MODEL
- if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None:
- system_name = platform.system()
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"THUDM/{model_name}"
- CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained(
- model_source, trust_remote_code=True
- )
- quantified = False
- if "int4" in model_name:
- quantified = True
- model = AutoModel.from_pretrained(
- model_source, trust_remote_code=True
- )
- if torch.cuda.is_available():
- # run on CUDA
- logging.info("CUDA is available, using CUDA")
- model = model.half().cuda()
-            # MPS acceleration still has some issues, so it is not used for now
- elif system_name == "Darwin" and model_path is not None and not quantified:
- logging.info("Running on macOS, using MPS")
- # running on macOS and model already downloaded
- model = model.half().to("mps")
- else:
- logging.info("GPU is not available, using CPU")
- model = model.float()
- model = model.eval()
- CHATGLM_MODEL = model
-
- def _get_glm_style_input(self):
- history = [x["content"] for x in self.history]
- query = history.pop()
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- assert (
- len(history) % 2 == 0
- ), f"History should be even length. current history is: {history}"
- history = [[history[i], history[i + 1]]
- for i in range(0, len(history), 2)]
- return history, query
-
- def get_answer_at_once(self):
- history, query = self._get_glm_style_input()
- response, _ = CHATGLM_MODEL.chat(
- CHATGLM_TOKENIZER, query, history=history)
- return response, len(response)
-
- def get_answer_stream_iter(self):
- history, query = self._get_glm_style_input()
- for response, history in CHATGLM_MODEL.stream_chat(
- CHATGLM_TOKENIZER,
- query,
- history,
- max_length=self.token_upper_limit,
- top_p=self.top_p,
- temperature=self.temperature,
- ):
- yield response
-
-
-class LLaMA_Client(BaseLLMModel):
- def __init__(
- self,
- model_name,
- lora_path=None,
- user_name=""
- ) -> None:
- super().__init__(model_name=model_name, user=user_name)
- from lmflow.datasets.dataset import Dataset
- from lmflow.pipeline.auto_pipeline import AutoPipeline
- from lmflow.models.auto_model import AutoModel
- from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments
-
- self.max_generation_token = 1000
- self.end_string = "\n\n"
- # We don't need input data
- data_args = DatasetArguments(dataset_path=None)
- self.dataset = Dataset(data_args)
- self.system_prompt = ""
-
- global LLAMA_MODEL, LLAMA_INFERENCER
- if LLAMA_MODEL is None or LLAMA_INFERENCER is None:
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"decapoda-research/{model_name}"
- # raise Exception(f"models目录下没有这个模型: {model_name}")
- if lora_path is not None:
- lora_path = f"lora/{lora_path}"
- model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None,
- use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True)
- pipeline_args = InferencerArguments(
- local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16')
-
- with open(pipeline_args.deepspeed, "r", encoding="utf-8") as f:
- ds_config = json.load(f)
- LLAMA_MODEL = AutoModel.get_model(
- model_args,
- tune_strategy="none",
- ds_config=ds_config,
- )
- LLAMA_INFERENCER = AutoPipeline.get_pipeline(
- pipeline_name="inferencer",
- model_args=model_args,
- data_args=data_args,
- pipeline_args=pipeline_args,
- )
-
- def _get_llama_style_input(self):
- history = []
- instruction = ""
- if self.system_prompt:
- instruction = (f"Instruction: {self.system_prompt}\n")
- for x in self.history:
- if x["role"] == "user":
- history.append(f"{instruction}Input: {x['content']}")
- else:
- history.append(f"Output: {x['content']}")
- context = "\n\n".join(history)
- context += "\n\nOutput: "
- return context
-
- def get_answer_at_once(self):
- context = self._get_llama_style_input()
-
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [{"text": context}]}
- )
-
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=self.max_generation_token,
- temperature=self.temperature,
- )
-
- response = output_dataset.to_dict()["instances"][0]["text"]
- return response, len(response)
-
- def get_answer_stream_iter(self):
- context = self._get_llama_style_input()
- partial_text = ""
- step = 1
- for _ in range(0, self.max_generation_token, step):
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [
- {"text": context + partial_text}]}
- )
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=step,
- temperature=self.temperature,
- )
- response = output_dataset.to_dict()["instances"][0]["text"]
- if response == "" or response == self.end_string:
- break
- partial_text += response
- yield partial_text
-
-
-class XMChat(BaseLLMModel):
- def __init__(self, api_key, user_name=""):
- super().__init__(model_name="xmchat", user=user_name)
- self.api_key = api_key
- self.session_id = None
- self.reset()
- self.image_bytes = None
- self.image_path = None
- self.xm_history = []
- self.url = "https://xmbot.net/web"
- self.last_conv_id = None
-
- def reset(self):
- self.session_id = str(uuid.uuid4())
- self.last_conv_id = None
- return [], "已重置"
-
- def image_to_base64(self, image_path):
-        # Open and load the image
- img = Image.open(image_path)
-
-        # Get the image width and height
- width, height = img.size
-
-        # Compute the scale ratio so that the longest side does not exceed max_dimension (2048 px)
- max_dimension = 2048
- scale_ratio = min(max_dimension / width, max_dimension / height)
-
- if scale_ratio < 1:
-            # Resize the image by the scale ratio
- new_width = int(width * scale_ratio)
- new_height = int(height * scale_ratio)
- img = img.resize((new_width, new_height), Image.ANTIALIAS)
-
-        # Convert the image to JPEG binary data
- buffer = BytesIO()
- if img.mode == "RGBA":
- img = img.convert("RGB")
- img.save(buffer, format='JPEG')
- binary_image = buffer.getvalue()
-
-        # Base64-encode the binary data
- base64_image = base64.b64encode(binary_image).decode('utf-8')
-
- return base64_image
-
- def try_read_image(self, filepath):
- def is_image_file(filepath):
-            # Check whether the file is an image
- valid_image_extensions = [
- ".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"]
- file_extension = os.path.splitext(filepath)[1].lower()
- return file_extension in valid_image_extensions
-
- if is_image_file(filepath):
- logging.info(f"读取图片文件: {filepath}")
- self.image_bytes = self.image_to_base64(filepath)
- self.image_path = filepath
- else:
- self.image_bytes = None
- self.image_path = None
-
- def like(self):
- if self.last_conv_id is None:
- return "点赞失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "good"
- }
- requests.post(self.url, json=data)
- return "👍点赞成功,感谢反馈~"
-
- def dislike(self):
- if self.last_conv_id is None:
- return "点踩失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "bad"
- }
- requests.post(self.url, json=data)
- return "👎点踩成功,感谢反馈~"
-
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
- fake_inputs = real_inputs
- display_append = ""
- limited_context = False
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
-
- def handle_file_upload(self, files, chatbot, language):
- """if the model accepts multi modal input, implement this function"""
- if files:
- for file in files:
- if file.name:
- logging.info(f"尝试读取图像: {file.name}")
- self.try_read_image(file.name)
- if self.image_path is not None:
- chatbot = chatbot + [((self.image_path,), None)]
- if self.image_bytes is not None:
- logging.info("使用图片作为输入")
-            # XMChat can actually only handle one image per conversation round
- self.reset()
- conv_id = str(uuid.uuid4())
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "imgbase64",
- "data": self.image_bytes
- }
- response = requests.post(self.url, json=data)
- response = json.loads(response.text)
- logging.info(f"图片回复: {response['data']}")
- return None, chatbot, None
-
- def get_answer_at_once(self):
- question = self.history[-1]["content"]
- conv_id = str(uuid.uuid4())
- self.last_conv_id = conv_id
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "text",
- "data": question
- }
- response = requests.post(self.url, json=data)
- try:
- response = json.loads(response.text)
- return response["data"], len(response["data"])
- except Exception as e:
- return response.text, len(response.text)
-
-
-def get_model(
- model_name,
- lora_model_path=None,
- access_key=None,
- temperature=None,
- top_p=None,
- system_prompt=None,
- user_name=""
-) -> BaseLLMModel:
- msg = i18n("模型设置为了:") + f" {model_name}"
- model_type = ModelType.get_type(model_name)
- lora_selector_visibility = False
- lora_choices = []
- dont_change_lora_selector = False
- if model_type != ModelType.OpenAI:
- config.local_embedding = True
- # del current_model.model
- model = None
- chatbot = gr.Chatbot.update(label=model_name)
- try:
- if model_type == ModelType.OpenAI:
- logging.info(f"正在加载OpenAI模型: {model_name}")
- access_key = os.environ.get("OPENAI_API_KEY", access_key)
- model = OpenAIClient(
- model_name=model_name,
- api_key=access_key,
- system_prompt=system_prompt,
- temperature=temperature,
- top_p=top_p,
- user_name=user_name,
- )
- elif model_type == ModelType.ChatGLM:
- logging.info(f"正在加载ChatGLM模型: {model_name}")
- model = ChatGLM_Client(model_name, user_name=user_name)
- elif model_type == ModelType.LLaMA and lora_model_path == "":
- msg = f"现在请为 {model_name} 选择LoRA模型"
- logging.info(msg)
- lora_selector_visibility = True
- if os.path.isdir("lora"):
- lora_choices = get_file_names(
- "lora", plain=True, filetypes=[""])
- lora_choices = ["No LoRA"] + lora_choices
- elif model_type == ModelType.LLaMA and lora_model_path != "":
- logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}")
- dont_change_lora_selector = True
- if lora_model_path == "No LoRA":
- lora_model_path = None
- msg += " + No LoRA"
- else:
- msg += f" + {lora_model_path}"
- model = LLaMA_Client(
- model_name, lora_model_path, user_name=user_name)
- elif model_type == ModelType.XMChat:
-            if os.environ.get("XMCHAT_API_KEY"):
- access_key = os.environ.get("XMCHAT_API_KEY")
- model = XMChat(api_key=access_key, user_name=user_name)
- elif model_type == ModelType.StableLM:
- from .StableLM import StableLM_Client
- model = StableLM_Client(model_name, user_name=user_name)
- elif model_type == ModelType.MOSS:
- from .MOSS import MOSS_Client
- model = MOSS_Client(model_name, user_name=user_name)
- elif model_type == ModelType.YuanAI:
- from .inspurai import Yuan_Client
- model = Yuan_Client(model_name, api_key=access_key, user_name=user_name, system_prompt=system_prompt)
- elif model_type == ModelType.Minimax:
- from .minimax import MiniMax_Client
-            if os.environ.get("MINIMAX_API_KEY"):
- access_key = os.environ.get("MINIMAX_API_KEY")
- model = MiniMax_Client(model_name, api_key=access_key, user_name=user_name, system_prompt=system_prompt)
- elif model_type == ModelType.ChuanhuAgent:
- from .ChuanhuAgent import ChuanhuAgent_Client
- model = ChuanhuAgent_Client(model_name, access_key, user_name=user_name)
- elif model_type == ModelType.GooglePaLM:
- from .Google_PaLM import Google_PaLM_Client
- access_key = os.environ.get("GOOGLE_PALM_API_KEY", access_key)
- model = Google_PaLM_Client(model_name, access_key, user_name=user_name)
- elif model_type == ModelType.LangchainChat:
- from .azure import Azure_OpenAI_Client
- model = Azure_OpenAI_Client(model_name, user_name=user_name)
- elif model_type == ModelType.Unknown:
- raise ValueError(f"未知模型: {model_name}")
- logging.info(msg)
- except Exception as e:
- import traceback
- traceback.print_exc()
- msg = f"{STANDARD_ERROR_MSG}: {e}"
- presudo_key = hide_middle_chars(access_key)
- if dont_change_lora_selector:
- return model, msg, chatbot, gr.update(), access_key, presudo_key
- else:
- return model, msg, chatbot, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility), access_key, presudo_key
-
-
-if __name__ == "__main__":
- with open("config.json", "r", encoding="utf-8") as f:
- openai_api_key = cjson.load(f)["openai_api_key"]
- # set logging level to debug
- logging.basicConfig(level=logging.DEBUG)
- # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key)
- client = get_model(model_name="chatglm-6b-int4")
- chatbot = []
- stream = False
-    # Test the billing feature
- logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET)
- logging.info(client.billing_info())
-    # Test question answering
- logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET)
- question = "巴黎是中国的首都吗?"
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"测试问答后history : {client.history}")
-    # Test conversation memory
- logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET)
- question = "我刚刚问了你什么问题?"
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"测试记忆力后history : {client.history}")
-    # Test the retry feature
- logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET)
- for i in client.retry(chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"重试后history : {client.history}")
-    # # Test the summarization feature
- # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET)
- # chatbot, msg = client.reduce_token_size(chatbot=chatbot)
- # print(chatbot, msg)
- # print(f"总结后history: {client.history}")
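_decode_chat_response above reads the OpenAI streaming response line by line, strips the leading "data: " prefix (chunk[6:]), and accumulates the delta contents until a stop chunk arrives. A standalone sketch of that parsing, using made-up chunk payloads instead of a live request:

import json

# Hypothetical server-sent-event lines as they would arrive from the streaming API.
lines = [
    'data: {"choices": [{"delta": {"content": "Hello"}, "finish_reason": null}]}',
    'data: {"choices": [{"delta": {"content": " world"}, "finish_reason": null}]}',
    'data: {"choices": [{"delta": {}, "finish_reason": "stop"}]}',
]

partial_text = ""
for line in lines:
    payload = json.loads(line[6:])          # drop the "data: " prefix
    choice = payload["choices"][0]
    if choice.get("finish_reason") == "stop":
        break
    partial_text += choice["delta"].get("content", "")
print(partial_text)  # -> "Hello world"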
diff --git a/spaces/DpNaze/webui-docker/Dockerfile b/spaces/DpNaze/webui-docker/Dockerfile
deleted file mode 100644
index bef144f1407b217ad4d9b801107ab67b17eb8460..0000000000000000000000000000000000000000
--- a/spaces/DpNaze/webui-docker/Dockerfile
+++ /dev/null
@@ -1,158 +0,0 @@
-#
-# Prep
-#
-FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04
-
-ENV DEBIAN_FRONTEND noninteractive
-ENV PYTHONUNBUFFERED=1
-ENV PIP_DISABLE_PIP_VERSION_CHECK=1
-ENV PIP_NO_CACHE_DIR=1
-# OS setup
-RUN apt-get update -y \
- && apt-get upgrade -y \
- && apt-get install -y \
- libgl1 \
- libglib2.0-0 \
- curl \
- vim \
- wget \
- git \
- git-lfs \
- tzdata \
- bash \
- ca-certificates \
- libreadline8 \
- bzip2 \
- psmisc \
- procps \
- netbase \
- openssh-client \
- libsqlite3-dev \
- python3-pip \
- python3-venv \
- python-is-python3 \
- build-essential \
- libssl-dev \
- libffi-dev \
- aria2 \
- \
- && pip3 install --upgrade pip \
- \
- && git lfs install \
- \
- && apt-get clean autoclean \
- && apt-get autoremove --yes \
- && rm -rf /var/lib/apt/lists/*
-# OS timezone setting (UTC)
-RUN echo "UTC" > /etc/timezone
-ENV TZ=UTC
-# Poetry for Python packages
-RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/usr/local/poetry python3 - --yes \
- && ln -s /usr/local/poetry/bin/poetry /usr/bin/poetry \
- \
- && poetry config virtualenvs.create false \
- && poetry config virtualenvs.in-project false
-# Create non-root user
-ENV ENV="/etc/profile"
-RUN adduser --disabled-password --gecos '' user && \
- mkdir -p /app && \
- chown -R user:user /app && \
-    printf "\n. /etc/profile\n" >> /home/user/.profile && \
-    printf "\n. /etc/profile\n" >> /home/user/.bashrc
-# Sets up virtualenv for dependencies
-ENV VIRTUAL_ENV="/opt/venv"
-ENV VIRTUAL_ENV_DISABLE_PROMPT=1
-ENV POETRY_ACTIVE=1
-ENV PATH="$VIRTUAL_ENV/bin:$PATH"
-RUN echo "export PATH=$PATH" >> /home/user/.bashrc \
- && python3 -m venv $VIRTUAL_ENV \
- && /opt/venv/bin/pip install --upgrade --no-cache-dir pip \
- && chown -R user:user /opt/venv
-# Run as non-root user
-USER user
-WORKDIR /app
-# Installation of basic Python dependencies specified in pyproject.toml
-COPY --chown=user:user pyproject.toml poetry.lock /app/
-RUN poetry install
-
-#
-# AUTOMATIC1111' WebUI
-RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /app/stable-diffusion-webui
-#
-
-#
-# Patch WebUI
-#
-
-# From Osmond's app.py
-RUN wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /app/env_patch.py
-RUN sed -i -e '/import image_from_url_text/r /app/env_patch.py' /app/stable-diffusion-webui/modules/ui.py
-RUN sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /app/stable-diffusion-webui/modules/ui.py
-RUN sed -i -e '/(train_interface, \"Train\", \"train\")/d' /app/stable-diffusion-webui/modules/ui.py
-RUN sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /app/stable-diffusion-webui/modules/ui.py
-RUN sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /app/stable-diffusion-webui/modules/ui.py
-
-# From Camenduru
-RUN sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /app/stable-diffusion-webui/script.js
-RUN sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /app/stable-diffusion-webui/modules/ui.py
-#RUN sed -i -e 's/default_enabled=False/default_enabled=True/g' /app/stable-diffusion-webui/webui.py
-#RUN sed -i -e 's/ outputs=\[/queue=False, &/g' /app/stable-diffusion-webui/modules/ui.py
-#RUN sed -i -e 's/ queue=False, / /g' /app/stable-diffusion-webui/modules/ui.py
-
-# BUG FIX - Caution!
-# ref https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6840
-# Doesn't seem reliable, but it does help a bit with the scrolling problem. Maybe. A little.
-RUN sed -i '/sd_vae_approx\.model()/ {s/^\(\s*\)x_sample = sd_vae_approx.model()(sample.to(devices.device, devices.dtype).unsqueeze(0))\[0\].detach()$/\1sample2 = sample.to(devices.device, devices.dtype).unsqueeze(0)\n\1model = sd_vae_approx.model().to(devices.device, devices.dtype)\n\1x_sample = model(sample2)\[0\].detach()/}' /app/stable-diffusion-webui/modules/sd_samplers_common.py
-
-#
-# Install extensions
-#
-
-# Images Browser WebUI extension
-RUN git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser
-
-# Additional Networks WebUI extension
-RUN git clone https://github.com/kohya-ss/sd-webui-additional-networks /app/stable-diffusion-webui/extensions/sd-webui-additional-networks
-
-# Lycoris extension
-RUN git clone https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris /app/stable-diffusion-webui/extensions/a1111-sd-webui-lycoris
-
-RUN mkdir -p /app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/lora/lycoris
-
-# CiviTAI BETTER Browser WebUI extension
-RUN git clone https://github.com/butaixianran/Stable-Diffusion-Webui-Civitai-Helper /app/stable-diffusion-webui/extensions/Stable-Diffusion-Webui-Civitai-Helper
-
-# CiviTAI WebUI extension
-RUN git clone https://github.com/civitai/sd_civitai_extension /app/stable-diffusion-webui/extensions/sd_civitai_extension
-
-# Batchlinks Downloader extension
-RUN git clone https://github.com/etherealxx/batchlinks-webui /app/stable-diffusion-webui/extensions/batchlinks-webui
-
-# Fast PNG Info extension
-RUN git clone https://github.com/NoCrypt/sd-fast-pnginfo /app/stable-diffusion-webui/extensions/sd-fast-pnginfo
-
-# Filer extension
-RUN git clone https://github.com/aka7774/sd_filer /app/stable-diffusion-webui/extensions/sd_filer
-
-# ControlNet WebUI extension
-#RUN git clone https://github.com/Mikubill/sd-webui-controlnet /app/stable-diffusion-webui/extensions/sd-webui-controlnet
-
-# PoseX extension
-#RUN git clone https://github.com/hnmr293/posex /app/stable-diffusion-webui/extensions/posex
-
-# Prepare WebUI environment
-WORKDIR /app/stable-diffusion-webui
-RUN /opt/venv/bin/python launch.py --exit --skip-torch-cuda-test --xformers
-
-# Copy startup scripts
-COPY --chown=user:user run.py on_start.sh config.json ui-config.json /app/stable-diffusion-webui/
-
-RUN chmod +x on_start.sh
-
-EXPOSE 7860
-
-#
-# Run
-#
-
-CMD ["/opt/venv/bin/python", "run.py", "--listen", "--enable-insecure-extension-access", "--ui-config-file", "ui-config.json", "--ui-settings-file", "config.json", "--disable-console-progressbars", "--cors-allow-origins", "huggingface.co,hf.space", "--no-progressbar-hiding", "--enable-console-prompts", "--no-download-sd-model", "--api", "--skip-version-check", "--lyco-dir", "/app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/lora/lycoris"]
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/grid_sample_gradfix.py b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/grid_sample_gradfix.py
deleted file mode 100644
index ca6b3413ea72a734703c34382c023b84523601fd..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/grid_sample_gradfix.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Custom replacement for `torch.nn.functional.grid_sample` that
-supports arbitrarily high order gradients between the input and output.
-Only works on 2D images and assumes
-`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`."""
-
-import warnings
-import torch
-
-# pylint: disable=redefined-builtin
-# pylint: disable=arguments-differ
-# pylint: disable=protected-access
-
-#----------------------------------------------------------------------------
-
-enabled = False # Enable the custom op by setting this to true.
-
-#----------------------------------------------------------------------------
-
-def grid_sample(input, grid):
- if _should_use_custom_op():
- return _GridSample2dForward.apply(input, grid)
- return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False)
-
-#----------------------------------------------------------------------------
-
-def _should_use_custom_op():
- if not enabled:
- return False
- if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']):
- return True
- warnings.warn(f'grid_sample_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.grid_sample().')
- return False
-
-#----------------------------------------------------------------------------
-
-class _GridSample2dForward(torch.autograd.Function):
- @staticmethod
- def forward(ctx, input, grid):
- assert input.ndim == 4
- assert grid.ndim == 4
- output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False)
- ctx.save_for_backward(input, grid)
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- input, grid = ctx.saved_tensors
- grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid)
- return grad_input, grad_grid
-
-#----------------------------------------------------------------------------
-
-class _GridSample2dBackward(torch.autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input, grid):
- op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
- grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
- ctx.save_for_backward(grid)
- return grad_input, grad_grid
-
- @staticmethod
- def backward(ctx, grad2_grad_input, grad2_grad_grid):
- _ = grad2_grad_grid # unused
- grid, = ctx.saved_tensors
- grad2_grad_output = None
- grad2_input = None
- grad2_grid = None
-
- if ctx.needs_input_grad[0]:
- grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid)
-
- assert not ctx.needs_input_grad[2]
- return grad2_grad_output, grad2_input, grad2_grid
-
-#----------------------------------------------------------------------------
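The two autograd Functions above make the backward pass of grid_sample itself differentiable, which is what lets losses that penalize gradients (for example an R1-style term computed through a warped image) backpropagate a second time. A rough usage sketch; the import path and the stand-in "discriminator" are assumptions, not part of the original code:

import torch
from torch_utils.ops import grid_sample_gradfix  # assumed import path

grid_sample_gradfix.enabled = True  # opt in (only takes effect on the supported PyTorch versions)

x = torch.randn(1, 3, 8, 8, requires_grad=True)  # e.g. real images
grid = torch.rand(1, 8, 8, 2) * 2 - 1            # fixed warp in normalized [-1, 1] coords, no grad

y = grid_sample_gradfix.grid_sample(x, grid)     # differentiable warp
score = (y ** 2).sum()                           # stand-in for a discriminator output

# first-order gradient w.r.t. x, kept in the graph so it can be penalized
(g,) = torch.autograd.grad(score, x, create_graph=True)
penalty = (g ** 2).sum()
penalty.backward()                               # requires double backward through grid_sample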
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/persistence.py b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/persistence.py
deleted file mode 100644
index 0186cfd97bca0fcb397a7b73643520c1d1105a02..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/persistence.py
+++ /dev/null
@@ -1,251 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Facilities for pickling Python code alongside other data.
-
-The pickled code is automatically imported into a separate Python module
-during unpickling. This way, any previously exported pickles will remain
-usable even if the original code is no longer available, or if the current
-version of the code is not consistent with what was originally pickled."""
-
-import sys
-import pickle
-import io
-import inspect
-import copy
-import uuid
-import types
-import dnnlib
-
-#----------------------------------------------------------------------------
-
-_version = 6 # internal version number
-_decorators = set() # {decorator_class, ...}
-_import_hooks = [] # [hook_function, ...]
-_module_to_src_dict = dict() # {module: src, ...}
-_src_to_module_dict = dict() # {src: module, ...}
-
-#----------------------------------------------------------------------------
-
-def persistent_class(orig_class):
- r"""Class decorator that extends a given class to save its source code
- when pickled.
-
- Example:
-
- from torch_utils import persistence
-
- @persistence.persistent_class
- class MyNetwork(torch.nn.Module):
- def __init__(self, num_inputs, num_outputs):
- super().__init__()
- self.fc = MyLayer(num_inputs, num_outputs)
- ...
-
- @persistence.persistent_class
- class MyLayer(torch.nn.Module):
- ...
-
- When pickled, any instance of `MyNetwork` and `MyLayer` will save its
- source code alongside other internal state (e.g., parameters, buffers,
- and submodules). This way, any previously exported pickle will remain
- usable even if the class definitions have been modified or are no
- longer available.
-
- The decorator saves the source code of the entire Python module
- containing the decorated class. It does *not* save the source code of
- any imported modules. Thus, the imported modules must be available
- during unpickling, also including `torch_utils.persistence` itself.
-
- It is ok to call functions defined in the same module from the
- decorated class. However, if the decorated class depends on other
- classes defined in the same module, they must be decorated as well.
- This is illustrated in the above example in the case of `MyLayer`.
-
- It is also possible to employ the decorator just-in-time before
- calling the constructor. For example:
-
- cls = MyLayer
- if want_to_make_it_persistent:
- cls = persistence.persistent_class(cls)
- layer = cls(num_inputs, num_outputs)
-
- As an additional feature, the decorator also keeps track of the
- arguments that were used to construct each instance of the decorated
- class. The arguments can be queried via `obj.init_args` and
- `obj.init_kwargs`, and they are automatically pickled alongside other
- object state. A typical use case is to first unpickle a previous
- instance of a persistent class, and then upgrade it to use the latest
- version of the source code:
-
- with open('old_pickle.pkl', 'rb') as f:
- old_net = pickle.load(f)
- new_net = MyNetwork(*old_obj.init_args, **old_obj.init_kwargs)
- misc.copy_params_and_buffers(old_net, new_net, require_all=True)
- """
- assert isinstance(orig_class, type)
- if is_persistent(orig_class):
- return orig_class
-
- assert orig_class.__module__ in sys.modules
- orig_module = sys.modules[orig_class.__module__]
- orig_module_src = _module_to_src(orig_module)
-
- class Decorator(orig_class):
- _orig_module_src = orig_module_src
- _orig_class_name = orig_class.__name__
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self._init_args = copy.deepcopy(args)
- self._init_kwargs = copy.deepcopy(kwargs)
- assert orig_class.__name__ in orig_module.__dict__
- _check_pickleable(self.__reduce__())
-
- @property
- def init_args(self):
- return copy.deepcopy(self._init_args)
-
- @property
- def init_kwargs(self):
- return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs))
-
- def __reduce__(self):
- fields = list(super().__reduce__())
- fields += [None] * max(3 - len(fields), 0)
- if fields[0] is not _reconstruct_persistent_obj:
- meta = dict(type='class', version=_version, module_src=self._orig_module_src, class_name=self._orig_class_name, state=fields[2])
- fields[0] = _reconstruct_persistent_obj # reconstruct func
- fields[1] = (meta,) # reconstruct args
- fields[2] = None # state dict
- return tuple(fields)
-
- Decorator.__name__ = orig_class.__name__
- _decorators.add(Decorator)
- return Decorator
-
-#----------------------------------------------------------------------------
-
-def is_persistent(obj):
- r"""Test whether the given object or class is persistent, i.e.,
- whether it will save its source code when pickled.
- """
- try:
- if obj in _decorators:
- return True
- except TypeError:
- pass
- return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck
-
-#----------------------------------------------------------------------------
-
-def import_hook(hook):
- r"""Register an import hook that is called whenever a persistent object
- is being unpickled. A typical use case is to patch the pickled source
- code to avoid errors and inconsistencies when the API of some imported
- module has changed.
-
- The hook should have the following signature:
-
- hook(meta) -> modified meta
-
- `meta` is an instance of `dnnlib.EasyDict` with the following fields:
-
- type: Type of the persistent object, e.g. `'class'`.
- version: Internal version number of `torch_utils.persistence`.
-        module_src: Original source code of the Python module.
- class_name: Class name in the original Python module.
- state: Internal state of the object.
-
- Example:
-
- @persistence.import_hook
- def wreck_my_network(meta):
- if meta.class_name == 'MyNetwork':
- print('MyNetwork is being imported. I will wreck it!')
- meta.module_src = meta.module_src.replace("True", "False")
- return meta
- """
- assert callable(hook)
- _import_hooks.append(hook)
-
-#----------------------------------------------------------------------------
-
-def _reconstruct_persistent_obj(meta):
- r"""Hook that is called internally by the `pickle` module to unpickle
- a persistent object.
- """
- meta = dnnlib.EasyDict(meta)
- meta.state = dnnlib.EasyDict(meta.state)
- for hook in _import_hooks:
- meta = hook(meta)
- assert meta is not None
-
- assert meta.version == _version
- module = _src_to_module(meta.module_src)
-
- assert meta.type == 'class'
- orig_class = module.__dict__[meta.class_name]
- decorator_class = persistent_class(orig_class)
- obj = decorator_class.__new__(decorator_class)
-
- setstate = getattr(obj, '__setstate__', None)
- if callable(setstate):
- setstate(meta.state) # pylint: disable=not-callable
- else:
- obj.__dict__.update(meta.state)
- return obj
-
-#----------------------------------------------------------------------------
-
-def _module_to_src(module):
- r"""Query the source code of a given Python module.
- """
- src = _module_to_src_dict.get(module, None)
- if src is None:
- src = inspect.getsource(module)
- _module_to_src_dict[module] = src
- _src_to_module_dict[src] = module
- return src
-
-def _src_to_module(src):
- r"""Get or create a Python module for the given source code.
- """
- module = _src_to_module_dict.get(src, None)
- if module is None:
- module_name = "_imported_module_" + uuid.uuid4().hex
- module = types.ModuleType(module_name)
- sys.modules[module_name] = module
- _module_to_src_dict[module] = src
- _src_to_module_dict[src] = module
- exec(src, module.__dict__) # pylint: disable=exec-used
- return module
-
-#----------------------------------------------------------------------------
-
-def _check_pickleable(obj):
- r"""Check that the given object is pickleable, raising an exception if
- it is not. This function is expected to be considerably more efficient
- than actually pickling the object.
- """
- def recurse(obj):
- if isinstance(obj, (list, tuple, set)):
- return [recurse(x) for x in obj]
- if isinstance(obj, dict):
- return [[recurse(x), recurse(y)] for x, y in obj.items()]
- if isinstance(obj, (str, int, float, bool, bytes, bytearray)):
- return None # Python primitive types are pickleable.
- if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor']:
- return None # NumPy arrays and PyTorch tensors are pickleable.
- if is_persistent(obj):
- return None # Persistent objects are pickleable, by virtue of the constructor check.
- return obj
- with io.BytesIO() as f:
- pickle.dump(recurse(obj), f)
-
-#----------------------------------------------------------------------------
diff --git a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000
--- a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate F0 over unvoiced (zero) frames; returns the interpolated F0 and a voiced/unvoiced mask.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
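compute_f0 above runs pyworld's DIO estimator, refines it with StoneMask, resamples the track to p_len frames, and finally fills unvoiced frames by interpolation. A small usage sketch on a synthetic tone (it assumes pyworld is installed and the module is importable from the repo root):

import numpy as np
from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor

sr = 44100
t = np.arange(sr) / sr                      # one second of audio
wav = 0.5 * np.sin(2 * np.pi * 220.0 * t)   # a 220 Hz sine tone

predictor = DioF0Predictor(hop_length=512, f0_min=50, f0_max=1100, sampling_rate=sr)
f0 = predictor.compute_f0(wav)              # one value per hop, roughly 220 Hz where voiced
f0_uv, vuv = predictor.compute_f0_uv(wav)   # interpolated F0 plus a voiced/unvoiced mask
print(f0.shape, float(np.median(f0[f0 > 0])))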
diff --git a/spaces/Epoching/DocumentQA/CrossEncoder/cross_encoder.py b/spaces/Epoching/DocumentQA/CrossEncoder/cross_encoder.py
deleted file mode 100644
index 4f903b8172694c70edd16916487938fb36696888..0000000000000000000000000000000000000000
--- a/spaces/Epoching/DocumentQA/CrossEncoder/cross_encoder.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# Copyright (c) 2022, Lawrence Livermore National Security, LLC.
-# All rights reserved.
-# See the top-level LICENSE and NOTICE files for details.
-# LLNL-CODE-838964
-
-# SPDX-License-Identifier: Apache-2.0-with-LLVM-exception
-
-from sentence_transformers.cross_encoder import CrossEncoder as CE
-import numpy as np
-from typing import List, Dict, Tuple
-import json
-from collections import defaultdict
-import os
-
-
-class CrossEncoder:
- def __init__(self,
- model_path: str = None,
- max_length: int = None,
- **kwargs):
-
-        if max_length is not None:
-            self.model = CE(model_path, max_length = max_length, **kwargs)
-        else:
-            self.model = CE(model_path, **kwargs)
-
-
- def predict(self,
- sentences: List[Tuple[str, str]],
- batch_size: int = 32,
- show_progress_bar: bool = False) -> List[float]:
-
- return self.model.predict(sentences = sentences,
- batch_size = batch_size,
- show_progress_bar = show_progress_bar)
-
-
-class CERank:
-
- def __init__(self, model, batch_size: int =128, **kwargs):
- self.cross_encoder = model
- self.batch_size = batch_size
-
-
- def flatten_examples(self, contexts: Dict[str, Dict], question: str):
-
- text_pairs, pair_ids = [], []
- for context_id, context in contexts.items():
- pair_ids.append(['question_0', context_id])
- text_pairs.append([question, context['text']])
-
- return text_pairs, pair_ids
-
- def group_questionrank(self, pair_ids, rank_scores):
-
- unsorted = defaultdict(list)
- for pair, score in zip(pair_ids, rank_scores):
- query_id, paragraph_id = pair[0], pair[1]
- unsorted[query_id].append((paragraph_id, score))
-
-
- return unsorted
-
- def get_rankings(self, pair_ids, rank_scores, text_pairs):
-
- unsorted_ranks = self.group_questionrank(pair_ids, rank_scores)
- rankings = defaultdict(dict)
-
- for idx, (query_id, ranks) in enumerate(unsorted_ranks.items()):
- sort_ranks = sorted(ranks, key = lambda item: item[1], reverse = True)
- sorted_ranks, scores = list(zip(*sort_ranks))
- rankings[query_id]['text'] = text_pairs[idx][0]
- rankings[query_id]['scores'] = list(scores)
- rankings[query_id]['ranks'] = list(sorted_ranks)
-
- return rankings
-
-
- def rank(self,
- contexts: Dict[str, Dict],
- question: str):
-
-
- text_pairs, pair_ids = self.flatten_examples(contexts, question)
- rank_scores = [float(score) for score in self.cross_encoder.predict(text_pairs, batch_size = self.batch_size)]
- full_results = self.get_rankings(pair_ids, rank_scores, text_pairs)
-
- return full_results
-
-
-
-def get_ranked_contexts(context_json, question):
-
- dirname = 'examples'
- model_path = 'ms-marco-electra-base'
- max_length = 512
-
- # Can't use use_fast (fast tokenizers) while gradio is running, causes conflict with tokenizer multiprocessing/parallelism.
- cross_encoder = CrossEncoder(model_path, max_length, tokenizer_args={'use_fast':False})
- ranker = CERank(cross_encoder)
-
- with open(context_json, 'r') as fin:
- contexts = json.load(fin)
-
- rankings = ranker.rank(contexts, question)
-
- with open('ranked_{0}.json'.format(context_json[:-5]), 'w') as fout:
- json.dump(rankings, fout)
-
-def get_ranked_contexts_in_memory(contexts, question):
-
- dirname = 'examples'
- model_path = 'ms-marco-electra-base'
- max_length = 512
-
- # Can't use use_fast (fast tokenizers) while gradio is running, causes conflict with tokenizer multiprocessing/parallelism.
- cross_encoder = CrossEncoder(model_path, max_length, tokenizer_args={'use_fast':False})
- ranker = CERank(cross_encoder)
-
- rankings = ranker.rank(contexts, question)
-
- return rankings
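For reference, a minimal sketch of how the two classes compose; the model path must point to a local or Hugging Face cross-encoder checkpoint, and the contexts below are made-up placeholders matching the shape flatten_examples expects (each entry needs a 'text' field):

```python
# Hypothetical usage sketch; model path and context texts are placeholders.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-electra-base", max_length=512)
ranker = CERank(cross_encoder, batch_size=32)

contexts = {
    "doc_0": {"text": "Paragraph about topic A."},
    "doc_1": {"text": "Paragraph about topic B."},
}
rankings = ranker.rank(contexts, "Which paragraph covers topic A?")
# rankings["question_0"] holds the question text, per-context scores,
# and the context ids sorted from most to least relevant.
```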
diff --git a/spaces/EsoCode/text-generation-webui/css/html_4chan_style.css b/spaces/EsoCode/text-generation-webui/css/html_4chan_style.css
deleted file mode 100644
index 99ac68452351f3b5b5a109e53dae789e6f61c804..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/css/html_4chan_style.css
+++ /dev/null
@@ -1,104 +0,0 @@
-#parent #container {
- background-color: #eef2ff;
- padding: 17px;
-}
-
-#parent #container .reply {
- background-color: rgb(214, 218, 240);
- border-bottom-color: rgb(183, 197, 217);
- border-bottom-style: solid;
- border-bottom-width: 1px;
- border-image-outset: 0;
- border-image-repeat: stretch;
- border-image-slice: 100%;
- border-image-source: none;
- border-image-width: 1;
- border-left-color: rgb(0, 0, 0);
- border-left-style: none;
- border-left-width: 0px;
- border-right-color: rgb(183, 197, 217);
- border-right-style: solid;
- border-right-width: 1px;
- border-top-color: rgb(0, 0, 0);
- border-top-style: none;
- border-top-width: 0px;
- color: rgb(0, 0, 0);
- display: table;
- font-family: arial, helvetica, sans-serif;
- font-size: 13.3333px;
- margin-bottom: 4px;
- margin-left: 0px;
- margin-right: 0px;
- margin-top: 4px;
- overflow-x: hidden;
- overflow-y: hidden;
- padding-bottom: 4px;
- padding-left: 2px;
- padding-right: 2px;
- padding-top: 4px;
-}
-
-#parent #container .number {
- color: rgb(0, 0, 0);
- font-family: arial, helvetica, sans-serif;
- font-size: 13.3333px;
- width: 342.65px;
- margin-right: 7px;
-}
-
-#parent #container .op {
- color: rgb(0, 0, 0);
- font-family: arial, helvetica, sans-serif;
- font-size: 13.3333px;
- margin-bottom: 8px;
- margin-left: 0px;
- margin-right: 0px;
- margin-top: 4px;
- overflow-x: hidden;
- overflow-y: hidden;
-}
-
-#parent #container .op blockquote {
- margin-left: 0px !important;
-}
-
-#parent #container .name {
- color: rgb(17, 119, 67);
- font-family: arial, helvetica, sans-serif;
- font-size: 13.3333px;
- font-weight: 700;
- margin-left: 7px;
-}
-
-#parent #container .quote {
- color: rgb(221, 0, 0);
- font-family: arial, helvetica, sans-serif;
- font-size: 13.3333px;
- text-decoration-color: rgb(221, 0, 0);
- text-decoration-line: underline;
- text-decoration-style: solid;
- text-decoration-thickness: auto;
-}
-
-#parent #container .greentext {
- color: rgb(120, 153, 34);
- font-family: arial, helvetica, sans-serif;
- font-size: 13.3333px;
-}
-
-#parent #container blockquote {
- margin: 0px !important;
- margin-block-start: 1em;
- margin-block-end: 1em;
- margin-inline-start: 40px;
- margin-inline-end: 40px;
- margin-top: 13.33px !important;
- margin-bottom: 13.33px !important;
- margin-left: 40px !important;
- margin-right: 40px !important;
-}
-
-#parent #container .message {
- color: black;
- border: none;
-}
\ No newline at end of file
diff --git a/spaces/EsoCode/text-generation-webui/docs/DeepSpeed.md b/spaces/EsoCode/text-generation-webui/docs/DeepSpeed.md
deleted file mode 100644
index 6170f6819ca072ff50fd1146b64d73f74ab00473..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/docs/DeepSpeed.md
+++ /dev/null
@@ -1,24 +0,0 @@
-An alternative way of reducing the GPU memory usage of models is to use the `DeepSpeed ZeRO-3` optimization.
-
-With this, I have been able to load a 6b model (GPT-J 6B) with less than 6GB of VRAM. The speed of text generation is very decent and much better than what would be accomplished with `--auto-devices --gpu-memory 6`.
-
-As far as I know, DeepSpeed is only available for Linux at the moment.
-
-### How to use it
-
-1. Install DeepSpeed:
-
-```
-conda install -c conda-forge mpi4py mpich
-pip install -U deepspeed
-```
-
-2. Start the web UI replacing `python` with `deepspeed --num_gpus=1` and adding the `--deepspeed` flag. Example:
-
-```
-deepspeed --num_gpus=1 server.py --deepspeed --chat --model gpt-j-6B
-```
-
-### Learn more
-
-For more information, check out [this comment](https://github.com/oobabooga/text-generation-webui/issues/40#issuecomment-1412038622) by 81300, who came up with the DeepSpeed support in this web UI.
\ No newline at end of file
diff --git a/spaces/EsoCode/text-generation-webui/modules/training.py b/spaces/EsoCode/text-generation-webui/modules/training.py
deleted file mode 100644
index 855ed914a4e21f3a384e811fc3ef7f5529f5f2b9..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/modules/training.py
+++ /dev/null
@@ -1,636 +0,0 @@
-import json
-import math
-import random
-import sys
-import threading
-import time
-import traceback
-from pathlib import Path
-
-import gradio as gr
-import torch
-import transformers
-
-import shutil
-from datetime import datetime
-
-from datasets import Dataset, load_dataset
-from peft import (
- LoraConfig,
- get_peft_model,
- prepare_model_for_int8_training,
- set_peft_model_state_dict
-)
-
-from modules import shared, ui, utils
-from modules.evaluate import (
- calculate_perplexity,
- generate_markdown_table,
- save_past_evaluations
-)
-from modules.logging_colors import logger
-
-# This mapping is from a very recent commit, not yet released.
-# If not available, default to a backup map for some common model types.
-try:
- from peft.utils.other import \
- TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING as \
- model_to_lora_modules
- from transformers.models.auto.modeling_auto import (
- MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
- )
- MODEL_CLASSES = {v: k for k, v in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.items()}
-except:
- standard_modules = ["q_proj", "v_proj"]
- model_to_lora_modules = {"llama": standard_modules, "opt": standard_modules, "gptj": standard_modules, "gpt_neox": ["query_key_value"], "rw": ["query_key_value"]}
- MODEL_CLASSES = {
- "LlamaForCausalLM": "llama",
- "OPTForCausalLM": "opt",
- "GPTJForCausalLM": "gptj",
- "GPTNeoXForCausalLM": "gpt_neox",
- "RWForCausalLM": "rw"
-
- }
-
-train_log = {}
-train_template = {}
-
-WANT_INTERRUPT = False
-PARAMETERS = ["lora_name", "always_override", "save_steps", "micro_batch_size", "batch_size", "epochs", "learning_rate", "lr_scheduler_type", "lora_rank", "lora_alpha", "lora_dropout", "cutoff_len", "dataset", "eval_dataset", "format", "eval_steps", "raw_text_file", "overlap_len", "newline_favor_len", "higher_rank_limit", "warmup_steps", "optimizer", "hard_cut_string", "train_only_after", "stop_at_loss"]
-
-
-def create_train_interface():
- with gr.Tab('Train LoRA', elem_id='lora-train-tab'):
- gr.Markdown("Confused? [[Click here for a guide]](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Training-LoRAs.md)")
-
- with gr.Row():
- lora_name = gr.Textbox(label='Name', info='The name of your new LoRA file')
- always_override = gr.Checkbox(label='Override Existing Files', value=False, info='If the name given is the same as an existing file, checking this will replace that file. Leaving unchecked will load that file and continue from it (must use the same rank value as the original had).')
- save_steps = gr.Number(label='Save every n steps', value=0, info='If above 0, a checkpoint of the LoRA will be saved every time this many steps pass.')
-
- with gr.Row():
- copy_from = gr.Dropdown(label='Copy parameters from', value='None', choices=utils.get_available_loras())
- ui.create_refresh_button(copy_from, lambda: None, lambda: {'choices': utils.get_available_loras()}, 'refresh-button')
-
- with gr.Row():
- # TODO: Implement multi-device support.
- micro_batch_size = gr.Slider(label='Micro Batch Size', value=4, minimum=1, maximum=128, step=1, info='Per-device batch size (NOTE: multiple devices not yet implemented). Increasing this will increase VRAM usage.')
- batch_size = gr.Slider(label='Batch Size', value=128, minimum=0, maximum=1024, step=4, info='Global batch size. The two batch sizes together determine gradient accumulation (gradientAccum = batch / microBatch). Higher gradient accum values lead to better quality training.')
-
- with gr.Row():
- epochs = gr.Number(label='Epochs', value=3, info='Number of times every entry in the dataset should be fed into training. So 1 means feed each item in once, 5 means feed it in five times, etc.')
- learning_rate = gr.Textbox(label='Learning Rate', value='3e-4', info='Learning rate, in scientific notation. 3e-4 is a good starting base point. 1e-2 is extremely high, 1e-6 is extremely low.')
- lr_scheduler_type = gr.Dropdown(label='LR Scheduler', value='linear', choices=['linear', 'constant', 'constant_with_warmup', 'cosine', 'cosine_with_restarts', 'polynomial', 'inverse_sqrt'], info='Learning rate scheduler - defines how the learning rate changes over time. "Constant" means never change, "linear" means to go in a straight line from the learning rate down to 0, cosine follows a curve, etc.')
-
- # TODO: What is the actual maximum rank? Likely distinct per model. This might be better to somehow be on a log scale.
- lora_rank = gr.Slider(label='LoRA Rank', value=32, minimum=0, maximum=1024, step=4, info='LoRA Rank, or dimension count. Higher values produce a larger file with better control over the model\'s content. Smaller values produce a smaller file with less overall control. Small values like 4 or 8 are great for stylistic guidance, higher values like 128 or 256 are good for teaching content upgrades, extremely high values (1024+) are difficult to train but may improve fine-detail learning for large datasets. Higher ranks also require higher VRAM.')
- lora_alpha = gr.Slider(label='LoRA Alpha', value=64, minimum=0, maximum=2048, step=4, info='LoRA Alpha. This divided by the rank becomes the scaling of the LoRA. Higher means stronger. A good standard value is twice your Rank.')
-
- cutoff_len = gr.Slider(label='Cutoff Length', minimum=0, maximum=2048, value=256, step=32, info='Cutoff length for text input. Essentially, how long of a line of text to feed in at a time. Higher values require drastically more VRAM.')
-
- with gr.Tab(label='Formatted Dataset'):
- with gr.Row():
- dataset = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'json'), value='None', label='Dataset', info='The dataset file to use for training.')
- ui.create_refresh_button(dataset, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'json')}, 'refresh-button')
- eval_dataset = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'json'), value='None', label='Evaluation Dataset', info='The (optional) dataset file used to evaluate the model after training.')
- ui.create_refresh_button(eval_dataset, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'json')}, 'refresh-button')
- format = gr.Dropdown(choices=utils.get_datasets('training/formats', 'json'), value='None', label='Data Format', info='The format file used to decide how to format the dataset input.')
- ui.create_refresh_button(format, lambda: None, lambda: {'choices': utils.get_datasets('training/formats', 'json')}, 'refresh-button')
-
- eval_steps = gr.Number(label='Evaluate every n steps', value=100, info='If an evaluation dataset is given, test it every time this many steps pass.')
-
- with gr.Tab(label="Raw text file"):
- with gr.Row():
- raw_text_file = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'txt'), value='None', label='Text file', info='The raw text file to use for training.')
- ui.create_refresh_button(raw_text_file, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'txt')}, 'refresh-button')
- hard_cut_string = gr.Textbox(label='Hard Cut String', value='\\n\\n\\n', info='String that indicates a hard cut between text parts. Helps prevent unwanted overlap.')
-
- with gr.Row():
- overlap_len = gr.Slider(label='Overlap Length', minimum=0, maximum=512, value=128, step=16, info='Overlap length - ie how many tokens from the prior chunk of text to include into the next chunk. (The chunks themselves will be of a size determined by Cutoff Length below). Setting overlap to exactly half the cutoff length may be ideal.')
- newline_favor_len = gr.Slider(label='Prefer Newline Cut Length', minimum=0, maximum=512, value=128, step=16, info='Length (in characters, not tokens) of the maximum distance to shift an overlap cut by to ensure chunks cut at newlines. If too low, cuts may occur in the middle of lines.')
-
- with gr.Accordion(label='Advanced Options', open=False):
- lora_dropout = gr.Slider(label='LoRA Dropout', minimum=0.0, maximum=1.0, step=0.025, value=0.05, info='Percentage probability for dropout of LoRA layers. This can help reduce overfitting. Most users should leave at default.')
- warmup_steps = gr.Number(label='Warmup Steps', value=100, info='For this many steps at the start, the learning rate will be lower than normal. This helps the trainer prepare the model and precompute statistics to improve the quality of training after the start.')
- optimizer = gr.Dropdown(label='Optimizer', value='adamw_torch', choices=['adamw_hf', 'adamw_torch', 'adamw_torch_fused', 'adamw_torch_xla', 'adamw_apex_fused', 'adafactor', 'adamw_bnb_8bit', 'adamw_anyprecision', 'sgd', 'adagrad'], info='Different optimizer implementation options, for advanced users. Effects of different options are not well documented yet.')
- train_only_after = gr.Textbox(label='Train Only After', value='', info='Only consider text *after* this string in any given chunk for training. For Alpaca datasets, use "### Response:" to only train the response and ignore the input.')
- stop_at_loss = gr.Slider(label='Stop at loss', minimum=0.0, maximum=3.0, step=0.1, value=0.00, info='The process will automatically stop once the desired loss value is reached. (reasonable numbers are 1.5-1.8)')
-
- with gr.Row():
- higher_rank_limit = gr.Checkbox(label='Enable higher ranks', value=False, info='If checked, changes Rank/Alpha slider above to go much higher. This will not work without a datacenter-class GPU.')
-
- with gr.Row():
- start_button = gr.Button("Start LoRA Training")
- stop_button = gr.Button("Interrupt")
-
- output = gr.Markdown(value="Ready")
-
- with gr.Tab('Perplexity evaluation', elem_id='evaluate-tab'):
- with gr.Row():
- with gr.Column():
- models = gr.Dropdown(utils.get_available_models(), label='Models', multiselect=True)
- evaluate_text_file = gr.Dropdown(choices=['wikitext', 'ptb', 'ptb_new'] + utils.get_datasets('training/datasets', 'txt')[1:], value='wikitext', label='Input dataset', info='The raw text file on which the model will be evaluated. The first options are automatically downloaded: wikitext, ptb, and ptb_new. The next options are your local text files under training/datasets.')
- with gr.Row():
- stride_length = gr.Slider(label='Stride', minimum=1, maximum=2048, value=512, step=1, info='Used to make the evaluation faster at the cost of accuracy. 1 = slowest but most accurate. 512 is a common value.')
- max_length = gr.Slider(label='max_length', minimum=0, maximum=8096, value=0, step=1, info='The context for each evaluation. If set to 0, the maximum context length for the model will be used.')
-
- with gr.Row():
- start_current_evaluation = gr.Button("Evaluate loaded model")
- start_evaluation = gr.Button("Evaluate selected models")
- stop_evaluation = gr.Button("Interrupt")
-
- with gr.Column():
- evaluation_log = gr.Markdown(value='')
-
- evaluation_table = gr.Dataframe(value=generate_markdown_table(), interactive=True)
- with gr.Row():
- save_comments = gr.Button('Save comments', elem_classes="small-button")
- refresh_table = gr.Button('Refresh the table', elem_classes="small-button")
-
- # Training events
- all_params = [lora_name, always_override, save_steps, micro_batch_size, batch_size, epochs, learning_rate, lr_scheduler_type, lora_rank, lora_alpha, lora_dropout, cutoff_len, dataset, eval_dataset, format, eval_steps, raw_text_file, overlap_len, newline_favor_len, higher_rank_limit, warmup_steps, optimizer, hard_cut_string, train_only_after, stop_at_loss]
- copy_from.change(do_copy_params, [copy_from] + all_params, all_params)
- start_button.click(do_train, all_params, output)
- stop_button.click(do_interrupt, None, None, queue=False)
- higher_rank_limit.change(change_rank_limit, [higher_rank_limit], [lora_rank, lora_alpha])
-
- # Evaluation events. For some reason, the interrupt event
- # doesn't work with the .then() syntax, so I write them one
- # by one in this ugly but functional way.
- ev = start_evaluation.click(calculate_perplexity, [models, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
- start_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
-
- tmp = gr.State('')
- start_current_evaluation.click(lambda: ['current model'], None, tmp)
- ev_cur = start_current_evaluation.click(calculate_perplexity, [tmp, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
- start_current_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
-
- stop_evaluation.click(None, None, None, cancels=[ev, ev_cur], queue=False)
- refresh_table.click(generate_markdown_table, None, evaluation_table, show_progress=True)
- save_comments.click(
- save_past_evaluations, evaluation_table, None).then(
- lambda: "Comments saved.", None, evaluation_log, show_progress=False)
-
-
-def do_interrupt():
- global WANT_INTERRUPT
- WANT_INTERRUPT = True
-
-
-def do_copy_params(lora_name: str, *args):
- f_name = f"{shared.args.lora_dir}/{clean_path(None, lora_name)}/training_parameters.json"
- if Path(f_name).is_file():
- with open(f_name, 'r', encoding='utf-8') as format_file:
- params: dict[str, str] = json.load(format_file)
- else:
- params = {}
-
- result = list()
- for i in range(0, len(PARAMETERS)):
- key = PARAMETERS[i]
- if key in params:
- result.append(params[key])
- else:
- result.append(args[i])
-
- return result
-
-
-def change_rank_limit(use_higher_ranks: bool):
- mult = 2 if use_higher_ranks else 1
- return {"maximum": 1024 * mult, "__type__": "update"}, {"maximum": 2048 * mult, "__type__": "update"}
-
-
-def clean_path(base_path: str, path: str):
- """Strips unusual symbols and forcibly builds a path as relative to the intended directory."""
- # TODO: Probably could do with a security audit to guarantee there's no ways this can be bypassed to target an unwanted path.
- # Or swap it to a strict whitelist of [a-zA-Z_0-9]
- path = path.replace('\\', '/').replace('..', '_')
- if base_path is None:
- return path
-
- return f'{Path(base_path).absolute()}/{path}'
-
-
-def backup_adapter(input_folder):
- # Get the creation date of the file adapter_model.bin
- try:
- adapter_file = Path(f"{input_folder}/adapter_model.bin")
- if adapter_file.is_file():
-
- logger.info("Backing up existing LoRA adapter...")
- creation_date = datetime.fromtimestamp(adapter_file.stat().st_ctime)
- creation_date_str = creation_date.strftime("Backup-%Y-%m-%d")
-
- # Create the new subfolder
- subfolder_path = Path(f"{input_folder}/{creation_date_str}")
- subfolder_path.mkdir(parents=True, exist_ok=True)
-
- # Check if the file already exists in the subfolder
- backup_adapter_file = Path(f"{input_folder}/{creation_date_str}/adapter_model.bin")
- if backup_adapter_file.is_file():
- print(" - Backup already exists. Skipping backup process.")
- return
-
- # Copy existing files to the new subfolder
- existing_files = Path(input_folder).iterdir()
- for file in existing_files:
- if file.is_file():
- shutil.copy2(file, subfolder_path)
- except Exception as e:
- print("An error occurred in backup_adapter:", str(e))
-
-
-def do_train(lora_name: str, always_override: bool, save_steps: int, micro_batch_size: int, batch_size: int, epochs: int, learning_rate: str, lr_scheduler_type: str, lora_rank: int, lora_alpha: int, lora_dropout: float, cutoff_len: int, dataset: str, eval_dataset: str, format: str, eval_steps: int, raw_text_file: str, overlap_len: int, newline_favor_len: int, higher_rank_limit: bool, warmup_steps: int, optimizer: str, hard_cut_string: str, train_only_after: str, stop_at_loss: float):
-
- if shared.args.monkey_patch:
- from monkeypatch.peft_tuners_lora_monkey_patch import (
- replace_peft_model_with_gptq_lora_model
- )
- replace_peft_model_with_gptq_lora_model()
-
- global WANT_INTERRUPT
- WANT_INTERRUPT = False
-
- # == Input validation / processing ==
- yield "Prepping..."
- lora_file_path = clean_path(None, lora_name)
- if lora_file_path.strip() == '':
- yield "Missing or invalid LoRA file name input."
- return
-
- lora_file_path = f"{shared.args.lora_dir}/{lora_file_path}"
- actual_lr = float(learning_rate)
- model_type = type(shared.model).__name__
-
- if model_type in MODEL_CLASSES:
- model_id = MODEL_CLASSES[model_type]
- else:
- model_id = "llama"
- if model_type == "PeftModelForCausalLM":
- if len(shared.args.lora_names) > 0:
- yield "You are trying to train a LoRA while you already have another LoRA loaded. This will work, but may have unexpected effects. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logger.warning("Training LoRA over top of another LoRA. May have unexpected effects.")
- else:
- yield "Model ID not matched due to LoRA loading. Consider reloading base model. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logger.warning("Model ID not matched due to LoRA loading. Consider reloading base model.")
- else:
- yield "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. Unexpected errors may follow. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logger.warning(f"LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: {model_type})")
-
- time.sleep(5)
-
- if shared.args.wbits > 0 and not shared.args.monkey_patch:
- yield "LoRA training with GPTQ models requires loading with `--monkey-patch`"
- return
-
- elif not (shared.args.load_in_8bit or shared.args.load_in_4bit) and shared.args.wbits <= 0:
- yield "It is highly recommended you use `--load-in-8bit` for LoRA training. *(Will continue anyway in 2 seconds, press `Interrupt` to stop.)*"
- logger.warning("It is highly recommended you use `--load-in-8bit` for LoRA training.")
- time.sleep(2) # Give it a moment for the message to show in UI before continuing
-
- if cutoff_len <= 0 or micro_batch_size <= 0 or batch_size <= 0 or actual_lr <= 0 or lora_rank <= 0 or lora_alpha <= 0:
- yield "Cannot input zeroes."
- return
-
- gradient_accumulation_steps = batch_size // micro_batch_size
- shared.tokenizer.pad_token_id = 0
- shared.tokenizer.padding_side = "left"
-
- def encode(text, add_bos_token):
- result = shared.tokenizer.encode(text, truncation=True, max_length=cutoff_len)
- if not add_bos_token and result[0] == shared.tokenizer.bos_token_id:
- result = result[1:]
- return result
-
- def tokenize(prompt):
-
- if train_only_after == '' or train_only_after not in prompt:
- input_ids = encode(prompt, True)
- input_ids = [shared.tokenizer.pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
- labels = [1] * len(input_ids)
-
- else:
- ind = prompt.index(train_only_after) + len(train_only_after)
- before_tokens = encode(prompt[:ind], True)
- after_tokens = encode(prompt[ind:], False)
-
- full_length = len(after_tokens) + len(before_tokens)
- if full_length > cutoff_len:
- after_tokens = after_tokens[:cutoff_len - len(before_tokens)]
- else:
- before_tokens = [shared.tokenizer.pad_token_id] * (cutoff_len - full_length) + before_tokens
-
- input_ids = before_tokens + after_tokens
- labels = [-100] * len(before_tokens) + [1] * len(after_tokens)
-
- input_ids = torch.tensor(input_ids)
- return {
- "input_ids": input_ids,
- "labels": labels,
- "attention_mask": input_ids.ne(shared.tokenizer.pad_token_id),
- }
-
- train_template.clear()
-
- # == Prep the dataset, format, etc ==
- if raw_text_file not in ['None', '']:
- logger.info("Loading raw text file dataset...")
-
- train_template["template_type"] = "raw_text"
-
- with open(clean_path('training/datasets', f'{raw_text_file}.txt'), 'r', encoding='utf-8') as file:
- raw_text = file.read().replace('\r', '')
-
- cut_string = hard_cut_string.replace('\\n', '\n')
- out_tokens = []
- for text_part in raw_text.split(cut_string):
- if text_part.strip() == '':
- continue
-
- tokens = shared.tokenizer.encode(text_part)
- step = cutoff_len - overlap_len
- if step <= 0:
- yield f"Error: overlap_len ({overlap_len}) cannot be greater than or equal to cutoff_len ({cutoff_len})"
- return
-
- tokens = list(split_chunks(tokens, step))
- for i in range(1, len(tokens)):
- tokens[i] = tokens[i - 1][-overlap_len:] + tokens[i]
-
- out_tokens.extend(tokens)
- del tokens
-
- del raw_text # Note: could be a gig for a large dataset, so delete redundant data as we go to be safe on RAM
- text_chunks = [shared.tokenizer.decode(x) for x in out_tokens]
- del out_tokens
- if newline_favor_len > 0:
- text_chunks = [cut_chunk_for_newline(x, newline_favor_len) for x in text_chunks]
-
- train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
- del text_chunks
- eval_data = None
- else:
- if dataset in ['None', '']:
- yield "**Missing dataset choice input, cannot continue.**"
- return
-
- if format in ['None', '']:
- yield "**Missing format choice input, cannot continue.**"
- return
-
- train_template["template_type"] = "dataset"
-
- with open(clean_path('training/formats', f'{format}.json'), 'r', encoding='utf-8-sig') as formatFile:
- format_data: dict[str, str] = json.load(formatFile)
-
- # == store training prompt ==
- for _, value in format_data.items():
- prompt_key = f"template_{len(train_template)}"
- train_template[prompt_key] = value
-
- def generate_prompt(data_point: dict[str, str]):
- for options, data in format_data.items():
- if set(options.split(',')) == set(x[0] for x in data_point.items() if (x[1] is not None and len(x[1].strip()) > 0)):
- for key, val in data_point.items():
- if val is not None:
- data = data.replace(f'%{key}%', val)
- return data
- raise RuntimeError(f'Data-point "{data_point}" has no keyset match within format "{list(format_data.keys())}"')
-
- def generate_and_tokenize_prompt(data_point):
- prompt = generate_prompt(data_point)
- return tokenize(prompt)
-
- logger.info("Loading JSON datasets...")
- data = load_dataset("json", data_files=clean_path('training/datasets', f'{dataset}.json'))
- train_data = data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
-
- if eval_dataset == 'None':
- eval_data = None
- else:
- eval_data = load_dataset("json", data_files=clean_path('training/datasets', f'{eval_dataset}.json'))
- eval_data = eval_data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
-
- # == Start prepping the model itself ==
- if not hasattr(shared.model, 'lm_head') or hasattr(shared.model.lm_head, 'weight'):
- logger.info("Getting model ready...")
- prepare_model_for_int8_training(shared.model)
-
- logger.info("Prepping for training...")
- config = LoraConfig(
- r=lora_rank,
- lora_alpha=lora_alpha,
- target_modules=model_to_lora_modules[model_id],
- lora_dropout=lora_dropout,
- bias="none",
- task_type="CAUSAL_LM"
- )
-
- # == Backup the existing adapter ==
- if not always_override:
- backup_adapter(lora_file_path)
-
- try:
- logger.info("Creating LoRA model...")
- lora_model = get_peft_model(shared.model, config)
- if not always_override and Path(f"{lora_file_path}/adapter_model.bin").is_file():
- logger.info("Loading existing LoRA data...")
- state_dict_peft = torch.load(f"{lora_file_path}/adapter_model.bin")
- set_peft_model_state_dict(lora_model, state_dict_peft)
- except:
- yield traceback.format_exc()
- return
-
- if shared.args.monkey_patch:
- for n, m in lora_model.named_modules():
- if '4bit' in str(type(m)):
- if m.is_v1_model:
- m.zeros = m.zeros.half()
-
- m.scales = m.scales.half()
-
- class Tracked():
- def __init__(self):
- self.current_steps = 0
- self.max_steps = 0
- self.did_save = False
-
- tracked = Tracked()
- actual_save_steps = math.ceil(save_steps / gradient_accumulation_steps)
-
- class Callbacks(transformers.TrainerCallback):
- def on_step_begin(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
- tracked.current_steps = state.global_step * gradient_accumulation_steps
- tracked.max_steps = state.max_steps * gradient_accumulation_steps
- if WANT_INTERRUPT:
- control.should_epoch_stop = True
- control.should_training_stop = True
- elif state.global_step > 0 and actual_save_steps > 0 and state.global_step % actual_save_steps == 0:
- lora_model.save_pretrained(f"{lora_file_path}/checkpoint-{tracked.current_steps}/")
- # Save log
- with open(f"{lora_file_path}/checkpoint-{tracked.current_steps}/training_log.json", 'w', encoding='utf-8') as file:
- json.dump(train_log, file, indent=2)
- # == Save training prompt ==
- with open(f"{lora_file_path}/checkpoint-{tracked.current_steps}/training_prompt.json", 'w', encoding='utf-8') as file:
- json.dump(train_template, file, indent=2)
-
- def on_substep_end(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
- tracked.current_steps += 1
- if WANT_INTERRUPT:
- control.should_epoch_stop = True
- control.should_training_stop = True
-
- def on_log(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, logs, **kwargs):
- train_log.update(logs)
- train_log.update({"current_steps": tracked.current_steps})
- if WANT_INTERRUPT:
- print("\033[1;31;1mInterrupted by user\033[0;37;0m")
-
- print(f"\033[1;30;40mStep: {tracked.current_steps} \033[0;37;0m", end='')
- if 'loss' in logs:
- loss = float(logs['loss'])
- if loss <= stop_at_loss:
- control.should_epoch_stop = True
- control.should_training_stop = True
- print(f"\033[1;31;1mStop Loss {stop_at_loss} reached.\033[0;37;0m")
-
- trainer = transformers.Trainer(
- model=lora_model,
- train_dataset=train_data,
- eval_dataset=eval_data,
- args=transformers.TrainingArguments(
- per_device_train_batch_size=micro_batch_size,
- gradient_accumulation_steps=gradient_accumulation_steps,
- warmup_steps=math.ceil(warmup_steps / gradient_accumulation_steps),
- num_train_epochs=epochs,
- learning_rate=actual_lr,
- fp16=False if shared.args.cpu else True,
- optim=optimizer,
- logging_steps=2 if stop_at_loss > 0 else 5,
- evaluation_strategy="steps" if eval_data is not None else "no",
- eval_steps=math.ceil(eval_steps / gradient_accumulation_steps) if eval_data is not None else None,
- save_strategy="steps" if eval_data is not None else "no",
- output_dir=lora_file_path,
- lr_scheduler_type=lr_scheduler_type,
- load_best_model_at_end=eval_data is not None,
- # TODO: Enable multi-device support
- ddp_find_unused_parameters=None,
- no_cuda=shared.args.cpu
- ),
- data_collator=transformers.DataCollatorForLanguageModeling(shared.tokenizer, mlm=False),
- callbacks=list([Callbacks()])
- )
-
- lora_model.config.use_cache = False
-
- if torch.__version__ >= "2" and sys.platform != "win32":
- lora_model = torch.compile(lora_model)
-
- # == Save parameters for reuse ==
- with open(f"{lora_file_path}/training_parameters.json", 'w', encoding='utf-8') as file:
- vars = locals()
- json.dump({x: vars[x] for x in PARAMETERS}, file, indent=2)
-
- # == Save training prompt ==
- with open(f"{lora_file_path}/training_prompt.json", 'w', encoding='utf-8') as file:
- json.dump(train_template, file, indent=2)
-
- # == Main run and monitor loop ==
- logger.info("Starting training...")
- yield "Starting..."
-
- train_log.update({"base_model_name": shared.model_name})
- train_log.update({"base_model_class": shared.model.__class__.__name__})
- train_log.update({"base_loaded_in_4bit": getattr(lora_model, "is_loaded_in_4bit", False)})
- train_log.update({"base_loaded_in_8bit": getattr(lora_model, "is_loaded_in_8bit", False)})
-
- if stop_at_loss > 0:
- print(f"Monitoring loss \033[1;31;1m(Auto-Stop at: {stop_at_loss})\033[0;37;0m")
-
- if WANT_INTERRUPT:
- yield "Interrupted before start."
- return
-
- def threaded_run():
- trainer.train()
- # Note: save in the thread in case the gradio thread breaks (eg browser closed)
- lora_model.save_pretrained(lora_file_path)
- logger.info("LoRA training run is completed and saved.")
- # Save log
- with open(f"{lora_file_path}/training_log.json", 'w', encoding='utf-8') as file:
- json.dump(train_log, file, indent=2)
-
- thread = threading.Thread(target=threaded_run)
- thread.start()
- last_step = 0
- start_time = time.perf_counter()
-
- while thread.is_alive():
- time.sleep(0.5)
- if WANT_INTERRUPT:
- yield "Interrupting, please wait... *(Run will stop after the current training step completes.)*"
-
- elif tracked.current_steps != last_step:
- last_step = tracked.current_steps
- time_elapsed = time.perf_counter() - start_time
- if time_elapsed <= 0:
- timer_info = ""
- total_time_estimate = 999
- else:
- its = tracked.current_steps / time_elapsed
- if its > 1:
- timer_info = f"`{its:.2f}` it/s"
- else:
- timer_info = f"`{1.0/its:.2f}` s/it"
-
- total_time_estimate = (1.0 / its) * (tracked.max_steps)
-
- yield f"Running... **{tracked.current_steps}** / **{tracked.max_steps}** ... {timer_info}, {format_time(time_elapsed)} / {format_time(total_time_estimate)} ... {format_time(total_time_estimate - time_elapsed)} remaining"
-
- # Saving in the train thread might fail if an error occurs, so save here if so.
- if not tracked.did_save:
- logger.info("Training complete, saving...")
- lora_model.save_pretrained(lora_file_path)
-
- if WANT_INTERRUPT:
- logger.info("Training interrupted.")
- yield f"Interrupted. Incomplete LoRA saved to `{lora_file_path}`"
- else:
- logger.info("Training complete!")
- yield f"Done! LoRA saved to `{lora_file_path}`"
-
-
-def split_chunks(arr, step):
- for i in range(0, len(arr), step):
- yield arr[i:i + step]
-
-
-def cut_chunk_for_newline(chunk: str, max_length: int):
- if '\n' not in chunk:
- return chunk
-
- first_newline = chunk.index('\n')
- if first_newline < max_length:
- chunk = chunk[first_newline + 1:]
-
- if '\n' not in chunk:
- return chunk
-
- last_newline = chunk.rindex('\n')
- if len(chunk) - last_newline < max_length:
- chunk = chunk[:last_newline]
-
- return chunk
-
-
-def format_time(seconds: float):
- if seconds < 120:
- return f"`{seconds:.0f}` seconds"
-
- minutes = seconds / 60
- if minutes < 120:
- return f"`{minutes:.0f}` minutes"
-
- hours = minutes / 60
- return f"`{hours:.0f}` hours"
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/tps/README.md b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/tps/README.md
deleted file mode 100644
index 0066fb154bf7f2fa26a3ac00acaddb2ed4d30f03..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/tps/README.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# CRNN-STN
-
-
-
-## Abstract
-
-Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.
-
-
-
-
diff --git a/spaces/Firefly777a/openai-moderation-api-demo/app.py b/spaces/Firefly777a/openai-moderation-api-demo/app.py
deleted file mode 100644
index e035adce80c36e3e8bc0ec8afe9b766ab6711df5..0000000000000000000000000000000000000000
--- a/spaces/Firefly777a/openai-moderation-api-demo/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import os
-import tiktoken
-
-import gradio as gr
-import openai
-
-openai.api_key = os.environ['OPENAI_API_KEY']
-
-# Tokenizer (Used solely for counting tokens here)
-enc = tiktoken.encoding_for_model("text-davinci-003")
-
-def token_count(text):
- num_tokens = len(enc.encode(text))
- return f"There are {num_tokens} tokens."
-
-def moderation(text):
- response = openai.Moderation.create(input=text)
- output = response["results"][0]
- return output
-
-def main(text):
- return moderation(text), token_count(text)
-
-iface = gr.Interface(
- fn=main,
- inputs=gr.inputs.Textbox(lines=10, label="Text"),
- outputs=["text","text"],
- title="OpenAI Moderation API",
- description="This is a demo of the OpenAI Moderation API. Enter text in the box below and click submit to see the output.",
- allow_flagging=False,
- layout="vertical",
- theme="huggingface",
-)
-
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/utils.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/utils.py
deleted file mode 100644
index 0e3c1702d8160f8d348e8b07596a9d172da624eb..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/utils.py
+++ /dev/null
@@ -1,446 +0,0 @@
-import os
-import glob
-import re
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import warnings
-import random
-import functools
-
-import librosa
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-from torch.nn import functional as F
-from modules.commons import sequence_mask
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.WARN)
-logger = logging
-
-f0_bin = 256
-f0_max = 1100.0
-f0_min = 50.0
-f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
-def normalize_f0(f0, x_mask, uv, random_scale=True):
- # calculate means based on x_mask
- uv_sum = torch.sum(uv, dim=1, keepdim=True)
- uv_sum[uv_sum == 0] = 9999
- means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum
-
- if random_scale:
- factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device)
- else:
- factor = torch.ones(f0.shape[0], 1).to(f0.device)
- # normalize f0 based on means and factor
- f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1)
- if torch.isnan(f0_norm).any():
- exit(0)
- return f0_norm * x_mask
-
-def plot_data_to_numpy(x, y):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- plt.plot(x)
- plt.plot(y)
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def f0_to_coarse(f0):
- is_torch = isinstance(f0, torch.Tensor)
- f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1
-
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
- f0_coarse = (f0_mel + 0.5).int() if is_torch else np.rint(f0_mel).astype(int)
- assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min())
- return f0_coarse
-
-def get_content(cmodel, y):
- with torch.no_grad():
- c = cmodel.extract_features(y.squeeze(1))[0]
- c = c.transpose(1, 2)
- return c
-
-def get_f0_predictor(f0_predictor,hop_length,sampling_rate,**kargs):
- if f0_predictor == "pm":
- from modules.F0Predictor.PMF0Predictor import PMF0Predictor
- f0_predictor_object = PMF0Predictor(hop_length=hop_length,sampling_rate=sampling_rate)
- elif f0_predictor == "crepe":
- from modules.F0Predictor.CrepeF0Predictor import CrepeF0Predictor
- f0_predictor_object = CrepeF0Predictor(hop_length=hop_length,sampling_rate=sampling_rate,device=kargs["device"],threshold=kargs["threshold"])
- elif f0_predictor == "harvest":
- from modules.F0Predictor.HarvestF0Predictor import HarvestF0Predictor
- f0_predictor_object = HarvestF0Predictor(hop_length=hop_length,sampling_rate=sampling_rate)
- elif f0_predictor == "dio":
- from modules.F0Predictor.DioF0Predictor import DioF0Predictor
- f0_predictor_object = DioF0Predictor(hop_length=hop_length,sampling_rate=sampling_rate)
- else:
- raise Exception("Unknown f0 predictor")
- return f0_predictor_object
-
-def get_speech_encoder(speech_encoder,device=None,**kargs):
- if speech_encoder == "vec768l12":
- from vencoder.ContentVec768L12 import ContentVec768L12
- speech_encoder_object = ContentVec768L12(device = device)
- elif speech_encoder == "vec256l9":
- from vencoder.ContentVec256L9 import ContentVec256L9
- speech_encoder_object = ContentVec256L9(device = device)
- elif speech_encoder == "vec256l9-onnx":
- from vencoder.ContentVec256L9_Onnx import ContentVec256L9_Onnx
- speech_encoder_object = ContentVec256L9(device = device)
- elif speech_encoder == "vec256l12-onnx":
- from vencoder.ContentVec256L12_Onnx import ContentVec256L12_Onnx
- speech_encoder_object = ContentVec256L9(device = device)
- elif speech_encoder == "vec768l9-onnx":
- from vencoder.ContentVec768L9_Onnx import ContentVec768L9_Onnx
- speech_encoder_object = ContentVec256L9(device = device)
- elif speech_encoder == "vec768l12-onnx":
- from vencoder.ContentVec768L12_Onnx import ContentVec768L12_Onnx
- speech_encoder_object = ContentVec256L9(device = device)
- elif speech_encoder == "hubertsoft-onnx":
- from vencoder.HubertSoft_Onnx import HubertSoft_Onnx
- speech_encoder_object = HubertSoft(device = device)
- elif speech_encoder == "hubertsoft":
- from vencoder.HubertSoft import HubertSoft
- speech_encoder_object = HubertSoft(device = device)
- elif speech_encoder == "whisper-ppg":
- from vencoder.WhisperPPG import WhisperPPG
- speech_encoder_object = WhisperPPG(device = device)
- else:
- raise Exception("Unknown speech encoder")
- return speech_encoder_object
-
-def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- # assert "dec" in k or "disc" in k
- # print("load", k)
- new_state_dict[k] = saved_state_dict[k]
- assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape)
- except:
- print("error, %s is not in the checkpoint" % k)
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- print("load ")
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict(),
- 'learning_rate': learning_rate}, checkpoint_path)
-
-def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True):
- """Freeing up space by deleting saved ckpts
-
- Arguments:
- path_to_models -- Path to the model directory
- n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth
- sort_by_time -- True -> chronologically delete ckpts
- False -> lexicographically delete ckpts
- """
- ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))]
- name_key = (lambda _f: int(re.compile(r'._(\d+)\.pth').match(_f).group(1)))
- time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f)))
- sort_key = time_key if sort_by_time else name_key
- x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key)
- to_del = [os.path.join(path_to_models, fn) for fn in
- (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])]
- del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}")
- del_routine = lambda x: [os.remove(x), del_info(x)]
- rs = [del_routine(fn) for fn in to_del]
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10,2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/config.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams =HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
- hparams =HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-def repeat_expand_2d(content, target_len):
- # content : [h, t]
-
- src_len = content.shape[-1]
- target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device)
- temp = torch.arange(src_len+1) * target_len / src_len
- current_pos = 0
- for i in range(target_len):
- if i < temp[current_pos+1]:
- target[:, i] = content[:, current_pos]
- else:
- current_pos += 1
- target[:, i] = content[:, current_pos]
-
- return target
-
-
-def mix_model(model_paths,mix_rate,mode):
- mix_rate = torch.FloatTensor(mix_rate)/100
- model_tem = torch.load(model_paths[0])
- models = [torch.load(path)["model"] for path in model_paths]
- if mode == 0:
- mix_rate = F.softmax(mix_rate,dim=0)
- for k in model_tem["model"].keys():
- model_tem["model"][k] = torch.zeros_like(model_tem["model"][k])
- for i,model in enumerate(models):
- model_tem["model"][k] += model[k]*mix_rate[i]
- torch.save(model_tem,os.path.join(os.path.curdir,"output.pth"))
- return os.path.join(os.path.curdir,"output.pth")
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
-
- def get(self,index):
- return self.__dict__.get(index)
-
-class Volume_Extractor:
- def __init__(self, hop_size = 512):
- self.hop_size = hop_size
-
- def extract(self, audio): # audio: 2d tensor array
- if not isinstance(audio,torch.Tensor):
- audio = torch.Tensor(audio)
- n_frames = int(audio.size(-1) // self.hop_size)
- audio2 = audio ** 2
- audio2 = torch.nn.functional.pad(audio2, (int(self.hop_size // 2), int((self.hop_size + 1) // 2)), mode = 'reflect')
- volume = torch.FloatTensor([torch.mean(audio2[:,int(n * self.hop_size) : int((n + 1) * self.hop_size)]) for n in range(n_frames)])
- volume = torch.sqrt(volume)
- return volume
\ No newline at end of file
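The HParams class above is a thin dict-to-attribute wrapper in which nested dicts become nested HParams instances. A brief sketch with a made-up config dict standing in for a real config.json:

```python
# Made-up values for illustration; real configs come from get_hparams_from_file().
config = {
    "train": {"learning_rate": 1e-4, "batch_size": 16},
    "data": {"sampling_rate": 44100, "hop_length": 512},
}
hps = HParams(**config)

print(hps.train.learning_rate)    # 0.0001 -- nested dicts are wrapped recursively
print(hps["data"].hop_length)     # 512    -- item access maps to attribute access
print("train" in hps, len(hps))   # True 2
```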
diff --git a/spaces/FridaZuley/RVC_HFKawaii/demucs/test.py b/spaces/FridaZuley/RVC_HFKawaii/demucs/test.py
deleted file mode 100644
index 4140914ddbff3543b4056ca0cb1b5e887434a40a..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/demucs/test.py
+++ /dev/null
@@ -1,109 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import gzip
-import sys
-from concurrent import futures
-
-import musdb
-import museval
-import torch as th
-import tqdm
-from scipy.io import wavfile
-from torch import distributed
-
-from .audio import convert_audio
-from .utils import apply_model
-
-
-def evaluate(model,
- musdb_path,
- eval_folder,
- workers=2,
- device="cpu",
- rank=0,
- save=False,
- shifts=0,
- split=False,
- overlap=0.25,
- is_wav=False,
- world_size=1):
- """
- Evaluate model using museval. Run the model
- on a single GPU, the bottleneck being the call to museval.
- """
-
- output_dir = eval_folder / "results"
- output_dir.mkdir(exist_ok=True, parents=True)
- json_folder = eval_folder / "results/test"
- json_folder.mkdir(exist_ok=True, parents=True)
-
- # we load tracks from the original musdb set
- test_set = musdb.DB(musdb_path, subsets=["test"], is_wav=is_wav)
- src_rate = 44100 # hardcoded for now...
-
- for p in model.parameters():
- p.requires_grad = False
- p.grad = None
-
- pendings = []
- with futures.ProcessPoolExecutor(workers or 1) as pool:
- for index in tqdm.tqdm(range(rank, len(test_set), world_size), file=sys.stdout):
- track = test_set.tracks[index]
-
- out = json_folder / f"{track.name}.json.gz"
- if out.exists():
- continue
-
- mix = th.from_numpy(track.audio).t().float()
- ref = mix.mean(dim=0) # mono mixture
- mix = (mix - ref.mean()) / ref.std()
- mix = convert_audio(mix, src_rate, model.samplerate, model.audio_channels)
- estimates = apply_model(model, mix.to(device),
- shifts=shifts, split=split, overlap=overlap)
- estimates = estimates * ref.std() + ref.mean()
-
- estimates = estimates.transpose(1, 2)
- references = th.stack(
- [th.from_numpy(track.targets[name].audio).t() for name in model.sources])
- references = convert_audio(references, src_rate,
- model.samplerate, model.audio_channels)
- references = references.transpose(1, 2).numpy()
- estimates = estimates.cpu().numpy()
- win = int(1. * model.samplerate)
- hop = int(1. * model.samplerate)
- if save:
- folder = eval_folder / "wav/test" / track.name
- folder.mkdir(exist_ok=True, parents=True)
- for name, estimate in zip(model.sources, estimates):
- wavfile.write(str(folder / (name + ".wav")), 44100, estimate)
-
- if workers:
- pendings.append((track.name, pool.submit(
- museval.evaluate, references, estimates, win=win, hop=hop)))
- else:
- pendings.append((track.name, museval.evaluate(
- references, estimates, win=win, hop=hop)))
- del references, mix, estimates, track
-
- for track_name, pending in tqdm.tqdm(pendings, file=sys.stdout):
- if workers:
- pending = pending.result()
- sdr, isr, sir, sar = pending
- track_store = museval.TrackStore(win=44100, hop=44100, track_name=track_name)
- for idx, target in enumerate(model.sources):
- values = {
- "SDR": sdr[idx].tolist(),
- "SIR": sir[idx].tolist(),
- "ISR": isr[idx].tolist(),
- "SAR": sar[idx].tolist()
- }
-
- track_store.add_target(target_name=target, values=values)
- json_path = json_folder / f"{track_name}.json.gz"
- gzip.open(json_path, "w").write(track_store.json.encode('utf-8'))
- if world_size > 1:
- distributed.barrier()
diff --git a/spaces/FridaZuley/RVC_HFKawaii/tools/infer/train-index.py b/spaces/FridaZuley/RVC_HFKawaii/tools/infer/train-index.py
deleted file mode 100644
index 44b447ef32148c181eb4bcd9013a22a82371b82c..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/tools/infer/train-index.py
+++ /dev/null
@@ -1,42 +0,0 @@
-"""
-Format: the cid is used directly as the built-in index position; the aid does not fit, so it is looked up through a dict; there are only about 50k of them anyway.
-"""
-import os
-import logging
-
-logger = logging.getLogger(__name__)
-
-import faiss
-import numpy as np
-
-# ########### If starting from raw features, save them first
-inp_root = r"E:\codes\py39\dataset\mi\2-co256"
-npys = []
-for name in sorted(list(os.listdir(inp_root))):
- phone = np.load("%s/%s" % (inp_root, name))
- npys.append(phone)
-big_npy = np.concatenate(npys, 0)
-logger.debug(big_npy.shape) # (6196072, 192)#fp32#4.43G
-np.save("infer/big_src_feature_mi.npy", big_npy)
-
-##################train+add
-# big_npy=np.load("/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/inference_f0/big_src_feature_mi.npy")
-logger.debug(big_npy.shape)
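-# "IVF512,Flat": inverted-file index with 512 coarse clusters, storing exact (flat) 256-dim vectors in each list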
-index = faiss.index_factory(256, "IVF512,Flat") # mi
-logger.info("Training...")
-index_ivf = faiss.extract_index_ivf(index) #
-index_ivf.nprobe = 9
-index.train(big_npy)
-faiss.write_index(index, "infer/trained_IVF512_Flat_mi_baseline_src_feat.index")
-logger.info("Adding...")
-index.add(big_npy)
-faiss.write_index(index, "infer/added_IVF512_Flat_mi_baseline_src_feat.index")
-"""
-Sizes (all FP32)
-big_src_feature 2.95G
- (3098036, 256)
-big_emb 4.43G
- (6196072, 192)
-big_emb is twice as large because the features are repeated and then the pitch is appended
-
-"""
diff --git "a/spaces/Frorozcol/financIA/pages/3_\360\237\217\227 Arquitectura.py" "b/spaces/Frorozcol/financIA/pages/3_\360\237\217\227 Arquitectura.py"
deleted file mode 100644
index 80cf7997be703d329342c27beb22cb3bd9c63f8c..0000000000000000000000000000000000000000
--- "a/spaces/Frorozcol/financIA/pages/3_\360\237\217\227 Arquitectura.py"
+++ /dev/null
@@ -1,32 +0,0 @@
-import streamlit as st
-
-st.header("Project architecture")
-st.write("""
-The AI application's architecture is based on a layered structure. We start with the user, who connects from a computer or a phone and reaches the presentation layer, or front end. This layer consists of two main APIs, "login/registration" and "subscription", which separate the processes related to signing in, registering, and managing subscriptions.
-
-In addition, this architecture uses a lambda function that handles sending notifications to users, which allows for greater flexibility and scalability.
-
-As for hosting, all of these components, including the APIs and the lambda function, are hosted on AWS (Amazon Web Services). Hosting the application on AWS provides benefits such as scalability, high availability, and ease of deployment.
-""")
-
-st.image("0-1 Pysentimiento_files/arquitectura.jpg", caption="Project architecture")
-
-st.write("""
-Technical support may involve a team that provides assistance and resolves any issue that may arise.
-
-After the presentation layer, the application's architecture integrates with the backend, which is responsible for unifying all of the project's main components. The backend is divided into three large subgroups.
-
-The first subgroup is hosted in the AWS cloud and is in charge of storing and managing the application's data. This part of the backend handles the storage of relevant information, such as user profiles and subscription histories, among other things.
-
-The second subgroup of the backend is the artificial intelligence part. This is where the model is used to make predictions.
-
-The third subgroup consists of a web scraping tool. This tool is used to gather relevant information from various online sources and feed it to the AI model. This part of the backend runs on an AWS instance and is in charge of extracting and preparing the data the model needs in order to work.
-
-The cloud-hosted part also includes additional functionality, such as integration with a payment gateway, which makes it possible to manage subscription charges. For the latter, a lambda function runs when a subscription ends and performs the corresponding charge.
-
-Finally, the application's architecture includes database management. The database is stored in the cloud to keep the information safe and prevent loss, since it contains data that is critical to the application's operation.
-
-The trained model is also saved in a specific format, such as pkl, h5, or pb. This format keeps the AI model in an optimal state for later use and efficient prediction. Choosing a suitable format makes it easier to load and deploy the model in the production environment.
-
-Likewise, the data collected through web scraping and other means is stored in an elastic database designed to handle constant growth in the amount of stored data. Its elasticity lets it adapt as the application gathers more information over time, ensuring that storage and query capacity are not affected.
- """)
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/corner_block_challenge.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/corner_block_challenge.py
deleted file mode 100644
index a2a9c1d450ad1864ecc88eeafa64f38a59185ac3..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/corner_block_challenge.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class CornerBlockChallenge(Task):
- """Construct two columns using eight cubes - four red, two green, and two blue.
- The columns should be constructed at two distinct marked corners of the tabletop
- using the 'corner/corner-template.urdf' asset. The first column should be constructed
- with the red cubes and the second column should use the green and blue cubes,
- with blue at the base."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 20
- self.lang_template = "construct two columns using eight cubes - four red, two green, and two blue"
- self.task_completed_desc = "done constructing columns."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add corners.
- corner_size = (0.15, 0.15, 0.01)
- corner_urdf = 'corner/corner-template.urdf'
- corner_poses = []
- for _ in range(2):
- corner_pose = self.get_random_pose(env, corner_size)
- env.add_object(corner_urdf, corner_pose, 'fixed')
- corner_poses.append(corner_pose)
-
- # Add blocks.
- block_size = (0.04, 0.04, 0.04)
- block_urdf = 'block/block.urdf'
- block_colors = [utils.COLORS['red']] * 4 + [utils.COLORS['green']] * 2 + [utils.COLORS['blue']] * 2
- blocks = []
- for i in range(8):
- block_pose = self.get_random_pose(env, block_size)
- block_id = env.add_object(block_urdf, block_pose, color=block_colors[i])
- blocks.append(block_id)
-
- # Goal: each block is stacked in the correct corner in the correct order.
- for i in range(4):
- self.add_goal(objs=[blocks[i]], matches=np.ones((1, 1)), targ_poses=[corner_poses[0]], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1/8,
- language_goal=self.lang_template)
-
- for i in range(4, 8):
- self.add_goal(objs=[blocks[i]], matches=np.ones((1, 1)), targ_poses=[corner_poses[1]], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1/8,
- language_goal=self.lang_template)
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts_finetuning/pretrain0_finetune_2.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts_finetuning/pretrain0_finetune_2.sh
deleted file mode 100644
index b719b40eb34812e1a637316aefd4ba0d0c7c20e0..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts_finetuning/pretrain0_finetune_2.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-#SBATCH -c 10
-#SBATCH -n 1
-#SBATCH -o logs/%j.out
-#SBATCH --exclusive
-STEPS=${1-'50000'}
-
-
-sh scripts/traintest_scripts/train_test_multi_task_finetune_goal.sh data \
- "[]" \
- "[place-red-in-green,stack-block-pyramid]" \
- gpt0_mixcliport2_finetune
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/__init__.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/__init__.py
deleted file mode 100644
index c6f424debd1623e7511dd77da464a6639d816745..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from .auto_augment import (AutoAugment, BrightnessTransform, ColorTransform,
- ContrastTransform, EqualizeTransform, Rotate, Shear,
- Translate)
-from .compose import Compose
-from .formating import (Collect, DefaultFormatBundle, ImageToTensor,
- ToDataContainer, ToTensor, Transpose, to_tensor)
-from .instaboost import InstaBoost
-from .loading import (LoadAnnotations, LoadImageFromFile, LoadImageFromWebcam,
- LoadMultiChannelImageFromFiles, LoadProposals)
-from .test_time_aug import MultiScaleFlipAug
-from .transforms import (Albu, CutOut, Expand, MinIoURandomCrop, Normalize,
- Pad, PhotoMetricDistortion, RandomCenterCropPad,
- RandomCrop, RandomFlip, Resize, SegRescale)
-
-__all__ = [
- 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer',
- 'Transpose', 'Collect', 'DefaultFormatBundle', 'LoadAnnotations',
- 'LoadImageFromFile', 'LoadImageFromWebcam',
- 'LoadMultiChannelImageFromFiles', 'LoadProposals', 'MultiScaleFlipAug',
- 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', 'Normalize', 'SegRescale',
- 'MinIoURandomCrop', 'Expand', 'PhotoMetricDistortion', 'Albu',
- 'InstaBoost', 'RandomCenterCropPad', 'AutoAugment', 'CutOut', 'Shear',
- 'Rotate', 'ColorTransform', 'EqualizeTransform', 'BrightnessTransform',
- 'ContrastTransform', 'Translate'
-]
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/gfl.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/gfl.py
deleted file mode 100644
index 64d65cb2dfb7a56f57e08c3fcad67e1539e1e841..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/gfl.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class GFL(SingleStageDetector):
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(GFL, self).__init__(backbone, neck, bbox_head, train_cfg,
- test_cfg, pretrained)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/emanet/emanet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/emanet/emanet_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 73b7788bf924be2e1588596a88f0155ddc37358e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/emanet/emanet_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/emanet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index 09a5fe5468f0155f8fd0bf2cd1574a33624d8492..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fcn_r50-d8_512x512_20k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/quantization/__init__.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/quantization/__init__.py
deleted file mode 100644
index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/quantization/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .vq import ResidualVectorQuantizer
-from .base import BaseQuantizer, DummyQuantizer, QuantizedResult
diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/utils/utils_model.py b/spaces/Grezz/generate_human_motion/VQ-Trans/utils/utils_model.py
deleted file mode 100644
index b3653a47ddb96f2ba27aae73b4eef8be904e9bf0..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/VQ-Trans/utils/utils_model.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import numpy as np
-import torch
-import torch.optim as optim
-import logging
-import os
-import sys
-
-def getCi(accLog):
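-    """Return the mean of accLog and the half-width of its 95% confidence interval."""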
-
- mean = np.mean(accLog)
- std = np.std(accLog)
- ci95 = 1.96*std/np.sqrt(len(accLog))
-
- return mean, ci95
-
-def get_logger(out_dir):
- logger = logging.getLogger('Exp')
- logger.setLevel(logging.INFO)
- formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
-
- file_path = os.path.join(out_dir, "run.log")
- file_hdlr = logging.FileHandler(file_path)
- file_hdlr.setFormatter(formatter)
-
- strm_hdlr = logging.StreamHandler(sys.stdout)
- strm_hdlr.setFormatter(formatter)
-
- logger.addHandler(file_hdlr)
- logger.addHandler(strm_hdlr)
- return logger
-
-## Optimizer
-def initial_optim(decay_option, lr, weight_decay, net, optimizer) :
-
- if optimizer == 'adamw' :
- optimizer_adam_family = optim.AdamW
- elif optimizer == 'adam' :
- optimizer_adam_family = optim.Adam
- if decay_option == 'all':
- #optimizer = optimizer_adam_family(net.parameters(), lr=lr, betas=(0.9, 0.999), weight_decay=weight_decay)
- optimizer = optimizer_adam_family(net.parameters(), lr=lr, betas=(0.5, 0.9), weight_decay=weight_decay)
-
- elif decay_option == 'noVQ':
- all_params = set(net.parameters())
- no_decay = set([net.vq_layer])
-
- decay = all_params - no_decay
- optimizer = optimizer_adam_family([
- {'params': list(no_decay), 'weight_decay': 0},
- {'params': list(decay), 'weight_decay' : weight_decay}], lr=lr)
-
- return optimizer
-
-
-def get_motion_with_trans(motion, velocity) :
- '''
-    motion : torch.tensor, shape (batch_size, T, 72), with the global translation set to 0
-    velocity : torch.tensor, shape (batch_size, T, 3), the root velocity used to recover the global translation
-
- '''
- trans = torch.cumsum(velocity, dim=1)
- trans = trans - trans[:, :1] ## the first root is initialized at 0 (just for visualization)
- trans = trans.repeat((1, 1, 21))
- motion_with_trans = motion + trans
- return motion_with_trans
-
\ No newline at end of file
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/roformer/tokenization_roformer.py b/spaces/HaloMaster/chinesesummary/fengshen/models/roformer/tokenization_roformer.py
deleted file mode 100644
index 9b9267367e256b46fccc0ad196c326d28c0ebb0c..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/models/roformer/tokenization_roformer.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from transformers import BertTokenizer as RoFormerTokenizer
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/__init__.py
deleted file mode 100644
index 2dec07c8fb965677ba8c8d3b0a13809d0199d301..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .ngram_utils import ZenNgramDict, NGRAM_DICT_NAME
-from .modeling import ZenConfig, ZenModel, ZenForPreTraining, ZenForTokenClassification, ZenForSequenceClassification
-from .tokenization import BertTokenizer, BasicTokenizer, WordpieceTokenizer
-version = "0.1.0"
-__all__ = ['ZenNgramDict', 'NGRAM_DICT_NAME', "ZenConfig", "ZenModel", "ZenForPreTraining", "ZenForTokenClassification",
- "ZenForSequenceClassification", "BertTokenizer", "BasicTokenizer", "WordpieceTokenizer"]
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/docs/vctk_example.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/docs/vctk_example.md
deleted file mode 100644
index 2ba78f3f73d6ea30f9de89150fbbc9dd5923b6fa..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/docs/vctk_example.md
+++ /dev/null
@@ -1,51 +0,0 @@
-[[Back]](..)
-
-# VCTK
-
-[VCTK](https://datashare.ed.ac.uk/handle/10283/3443) is an open multi-speaker English speech corpus. We provide examples
-for building [Transformer](https://arxiv.org/abs/1809.08895) models on this dataset.
-
-
-## Data preparation
-Download data, create splits and generate audio manifests with
-```bash
-python -m examples.speech_synthesis.preprocessing.get_vctk_audio_manifest \
- --output-data-root ${AUDIO_DATA_ROOT} \
- --output-manifest-root ${AUDIO_MANIFEST_ROOT}
-```
-
-Then, extract log-Mel spectrograms, generate feature manifest and create data configuration YAML with
-```bash
-python -m examples.speech_synthesis.preprocessing.get_feature_manifest \
- --audio-manifest-root ${AUDIO_MANIFEST_ROOT} \
- --output-root ${FEATURE_MANIFEST_ROOT} \
- --ipa-vocab --use-g2p
-```
-where we use phoneme inputs (`--ipa-vocab --use-g2p`) as an example.
-
-To denoise audio and trim leading/trailing silence using signal processing based VAD, run
-```bash
-for SPLIT in dev test train; do
- python -m examples.speech_synthesis.preprocessing.denoise_and_vad_audio \
- --audio-manifest ${AUDIO_MANIFEST_ROOT}/${SPLIT}.audio.tsv \
- --output-dir ${PROCESSED_DATA_ROOT} \
- --denoise --vad --vad-agg-level 3
-done
-```
-
-## Training
-(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#transformer).)
-
-## Inference
-(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#inference).)
-
-## Automatic Evaluation
-(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#automatic-evaluation).)
-
-## Results
-
-| --arch | Params | Test MCD | Model |
-|---|---|---|---|
-| tts_transformer | 54M | 3.4 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/vctk_transformer_phn.tar) |
-
-[[Back]](..)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/fairseq_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/fairseq_dataset.py
deleted file mode 100644
index 23e6992dbaf34e52f2fdcd0c8fc418c93744ea4e..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/fairseq_dataset.py
+++ /dev/null
@@ -1,205 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import numpy as np
-import torch.utils.data
-from fairseq.data import data_utils
-
-logger = logging.getLogger(__name__)
-
-
-class EpochListening:
- """Mixin for receiving updates whenever the epoch increments."""
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- """
- Whether we can reuse the :class:`fairseq.data.EpochBatchIterator` for
- this dataset across epochs.
-
- This needs to return ``False`` if the sample sizes can change across
- epochs, in which case we may need to regenerate batches at each epoch.
-        If your dataset relies on ``set_epoch`` then you should consider setting
- this to ``False``.
- """
- return True
-
- def set_epoch(self, epoch):
- """Will receive the updated epoch number at the beginning of the epoch."""
- pass
-
-
-class FairseqDataset(torch.utils.data.Dataset, EpochListening):
- """A dataset that provides helpers for batching."""
-
- def __getitem__(self, index):
- raise NotImplementedError
-
- def __len__(self):
- raise NotImplementedError
-
- def collater(self, samples):
- """Merge a list of samples to form a mini-batch.
-
- Args:
- samples (List[dict]): samples to collate
-
- Returns:
- dict: a mini-batch suitable for forwarding with a Model
- """
- raise NotImplementedError
-
- def num_tokens(self, index):
- """Return the number of tokens in a sample. This value is used to
- enforce ``--max-tokens`` during batching."""
- raise NotImplementedError
-
- def num_tokens_vec(self, indices):
- """Return the number of tokens for a set of positions defined by indices.
- This value is used to enforce ``--max-tokens`` during batching."""
- raise NotImplementedError
-
- def size(self, index):
- """Return an example's size as a float or tuple. This value is used when
- filtering a dataset with ``--max-positions``."""
- raise NotImplementedError
-
- def ordered_indices(self):
- """Return an ordered list of indices. Batches will be constructed based
- on this order."""
- return np.arange(len(self), dtype=np.int64)
-
- @property
- def supports_prefetch(self):
- """Whether this dataset supports prefetching."""
- return False
-
- def attr(self, attr: str, index: int):
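-        """Return the dataset attribute named *attr*, or None if it is not set (*index* is unused in the base class)."""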
- return getattr(self, attr, None)
-
- def prefetch(self, indices):
- """Prefetch the data required for this epoch."""
- raise NotImplementedError
-
- def get_batch_shapes(self):
- """
- Return a list of valid batch shapes, for example::
-
- [(8, 512), (16, 256), (32, 128)]
-
- The first dimension of each tuple is the batch size and can be ``None``
- to automatically infer the max batch size based on ``--max-tokens``.
- The second dimension of each tuple is the max supported length as given
- by :func:`fairseq.data.FairseqDataset.num_tokens`.
-
- This will be used by :func:`fairseq.data.FairseqDataset.batch_by_size`
- to restrict batch shapes. This is useful on TPUs to avoid too many
- dynamic shapes (and recompilations).
- """
- return None
-
- def batch_by_size(
- self,
- indices,
- max_tokens=None,
- max_sentences=None,
- required_batch_size_multiple=1,
- ):
- """
- Given an ordered set of indices, return batches according to
- *max_tokens*, *max_sentences* and *required_batch_size_multiple*.
- """
- from fairseq.data import data_utils
-
- fixed_shapes = self.get_batch_shapes()
- if fixed_shapes is not None:
-
- def adjust_bsz(bsz, num_tokens):
- if bsz is None:
- assert max_tokens is not None, "Must specify --max-tokens"
- bsz = max_tokens // num_tokens
- if max_sentences is not None:
- bsz = min(bsz, max_sentences)
- elif (
- bsz >= required_batch_size_multiple
- and bsz % required_batch_size_multiple != 0
- ):
- bsz -= bsz % required_batch_size_multiple
- return bsz
-
- fixed_shapes = np.array(
- [
- [adjust_bsz(bsz, num_tokens), num_tokens]
- for (bsz, num_tokens) in fixed_shapes
- ]
- )
-
- try:
- num_tokens_vec = self.num_tokens_vec(indices).astype('int64')
- except NotImplementedError:
- num_tokens_vec = None
-
- return data_utils.batch_by_size(
- indices,
- num_tokens_fn=self.num_tokens,
- num_tokens_vec=num_tokens_vec,
- max_tokens=max_tokens,
- max_sentences=max_sentences,
- required_batch_size_multiple=required_batch_size_multiple,
- fixed_shapes=fixed_shapes,
- )
-
- def filter_indices_by_size(self, indices, max_sizes):
- """
- Filter a list of sample indices. Remove those that are longer than
- specified in *max_sizes*.
-
- WARNING: don't update, override method in child classes
-
- Args:
- indices (np.array): original array of sample indices
- max_sizes (int or list[int] or tuple[int]): max sample size,
- can be defined separately for src and tgt (then list or tuple)
-
- Returns:
- np.array: filtered sample array
- list: list of removed indices
- """
- if isinstance(max_sizes, float) or isinstance(max_sizes, int):
- if hasattr(self, "sizes") and isinstance(self.sizes, np.ndarray):
- ignored = indices[self.sizes[indices] > max_sizes].tolist()
- indices = indices[self.sizes[indices] <= max_sizes]
- elif (
- hasattr(self, "sizes")
- and isinstance(self.sizes, list)
- and len(self.sizes) == 1
- ):
- ignored = indices[self.sizes[0][indices] > max_sizes].tolist()
- indices = indices[self.sizes[0][indices] <= max_sizes]
- else:
- indices, ignored = data_utils._filter_by_size_dynamic(
- indices, self.size, max_sizes
- )
- else:
- indices, ignored = data_utils._filter_by_size_dynamic(
- indices, self.size, max_sizes
- )
- return indices, ignored
-
- @property
- def supports_fetch_outside_dataloader(self):
- """Whether this dataset supports fetching outside the workers of the dataloader."""
- return True
-
-
-class FairseqIterableDataset(torch.utils.data.IterableDataset, EpochListening):
- """
- For datasets that need to be read sequentially, usually because the data is
- being streamed or otherwise can't be manipulated on a single machine.
- """
-
- def __iter__(self):
- raise NotImplementedError
diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/__init__.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/__init__.py
deleted file mode 100644
index 47a4dbf3177302af6b8e7d08b0b78343b1329efa..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import pkg_resources
-
-__version__ = pkg_resources.get_distribution("monotonic_align").version
-
-from monotonic_align.mas import *
diff --git a/spaces/Hexamind/QnA/src/model/doc.py b/spaces/Hexamind/QnA/src/model/doc.py
deleted file mode 100644
index 886de8a1987f0aa28522d92610725167f7ab4fad..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/QnA/src/model/doc.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import docx
-
-from src.model.container import Container
-from src.model.paragraph import Paragraph
-
-
-class Doc:
-
- def __init__(self, path='', id_=None):
-
- self.xdoc = docx.Document(path)
- self.title = path.split('/')[-1]
- self.id_ = id(self)
- self.path = path
- paragraphs = [Paragraph(xp, self.id_, i) for (i, xp) in enumerate(self.xdoc.paragraphs)]
- self.container = Container(paragraphs, father=self, level=0)
- self.blocks = self.get_blocks()
-
- @property
- def structure(self):
-
- return self.container.structure
-
- def get_blocks(self):
-
- def from_list_to_str(index_list):
- index_str = str(index_list[0])
- for el in index_list[1:]:
- index_str += '.' + str(el)
- return index_str
-
-        blocks = self.container.blocks
-        for block in blocks:
-            block.doc = self.title
-            block.index = from_list_to_str(block.index)
-        # drop top-level blocks; build a new list instead of removing items while iterating
-        blocks = [block for block in blocks if block.level != 0]
-        return blocks
-"""
- current_level = len(current_index)
- if 0 < block.level:
- if block.level == current_level:
- current_index[-1] += 1
- elif current_level < block.level:
- current_index.append(1)
- elif block.level < current_level:
- current_index = current_index[:block.level]
- current_index[-1] += 1
- block.index = from_list_to_str(current_index)
- else:
- block.index = "0"
-"""
-
diff --git a/spaces/HighCWu/anime-colorization-with-hint/app.py b/spaces/HighCWu/anime-colorization-with-hint/app.py
deleted file mode 100644
index c04c45d2323aa84955cbfaabf5885a13e1b00b45..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/app.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import sys
-from typing import Dict
-sys.path.insert(0, 'gradio-modified')
-
-import gradio as gr
-import numpy as np
-
-from PIL import Image
-
-import torch
-
-if torch.cuda.is_available():
- t = torch.cuda.get_device_properties(0).total_memory
- r = torch.cuda.memory_reserved(0)
- a = torch.cuda.memory_allocated(0)
- f = t-a # free inside reserved
- if f < 2**32:
- device = 'cpu'
- else:
- device = 'cuda'
-else:
- device = 'cpu'
- torch._C._jit_set_bailout_depth(0)
-
-print('Use device:', device)
-
-
-net = torch.jit.load(f'weights/pkp-v1.{device}.jit.pt')
-
-
-def resize_original(img: Image.Image):
- if img is None:
- return img
- if isinstance(img, dict):
- img = img["image"]
-
- guide_img = img.convert('L')
- w, h = guide_img.size
- scale = 256 / min(guide_img.size)
- guide_img = guide_img.resize([int(round(s*scale)) for s in guide_img.size], Image.Resampling.LANCZOS)
-
- guide = np.asarray(guide_img)
- h, w = guide.shape[-2:]
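-    # pad the grayscale guide with white so that height and width become multiples of 64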
- rows = int(np.ceil(h/64))*64
- cols = int(np.ceil(w/64))*64
- ph_1 = (rows-h) // 2
- ph_2 = rows-h - (rows-h) // 2
- pw_1 = (cols-w) // 2
- pw_2 = cols-w - (cols-w) // 2
- guide = np.pad(guide, ((ph_1, ph_2), (pw_1, pw_2)), mode='constant', constant_values=255)
- guide_img = Image.fromarray(guide)
-
- return gr.Image.update(value=guide_img.convert('RGBA')), guide_img.convert('RGBA')
-
-
-def colorize(img: Dict[str, Image.Image], guide_img: Image.Image, seed: int, hint_mode: str):
- if not isinstance(img, dict):
- return gr.update(visible=True)
-
- if hint_mode == "Roughly Hint":
- hint_mode_int = 0
- elif hint_mode == "Precisely Hint":
- hint_mode_int = 1
-
- guide_img = guide_img.convert('L')
- hint_img = img["mask"].convert('RGBA') # I modified gradio to enable it upload colorful mask
-
- guide = torch.from_numpy(np.asarray(guide_img))[None,None].float().to(device) / 255.0 * 2 - 1
- hint = torch.from_numpy(np.asarray(hint_img)).permute(2,0,1)[None].float().to(device) / 255.0 * 2 - 1
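-    # only (nearly) opaque pixels count as color hints; all other pixels are pushed to -2 (no hint)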
- hint_alpha = (hint[:,-1:] > 0.99).float()
- hint = hint[:,:3] * hint_alpha - 2 * (1 - hint_alpha)
-
- np.random.seed(int(seed))
- b, c, h, w = hint.shape
- h //= 8
- w //= 8
- noises = [torch.from_numpy(np.random.randn(b, c, h, w)).float().to(device) for _ in range(16+1)]
-
- with torch.inference_mode():
- sample = net(noises, guide, hint, hint_mode_int)
- out = sample[0].cpu().numpy().transpose([1,2,0])
- out = np.uint8(((out + 1) / 2 * 255).clip(0,255))
-
- return Image.fromarray(out).convert('RGB')
-
-
-with gr.Blocks() as demo:
-    gr.Markdown('''
-# Anime Colorization With Hint
-
-Colorize your anime sketches with hint points.
-
-This is a modified version of
-HighCWu/pixel-guide-diffusion-for-anime-colorization
-with hint points inputs.
-''')
- with gr.Row():
- with gr.Column():
- inp = gr.Image(
- source="upload",
- tool="sketch", # tool="color-sketch", # color-sketch upload image mixed with the original
- type="pil",
- label="Sketch",
- interactive=True,
- elem_id="sketch-canvas"
- )
- inp_store = gr.Image(
- type="pil",
- interactive=False
- )
- inp_store.visible = False
- with gr.Column():
- seed = gr.Slider(1, 2**32, step=1, label="Seed", interactive=True, randomize=True)
- hint_mode = gr.Radio(["Roughly Hint", "Precisely Hint"], value="Roughly Hint", label="Hint Mode")
- btn = gr.Button("Run")
- with gr.Column():
- output = gr.Image(type="pil", label="Output", interactive=False)
- gr.Markdown('''
-PS: This works worse than the no-hint version, I think, probably because my model is underfitting in the super-resolution part.
-I modified the gradio code a little to allow uploading the colorful hint points.
-''')
- inp.upload(
- resize_original,
- inp,
- [inp, inp_store],
- )
- btn.click(
- colorize,
- [inp, inp_store, seed, hint_mode],
- output
- )
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/HuggingFaceH4/open_llm_leaderboard/src/filters.py b/spaces/HuggingFaceH4/open_llm_leaderboard/src/filters.py
deleted file mode 100644
index 170731f96886dd340cd630dfed65e2092bd04e89..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/open_llm_leaderboard/src/filters.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import huggingface_hub
-import os
-from huggingface_hub import ModelCard
-from transformers import AutoConfig
-
-from datetime import datetime, timedelta, timezone
-
-
-# ht to @Wauplin, thank you for the snippet!
-# See https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/317
-def check_model_card(repo_id: str) -> tuple[bool, str]:
- # Returns operation status, and error message
- try:
- card = ModelCard.load(repo_id)
- except huggingface_hub.utils.EntryNotFoundError:
- return False, "Please add a model card to your model to explain how you trained/fine-tuned it."
-
- # Enforce license metadata
- if card.data.license is None:
- if not ("license_name" in card.data and "license_link" in card.data):
- return False, (
- "License not found. Please add a license to your model card using the `license` metadata or a"
- " `license_name`/`license_link` pair."
- )
-
- # Enforce card content
- if len(card.text) < 200:
- return False, "Please add a description to your model card, it is too short."
-
- return True, ""
-
-
-def is_model_on_hub(model_name: str, revision: str, token: str = None) -> tuple[bool, str]:
- try:
- AutoConfig.from_pretrained(model_name, revision=revision, trust_remote_code=False, token=token)
- return True, None
-
- except ValueError:
- return (
- False,
- "needs to be launched with `trust_remote_code=True`. For safety reason, we do not allow these models to be automatically submitted to the leaderboard.",
- )
-
- except Exception:
- return False, "was not found on hub!"
-
-
-def user_submission_permission(submission_name, users_to_submission_dates, rate_limit_period, rate_limit_quota):
- org_or_user, _ = submission_name.split("/")
- if org_or_user not in users_to_submission_dates:
- return True, ""
- submission_dates = sorted(users_to_submission_dates[org_or_user])
-
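-    # only submissions newer than (now - rate_limit_period days) count against the quota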
- time_limit = (datetime.now(timezone.utc) - timedelta(days=rate_limit_period)).strftime("%Y-%m-%dT%H:%M:%SZ")
- submissions_after_timelimit = [d for d in submission_dates if d > time_limit]
-
- num_models_submitted_in_period = len(submissions_after_timelimit)
- if num_models_submitted_in_period > rate_limit_quota:
- error_msg = f"Organisation or user `{org_or_user}`"
- error_msg += f"already has {num_models_submitted_in_period} model requests submitted to the leaderboard "
- error_msg += f"in the last {rate_limit_period} days.\n"
- error_msg += (
- "Please wait a couple of days before resubmitting, so that everybody can enjoy using the leaderboard 🤗"
- )
- return False, error_msg
- return True, ""
-
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/quantization_options.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/quantization_options.py
deleted file mode 100644
index b46d682c0edaeaaf2a230e51d50da2a32d4bda98..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/quantization_options.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-def parse_config_yaml(yaml_data):
- # Initialize to default options.
- quantization_options = {
- "n_centroids": {
- "Linear": ["in_features", {"*": 256}],
- "Embedding": ["embedding_dim", {"*": 256}],
- },
- "block_sizes": {
- "Linear": ["fuzzy_name", {"fc": 8, "attn": 4, "emb": 4}],
- "Embedding": ["fuzzy_name", {"emb": 8}],
- },
- "layers_to_quantize": [
- "decoder\\.layers\\.\\d+\\.fc[12]",
- "decoder\\.embed_tokens\\.embeddings\\.[012]\\.[01]",
- "decoder\\.layers\\.\\d+\\.self_attn\\.(k_proj|v_proj|q_proj|out_proj)",
- ],
- }
-
- if "n_centroids" in yaml_data:
- quantization_options["n_centroids"] = {
- layer: convert_yaml_to_tuple(layer_data)
- for layer, layer_data in yaml_data["n_centroids"].items()
- }
- if "block_sizes" in yaml_data:
- quantization_options["block_sizes"] = {
- layer: convert_yaml_to_tuple(layer_data)
- for layer, layer_data in yaml_data["block_sizes"].items()
- }
- if "layers_to_quantize" in yaml_data:
- quantization_options["layers_to_quantize"] = yaml_data["layers_to_quantize"]
-
- return quantization_options
-
-
-def convert_yaml_to_tuple(yaml_dictionary):
- """Converts a yaml dictionary with two keys: `key` and `value` into a two
- argument tuple of those values."""
- return (yaml_dictionary["key"], yaml_dictionary["value"])
diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/ngu_dialect.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/ngu_dialect.py
deleted file mode 100644
index ce3e12bbf0469426872eed5f681985d3e1be9b26..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/ngu_dialect.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import re
-import opencc
-
-
-dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou',
- 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing',
- 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang',
- 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan',
- 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen',
- 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'}
-
-converters = {}
-
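-# build an OpenCC converter for each dialect; dialects whose conversion config is unavailable are skipped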
-for dialect in dialects.values():
- try:
- converters[dialect] = opencc.OpenCC(dialect)
- except:
- pass
-
-
-def ngu_dialect_to_ipa(text, dialect):
- dialect = dialects[dialect]
- text = converters[dialect].convert(text).replace('-','').replace('$',' ')
-    text = re.sub(r'[、;:]', ',', text)
-    text = re.sub(r'\s*,\s*', ', ', text)
-    text = re.sub(r'\s*。\s*', '. ', text)
-    text = re.sub(r'\s*?\s*', '? ', text)
-    text = re.sub(r'\s*!\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
diff --git a/spaces/Ikaros521/moe-tts/attentions.py b/spaces/Ikaros521/moe-tts/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/moe-tts/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-        # Concat extra elements so that the shape adds up to (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the last (column) dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_lms_discrete_flax.py b/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_lms_discrete_flax.py
deleted file mode 100644
index 5da43be2ada3d471e4c146538c64d50c3700161f..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_lms_discrete_flax.py
+++ /dev/null
@@ -1,242 +0,0 @@
-# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import flax
-import jax.numpy as jnp
-from scipy import integrate
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from .scheduling_utils_flax import (
- _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS,
- FlaxSchedulerMixin,
- FlaxSchedulerOutput,
- broadcast_to_shape_from_left,
-)
-
-
-@flax.struct.dataclass
-class LMSDiscreteSchedulerState:
- # setable values
- num_inference_steps: Optional[int] = None
- timesteps: Optional[jnp.ndarray] = None
- sigmas: Optional[jnp.ndarray] = None
- derivatives: jnp.ndarray = jnp.array([])
-
- @classmethod
- def create(cls, num_train_timesteps: int, sigmas: jnp.ndarray):
- return cls(timesteps=jnp.arange(0, num_train_timesteps)[::-1], sigmas=sigmas)
-
-
-@dataclass
-class FlaxLMSSchedulerOutput(FlaxSchedulerOutput):
- state: LMSDiscreteSchedulerState
-
-
-class FlaxLMSDiscreteScheduler(FlaxSchedulerMixin, ConfigMixin):
- """
- Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by
- Katherine Crowson:
- https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear` or `scaled_linear`.
- trained_betas (`jnp.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- """
-
- _compatibles = _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
-
- @property
- def has_state(self):
- return True
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[jnp.ndarray] = None,
- ):
- if trained_betas is not None:
- self.betas = jnp.asarray(trained_betas)
- elif beta_schedule == "linear":
- self.betas = jnp.linspace(beta_start, beta_end, num_train_timesteps, dtype=jnp.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = jnp.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=jnp.float32) ** 2
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = jnp.cumprod(self.alphas, axis=0)
-
- def create_state(self):
- self.state = LMSDiscreteSchedulerState.create(
- num_train_timesteps=self.config.num_train_timesteps,
- sigmas=((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5,
- )
-
- def scale_model_input(self, state: LMSDiscreteSchedulerState, sample: jnp.ndarray, timestep: int) -> jnp.ndarray:
- """
- Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the K-LMS algorithm.
-
- Args:
- state (`LMSDiscreteSchedulerState`):
- the `FlaxLMSDiscreteScheduler` state data class instance.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
- timestep (`int`):
- current discrete timestep in the diffusion chain.
-
- Returns:
- `jnp.ndarray`: scaled input sample
- """
- (step_index,) = jnp.where(state.timesteps == timestep, size=1)
- sigma = state.sigmas[step_index]
- sample = sample / ((sigma**2 + 1) ** 0.5)
- return sample
-
- def get_lms_coefficient(self, state, order, t, current_order):
- """
- Compute a linear multistep coefficient.
-
-        Args:
-            order (`int`): the linear multistep order, i.e. how many previous derivatives are used.
-            t (`int`): index of the current timestep into ``state.sigmas``.
-            current_order (`int`): index of the previous derivative weighted by this coefficient.
- """
-
- def lms_derivative(tau):
- prod = 1.0
- for k in range(order):
- if current_order == k:
- continue
- prod *= (tau - state.sigmas[t - k]) / (state.sigmas[t - current_order] - state.sigmas[t - k])
- return prod
-
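- # `lms_derivative` is the Lagrange basis polynomial for `current_order`; integrating it
- # over [sigma_t, sigma_{t+1}] with scipy's quad gives the multistep coefficient applied
- # to the corresponding stored derivative in `step`.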
- integrated_coeff = integrate.quad(lms_derivative, state.sigmas[t], state.sigmas[t + 1], epsrel=1e-4)[0]
-
- return integrated_coeff
-
- def set_timesteps(
- self, state: LMSDiscreteSchedulerState, num_inference_steps: int, shape: Tuple = ()
- ) -> LMSDiscreteSchedulerState:
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- state (`LMSDiscreteSchedulerState`):
- the `FlaxLMSDiscreteScheduler` state data class instance.
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
- timesteps = jnp.linspace(self.config.num_train_timesteps - 1, 0, num_inference_steps, dtype=jnp.float32)
-
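- # the inference timesteps generally fall between integer training steps, so the matching
- # sigmas are obtained by linearly interpolating between the two neighbouring training sigmas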
- low_idx = jnp.floor(timesteps).astype(int)
- high_idx = jnp.ceil(timesteps).astype(int)
- frac = jnp.mod(timesteps, 1.0)
- sigmas = jnp.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = (1 - frac) * sigmas[low_idx] + frac * sigmas[high_idx]
- sigmas = jnp.concatenate([sigmas, jnp.array([0.0])]).astype(jnp.float32)
-
- return state.replace(
- num_inference_steps=num_inference_steps,
- timesteps=timesteps.astype(int),
- derivatives=jnp.array([]),
- sigmas=sigmas,
- )
-
- def step(
- self,
- state: LMSDiscreteSchedulerState,
- model_output: jnp.ndarray,
- timestep: int,
- sample: jnp.ndarray,
- order: int = 4,
- return_dict: bool = True,
- ) -> Union[FlaxLMSSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- state (`LMSDiscreteSchedulerState`): the `FlaxLMSDiscreteScheduler` state data class instance.
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
- order: the order of the multi-step inference method.
- return_dict (`bool`): option for returning tuple rather than FlaxLMSSchedulerOutput class
-
- Returns:
- [`FlaxLMSSchedulerOutput`] or `tuple`: [`FlaxLMSSchedulerOutput`] if `return_dict` is True, otherwise a
- `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- sigma = state.sigmas[timestep]
-
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
- pred_original_sample = sample - sigma * model_output
-
- # 2. Convert to an ODE derivative
- derivative = (sample - pred_original_sample) / sigma
- state = state.replace(derivatives=jnp.append(state.derivatives, derivative))
- if len(state.derivatives) > order:
- state = state.replace(derivatives=jnp.delete(state.derivatives, 0))
-
- # 3. Compute linear multistep coefficients
- order = min(timestep + 1, order)
- lms_coeffs = [self.get_lms_coefficient(state, order, timestep, curr_order) for curr_order in range(order)]
-
- # 4. Compute previous sample based on the derivatives path
- prev_sample = sample + sum(
- coeff * derivative for coeff, derivative in zip(lms_coeffs, reversed(state.derivatives))
- )
-
- if not return_dict:
- return (prev_sample, state)
-
- return FlaxLMSSchedulerOutput(prev_sample=prev_sample, state=state)
-
- def add_noise(
- self,
- state: LMSDiscreteSchedulerState,
- original_samples: jnp.ndarray,
- noise: jnp.ndarray,
- timesteps: jnp.ndarray,
- ) -> jnp.ndarray:
- sigma = state.sigmas[timesteps].flatten()
- sigma = broadcast_to_shape_from_left(sigma, noise.shape)
-
- noisy_samples = original_samples + noise * sigma
-
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/data/data_util.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/data/data_util.py
deleted file mode 100644
index 63b1bce8e089485182c962e830a163d6d0059da8..0000000000000000000000000000000000000000
--- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/data/data_util.py
+++ /dev/null
@@ -1,305 +0,0 @@
-import cv2
-import numpy as np
-import torch
-from os import path as osp
-from torch.nn import functional as F
-
-from basicsr.data.transforms import mod_crop
-from basicsr.utils import img2tensor, scandir
-
-
-def read_img_seq(path, require_mod_crop=False, scale=1):
- """Read a sequence of images from a given folder path.
-
- Args:
- path (list[str] | str): List of image paths or image folder path.
- require_mod_crop (bool): Require mod crop for each image.
- Default: False.
- scale (int): Scale factor for mod_crop. Default: 1.
-
- Returns:
- Tensor: size (t, c, h, w), RGB, [0, 1].
- """
- if isinstance(path, list):
- img_paths = path
- else:
- img_paths = sorted(list(scandir(path, full_path=True)))
- imgs = [cv2.imread(v).astype(np.float32) / 255. for v in img_paths]
- if require_mod_crop:
- imgs = [mod_crop(img, scale) for img in imgs]
- imgs = img2tensor(imgs, bgr2rgb=True, float32=True)
- imgs = torch.stack(imgs, dim=0)
- return imgs
-
-
-def generate_frame_indices(crt_idx, max_frame_num, num_frames, padding='reflection'):
- """Generate an index list for reading `num_frames` frames from a sequence
- of images.
-
- Args:
- crt_idx (int): Current center index.
- max_frame_num (int): Max frame number of the sequence of images (counting from 1).
- num_frames (int): Reading num_frames frames.
- padding (str): Padding mode, one of
- 'replicate' | 'reflection' | 'reflection_circle' | 'circle'
- Examples: current_idx = 0, num_frames = 5
- The generated frame indices under different padding mode:
- replicate: [0, 0, 0, 1, 2]
- reflection: [2, 1, 0, 1, 2]
- reflection_circle: [4, 3, 0, 1, 2]
- circle: [3, 4, 0, 1, 2]
-
- Returns:
- list[int]: A list of indices.
- """
- assert num_frames % 2 == 1, 'num_frames should be an odd number.'
- assert padding in ('replicate', 'reflection', 'reflection_circle', 'circle'), f'Wrong padding mode: {padding}.'
-
- max_frame_num = max_frame_num - 1 # start from 0
- num_pad = num_frames // 2
-
- indices = []
- for i in range(crt_idx - num_pad, crt_idx + num_pad + 1):
- if i < 0:
- if padding == 'replicate':
- pad_idx = 0
- elif padding == 'reflection':
- pad_idx = -i
- elif padding == 'reflection_circle':
- pad_idx = crt_idx + num_pad - i
- else:
- pad_idx = num_frames + i
- elif i > max_frame_num:
- if padding == 'replicate':
- pad_idx = max_frame_num
- elif padding == 'reflection':
- pad_idx = max_frame_num * 2 - i
- elif padding == 'reflection_circle':
- pad_idx = (crt_idx - num_pad) - (i - max_frame_num)
- else:
- pad_idx = i - num_frames
- else:
- pad_idx = i
- indices.append(pad_idx)
- return indices
-
-
-def paired_paths_from_lmdb(folders, keys):
- """Generate paired paths from lmdb files.
-
- Contents of lmdb. Taking `lq.lmdb` as an example, the file structure is:
-
- lq.lmdb
- ├── data.mdb
- ├── lock.mdb
- ├── meta_info.txt
-
- The data.mdb and lock.mdb are standard lmdb files and you can refer to
- https://lmdb.readthedocs.io/en/release/ for more details.
-
- The meta_info.txt is a txt file recording the meta information
- of our datasets. It is created automatically when preparing
- datasets with our provided dataset tools.
- Each line in the txt file records
- 1) image name (with extension),
- 2) image shape,
- 3) compression level, separated by a white space.
- Example: `baboon.png (120,125,3) 1`
-
- We use the image name without extension as the lmdb key.
- Note that we use the same key for the corresponding lq and gt images.
-
- Args:
- folders (list[str]): A list of folder path. The order of list should
- be [input_folder, gt_folder].
- keys (list[str]): A list of keys identifying folders. The order should
- be consistent with folders, e.g., ['lq', 'gt'].
- Note that this key is different from lmdb keys.
-
- Returns:
- list[str]: Returned path list.
- """
- assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '
- f'But got {len(folders)}')
- assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. ' f'But got {len(keys)}')
- input_folder, gt_folder = folders
- input_key, gt_key = keys
-
- if not (input_folder.endswith('.lmdb') and gt_folder.endswith('.lmdb')):
- raise ValueError(f'{input_key} folder and {gt_key} folder should both be in lmdb '
- f'format. But received {input_key}: {input_folder}; '
- f'{gt_key}: {gt_folder}')
- # ensure that the two meta_info files are the same
- with open(osp.join(input_folder, 'meta_info.txt')) as fin:
- input_lmdb_keys = [line.split('.')[0] for line in fin]
- with open(osp.join(gt_folder, 'meta_info.txt')) as fin:
- gt_lmdb_keys = [line.split('.')[0] for line in fin]
- if set(input_lmdb_keys) != set(gt_lmdb_keys):
- raise ValueError(f'Keys in {input_key}_folder and {gt_key}_folder are different.')
- else:
- paths = []
- for lmdb_key in sorted(input_lmdb_keys):
- paths.append(dict([(f'{input_key}_path', lmdb_key), (f'{gt_key}_path', lmdb_key)]))
- return paths
-
-
-def paired_paths_from_meta_info_file(folders, keys, meta_info_file, filename_tmpl):
- """Generate paired paths from an meta information file.
-
- Each line in the meta information file contains the image names and
- image shape (usually for gt), separated by a white space.
-
- Example of a meta information file:
- ```
- 0001_s001.png (480,480,3)
- 0001_s002.png (480,480,3)
- ```
-
- Args:
- folders (list[str]): A list of folder path. The order of list should
- be [input_folder, gt_folder].
- keys (list[str]): A list of keys identifying folders. The order should
- be consistent with folders, e.g., ['lq', 'gt'].
- meta_info_file (str): Path to the meta information file.
- filename_tmpl (str): Template for each filename. Note that the
- template excludes the file extension. Usually the filename_tmpl is
- for files in the input folder.
-
- Returns:
- list[str]: Returned path list.
- """
- assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '
- f'But got {len(folders)}')
- assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. ' f'But got {len(keys)}')
- input_folder, gt_folder = folders
- input_key, gt_key = keys
-
- with open(meta_info_file, 'r') as fin:
- gt_names = [line.split(' ')[0] for line in fin]
-
- paths = []
- for gt_name in gt_names:
- basename, ext = osp.splitext(osp.basename(gt_name))
- input_name = f'{filename_tmpl.format(basename)}{ext}'
- input_path = osp.join(input_folder, input_name)
- gt_path = osp.join(gt_folder, gt_name)
- paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)]))
- return paths
-
-
-def paired_paths_from_folder(folders, keys, filename_tmpl):
- """Generate paired paths from folders.
-
- Args:
- folders (list[str]): A list of folder path. The order of list should
- be [input_folder, gt_folder].
- keys (list[str]): A list of keys identifying folders. The order should
- be consistent with folders, e.g., ['lq', 'gt'].
- filename_tmpl (str): Template for each filename. Note that the
- template excludes the file extension. Usually the filename_tmpl is
- for files in the input folder.
-
- Returns:
- list[str]: Returned path list.
- """
- assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '
- f'But got {len(folders)}')
- assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. ' f'But got {len(keys)}')
- input_folder, gt_folder = folders
- input_key, gt_key = keys
-
- input_paths = list(scandir(input_folder))
- gt_paths = list(scandir(gt_folder))
- assert len(input_paths) == len(gt_paths), (f'{input_key} and {gt_key} datasets have different number of images: '
- f'{len(input_paths)}, {len(gt_paths)}.')
- paths = []
- for gt_path in gt_paths:
- basename, ext = osp.splitext(osp.basename(gt_path))
- input_name = f'{filename_tmpl.format(basename)}{ext}'
- input_path = osp.join(input_folder, input_name)
- assert input_name in input_paths, (f'{input_name} is not in ' f'{input_key}_paths.')
- gt_path = osp.join(gt_folder, gt_path)
- paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)]))
- return paths
-
-
-def paths_from_folder(folder):
- """Generate paths from folder.
-
- Args:
- folder (str): Folder path.
-
- Returns:
- list[str]: Returned path list.
- """
-
- paths = list(scandir(folder))
- paths = [osp.join(folder, path) for path in paths]
- return paths
-
-
-def paths_from_lmdb(folder):
- """Generate paths from lmdb.
-
- Args:
- folder (str): Folder path.
-
- Returns:
- list[str]: Returned path list.
- """
- if not folder.endswith('.lmdb'):
- raise ValueError(f'Folder {folder} should be in lmdb format.')
- with open(osp.join(folder, 'meta_info.txt')) as fin:
- paths = [line.split('.')[0] for line in fin]
- return paths
-
-
-def generate_gaussian_kernel(kernel_size=13, sigma=1.6):
- """Generate Gaussian kernel used in `duf_downsample`.
-
- Args:
- kernel_size (int): Kernel size. Default: 13.
- sigma (float): Sigma of the Gaussian kernel. Default: 1.6.
-
- Returns:
- np.array: The Gaussian kernel.
- """
- from scipy.ndimage import filters as filters
- kernel = np.zeros((kernel_size, kernel_size))
- # set element at the middle to one, a dirac delta
- kernel[kernel_size // 2, kernel_size // 2] = 1
- # gaussian-smooth the dirac, resulting in a gaussian filter
- return filters.gaussian_filter(kernel, sigma)
-
-
-def duf_downsample(x, kernel_size=13, scale=4):
- """Downsamping with Gaussian kernel used in the DUF official code.
-
- Args:
- x (Tensor): Frames to be downsampled, with shape (b, t, c, h, w).
- kernel_size (int): Kernel size. Default: 13.
- scale (int): Downsampling factor. Supported scale: (2, 3, 4).
- Default: 4.
-
- Returns:
- Tensor: DUF downsampled frames.
- """
- assert scale in (2, 3, 4), f'Only support scale (2, 3, 4), but got {scale}.'
-
- squeeze_flag = False
- if x.ndim == 4:
- squeeze_flag = True
- x = x.unsqueeze(0)
- b, t, c, h, w = x.size()
- x = x.view(-1, 1, h, w)
- pad_w, pad_h = kernel_size // 2 + scale * 2, kernel_size // 2 + scale * 2
- x = F.pad(x, (pad_w, pad_w, pad_h, pad_h), 'reflect')
-
- gaussian_filter = generate_gaussian_kernel(kernel_size, 0.4 * scale)
- gaussian_filter = torch.from_numpy(gaussian_filter).type_as(x).unsqueeze(0).unsqueeze(0)
- x = F.conv2d(x, gaussian_filter, stride=scale)
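- # crop the border pixels left over from the extra reflection padding applied above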
- x = x[:, :, 2:-2, 2:-2]
- x = x.view(b, t, c, x.size(2), x.size(3))
- if squeeze_flag:
- x = x.squeeze(0)
- return x
diff --git a/spaces/Jung/ep_explorer/README.md b/spaces/Jung/ep_explorer/README.md
deleted file mode 100644
index e53ab92abfa83aa5b94521c2b74cf068d0f59647..0000000000000000000000000000000000000000
--- a/spaces/Jung/ep_explorer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Ep Explorer
-emoji: 🚀
-colorFrom: indigo
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: app.py
-pinned: false
-license: cc-by-nc-sa-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/demucs/audio.py b/spaces/Kangarroar/ApplioRVC-Inference/demucs/audio.py
deleted file mode 100644
index b29f156e4afb5fbda32c35777022caeadf50d711..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/demucs/audio.py
+++ /dev/null
@@ -1,172 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-import json
-import subprocess as sp
-from pathlib import Path
-
-import julius
-import numpy as np
-import torch
-
-from .utils import temp_filenames
-
-
-def _read_info(path):
- stdout_data = sp.check_output([
- 'ffprobe', "-loglevel", "panic",
- str(path), '-print_format', 'json', '-show_format', '-show_streams'
- ])
- return json.loads(stdout_data.decode('utf-8'))
-
-
-class AudioFile:
- """
- Allows reading audio from any format supported by ffmpeg, as well as resampling or
- converting to mono on the fly. See :method:`read` for more details.
- """
- def __init__(self, path: Path):
- self.path = Path(path)
- self._info = None
-
- def __repr__(self):
- features = [("path", self.path)]
- features.append(("samplerate", self.samplerate()))
- features.append(("channels", self.channels()))
- features.append(("streams", len(self)))
- features_str = ", ".join(f"{name}={value}" for name, value in features)
- return f"AudioFile({features_str})"
-
- @property
- def info(self):
- if self._info is None:
- self._info = _read_info(self.path)
- return self._info
-
- @property
- def duration(self):
- return float(self.info['format']['duration'])
-
- @property
- def _audio_streams(self):
- return [
- index for index, stream in enumerate(self.info["streams"])
- if stream["codec_type"] == "audio"
- ]
-
- def __len__(self):
- return len(self._audio_streams)
-
- def channels(self, stream=0):
- return int(self.info['streams'][self._audio_streams[stream]]['channels'])
-
- def samplerate(self, stream=0):
- return int(self.info['streams'][self._audio_streams[stream]]['sample_rate'])
-
- def read(self,
- seek_time=None,
- duration=None,
- streams=slice(None),
- samplerate=None,
- channels=None,
- temp_folder=None):
- """
- Slightly more efficient implementation than stempeg:
- this extracts all stems at once
- rather than looping over the file multiple times,
- once for each stream.
-
- Args:
- seek_time (float): seek time in seconds or None if no seeking is needed.
- duration (float): duration in seconds to extract or None to extract until the end.
- streams (slice, int or list): streams to extract, can be a single int, a list or
- a slice. If it is a slice or list, the output will be of size [S, C, T]
- with S the number of streams, C the number of channels and T the number of samples.
- If it is an int, the output will be [C, T].
- samplerate (int): if provided, will resample on the fly. If None, no resampling will
- be done. Original sampling rate can be obtained with :method:`samplerate`.
- channels (int): if 1, will convert to mono. We do not rely on ffmpeg for that
- as ffmpeg automatically scales by +3dB to conserve volume when playing on speakers.
- See https://sound.stackexchange.com/a/42710.
- Our definition of mono is simply the average of the two channels. Any other
- value will be ignored.
- temp_folder (str or Path or None): temporary folder to use for decoding.
-
-
- """
- streams = np.array(range(len(self)))[streams]
- single = not isinstance(streams, np.ndarray)
- if single:
- streams = [streams]
-
- if duration is None:
- target_size = None
- query_duration = None
- else:
- target_size = int((samplerate or self.samplerate()) * duration)
- query_duration = float((target_size + 1) / (samplerate or self.samplerate()))
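- # request slightly more than needed from ffmpeg; the decoded audio is trimmed
- # back to `target_size` further below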
-
- with temp_filenames(len(streams)) as filenames:
- command = ['ffmpeg', '-y']
- command += ['-loglevel', 'panic']
- if seek_time:
- command += ['-ss', str(seek_time)]
- command += ['-i', str(self.path)]
- for stream, filename in zip(streams, filenames):
- command += ['-map', f'0:{self._audio_streams[stream]}']
- if query_duration is not None:
- command += ['-t', str(query_duration)]
- command += ['-threads', '1']
- command += ['-f', 'f32le']
- if samplerate is not None:
- command += ['-ar', str(samplerate)]
- command += [filename]
-
- sp.run(command, check=True)
- wavs = []
- for filename in filenames:
- wav = np.fromfile(filename, dtype=np.float32)
- wav = torch.from_numpy(wav)
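- # the raw f32le stream is interleaved, so reshape to (samples, channels)
- # and transpose to (channels, samples)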
- wav = wav.view(-1, self.channels()).t()
- if channels is not None:
- wav = convert_audio_channels(wav, channels)
- if target_size is not None:
- wav = wav[..., :target_size]
- wavs.append(wav)
- wav = torch.stack(wavs, dim=0)
- if single:
- wav = wav[0]
- return wav
-
-
-def convert_audio_channels(wav, channels=2):
- """Convert audio to the given number of channels."""
- *shape, src_channels, length = wav.shape
- if src_channels == channels:
- pass
- elif channels == 1:
- # Case 1:
- # The caller asked for 1-channel audio, but the stream has multiple
- # channels; downmix all channels.
- wav = wav.mean(dim=-2, keepdim=True)
- elif src_channels == 1:
- # Case 2:
- # The caller asked for multiple channels, but the input file has
- # a single channel; replicate the audio over all channels.
- wav = wav.expand(*shape, channels, length)
- elif src_channels >= channels:
- # Case 3:
- # The caller asked for multiple channels, and the input file has
- # more channels than requested. In that case, return the first channels.
- wav = wav[..., :channels, :]
- else:
- # Case 4: What is a reasonable choice here?
- raise ValueError('The audio file has fewer channels than requested but is not mono.')
- return wav
-
-
-def convert_audio(wav, from_samplerate, to_samplerate, channels):
- wav = convert_audio_channels(wav, channels)
- return julius.resample_frac(wav, from_samplerate, to_samplerate)
diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/data_objects/speaker_batch.py b/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/data_objects/speaker_batch.py
deleted file mode 100644
index 56651dba5804a0c59c334e49ac18f8f5a4bfa444..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/data_objects/speaker_batch.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import numpy as np
-from typing import List
-from encoder.data_objects.speaker import Speaker
-
-class SpeakerBatch:
- def __init__(self, speakers: List[Speaker], utterances_per_speaker: int, n_frames: int):
- self.speakers = speakers
- self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
-
- # Array of shape (n_speakers * n_utterances, n_frames, mel_n), e.g. for 3 speakers with
- # 4 utterances each of 160 frames of 40 mel coefficients: (12, 160, 40)
- self.data = np.array([frames for s in speakers for _, frames, _ in self.partials[s]])
diff --git a/spaces/KevlarVK/content_summarizer/process_media.py b/spaces/KevlarVK/content_summarizer/process_media.py
deleted file mode 100644
index 566f5f3341ae40ea172b53d9258e8f679aa2afad..0000000000000000000000000000000000000000
--- a/spaces/KevlarVK/content_summarizer/process_media.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import io
-import wave
-import tensorflow as tf
-import tensorflow_io as tfio
-from pydub import AudioSegment
-from transformers import AutoProcessor, TFWhisperForConditionalGeneration
-
-# tf.config.run_functions_eagerly(True)
-
-class MediaProcessor:
-
- def __init__(self):
- self.processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
- self.model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
-
- def load_wav_16k_mono(self, file_bytes):
- """ Load a WAV file, convert it to a float tensor, resample to 16 kHz single-channel audio. """
- wav, sample_rate = tf.audio.decode_wav(
- file_bytes,
- desired_channels=1)
- wav = tf.squeeze(wav, axis=-1)
- sample_rate = tf.cast(sample_rate, dtype=tf.int64)
- wav = tfio.audio.resample(wav, rate_in=sample_rate, rate_out=16000)
- return wav.numpy()
-
- def get_text_from_audio(self, resampled_audio_data):
- # Split the resampled audio data into 30-second chunks
- chunk_size = 30 * 16000
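- # (Whisper models operate on 30-second windows of 16 kHz audio, hence the chunking)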
- audio_chunks = [resampled_audio_data[i:i+chunk_size] for i in range(0, len(resampled_audio_data), chunk_size)]
-
- text = []
- for chunk in audio_chunks:
- inputs = self.processor(chunk, sampling_rate=16000, return_tensors="tf").input_features
- predicted_ids = self.model.generate(inputs, max_new_tokens=500)
- transcription = self.processor.batch_decode(predicted_ids, skip_special_tokens=True)
- text.append(transcription[0])
-
- return " ".join(text)
-
- def get_audio_from_video(self, video_buffer):
- buffer = io.BytesIO(video_buffer)
- video_file = AudioSegment.from_file(buffer)
- audio = video_file.set_channels(1)
- with io.BytesIO() as wav_buffer:
- audio.export(wav_buffer, format="wav")
- wav_bytes = wav_buffer.getvalue()
- return wav_bytes
-
- def get_wav_from_audio(self, audio_buffer):
- buffer = io.BytesIO(audio_buffer)
- audio_file = AudioSegment.from_mp3(buffer)
- raw_data = audio_file.raw_data
- with io.BytesIO() as wav_buffer:
- with wave.open(wav_buffer, "wb") as wav_file:
- wav_file.setnchannels(audio_file.channels)
- wav_file.setsampwidth(audio_file.sample_width)
- wav_file.setframerate(audio_file.frame_rate)
- wav_file.writeframes(raw_data)
- wav_bytes = wav_buffer.getvalue()
- return wav_bytes
-
- def process_audio(self, audio_bytes):
- resampled_audio_data = self.load_wav_16k_mono(audio_bytes)
- return self.get_text_from_audio(resampled_audio_data)
-
- def process_video(self, buffer):
- audio_bytes = self.get_audio_from_video(buffer)
- return self.process_audio(audio_bytes)
-
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/paa_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/paa_head.py
deleted file mode 100644
index 3c1f453d2788b354970254e8875068e824c370d4..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/paa_head.py
+++ /dev/null
@@ -1,730 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List, Optional, Tuple
-
-import numpy as np
-import torch
-from mmengine.structures import InstanceData
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.structures.bbox import bbox_overlaps
-from mmdet.utils import (ConfigType, InstanceList, OptConfigType,
- OptInstanceList)
-from ..layers import multiclass_nms
-from ..utils import levels_to_images, multi_apply
-from . import ATSSHead
-
-EPS = 1e-12
-try:
- import sklearn.mixture as skm
-except ImportError:
- skm = None
-
-
-@MODELS.register_module()
-class PAAHead(ATSSHead):
- """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU
- Prediction for Object Detection.
-
- Code is modified from the `official github repo
- `_.
-
- More details can be found in the `paper
- `_ .
-
- Args:
- topk (int): Select topk samples with smallest loss in
- each level.
- score_voting (bool): Whether to use score voting in post-process.
- covariance_type : String describing the type of covariance parameters
- to be used in :class:`sklearn.mixture.GaussianMixture`.
- It must be one of:
-
- - 'full': each component has its own general covariance matrix
- - 'tied': all components share the same general covariance matrix
- - 'diag': each component has its own diagonal covariance matrix
- - 'spherical': each component has its own single variance
- Default: 'diag'. Moving from 'full' to 'spherical' makes the GMM fitting
- process faster but may affect performance. For most
- cases, 'diag' should be a good choice.
- """
-
- def __init__(self,
- *args,
- topk: int = 9,
- score_voting: bool = True,
- covariance_type: str = 'diag',
- **kwargs):
- # topk used in paa reassign process
- self.topk = topk
- self.with_score_voting = score_voting
- self.covariance_type = covariance_type
- super().__init__(*args, **kwargs)
-
- def loss_by_feat(
- self,
- cls_scores: List[Tensor],
- bbox_preds: List[Tensor],
- iou_preds: List[Tensor],
- batch_gt_instances: InstanceList,
- batch_img_metas: List[dict],
- batch_gt_instances_ignore: OptInstanceList = None) -> dict:
- """Calculate the loss based on the features extracted by the detection
- head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- iou_preds (list[Tensor]): iou_preds for each scale
- level with shape (N, num_anchors * 1, H, W)
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes`` and ``labels``
- attributes.
- batch_img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- batch_gt_instances_ignore (list[:obj:`InstanceData`], optional):
- Batch of gt_instances_ignore. It includes ``bboxes`` attribute
- data that is ignored during training and testing.
- Defaults to None.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
-
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.prior_generator.num_levels
-
- device = cls_scores[0].device
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, batch_img_metas, device=device)
- cls_reg_targets = self.get_targets(
- anchor_list,
- valid_flag_list,
- batch_gt_instances,
- batch_img_metas,
- batch_gt_instances_ignore=batch_gt_instances_ignore,
- )
- (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds,
- pos_gt_index) = cls_reg_targets
- cls_scores = levels_to_images(cls_scores)
- cls_scores = [
- item.reshape(-1, self.cls_out_channels) for item in cls_scores
- ]
- bbox_preds = levels_to_images(bbox_preds)
- bbox_preds = [item.reshape(-1, 4) for item in bbox_preds]
- iou_preds = levels_to_images(iou_preds)
- iou_preds = [item.reshape(-1, 1) for item in iou_preds]
- pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list,
- cls_scores, bbox_preds, labels,
- labels_weight, bboxes_target,
- bboxes_weight, pos_inds)
-
- with torch.no_grad():
- reassign_labels, reassign_label_weight, \
- reassign_bbox_weights, num_pos = multi_apply(
- self.paa_reassign,
- pos_losses_list,
- labels,
- labels_weight,
- bboxes_weight,
- pos_inds,
- pos_gt_index,
- anchor_list)
- num_pos = sum(num_pos)
- # convert all tensor list to a flatten tensor
- cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1))
- bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1))
- iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1))
- labels = torch.cat(reassign_labels, 0).view(-1)
- flatten_anchors = torch.cat(
- [torch.cat(item, 0) for item in anchor_list])
- labels_weight = torch.cat(reassign_label_weight, 0).view(-1)
- bboxes_target = torch.cat(bboxes_target,
- 0).view(-1, bboxes_target[0].size(-1))
-
- pos_inds_flatten = ((labels >= 0)
- &
- (labels < self.num_classes)).nonzero().reshape(-1)
-
- losses_cls = self.loss_cls(
- cls_scores,
- labels,
- labels_weight,
- avg_factor=max(num_pos, len(batch_img_metas))) # avoid num_pos=0
- if num_pos:
- pos_bbox_pred = self.bbox_coder.decode(
- flatten_anchors[pos_inds_flatten],
- bbox_preds[pos_inds_flatten])
- pos_bbox_target = bboxes_target[pos_inds_flatten]
- iou_target = bbox_overlaps(
- pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True)
- losses_iou = self.loss_centerness(
- iou_preds[pos_inds_flatten],
- iou_target.unsqueeze(-1),
- avg_factor=num_pos)
- losses_bbox = self.loss_bbox(
- pos_bbox_pred,
- pos_bbox_target,
- iou_target.clamp(min=EPS),
- avg_factor=iou_target.sum())
- else:
- losses_iou = iou_preds.sum() * 0
- losses_bbox = bbox_preds.sum() * 0
-
- return dict(
- loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou)
-
- def get_pos_loss(self, anchors: List[Tensor], cls_score: Tensor,
- bbox_pred: Tensor, label: Tensor, label_weight: Tensor,
- bbox_target: dict, bbox_weight: Tensor,
- pos_inds: Tensor) -> Tensor:
- """Calculate loss of all potential positive samples obtained from first
- match process.
-
- Args:
- anchors (list[Tensor]): Anchors of each scale.
- cls_score (Tensor): Box scores of single image with shape
- (num_anchors, num_classes)
- bbox_pred (Tensor): Box energies / deltas of single image
- with shape (num_anchors, 4)
- label (Tensor): classification target of each anchor with
- shape (num_anchors,)
- label_weight (Tensor): Classification loss weight of each
- anchor with shape (num_anchors).
- bbox_target (dict): Regression target of each anchor with
- shape (num_anchors, 4).
- bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- pos_inds (Tensor): Indices of all positive samples obtained from the
- first assign process.
-
- Returns:
- Tensor: Losses of all positive samples in single image.
- """
- if not len(pos_inds):
- return cls_score.new([]),
- anchors_all_level = torch.cat(anchors, 0)
- pos_scores = cls_score[pos_inds]
- pos_bbox_pred = bbox_pred[pos_inds]
- pos_label = label[pos_inds]
- pos_label_weight = label_weight[pos_inds]
- pos_bbox_target = bbox_target[pos_inds]
- pos_bbox_weight = bbox_weight[pos_inds]
- pos_anchors = anchors_all_level[pos_inds]
- pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred)
-
- # to keep loss dimension
- loss_cls = self.loss_cls(
- pos_scores,
- pos_label,
- pos_label_weight,
- avg_factor=1.0,
- reduction_override='none')
-
- loss_bbox = self.loss_bbox(
- pos_bbox_pred,
- pos_bbox_target,
- pos_bbox_weight,
- avg_factor=1.0, # keep same loss weight before reassign
- reduction_override='none')
-
- loss_cls = loss_cls.sum(-1)
- pos_loss = loss_bbox + loss_cls
- return pos_loss,
-
- def paa_reassign(self, pos_losses: Tensor, label: Tensor,
- label_weight: Tensor, bbox_weight: Tensor,
- pos_inds: Tensor, pos_gt_inds: Tensor,
- anchors: List[Tensor]) -> tuple:
- """Fit loss to GMM distribution and separate positive, ignore, negative
- samples again with GMM model.
-
- Args:
- pos_losses (Tensor): Losses of all positive samples in
- single image.
- label (Tensor): classification target of each anchor with
- shape (num_anchors,)
- label_weight (Tensor): Classification loss weight of each
- anchor with shape (num_anchors).
- bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- pos_inds (Tensor): Indices of all positive samples obtained from the
- first assign process.
- pos_gt_inds (Tensor): Gt indices of all positive samples obtained
- from the first assign process.
- anchors (list[Tensor]): Anchors of each scale.
-
- Returns:
- tuple: Usually returns a tuple containing learning targets.
-
- - label (Tensor): classification target of each anchor after
- paa assign, with shape (num_anchors,)
- - label_weight (Tensor): Classification loss weight of each
- anchor after paa assign, with shape (num_anchors).
- - bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- - num_pos (int): The number of positive samples after paa
- assign.
- """
- if not len(pos_inds):
- return label, label_weight, bbox_weight, 0
- label = label.clone()
- label_weight = label_weight.clone()
- bbox_weight = bbox_weight.clone()
- num_gt = pos_gt_inds.max() + 1
- num_level = len(anchors)
- num_anchors_each_level = [item.size(0) for item in anchors]
- num_anchors_each_level.insert(0, 0)
- inds_level_interval = np.cumsum(num_anchors_each_level)
- pos_level_mask = []
- for i in range(num_level):
- mask = (pos_inds >= inds_level_interval[i]) & (
- pos_inds < inds_level_interval[i + 1])
- pos_level_mask.append(mask)
- pos_inds_after_paa = [label.new_tensor([])]
- ignore_inds_after_paa = [label.new_tensor([])]
- for gt_ind in range(num_gt):
- pos_inds_gmm = []
- pos_loss_gmm = []
- gt_mask = pos_gt_inds == gt_ind
- for level in range(num_level):
- level_mask = pos_level_mask[level]
- level_gt_mask = level_mask & gt_mask
- value, topk_inds = pos_losses[level_gt_mask].topk(
- min(level_gt_mask.sum(), self.topk), largest=False)
- pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds])
- pos_loss_gmm.append(value)
- pos_inds_gmm = torch.cat(pos_inds_gmm)
- pos_loss_gmm = torch.cat(pos_loss_gmm)
- # GMM fitting needs at least two samples; skip this gt otherwise
- if len(pos_inds_gmm) < 2:
- continue
- device = pos_inds_gmm.device
- pos_loss_gmm, sort_inds = pos_loss_gmm.sort()
- pos_inds_gmm = pos_inds_gmm[sort_inds]
- pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy()
- min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max()
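- # initialise a two-component GMM at the min/max losses so that one component
- # captures the low-loss (likely positive) samples and the other the high-loss ones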
- means_init = np.array([min_loss, max_loss]).reshape(2, 1)
- weights_init = np.array([0.5, 0.5])
- precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full
- if self.covariance_type == 'spherical':
- precisions_init = precisions_init.reshape(2)
- elif self.covariance_type == 'diag':
- precisions_init = precisions_init.reshape(2, 1)
- elif self.covariance_type == 'tied':
- precisions_init = np.array([[1.0]])
- if skm is None:
- raise ImportError('Please run "pip install scikit-learn" '
- 'to install scikit-learn first.')
- gmm = skm.GaussianMixture(
- 2,
- weights_init=weights_init,
- means_init=means_init,
- precisions_init=precisions_init,
- covariance_type=self.covariance_type)
- gmm.fit(pos_loss_gmm)
- gmm_assignment = gmm.predict(pos_loss_gmm)
- scores = gmm.score_samples(pos_loss_gmm)
- gmm_assignment = torch.from_numpy(gmm_assignment).to(device)
- scores = torch.from_numpy(scores).to(device)
-
- pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme(
- gmm_assignment, scores, pos_inds_gmm)
- pos_inds_after_paa.append(pos_inds_temp)
- ignore_inds_after_paa.append(ignore_inds_temp)
-
- pos_inds_after_paa = torch.cat(pos_inds_after_paa)
- ignore_inds_after_paa = torch.cat(ignore_inds_after_paa)
- reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1)
- reassign_ids = pos_inds[reassign_mask]
- label[reassign_ids] = self.num_classes
- label_weight[ignore_inds_after_paa] = 0
- bbox_weight[reassign_ids] = 0
- num_pos = len(pos_inds_after_paa)
- return label, label_weight, bbox_weight, num_pos
-
- def gmm_separation_scheme(self, gmm_assignment: Tensor, scores: Tensor,
- pos_inds_gmm: Tensor) -> Tuple[Tensor, Tensor]:
- """A general separation scheme for gmm model.
-
- It separates a GMM distribution of candidate samples into three
- parts (0, 1, and an uncertain area); other separation
- schemes can be implemented by rewriting this function.
-
- Args:
- gmm_assignment (Tensor): The prediction of GMM which is of shape
- (num_samples,). The 0/1 value indicates the distribution
- that each sample comes from.
- scores (Tensor): The probability of sample coming from the
- fit GMM distribution. The tensor is of shape (num_samples,).
- pos_inds_gmm (Tensor): All the indexes of samples which are used
- to fit GMM model. The tensor is of shape (num_samples,)
-
- Returns:
- tuple[Tensor, Tensor]: The indices of positive and ignored samples.
-
- - pos_inds_temp (Tensor): Indices of positive samples.
- - ignore_inds_temp (Tensor): Indices of ignore samples.
- """
- # The implementation is (c) in Fig.3 of the original paper instead of (b).
- # You can refer to issues such as
- # https://github.com/kkhoot/PAA/issues/8 and
- # https://github.com/kkhoot/PAA/issues/9.
- fgs = gmm_assignment == 0
- pos_inds_temp = fgs.new_tensor([], dtype=torch.long)
- ignore_inds_temp = fgs.new_tensor([], dtype=torch.long)
- if fgs.nonzero().numel():
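- # scheme (c): candidates are sorted by loss, so keep as positives every sample
- # up to the one with the highest GMM score in the foreground component;
- # this scheme marks no samples as ignore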
- _, pos_thr_ind = scores[fgs].topk(1)
- pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1]
- ignore_inds_temp = pos_inds_gmm.new_tensor([])
- return pos_inds_temp, ignore_inds_temp
-
- def get_targets(self,
- anchor_list: List[List[Tensor]],
- valid_flag_list: List[List[Tensor]],
- batch_gt_instances: InstanceList,
- batch_img_metas: List[dict],
- batch_gt_instances_ignore: OptInstanceList = None,
- unmap_outputs: bool = True) -> tuple:
- """Get targets for PAA head.
-
- This method is almost the same as `AnchorHead.get_targets()`. We directly
- return the results from _get_targets_single instead of mapping them to levels
- with the images_to_levels function.
-
- Args:
- anchor_list (list[list[Tensor]]): Multi level anchors of each
- image. The outer list indicates images, and the inner list
- corresponds to feature levels of the image. Each element of
- the inner list is a tensor of shape (num_anchors, 4).
- valid_flag_list (list[list[Tensor]]): Multi level valid flags of
- each image. The outer list indicates images, and the inner list
- corresponds to feature levels of the image. Each element of
- the inner list is a tensor of shape (num_anchors, )
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes`` and ``labels``
- attributes.
- batch_img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- batch_gt_instances_ignore (list[:obj:`InstanceData`], optional):
- Batch of gt_instances_ignore. It includes ``bboxes`` attribute
- data that is ignored during training and testing.
- Defaults to None.
- unmap_outputs (bool): Whether to map outputs back to the original
- set of anchors. Defaults to True.
-
- Returns:
- tuple: Usually returns a tuple containing learning targets.
-
- - labels (list[Tensor]): Labels of all anchors, each with
- shape (num_anchors,).
- - label_weights (list[Tensor]): Label weights of all anchor.
- each with shape (num_anchors,).
- - bbox_targets (list[Tensor]): BBox targets of all anchors.
- each with shape (num_anchors, 4).
- - bbox_weights (list[Tensor]): BBox weights of all anchors.
- each with shape (num_anchors, 4).
- - pos_inds (list[Tensor]): Contains all index of positive
- sample in all anchor.
- - gt_inds (list[Tensor]): Contains all gt_index of positive
- sample in all anchor.
- """
-
- num_imgs = len(batch_img_metas)
- assert len(anchor_list) == len(valid_flag_list) == num_imgs
- concat_anchor_list = []
- concat_valid_flag_list = []
- for i in range(num_imgs):
- assert len(anchor_list[i]) == len(valid_flag_list[i])
- concat_anchor_list.append(torch.cat(anchor_list[i]))
- concat_valid_flag_list.append(torch.cat(valid_flag_list[i]))
-
- # compute targets for each image
- if batch_gt_instances_ignore is None:
- batch_gt_instances_ignore = [None] * num_imgs
- results = multi_apply(
- self._get_targets_single,
- concat_anchor_list,
- concat_valid_flag_list,
- batch_gt_instances,
- batch_img_metas,
- batch_gt_instances_ignore,
- unmap_outputs=unmap_outputs)
-
- (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds,
- valid_neg_inds, sampling_result) = results
-
- # Due to the valid flags of anchors, we have to calculate the real pos_inds
- # in the original anchor set.
- pos_inds = []
- for i, single_labels in enumerate(labels):
- pos_mask = (0 <= single_labels) & (
- single_labels < self.num_classes)
- pos_inds.append(pos_mask.nonzero().view(-1))
-
- gt_inds = [item.pos_assigned_gt_inds for item in sampling_result]
- return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
- gt_inds)
-
- def _get_targets_single(self,
- flat_anchors: Tensor,
- valid_flags: Tensor,
- gt_instances: InstanceData,
- img_meta: dict,
- gt_instances_ignore: Optional[InstanceData] = None,
- unmap_outputs: bool = True) -> tuple:
- """Compute regression and classification targets for anchors in a
- single image.
-
- This method is the same as `AnchorHead._get_targets_single()`.
- """
- assert unmap_outputs, 'We must map outputs back to the original ' \
- 'set of anchors in PAAHead'
- return super(ATSSHead, self)._get_targets_single(
- flat_anchors,
- valid_flags,
- gt_instances,
- img_meta,
- gt_instances_ignore,
- unmap_outputs=True)
-
- def predict_by_feat(self,
- cls_scores: List[Tensor],
- bbox_preds: List[Tensor],
- score_factors: Optional[List[Tensor]] = None,
- batch_img_metas: Optional[List[dict]] = None,
- cfg: OptConfigType = None,
- rescale: bool = False,
- with_nms: bool = True) -> InstanceList:
- """Transform a batch of output features extracted from the head into
- bbox results.
-
- This method is the same as `BaseDenseHead.get_results()`.
- """
- assert with_nms, 'PAA only supports "with_nms=True" now and it ' \
- 'means PAAHead does not support ' \
- 'test-time augmentation'
- return super().predict_by_feat(
- cls_scores=cls_scores,
- bbox_preds=bbox_preds,
- score_factors=score_factors,
- batch_img_metas=batch_img_metas,
- cfg=cfg,
- rescale=rescale,
- with_nms=with_nms)
-
- def _predict_by_feat_single(self,
- cls_score_list: List[Tensor],
- bbox_pred_list: List[Tensor],
- score_factor_list: List[Tensor],
- mlvl_priors: List[Tensor],
- img_meta: dict,
- cfg: OptConfigType = None,
- rescale: bool = False,
- with_nms: bool = True) -> InstanceData:
- """Transform a single image's features extracted from the head into
- bbox results.
-
- Args:
- cls_score_list (list[Tensor]): Box scores from all scale
- levels of a single image, each item has shape
- (num_priors * num_classes, H, W).
- bbox_pred_list (list[Tensor]): Box energies / deltas from
- all scale levels of a single image, each item has shape
- (num_priors * 4, H, W).
- score_factor_list (list[Tensor]): Score factors from all scale
- levels of a single image, each item has shape
- (num_priors * 1, H, W).
- mlvl_priors (list[Tensor]): Each element in the list is
- the priors of a single level in feature pyramid, has shape
- (num_priors, 4).
- img_meta (dict): Image meta info.
- cfg (:obj:`ConfigDict` or dict, optional): Test / postprocessing
- configuration, if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before return boxes.
- Default: True.
-
- Returns:
- :obj:`InstanceData`: Detection results of each image
- after the post process.
- Each item usually contains following keys.
-
- - scores (Tensor): Classification scores, has a shape
- (num_instance, )
- - labels (Tensor): Labels of bboxes, has a shape
- (num_instances, ).
- - bboxes (Tensor): Has a shape (num_instances, 4),
- the last dimension 4 arrange as (x1, y1, x2, y2).
- """
- cfg = self.test_cfg if cfg is None else cfg
- img_shape = img_meta['img_shape']
- nms_pre = cfg.get('nms_pre', -1)
-
- mlvl_bboxes = []
- mlvl_scores = []
- mlvl_score_factors = []
- for level_idx, (cls_score, bbox_pred, score_factor, priors) in \
- enumerate(zip(cls_score_list, bbox_pred_list,
- score_factor_list, mlvl_priors)):
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
-
- scores = cls_score.permute(1, 2, 0).reshape(
- -1, self.cls_out_channels).sigmoid()
- bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4)
- score_factor = score_factor.permute(1, 2, 0).reshape(-1).sigmoid()
-
- if 0 < nms_pre < scores.shape[0]:
- max_scores, _ = (scores *
- score_factor[:, None]).sqrt().max(dim=1)
- _, topk_inds = max_scores.topk(nms_pre)
- priors = priors[topk_inds, :]
- bbox_pred = bbox_pred[topk_inds, :]
- scores = scores[topk_inds, :]
- score_factor = score_factor[topk_inds]
-
- bboxes = self.bbox_coder.decode(
- priors, bbox_pred, max_shape=img_shape)
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
- mlvl_score_factors.append(score_factor)
-
- results = InstanceData()
- results.bboxes = torch.cat(mlvl_bboxes)
- results.scores = torch.cat(mlvl_scores)
- results.score_factors = torch.cat(mlvl_score_factors)
-
- return self._bbox_post_process(results, cfg, rescale, with_nms,
- img_meta)
-
- def _bbox_post_process(self,
- results: InstanceData,
- cfg: ConfigType,
- rescale: bool = False,
- with_nms: bool = True,
- img_meta: Optional[dict] = None):
- """bbox post-processing method.
-
- The boxes are rescaled to the original image scale and NMS is
- applied. Usually with_nms is set to False for aug test.
-
- Args:
- results (:obj:`InstanceData`): Detection instance results,
- each item has shape (num_bboxes, ).
- cfg (:obj:`ConfigDict` or dict): Test / postprocessing
- configuration, if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before return boxes.
- Default: True.
- img_meta (dict, optional): Image meta info. Defaults to None.
-
- Returns:
- :obj:`InstanceData`: Detection results of each image
- after the post process.
- Each item usually contains following keys.
-
- - scores (Tensor): Classification scores, has a shape
- (num_instance, )
- - labels (Tensor): Labels of bboxes, has a shape
- (num_instances, ).
- - bboxes (Tensor): Has a shape (num_instances, 4),
- the last dimension 4 arrange as (x1, y1, x2, y2).
- """
- if rescale:
- results.bboxes /= results.bboxes.new_tensor(
- img_meta['scale_factor']).repeat((1, 2))
- # Add a dummy background class to the backend when using sigmoid
- # remember that we set FG labels to [0, num_class-1] since mmdet v2.0
- # BG cat_id: num_class
- padding = results.scores.new_zeros(results.scores.shape[0], 1)
- mlvl_scores = torch.cat([results.scores, padding], dim=1)
-
- mlvl_nms_scores = (mlvl_scores * results.score_factors[:, None]).sqrt()
- det_bboxes, det_labels = multiclass_nms(
- results.bboxes,
- mlvl_nms_scores,
- cfg.score_thr,
- cfg.nms,
- cfg.max_per_img,
- score_factors=None)
- if self.with_score_voting and len(det_bboxes) > 0:
- det_bboxes, det_labels = self.score_voting(det_bboxes, det_labels,
- results.bboxes,
- mlvl_nms_scores,
- cfg.score_thr)
- nms_results = InstanceData()
- nms_results.bboxes = det_bboxes[:, :-1]
- nms_results.scores = det_bboxes[:, -1]
- nms_results.labels = det_labels
- return nms_results
-
- def score_voting(self, det_bboxes: Tensor, det_labels: Tensor,
- mlvl_bboxes: Tensor, mlvl_nms_scores: Tensor,
- score_thr: float) -> Tuple[Tensor, Tensor]:
- """Implementation of score voting method works on each remaining boxes
- after NMS procedure.
-
- Args:
- det_bboxes (Tensor): Remaining boxes after NMS procedure,
- with shape (k, 5), each dimension means
- (x1, y1, x2, y2, score).
- det_labels (Tensor): The label of remaining boxes, with shape
- (k, 1). Labels are 0-based.
- mlvl_bboxes (Tensor): All boxes before the NMS procedure,
- with shape (num_anchors,4).
- mlvl_nms_scores (Tensor): The scores of all boxes which is used
- in the NMS procedure, with shape (num_anchors, num_class)
- score_thr (float): The score threshold of bboxes.
-
- Returns:
- tuple: Usually returns a tuple containing voting results.
-
- - det_bboxes_voted (Tensor): Remaining boxes after
- score voting procedure, with shape (k, 5), each
- dimension means (x1, y1, x2, y2, score).
- - det_labels_voted (Tensor): Label of remaining bboxes
- after voting, with shape (num_anchors,).
- """
- candidate_mask = mlvl_nms_scores > score_thr
- candidate_mask_nonzeros = candidate_mask.nonzero(as_tuple=False)
- candidate_inds = candidate_mask_nonzeros[:, 0]
- candidate_labels = candidate_mask_nonzeros[:, 1]
- candidate_bboxes = mlvl_bboxes[candidate_inds]
- candidate_scores = mlvl_nms_scores[candidate_mask]
- det_bboxes_voted = []
- det_labels_voted = []
- for cls in range(self.cls_out_channels):
- candidate_cls_mask = candidate_labels == cls
- if not candidate_cls_mask.any():
- continue
- candidate_cls_scores = candidate_scores[candidate_cls_mask]
- candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask]
- det_cls_mask = det_labels == cls
- det_cls_bboxes = det_bboxes[det_cls_mask].view(
- -1, det_bboxes.size(-1))
- det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4],
- candidate_cls_bboxes)
- for det_ind in range(len(det_cls_bboxes)):
- single_det_ious = det_candidate_ious[det_ind]
- pos_ious_mask = single_det_ious > 0.01
- pos_ious = single_det_ious[pos_ious_mask]
- pos_bboxes = candidate_cls_bboxes[pos_ious_mask]
- pos_scores = candidate_cls_scores[pos_ious_mask]
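- # weight each neighbouring box by exp(-(1 - IoU)^2 / 0.025) times its score,
- # then take the weighted average of the boxes as the voted box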
- pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) *
- pos_scores)[:, None]
- voted_box = torch.sum(
- pis * pos_bboxes, dim=0) / torch.sum(
- pis, dim=0)
- voted_score = det_cls_bboxes[det_ind][-1:][None, :]
- det_bboxes_voted.append(
- torch.cat((voted_box[None, :], voted_score), dim=1))
- det_labels_voted.append(cls)
-
- det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0)
- det_labels_voted = det_labels.new_tensor(det_labels_voted)
- return det_bboxes_voted, det_labels_voted
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/necks/ssd_neck.py b/spaces/KyanChen/RSPrompter/mmdet/models/necks/ssd_neck.py
deleted file mode 100644
index 17ba319370b988b9c7e2d98c2f10607ff8f8b5c3..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/necks/ssd_neck.py
+++ /dev/null
@@ -1,129 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule
-from mmengine.model import BaseModule
-
-from mmdet.registry import MODELS
-
-
-@MODELS.register_module()
-class SSDNeck(BaseModule):
- """Extra layers of SSD backbone to generate multi-scale feature maps.
-
- Args:
- in_channels (Sequence[int]): Number of input channels per scale.
- out_channels (Sequence[int]): Number of output channels per scale.
- level_strides (Sequence[int]): Stride of 3x3 conv per level.
- level_paddings (Sequence[int]): Padding size of 3x3 conv per level.
- l2_norm_scale (float|None): L2 normalization layer init scale.
- If None, L2 normalization is not applied to the first input feature.
- last_kernel_size (int): Kernel size of the last conv layer.
- Default: 3.
- use_depthwise (bool): Whether to use DepthwiseSeparableConv.
- Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- Default: None.
- act_cfg (dict): Config dict for activation layer.
- Default: dict(type='ReLU').
- init_cfg (dict or list[dict], optional): Initialization config dict.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- level_strides,
- level_paddings,
- l2_norm_scale=20.,
- last_kernel_size=3,
- use_depthwise=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=dict(type='ReLU'),
- init_cfg=[
- dict(
- type='Xavier', distribution='uniform',
- layer='Conv2d'),
- dict(type='Constant', val=1, layer='BatchNorm2d'),
- ]):
- super(SSDNeck, self).__init__(init_cfg)
- assert len(out_channels) > len(in_channels)
- assert len(out_channels) - len(in_channels) == len(level_strides)
- assert len(level_strides) == len(level_paddings)
- assert in_channels == out_channels[:len(in_channels)]
-
- if l2_norm_scale:
- self.l2_norm = L2Norm(in_channels[0], l2_norm_scale)
- self.init_cfg += [
- dict(
- type='Constant',
- val=self.l2_norm.scale,
- override=dict(name='l2_norm'))
- ]
-
- self.extra_layers = nn.ModuleList()
- extra_layer_channels = out_channels[len(in_channels):]
- second_conv = DepthwiseSeparableConvModule if \
- use_depthwise else ConvModule
-
- for i, (out_channel, stride, padding) in enumerate(
- zip(extra_layer_channels, level_strides, level_paddings)):
- kernel_size = last_kernel_size \
- if i == len(extra_layer_channels) - 1 else 3
- per_lvl_convs = nn.Sequential(
- ConvModule(
- out_channels[len(in_channels) - 1 + i],
- out_channel // 2,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg),
- second_conv(
- out_channel // 2,
- out_channel,
- kernel_size,
- stride=stride,
- padding=padding,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
- self.extra_layers.append(per_lvl_convs)
-
- def forward(self, inputs):
- """Forward function."""
- outs = [feat for feat in inputs]
- if hasattr(self, 'l2_norm'):
- outs[0] = self.l2_norm(outs[0])
-
- feat = outs[-1]
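- # each extra layer consumes the previous feature map and appends a new,
- # spatially smaller level to the output pyramid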
- for layer in self.extra_layers:
- feat = layer(feat)
- outs.append(feat)
- return tuple(outs)
-
-
-class L2Norm(nn.Module):
-
- def __init__(self, n_dims, scale=20., eps=1e-10):
- """L2 normalization layer.
-
- Args:
- n_dims (int): Number of dimensions to be normalized
- scale (float, optional): Initial value of the learnable scale. Defaults to 20.0.
- eps (float, optional): Used to avoid division by zero.
- Defaults to 1e-10.
- """
- super(L2Norm, self).__init__()
- self.n_dims = n_dims
- self.weight = nn.Parameter(torch.Tensor(self.n_dims))
- self.eps = eps
- self.scale = scale
-
- def forward(self, x):
- """Forward function."""
- # the normalization layer is computed in FP32 during FP16 training
- x_float = x.float()
- norm = x_float.pow(2).sum(1, keepdim=True).sqrt() + self.eps
- return (self.weight[None, :, None, None].float().expand_as(x_float) *
- x_float / norm).type_as(x)
diff --git a/spaces/LanguageBind/LanguageBind/t_cls/precision.py b/spaces/LanguageBind/LanguageBind/t_cls/precision.py
deleted file mode 100644
index a63b92256518d13afd57261df1568e26b1622201..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/t_cls/precision.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import torch
-from contextlib import suppress
-
-
-def get_autocast(precision):
- if precision == 'amp':
- return torch.cuda.amp.autocast
- elif precision == 'amp_bfloat16' or precision == 'amp_bf16':
- # amp_bfloat16 is more stable than amp float16 for clip training
- return lambda: torch.cuda.amp.autocast(dtype=torch.bfloat16)
- else:
- return suppress
diff --git a/spaces/Lbin123/Lbingo/src/lib/isomorphic/index.ts b/spaces/Lbin123/Lbingo/src/lib/isomorphic/index.ts
deleted file mode 100644
index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000
--- a/spaces/Lbin123/Lbingo/src/lib/isomorphic/index.ts
+++ /dev/null
@@ -1,17 +0,0 @@
-'use client'
-
-import Default from './browser'
-
-let exportsModel: any = {}
-
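-// pick the browser or node implementation at module load time; both are
-// assumed to expose the same interface as Default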
-if (process.browser) {
- Object.assign(exportsModel, require('./browser').default)
-} else {
- Object.assign(exportsModel, require('./node').default)
-}
-
-export default exportsModel! as typeof Default
-
-export const fetch: typeof Default.fetch = exportsModel!.fetch
-export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket
-export const debug: typeof Default.debug = exportsModel!.debug
diff --git a/spaces/LuxOAI/guanaco-playground-tgi/README.md b/spaces/LuxOAI/guanaco-playground-tgi/README.md
deleted file mode 100644
index c091604d57cc7b1b3542e3a8e7ab9c88c4f74a85..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/guanaco-playground-tgi/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Guanaco Playground Tgi
-emoji: 📊
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-duplicated_from: uwnlp/guanaco-playground-tgi
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MAPS-research/GEMRec-Gallery/pages/Summary.py b/spaces/MAPS-research/GEMRec-Gallery/pages/Summary.py
deleted file mode 100644
index 877706700348a6f1a3a6b7fec72c145948b90a26..0000000000000000000000000000000000000000
--- a/spaces/MAPS-research/GEMRec-Gallery/pages/Summary.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import json
-import os
-
-import datasets
-import numpy as np
-import pandas as pd
-import pymysql.cursors
-import streamlit as st
-
-from datetime import datetime
-from streamlit_elements import elements, mui, html, dashboard, nivo
-from streamlit_extras.switch_page_button import switch_page
-from streamlit_extras.metric_cards import style_metric_cards
-from streamlit_extras.stylable_container import stylable_container
-from st_clickable_images import clickable_images
-
-from pages.Gallery import load_hf_dataset
-from Home import connect_to_db
-
-
-class DashboardApp:
- def __init__(self, roster, promptBook, session_finished):
- self.roster = roster
- self.promptBook = promptBook
- self.session_finished = session_finished
-
- # init modelVersion_standings
- if 'modelVersion_standings' not in st.session_state:
- st.session_state.modelVersion_standings = {}
-
- def sidebar(self, tags, mode):
- with st.sidebar:
- # tag = st.selectbox('Select a tag', tags, key='tag')
- # st.write('---')
- with st.form('summary_sidebar_form'):
- st.write('## Want a more comprehensive summary?')
- st.write('Jump back to gallery and select more images to rank!')
- back_to_gallery = st.form_submit_button('🖼️ Go to Gallery')
- if back_to_gallery:
- switch_page('gallery')
- back_to_ranking = st.form_submit_button('🎖️ Go to Ranking')
- if back_to_ranking:
- switch_page('ranking')
-
- with st.form('overall_feedback'):
- comment = st.text_area('Please leave your comments here.', key='comment')
- submit_feedback = st.form_submit_button('Submit Feedback')
- if submit_feedback:
- commenttime = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
- curser = RANKING_CONN.cursor()
-                    # split the comment into chunks of at most 300 characters per row
- for i in range(0, len(comment), 300):
- curser.execute(f"INSERT INTO comments (username, timestamp, comment, commenttime) VALUES ('{st.session_state.user_id[0]}', '{st.session_state.user_id[1]}', '{comment[i:i+300]}', '{commenttime}')")
- RANKING_CONN.commit()
- curser.close()
-
- st.sidebar.info('🙏 **Thanks for your feedback! We will take it into consideration in our future work.**')
-
- def leaderboard(self, tag, db_table):
- tag = '%' if tag == 'overview' else tag
-
- # print('tag', tag)
-
-        # get the ranking results of the current user with the latest epoch
- curser = RANKING_CONN.cursor()
- # curser.execute(f"SELECT * FROM {db_table} WHERE username = '{st.session_state.user_id[0]}' AND timestamp = '{st.session_state.user_id[1]}' AND tag LIKE '{tag}'")
- # curser.execute(f"SELECT * FROM {db_table} WHERE username = '{st.session_state.user_id[0]}' AND timestamp = '{st.session_state.user_id[1]}' AND tag LIKE '{tag}' ORDER BY epoch DESC LIMIT 1")
- curser.execute(
- f"SELECT * FROM {db_table}\
- WHERE username = '{st.session_state.user_id[0]}'\
- AND timestamp = '{st.session_state.user_id[1]}'\
- AND tag LIKE '{tag}'\
- AND epoch =\
- (SELECT MAX(epoch) FROM {db_table}\
- WHERE username = '{st.session_state.user_id[0]}'\
- AND timestamp = '{st.session_state.user_id[1]}'\
- AND tag LIKE '{tag}')")
-
-
- results = curser.fetchall()
- curser.close()
-
- # print('results', results, len(results))
-
- if tag not in st.session_state.modelVersion_standings:
- st.session_state.modelVersion_standings[tag] = self.score_calculator(results, db_table)
-
- # sort the modelVersion_standings by value into a list of tuples in descending order
- st.session_state.modelVersion_standings[tag] = sorted(st.session_state.modelVersion_standings[tag].items(), key=lambda x: x[1], reverse=True)
- print(st.session_state.modelVersion_standings[tag])
- example_prompts = []
- # get example images
- for key, value in st.session_state.selected_dict.items():
- for model in st.session_state.modelVersion_standings[tag]:
- if model[0] in value:
- example_prompts.append(key)
-
- self.podium_expander(tag, n=len(st.session_state.modelVersion_standings[tag]), summary_mode='display', example_prompts=example_prompts)
-
- st.write('---')
- st.write('**Detailed information of all selected models**')
- detailed_info = pd.merge(pd.DataFrame(st.session_state.modelVersion_standings[tag], columns=['modelVersion_id', 'ranking_score']), self.roster, on='modelVersion_id')
-
- detailed_info = detailed_info[['model_name', 'modelVersion_name', 'model_download_count', 'tag', 'baseModel']]
-
- st.data_editor(detailed_info, hide_index=False, disabled=True)
- st.caption('You can click the header to sort the table by that column.')
-
- def podium_expander(self, tag, example_prompts, n=3, summary_mode: ['display', 'edit'] = 'display'):
- self.save_summary(tag)
-
- for i in range(n):
- modelVersion_id = st.session_state.modelVersion_standings[tag][i][0]
- winning_times = st.session_state.modelVersion_standings[tag][i][1]
-
- model_id, model_name, modelVersion_name, url = self.roster[self.roster['modelVersion_id'] == modelVersion_id][['model_id', 'model_name', 'modelVersion_name', 'modelVersion_url']].values[0]
-
-            icon = '🥇' if i == 0 else '🥈' if i == 1 else '🥉' if i == 2 else '🎈'
- podium_display = st.columns([1, 14], gap='medium')
- with podium_display[0]:
- # st.title(f'{icon}')
- st.write(f'# {icon}')
- # if summary_mode == 'display':
- # st.title(f'{icon}')
- # elif summary_mode == 'edit':
- settop = st.button('🔝', key=f'settop_{modelVersion_id}', help='Set this model to the top', disabled=i == 0, on_click=self.switch_order, args=(tag, i, 0))
- moveup = st.button('⬆', key=f'moveup_{modelVersion_id}', help='Move this model up', disabled=i == 0, on_click=self.switch_order, args=(tag, i, i - 1))
- movedown = st.button('⬇', key=f'movedown_{modelVersion_id}', help='Move this model down', disabled=i == n - 1, on_click=self.switch_order, args=(tag, i, i + 1))
- with podium_display[1]:
- title_display = st.columns([4, 1, 1])
- with title_display[0]:
- st.write(f'##### {model_name}, {modelVersion_name}')
- # st.write(f'Ranking Score: {winning_times}')
- with title_display[1]:
- st.link_button('Download', url, use_container_width=True)
- with title_display[2]:
- st.link_button('Civitai', f'https://civitai.com/models/{model_id}?modelVersionId={modelVersion_id}', use_container_width=True, type='primary')
- # st.write(f'[Civitai Page](https://civitai.com/models/{model_id}?modelVersionId={modelVersion_id}), [Model Download Link]({url}), Ranking Score: {winning_times}')
- # with st.expander(f'**{icon} {model_name}, [{modelVersion_name}](https://civitai.com/models/{model_id}?modelVersionId={modelVersion_id})**, Ranking Score: {winning_times}'):
-
- image_display = st.toggle('Show all images', key=f'image_display_{modelVersion_id}')
- if not image_display:
- example_images = self.promptBook[self.promptBook['prompt_id'].isin(example_prompts) & (self.promptBook['modelVersion_id']==modelVersion_id)]['image_id'].values
- example_images = [f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{image}.png" for image in example_images]
- clickable_images(
- example_images,
- img_style={"margin": "5px", "height": "120px"},
- )
-
- else:
- # with st.expander(f'Show Images'):
- images = self.promptBook[self.promptBook['modelVersion_id'] == modelVersion_id]['image_id'].values
-
- # safety_check = st.toggle('Include potentially unsafe or offensive images', value=False, key=modelVersion_id)
- # unsafe_prompts = json.load(open('data/unsafe_prompts.json', 'r'))
- # # merge dict values into one list
- # unsafe_prompts = [item for sublist in unsafe_prompts.values() for item in sublist]
- # unsafe_images = self.promptBook[self.promptBook['prompt_id'].isin(unsafe_prompts)]['image_id'].values
- #
- # if not safety_check:
- # # exclude unsafe prompts from images
- # images = [image for image in images if image not in unsafe_images]
-
- images = [f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{image}.png" for image in images]
- clickable_images(
- images,
- img_style={"margin": "5px", "height": "120px"}
- )
- st.write('🐌 It may take a while to load all images. Please be patient, and **NEVER USE THE REFRESH BUTTON ON YOUR BROWSER**.')
-
- # # st.write(f'### Images generated with {icon} {model_name}, {modelVersion_name}')
- # col_num = 4
- # image_cols = st.columns(col_num)
- #
- # for j in range(len(images)):
- # with image_cols[j % col_num]:
- # image = f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{images[j]}.png"
- # st.image(image, use_column_width=True)
- #
- if i != n - 1:
- st.write('---')
-
- def save_summary(self, tag):
-        # get the latest summary_results epoch of the current user
- tag_name = 'overview' if tag == '%' else tag
- curser = RANKING_CONN.cursor()
- curser.execute(f"SELECT epoch FROM summary_results WHERE username = '{st.session_state.user_id[0]}' AND timestamp = '{st.session_state.user_id[1]}' AND tag = '{tag_name}' ORDER BY epoch DESC LIMIT 1")
- latest_epoch = curser.fetchone()
- curser.close()
- # print('latest_epoch',latest_epoch)
- if latest_epoch is None or latest_epoch['epoch'] < st.session_state.epoch['summary'][tag_name]:
- # save the current ranking results to the database
- summarytime = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
- curser = RANKING_CONN.cursor()
- for i in range(len(st.session_state.modelVersion_standings[tag])):
- curser.execute(f"INSERT INTO summary_results (username, timestamp, tag, modelVersion_id, position, ranking_score, summarytime, epoch, customized) VALUES ('{st.session_state.user_id[0]}', '{st.session_state.user_id[1]}', '{tag_name}', '{st.session_state.modelVersion_standings[tag][i][0]}', {i+1}, {st.session_state.modelVersion_standings[tag][i][1]}, '{summarytime}', {st.session_state.epoch['summary'][tag_name]}, 0)")
- RANKING_CONN.commit()
- curser.close()
-
- def switch_order(self, tag, current, target):
- # insert the current before the target
- st.session_state.modelVersion_standings[tag].insert(target, st.session_state.modelVersion_standings[tag].pop(current))
- tag_name = 'overview' if tag == '%' else tag
- summarytime = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
- curser = RANKING_CONN.cursor()
- # clear the current user's ranking results
- curser.execute(f"DELETE FROM summary_results WHERE username = '{st.session_state.user_id[0]}' AND timestamp = '{st.session_state.user_id[1]}' AND tag = '{tag_name}' AND epoch = {st.session_state.epoch['summary'][tag_name]}")
- for i in range(len(st.session_state.modelVersion_standings[tag])):
- curser.execute(f"INSERT INTO summary_results (username, timestamp, tag, modelVersion_id, position, ranking_score, summarytime, epoch, customized) VALUES ('{st.session_state.user_id[0]}', '{st.session_state.user_id[1]}', '{tag_name}', '{st.session_state.modelVersion_standings[tag][i][0]}', {i+1}, {st.session_state.modelVersion_standings[tag][i][1]}, '{summarytime}', {st.session_state.epoch['summary'][tag_name]}, 1)")
- RANKING_CONN.commit()
- curser.close()
-
- def score_calculator(self, results, db_table):
- modelVersion_standings = {}
- if db_table == 'battle_results':
- # sort results by battle time
- results = sorted(results, key=lambda x: x['battletime'])
-
- for record in results:
- modelVersion_standings[record['winner']] = modelVersion_standings.get(record['winner'], 0) + 1
- # add the loser who never wins
- if record['loser'] not in modelVersion_standings:
- modelVersion_standings[record['loser']] = 0
-
- # add the winning time of the loser to the winner
- modelVersion_standings[record['winner']] += modelVersion_standings[record['loser']]
-
- elif db_table == 'sort_results':
- pts_map = {'position1': 5, 'position2': 3, 'position3': 1, 'position4': 0}
- for record in results:
- for i in range(1, 5):
- modelVersion_standings[record[f'position{i}']] = modelVersion_standings.get(record[f'position{i}'], 0) + pts_map[f'position{i}']
-
- return modelVersion_standings
-
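A small worked example with hypothetical records (only the fields used above) of how the two modes score models:

    # Battle mode: sorted by battletime; each win adds 1, and the winner also
    # absorbs the loser's current score.
    battle_records = [
        {'winner': 'A', 'loser': 'B', 'battletime': 1},   # A: 1, B: 0
        {'winner': 'A', 'loser': 'C', 'battletime': 2},   # A: 2, C: 0
        {'winner': 'D', 'loser': 'A', 'battletime': 3},   # D: 1 + A's 2 = 3
    ]
    # -> {'A': 2, 'B': 0, 'C': 0, 'D': 3}

    # Drag-and-Sort mode: positions 1-4 map to 5/3/1/0 points per record.
    sort_records = [
        {'position1': 'A', 'position2': 'B', 'position3': 'C', 'position4': 'D'},
    ]
    # -> {'A': 5, 'B': 3, 'C': 1, 'D': 0}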
- def app(self):
- st.write('### Your Preferred Models')
-
- # mode = st.sidebar.radio('Ranking mode', ['Drag and Sort', 'Battle'], horizontal=True, index=1)
- mode = st.session_state.assigned_rank_mode
- # get tags from database of the current user
- db_table = 'sort_results' if mode == 'Drag and Sort' else 'battle_results'
-
- tags = []
- curser = RANKING_CONN.cursor()
- curser.execute(
- f"SELECT DISTINCT tag FROM {db_table} WHERE username = '{st.session_state.user_id[0]}' AND timestamp = '{st.session_state.user_id[1]}'")
- for row in curser.fetchall():
- tags.append(row['tag'])
- curser.close()
-
- if len(tags) == 0:
- st.info(f'No rankings are finished with {mode} mode yet.')
-
- else:
- # tags = tags[1:2] if len(tags) == 2 else tags
-            tag_options = ['overview'] + tags if len(tags) > 1 else tags
-            tag = st.radio('Select a tag', tag_options, index=0, horizontal=True, label_visibility='collapsed')
- self.sidebar(tags, mode)
- self.leaderboard(tag, db_table)
-
-
-if __name__ == "__main__":
- st.set_page_config(layout="wide")
-
- if 'user_id' not in st.session_state:
- st.warning('Please log in first.')
- home_btn = st.button('Go to Home Page')
- if home_btn:
- switch_page("home")
-
- elif 'progress' not in st.session_state:
- st.info('You have not checked any image yet. Please go back to the gallery page and check some images.')
- gallery_btn = st.button('🖼️ Go to Gallery')
- if gallery_btn:
- switch_page('gallery')
-
- else:
- session_finished = []
-
- for key, value in st.session_state.progress.items():
- if value == 'finished':
- session_finished.append(key)
-
- if len(session_finished) == 0:
- st.info('A dashboard showing your preferred models will appear after you finish any ranking session.')
- ranking_btn = st.button('🎖️ Go to Ranking')
- if ranking_btn:
- switch_page('ranking')
- gallery_btn = st.button('🖼️ Go to Gallery')
- if gallery_btn:
- switch_page('gallery')
-
- else:
- roster, promptBook, images_ds = load_hf_dataset(st.session_state.show_NSFW)
- RANKING_CONN = connect_to_db()
- app = DashboardApp(roster, promptBook, session_finished)
- app.app()
-
- with open('./css/style.css') as f:
-        st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)
-
-
diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/__init__.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/__init__.py
deleted file mode 100644
index 69b37d60475f526d06ea18f3fc8d7c05f11f3a69..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/__init__.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# encoding: utf-8
-"""
-This package includes audio handling functionality and low-level features.
-The definition of "low" may vary, but all "high"-level features (e.g. beats,
-onsets, etc. -- basically everything you want to evaluate) should be in the
-`madmom.features` package.
-
-Notes
------
-Almost all functionality blocks are split into two classes:
-
-1) A data class: instances are signal dependent, i.e. they operate directly on
- the signal and show different values for different signals.
-2) A processor class: for every data class there should be a processor class
- with the exact same name and a "Processor" suffix. This class must inherit
- from madmom.Processor and define a process() method which returns a data
- class or inherit from madmom.SequentialProcessor or ParallelProcessor.
-
-The data classes should be either sub-classed from numpy arrays or be indexable
-and iterable. This way they can be used identically to numpy arrays.
-
-"""
-
-from __future__ import absolute_import, division, print_function
-
-# import the submodules
-from . import comb_filters, filters, signal, spectrogram, stft
-# import classes used often
-from .chroma import DeepChromaProcessor
-from .signal import (FramedSignal, FramedSignalProcessor, Signal,
- SignalProcessor, )
-from .spectrogram import (FilteredSpectrogram, FilteredSpectrogramProcessor,
- LogarithmicFilteredSpectrogram,
- LogarithmicFilteredSpectrogramProcessor,
- LogarithmicSpectrogram,
- LogarithmicSpectrogramProcessor,
- MultiBandSpectrogramProcessor, Spectrogram,
- SpectrogramDifference,
- SpectrogramDifferenceProcessor,
- SpectrogramProcessor, )
-from .stft import ShortTimeFourierTransform, ShortTimeFourierTransformProcessor
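A minimal, self-contained sketch (not part of madmom, and not inheriting from madmom.Processor) of the data-class / processor-class convention described in the docstring above:

    import numpy as np

    class Scaled(np.ndarray):
        """Data class: computed directly from a signal, behaves like a numpy array."""
        def __new__(cls, data, factor=2.0):
            obj = (np.asarray(data, dtype=float) * factor).view(cls)
            obj.factor = factor
            return obj

    class ScaledProcessor:
        """Processor class: same name plus 'Processor', process() returns the data class."""
        def __init__(self, factor=2.0):
            self.factor = factor

        def process(self, data):
            return Scaled(data, factor=self.factor)

    print(ScaledProcessor(factor=3.0).process([1, 2, 3]))   # [3. 6. 9.]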
diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/io/audio.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/io/audio.py
deleted file mode 100644
index 066bea2b314d75554a55f6948e42b1d00aa549ce..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/feature_extraction/madmom/io/audio.py
+++ /dev/null
@@ -1,790 +0,0 @@
-# encoding: utf-8
-# pylint: disable=no-member
-# pylint: disable=invalid-name
-# pylint: disable=too-many-arguments
-"""
-This module contains audio input/output functionality.
-
-"""
-
-from __future__ import absolute_import, division, print_function
-
-import errno
-import os
-import subprocess
-import sys
-import tempfile
-
-import numpy as np
-
-from ..utils import string_types, file_types
-from ..audio.signal import Signal
-
-
-# error classes
-class LoadAudioFileError(Exception):
- """
- Exception to be raised whenever an audio file could not be loaded.
-
- """
- # pylint: disable=super-init-not-called
-
- def __init__(self, value=None):
- if value is None:
- value = 'Could not load audio file.'
- self.value = value
-
- def __str__(self):
- return repr(self.value)
-
-
-# functions for loading audio files with ffmpeg
-def _ffmpeg_fmt(dtype):
- """
- Convert numpy dtypes to format strings understood by ffmpeg.
-
- Parameters
- ----------
- dtype : numpy dtype
- Data type to be converted.
-
- Returns
- -------
- str
- ffmpeg format string.
-
- """
- # convert dtype to sample type
- dtype = np.dtype(dtype)
- # Note: list with all ffmpeg PCM sample types: ffmpeg -formats | grep PCM
- # - unsigned int, signed int, floating point:
- fmt = {'u': 'u', 'i': 's', 'f': 'f'}.get(dtype.kind)
- # - sample size in bits:
- fmt += str(8 * dtype.itemsize)
- # - little endian or big endian:
- if dtype.byteorder == '=':
- fmt += sys.byteorder[0] + 'e'
- else:
- fmt += {'|': '', '<': 'le', '>': 'be'}.get(dtype.byteorder)
- return str(fmt)
-
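For example, on a little-endian machine the mapping above gives (usage sketch of the function defined right above):

    import numpy as np

    print(_ffmpeg_fmt(np.float32))   # 'f32le'  (float, 32 bit, little endian)
    print(_ffmpeg_fmt(np.int16))     # 's16le'  (signed int, 16 bit, little endian)
    print(_ffmpeg_fmt(np.uint8))     # 'u8'     (single-byte types carry no endianness suffix)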
-
-def _ffmpeg_call(infile, output, fmt='f32le', sample_rate=None, num_channels=1,
- channel=None, skip=None, max_len=None, cmd='ffmpeg',
- replaygain_mode=None, replaygain_preamp=0.0):
- """
- Create a sequence of strings indicating ffmpeg how to be called as well as
- the parameters necessary to decode the given input (file) to the given
- format, at the given offset and for the given length to the given output.
-
- Parameters
- ----------
- infile : str
- Name of the audio sound file to decode.
- output : str
- Where to decode to.
- fmt : {'f32le', 's16le'}, optional
- Format of the samples:
- - 'f32le' for float32, little-endian,
- - 's16le' for signed 16-bit int, little-endian.
- sample_rate : int, optional
- Sample rate to re-sample the signal to (if set) [Hz].
- num_channels : int, optional
- Number of channels to reduce the signal to.
- If 'None', return the signal with its original channels,
- or whatever is selected by `channel`.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
- skip : float, optional
- Number of seconds to skip at beginning of file.
- max_len : float, optional
- Maximum length in seconds to decode.
- cmd : {'ffmpeg','avconv'}, optional
- Decoding command (defaults to ffmpeg, alternatively supports avconv).
- replaygain_mode : {None, 'track','album'}, optional
- Specify the ReplayGain volume-levelling mode (None to disable).
- replaygain_preamp : float, optional
- ReplayGain preamp volume change level (in dB).
-
- Returns
- -------
- list
- ffmpeg call.
-
- Notes
- -----
- 'avconv' rounds decoding positions and decodes in blocks of 4096 length
- resulting in incorrect start and stop positions. Thus it should only be
- used to decode complete files.
-
- """
- # Note: avconv rounds decoding positions and decodes in blocks of 4096
- # length resulting in incorrect start and stop positions
- if cmd == 'avconv' and skip is not None and max_len is not None:
- raise RuntimeError('avconv has a bug, which results in wrong audio '
- 'slices! Decode the audio files to .wav first or '
- 'use ffmpeg.')
- # general options
- call = [cmd, "-v", "quiet", "-y"]
- # input options
- if skip:
- # use "%f" to avoid scientific float notation
- call.extend(["-ss", "%f" % float(skip)])
- # if we decode from STDIN, the format must be specified
- if isinstance(infile, Signal):
- in_fmt = _ffmpeg_fmt(infile.dtype)
- in_ac = str(int(infile.num_channels))
- in_ar = str(int(infile.sample_rate))
- infile = "pipe:0"
- call.extend(["-f", in_fmt, "-ac", in_ac, "-ar", in_ar])
- elif isinstance(infile, file_types):
- infile = "pipe:0"
- else:
- infile = str(infile)
- call.extend(["-i", infile])
- if replaygain_mode:
- audio_filter = ("volume=replaygain=%s:replaygain_preamp=%.1f"
- % (replaygain_mode, replaygain_preamp))
- call.extend(["-af", audio_filter])
- # output options
- call.extend(["-f", str(fmt)])
- if max_len:
- # use "%f" to avoid scientific float notation
- call.extend(["-t", "%f" % float(max_len)])
- # output options
- if num_channels:
- call.extend(["-ac", str(int(num_channels))])
- if channel is not None and (num_channels == 1 or num_channels is None):
-        # Calling with channel=x and num_channels=1 (or None) extracts that single channel via the pan filter
- call.extend(["-af", "pan=mono|c0=c%d" % int(channel)])
- if sample_rate:
- call.extend(["-ar", str(int(sample_rate))])
- call.append(output)
- return call
-
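For illustration (with a hypothetical input path), a plain mono/float32 decode to stdout would produce a call list roughly like this:

    call = _ffmpeg_call('song.mp3', 'pipe:1', fmt='f32le', sample_rate=44100, num_channels=1)
    # ['ffmpeg', '-v', 'quiet', '-y', '-i', 'song.mp3',
    #  '-f', 'f32le', '-ac', '1', '-ar', '44100', 'pipe:1']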
-
-def decode_to_disk(infile, fmt='f32le', sample_rate=None, num_channels=1,
- channel=None, skip=None, max_len=None, outfile=None,
- tmp_dir=None, tmp_suffix=None, cmd='ffmpeg',
- replaygain_mode=None, replaygain_preamp=0.0):
- """
- Decode the given audio file to another file.
-
- Parameters
- ----------
- infile : str
- Name of the audio sound file to decode.
- fmt : {'f32le', 's16le'}, optional
- Format of the samples:
- - 'f32le' for float32, little-endian,
- - 's16le' for signed 16-bit int, little-endian.
- sample_rate : int, optional
- Sample rate to re-sample the signal to (if set) [Hz].
- num_channels : int, optional
- Number of channels to reduce the signal to.
- If 'None', return the signal with its original channels,
- or whatever is selected by `channel`.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
- skip : float, optional
- Number of seconds to skip at beginning of file.
- max_len : float, optional
- Maximum length in seconds to decode.
- outfile : str, optional
- The file to decode the sound file to; if not given, a temporary file
- will be created.
- tmp_dir : str, optional
- The directory to create the temporary file in (if no `outfile` is
- given).
- tmp_suffix : str, optional
- The file suffix for the temporary file if no `outfile` is given; e.g.
- ".pcm" (including the dot).
- cmd : {'ffmpeg', 'avconv'}, optional
- Decoding command (defaults to ffmpeg, alternatively supports avconv).
- replaygain_mode : {None, 'track','album'}, optional
- Specify the ReplayGain volume-levelling mode (None to disable).
- replaygain_preamp : float, optional
- ReplayGain preamp volume change level (in dB).
-
- Returns
- -------
- outfile : str
- The output file name.
-
- """
- # check input file type
- if not isinstance(infile, string_types):
- raise ValueError("only file names are supported as `infile`, not %s."
- % infile)
- # create temp file if no outfile is given
- if outfile is None:
- # looks stupid, but is recommended over tempfile.mktemp()
- f = tempfile.NamedTemporaryFile(delete=False, dir=tmp_dir,
- suffix=tmp_suffix)
- f.close()
- outfile = f.name
- delete_on_fail = True
- else:
- delete_on_fail = False
- # check output file type
- if not isinstance(outfile, string_types):
- raise ValueError("only file names are supported as `outfile`, not %s."
- % outfile)
- # call ffmpeg (throws exception on error)
- try:
- call = _ffmpeg_call(infile, outfile, fmt, sample_rate, num_channels,
- channel, skip, max_len, cmd,
- replaygain_mode=replaygain_mode,
- replaygain_preamp=replaygain_preamp)
- subprocess.check_call(call)
- except Exception:
- if delete_on_fail:
- os.unlink(outfile)
- raise
- return outfile
-
-
-def decode_to_pipe(infile, fmt='f32le', sample_rate=None, num_channels=1,
- channel=None, skip=None, max_len=None, buf_size=-1,
- cmd='ffmpeg', replaygain_mode=None, replaygain_preamp=0.0):
- """
- Decode the given audio and return a file-like object for reading the
- samples, as well as a process object.
-
- Parameters
- ----------
- infile : str
- Name of the audio sound file to decode.
- fmt : {'f32le', 's16le'}, optional
- Format of the samples:
- - 'f32le' for float32, little-endian,
- - 's16le' for signed 16-bit int, little-endian.
- sample_rate : int, optional
- Sample rate to re-sample the signal to (if set) [Hz].
- num_channels : int, optional
- Number of channels to reduce the signal to.
- If 'None', return the signal with its original channels,
- or whatever is selected by `channel`.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
- skip : float, optional
- Number of seconds to skip at beginning of file.
- max_len : float, optional
- Maximum length in seconds to decode.
- buf_size : int, optional
- Size of buffer for the file-like object:
- - '-1' means OS default (default),
- - '0' means unbuffered,
- - '1' means line-buffered, any other value is the buffer size in bytes.
- cmd : {'ffmpeg','avconv'}, optional
- Decoding command (defaults to ffmpeg, alternatively supports avconv).
- replaygain_mode : {None, 'track','album'}, optional
- Specify the ReplayGain volume-levelling mode (None to disable).
- replaygain_preamp : float, optional
- ReplayGain preamp volume change level (in dB).
-
- Returns
- -------
- pipe : file-like object
- File-like object for reading the decoded samples.
- proc : process object
- Process object for the decoding process.
-
- Notes
- -----
- To stop decoding the file, call close() on the returned file-like object,
- then call wait() on the returned process object.
-
- """
- # check input file type
- if not isinstance(infile, (string_types, file_types, Signal)):
- raise ValueError("only file names, file objects or Signal instances "
- "are supported as `infile`, not %s." % infile)
- # Note: closing the file-like object only stops decoding because ffmpeg
- # reacts on that. A cleaner solution would be calling proc.terminate
- # explicitly, but this is only available in Python 2.6+. proc.wait
- # needs to be called in any case.
- call = _ffmpeg_call(infile, "pipe:1", fmt, sample_rate, num_channels,
- channel, skip, max_len, cmd,
- replaygain_mode=replaygain_mode,
- replaygain_preamp=replaygain_preamp)
- # redirect stdout to a pipe and buffer as requested
- if isinstance(infile, (Signal, file_types)):
- proc = subprocess.Popen(call, stdin=subprocess.PIPE,
- stdout=subprocess.PIPE, bufsize=buf_size)
- else:
- proc = subprocess.Popen(call, stdout=subprocess.PIPE, bufsize=buf_size)
- return proc.stdout, proc
-
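A usage sketch (with a hypothetical file name) following the Notes above, reading one block of samples and then shutting the decoder down cleanly:

    import numpy as np

    pipe, proc = decode_to_pipe('song.mp3', fmt='f32le', sample_rate=44100, num_channels=1)
    raw = pipe.read(4096 * 4)                       # one block of float32 bytes
    samples = np.frombuffer(raw, dtype=np.float32)
    pipe.close()                                    # stop decoding
    proc.wait()                                     # always wait for the ffmpeg process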
-
-def nonstreamable_mp4_file_object(infile, cmd="ffprobe"):
- """
- Test if the given audio file object is a non-streamable MPEG4 container.
- Decode the given audio and return a file-like object for reading the
- samples, as well as a process object.
-
- Parameters
- ----------
- infile : file_types
- File object to decode.
- cmd : {'ffprobe', 'avprobe'}, optional
- Probing command (defaults to ffprobe, alternatively supports avprobe).
-
- Returns
- -------
- boolean
- True when audio is non-streamable container, False otherwise.
-
- """
- call = [cmd, "-v", "debug", "-hide_banner",
- "-show_entries", "format=format_name",
- "-print_format", "default=nokey=1:noprint_wrappers=1",
- "pipe:0"]
- proc = subprocess.Popen(call, stdin=subprocess.PIPE,
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
- out, err = proc.communicate(infile.read())
- retcode = proc.poll()
- infile.seek(0)
- if retcode:
- raise subprocess.CalledProcessError(retcode, call, output=err)
- container_format = out.decode().strip()
- partial_file = "partial file" in err.decode()
- if container_format == "mov,mp4,m4a,3gp,3g2,mj2" and partial_file:
- return True
- else:
- return False
-
-
-def decode_to_memory(infile, fmt='f32le', sample_rate=None, num_channels=1,
- channel=None, skip=None, max_len=None,
- cmd_decode='ffmpeg', cmd_probe='ffprobe',
- replaygain_mode=None, replaygain_preamp=0.0):
- """
- Decode the given audio and return it as a binary string representation.
-
- Parameters
- ----------
- infile : str
- Name of the audio sound file to decode.
- fmt : {'f32le', 's16le'}, optional
- Format of the samples:
- - 'f32le' for float32, little-endian,
- - 's16le' for signed 16-bit int, little-endian.
- sample_rate : int, optional
- Sample rate to re-sample the signal to (if set) [Hz].
- num_channels : int, optional
- Number of channels to reduce the signal to.
- If 'None', return the signal with its original channels,
- or whatever is selected by `channel`.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
- skip : float, optional
- Number of seconds to skip at beginning of file.
- max_len : float, optional
- Maximum length in seconds to decode.
- cmd_decode : {'ffmpeg', 'avconv'}, optional
- Decoding command (defaults to ffmpeg, alternatively supports avconv).
- cmd_probe : {'ffprobe', 'avprobe'}, optional
- Probing command (defaults to ffprobe, alternatively supports avprobe).
- replaygain_mode : {None, 'track','album'}, optional
- Specify the ReplayGain volume-levelling mode (None to disable).
- replaygain_preamp : float, optional
- ReplayGain preamp volume change level (in dB).
-
- Returns
- -------
- samples : str
- Binary string representation of the audio samples.
-
- """
- # check input file type
- if not isinstance(infile, (string_types, file_types, Signal)):
- raise ValueError("only file names, file objects or Signal instances "
- "are supported as `infile`, not %s." % infile)
- # prepare decoding to pipe
- _, proc = decode_to_pipe(infile, fmt=fmt, sample_rate=sample_rate,
- num_channels=num_channels, channel=channel,
- skip=skip, max_len=max_len, cmd=cmd_decode,
- replaygain_mode=replaygain_mode,
- replaygain_preamp=replaygain_preamp)
- # decode the input to memory
- if isinstance(infile, Signal):
- # Note: np.getbuffer was removed in Python 3, but Python 2 memoryviews
- # do not have the cast() method
- try:
- signal, _ = proc.communicate(np.getbuffer(infile))
- except AttributeError:
- mv = memoryview(infile)
- signal, _ = proc.communicate(mv.cast('b'))
- elif isinstance(infile, file_types):
- signal, _ = proc.communicate(infile.read())
- infile.seek(0)
- # handle non-streamable MP4 container, which silently returns an empty
- # signal
- if not signal and nonstreamable_mp4_file_object(infile, cmd_probe):
- try:
- delete_file = False
- try:
- # pass its path if the file exists on disk
- path = infile.name
- except AttributeError:
- # otherwise store it as a temporary file
- with tempfile.NamedTemporaryFile(mode="wb", delete=False,
- suffix=".mp4") as f:
- f.write(infile.read())
- infile.seek(0)
- path = f.name
- delete_file = True
- # retry by passing a path to ffmpeg instead of piping the
- # audio to stdin (allows multiple reading passes)
- signal = decode_to_memory(path, fmt, sample_rate, num_channels,
- channel, skip, max_len, cmd_decode,
- cmd_probe, replaygain_mode,
- replaygain_preamp)
- finally:
- if delete_file:
- os.remove(path)
- else:
- signal, _ = proc.communicate()
- if proc.returncode != 0:
- raise subprocess.CalledProcessError(proc.returncode, cmd_decode)
- return signal
-
-
-def get_file_info(infile, cmd='ffprobe'):
- """
- Extract and return information about audio files.
-
- Parameters
- ----------
- infile : str
- Name of the audio file.
- cmd : {'ffprobe', 'avprobe'}, optional
- Probing command (defaults to ffprobe, alternatively supports avprobe).
-
- Returns
- -------
- dict
- Audio file information.
-
- """
- # init dictionary
- info = {'num_channels': None, 'sample_rate': None}
- if isinstance(infile, Signal):
- info['num_channels'] = infile.num_channels
- info['sample_rate'] = infile.sample_rate
- else:
- # call ffprobe
- if isinstance(infile, file_types):
- call = [cmd, "-v", "quiet", "-show_streams", "pipe:0"]
- proc = subprocess.Popen(call, stdin=subprocess.PIPE,
- stdout=subprocess.PIPE)
- output, _ = proc.communicate(infile.read())
- retcode = proc.poll()
- infile.seek(0)
- if retcode:
- raise subprocess.CalledProcessError(retcode, call,
- output=output)
- else:
- output = subprocess.check_output([cmd, "-v", "quiet",
- "-show_streams", infile])
- # parse information
- for line in output.split():
- if line.startswith(b'channels='):
- info['num_channels'] = int(line[len('channels='):])
- if line.startswith(b'sample_rate='):
- # the int(float(...)) conversion is necessary because
- # avprobe returns sample_rate as floating point number
- # which int() can't handle.
- info['sample_rate'] = int(float(line[len('sample_rate='):]))
- # return the dictionary
- return info
-
-
-def load_ffmpeg_file(filename, sample_rate=None, num_channels=None,
- channel=None, start=None, stop=None, dtype=None,
- cmd_decode='ffmpeg', cmd_probe='ffprobe',
- replaygain_mode=None, replaygain_preamp=0.0):
- """
- Load the audio data from the given file and return it as a numpy array.
-
- This uses ffmpeg (or avconv) and thus supports a lot of different file
- formats, resampling and channel conversions. The file will be fully decoded
- into memory if no start and stop positions are given.
-
- Parameters
- ----------
- filename : str
- Name of the audio sound file to load.
- sample_rate : int, optional
- Sample rate to re-sample the signal to [Hz]; 'None' returns the signal
- in its original rate.
- num_channels : int, optional
- Reduce or expand the signal to `num_channels` channels.
- If 'None', return the signal with its original channels,
- or whatever is selected by `channel`.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
- start : float, optional
- Start position [seconds].
- stop : float, optional
- Stop position [seconds].
- dtype : numpy dtype, optional
- Numpy dtype to return the signal in (supports signed and unsigned
- 8/16/32-bit integers, and single and double precision floats,
- each in little or big endian). If 'None', np.int16 is used.
- cmd_decode : {'ffmpeg', 'avconv'}, optional
- Decoding command (defaults to ffmpeg, alternatively supports avconv).
- cmd_probe : {'ffprobe', 'avprobe'}, optional
- Probing command (defaults to ffprobe, alternatively supports avprobe).
- replaygain_mode : {None, 'track','album'}, optional
- Specify the ReplayGain volume-levelling mode (None to disable).
- replaygain_preamp : float, optional
- ReplayGain preamp volume change level (in dB).
-
- Returns
- -------
- signal : numpy array
- Audio samples.
- sample_rate : int
- Sample rate of the audio samples.
-
- """
- # set default dtype
- if dtype is None:
- dtype = np.int16
- # ffmpeg output format
- fmt = _ffmpeg_fmt(dtype)
- # start and stop position
- if start is None:
- start = 0
- max_len = None
- if stop is not None:
- max_len = stop - start
- # convert the audio signal using ffmpeg
- signal = np.frombuffer(
- decode_to_memory(filename, fmt=fmt, sample_rate=sample_rate,
- num_channels=num_channels, channel=channel,
- skip=start, max_len=max_len,
- cmd_decode=cmd_decode, cmd_probe=cmd_probe,
- replaygain_mode=replaygain_mode,
- replaygain_preamp=replaygain_preamp
- ),
- dtype=dtype)
- # get the needed information from the file
- if sample_rate is None or num_channels is None:
- info = get_file_info(filename, cmd=cmd_probe)
- if sample_rate is None:
- sample_rate = info['sample_rate']
- if num_channels is None:
- num_channels = info['num_channels']
- # reshape the audio signal
- if num_channels > 1:
- signal = signal.reshape((-1, num_channels))
- return signal, sample_rate
-
-
-# functions for loading/saving wave files
-def load_wave_file(filename, sample_rate=None, num_channels=None, channel=None,
- start=None, stop=None, dtype=None):
- """
- Load the audio data from the given file and return it as a numpy array.
-
- Only supports wave files, does not support re-sampling or arbitrary
- channel number conversions. Reads the data as a memory-mapped file with
- copy-on-write semantics to defer I/O costs until needed.
-
- Parameters
- ----------
- filename : str
- Name of the file.
- sample_rate : int, optional
- Desired sample rate of the signal [Hz], or 'None' to return the
- signal in its original rate.
- num_channels : int, optional
- Reduce or expand the signal to `num_channels` channels
- If 'None', return the signal with its original channels,
- or whichever is selected by `channel`.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
- start : float, optional
- Start position [seconds].
- stop : float, optional
- Stop position [seconds].
- dtype : numpy data type, optional
- The data is returned with the given dtype. If 'None', it is returned
- with its original dtype, otherwise the signal gets rescaled. Integer
- dtypes use the complete value range, float dtypes the range [-1, +1].
-
-
- Returns
- -------
- signal : numpy array
- Audio signal.
- sample_rate : int
- Sample rate of the signal [Hz].
-
- Notes
- -----
- The `start` and `stop` positions are rounded to the closest sample; the
-    sample corresponding to the `stop` value is not returned, thus a consecutive
-    segment starting at the previous `stop` can be concatenated to obtain
-    the original signal without gaps or overlaps.
-
- """
- from scipy.io import wavfile
- file_sample_rate, signal = wavfile.read(filename, mmap=True)
- # if the sample rate is not the desired one, raise exception
- if sample_rate is not None and sample_rate != file_sample_rate:
- raise ValueError('Requested sample rate of %f Hz, but got %f Hz and '
- 're-sampling is not implemented.' %
- (sample_rate, file_sample_rate))
- # same for the data type
- if dtype is not None and signal.dtype != dtype:
- raise ValueError('Requested dtype %s, but got %s and re-scaling is '
- 'not implemented.' % (dtype, signal.dtype))
- # only request the desired part of the signal
- if start is not None:
- start = int(start * file_sample_rate)
- if stop is not None:
- stop = min(len(signal), int(stop * file_sample_rate))
- if start is not None or stop is not None:
- signal = signal[start: stop]
- if channel is not None and num_channels is None:
- # It's clear what the caller means here
- num_channels = 1
- if num_channels is not None:
- from ..audio.signal import remix
- signal = remix(signal, num_channels, channel)
- # return the signal
- return signal, file_sample_rate
-
-
-def write_wave_file(signal, filename, sample_rate=None):
- """
- Write the signal to disk as a .wav file.
-
- Parameters
- ----------
- signal : numpy array or Signal
- The signal to be written to file.
- filename : str
- Name of the file.
- sample_rate : int, optional
- Sample rate of the signal [Hz].
-
- Returns
- -------
- filename : str
- Name of the file.
-
- Notes
- -----
- `sample_rate` can be 'None' if `signal` is a :class:`Signal` instance. If
- set, the given `sample_rate` is used instead of the signal's sample rate.
- Must be given if `signal` is a ndarray.
-
- """
- from scipy.io import wavfile
- if isinstance(signal, Signal) and sample_rate is None:
- sample_rate = int(signal.sample_rate)
- wavfile.write(filename, rate=sample_rate, data=signal)
- return filename
-
-
-# function for automatically determining how to open audio files
-def load_audio_file(filename, sample_rate=None, num_channels=None,
- channel=None, start=None, stop=None, dtype=None,
- replaygain_mode=None, replaygain_preamp=0.0):
- """
- Load the audio data from the given file and return it as a numpy array.
-    This tries load_wave_file() first, then falls back to load_ffmpeg_file()
-    (for ffmpeg and avconv).
-
- Parameters
- ----------
- filename : str or file handle
- Name of the file or file handle.
- sample_rate : int, optional
- Desired sample rate of the signal [Hz], or 'None' to return the
- signal in its original rate.
- num_channels : int, optional
- Reduce or expand the signal to `num_channels` channels.
- If 'None', return the signal with its original channels,
- or whatever is selected by `channel`.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
- start : float, optional
- Start position [seconds].
- stop : float, optional
- Stop position [seconds].
- dtype : numpy data type, optional
- The data is returned with the given dtype. If 'None', it is returned
- with its original dtype, otherwise the signal gets rescaled. Integer
- dtypes use the complete value range, float dtypes the range [-1, +1].
- replaygain_mode : {None, 'track','album'}, optional
- Specify the ReplayGain volume-levelling mode (None to disable).
- replaygain_preamp : float, optional
- ReplayGain preamp volume change level (in dB).
-
- Returns
- -------
- signal : numpy array
- Audio signal.
- sample_rate : int
- Sample rate of the signal [Hz].
-
- Notes
- -----
- For wave files, the `start` and `stop` positions are rounded to the closest
- sample; the sample corresponding to the `stop` value is not returned, thus
-    a consecutive segment starting at the previous `stop` can be concatenated
-    to obtain the original signal without gaps or overlaps.
-    For all other audio files, this cannot be guaranteed.
-
- """
- # try reading as a wave file
- error = "All attempts to load audio file %r failed." % filename
- try:
- return load_wave_file(filename, sample_rate=sample_rate,
- num_channels=num_channels, channel=channel,
- start=start, stop=stop, dtype=dtype)
- except ValueError:
- pass
- # not a wave file (or other sample rate requested), try ffmpeg
- try:
- return load_ffmpeg_file(filename, sample_rate=sample_rate,
- num_channels=num_channels, channel=channel,
- start=start, stop=stop, dtype=dtype,
- replaygain_mode=replaygain_mode,
- replaygain_preamp=replaygain_preamp)
- except OSError as e:
- # if it's not a file not found error, raise it!
- if e.errno != errno.ENOENT:
- raise
-
- # ffmpeg is not present, try avconv
- try:
- return load_ffmpeg_file(filename, sample_rate=sample_rate,
- num_channels=num_channels, channel=channel,
- start=start, stop=stop, dtype=dtype,
- cmd_decode='avconv', cmd_probe='avprobe',
- replaygain_mode=replaygain_mode,
- replaygain_preamp=replaygain_preamp)
- except OSError as e:
- if e.errno == errno.ENOENT:
- error += " Try installing ffmpeg (or avconv on Ubuntu Linux)."
- else:
- raise
- except subprocess.CalledProcessError:
- pass
- raise LoadAudioFileError(error)
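A usage sketch (with a hypothetical file name) of the fallback chain above:

    signal, sample_rate = load_audio_file('input.flac', sample_rate=None, num_channels=1)
    print(signal.dtype, signal.shape, sample_rate)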
diff --git a/spaces/Mecca/whisper-webui/src/vad.py b/spaces/Mecca/whisper-webui/src/vad.py
deleted file mode 100644
index 9b5ae606a9efdcc34dada47d0613bb8194d2f269..0000000000000000000000000000000000000000
--- a/spaces/Mecca/whisper-webui/src/vad.py
+++ /dev/null
@@ -1,560 +0,0 @@
-from abc import ABC, abstractmethod
-from collections import Counter, deque
-import time
-
-from typing import Any, Deque, Iterator, List, Dict
-
-from pprint import pprint
-from src.hooks.progressListener import ProgressListener
-from src.hooks.subTaskProgressListener import SubTaskProgressListener
-from src.hooks.whisperProgressHook import create_progress_listener_handle
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-
-from src.segments import merge_timestamps
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback
-
-# Workaround for https://github.com/tensorflow/tensorflow/issues/48797
-try:
- import tensorflow as tf
-except ModuleNotFoundError:
- # Error handling
- pass
-
-import torch
-
-import ffmpeg
-import numpy as np
-
-from src.utils import format_timestamp
-from enum import Enum
-
-class NonSpeechStrategy(Enum):
-    """
-    Strategy for handling non-speech segments.
-    """
-    # Ignore non-speech segments.
-    SKIP = 1
-    # Just treat non-speech segments as speech.
-    CREATE_SEGMENT = 2
-    # Expand speech segments into subsequent non-speech segments.
-    EXPAND_SEGMENT = 3
-
-# Defaults for Silero
-SPEECH_TRESHOLD = 0.3
-
-# Minimum size of segments to process
-MIN_SEGMENT_DURATION = 1
-
-# The maximum time for texts from old segments to be used in the next segment
-MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled)
-PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this
-
-VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio
-
-class TranscriptionConfig(ABC):
- def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1):
- self.non_speech_strategy = non_speech_strategy
- self.segment_padding_left = segment_padding_left
- self.segment_padding_right = segment_padding_right
- self.max_silent_period = max_silent_period
- self.max_merge_size = max_merge_size
- self.max_prompt_window = max_prompt_window
- self.initial_segment_index = initial_segment_index
-
-class PeriodicTranscriptionConfig(TranscriptionConfig):
- def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1):
- super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index)
- self.periodic_duration = periodic_duration
-
-class AbstractTranscription(ABC):
- def __init__(self, sampling_rate: int = 16000):
- self.sampling_rate = sampling_rate
-
- def get_audio_segment(self, str, start_time: str = None, duration: str = None):
- return load_audio(str, self.sampling_rate, start_time, duration)
-
- def is_transcribe_timestamps_fast(self):
- """
- Determine if get_transcribe_timestamps is fast enough to not need parallelization.
- """
- return False
-
- @abstractmethod
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method.
-
- Parameters
- ----------
- audio: str
- The audio file.
- config: TranscriptionConfig
- The transcription configuration.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- return
-
- def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method,
- after merging the given segments using the specified configuration.
-
- Parameters
- ----------
- audio: str
- The audio file.
- config: TranscriptionConfig
- The transcription configuration.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size,
- config.segment_padding_left, config.segment_padding_right)
-
- if config.non_speech_strategy != NonSpeechStrategy.SKIP:
- # Expand segments to include the gaps between them
- if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT):
-                # When we have a prompt window, we create speech segments between each segment if we exceed the merge size
- merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size)
- elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT:
- # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment)
- merged = self.expand_gaps(merged, total_duration=total_duration)
- else:
- raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy))
-
- print("Transcribing non-speech:")
- pprint(merged)
- return merged
-
- def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig,
- progressListener: ProgressListener = None):
- """
-        Transcribe the given audio file.
-
- Parameters
- ----------
- audio: str
- The audio file.
- whisperCallable: WhisperCallback
- A callback object to call to transcribe each segment.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
-
- try:
- max_audio_duration = self.get_audio_duration(audio, config)
- timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration)
-
- # Get speech timestamps from full audio file
- merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration)
-
- # A deque of transcribed segments that is passed to the next segment as a prompt
- prompt_window = deque()
-
- print("Processing timestamps:")
- pprint(merged)
-
- result = {
- 'text': "",
- 'segments': [],
- 'language': ""
- }
- languageCounter = Counter()
- detected_language = None
-
- segment_index = config.initial_segment_index
-
- # Calculate progress
- progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0
- progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged])
-
- # For each time segment, run whisper
- for segment in merged:
- segment_index += 1
- segment_start = segment['start']
- segment_end = segment['end']
- segment_expand_amount = segment.get('expand_amount', 0)
- segment_gap = segment.get('gap', False)
-
- segment_duration = segment_end - segment_start
-
- if segment_duration < MIN_SEGMENT_DURATION:
- continue
-
- # Audio to run on Whisper
- segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration))
- # Previous segments to use as a prompt
- segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None
-
- # Detected language
- detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None
-
- print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ",
- segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language)
-
- perf_start_time = time.perf_counter()
-
- scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration,
- sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration)
- segment_result = whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener)
-
- perf_end_time = time.perf_counter()
- print("Whisper took {} seconds".format(perf_end_time - perf_start_time))
-
- adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration)
-
- # Propagate expand amount to the segments
- if (segment_expand_amount > 0):
- segment_without_expansion = segment_duration - segment_expand_amount
-
- for adjusted_segment in adjusted_segments:
- adjusted_segment_end = adjusted_segment['end']
-
- # Add expand amount if the segment got expanded
- if (adjusted_segment_end > segment_without_expansion):
- adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion
-
- # Append to output
- result['text'] += segment_result['text']
- result['segments'].extend(adjusted_segments)
-
- # Increment detected language
- if not segment_gap:
- languageCounter[segment_result['language']] += 1
-
- # Update prompt window
- self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config)
-
- if detected_language is not None:
- result['language'] = detected_language
- finally:
- # Notify progress listener that we are done
- if progressListener is not None:
- progressListener.on_finished()
- return result
-
- def get_audio_duration(self, audio: str, config: TranscriptionConfig):
- return get_audio_duration(audio)
-
- def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig):
- if (config.max_prompt_window is not None and config.max_prompt_window > 0):
- # Add segments to the current prompt window (unless it is a speech gap)
- if not segment_gap:
- for segment in adjusted_segments:
- if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB:
- prompt_window.append(segment)
-
- while (len(prompt_window) > 0):
- first_end_time = prompt_window[0].get('end', 0)
- # Time expanded in the segments should be discounted from the prompt window
- first_expand_time = prompt_window[0].get('expand_amount', 0)
-
- if (first_end_time - first_expand_time < segment_end - config.max_prompt_window):
- prompt_window.popleft()
- else:
- break
-
- def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float):
- result = []
- last_end_time = 0
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- if (last_end_time != segment_start):
- delta = segment_start - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } )
-
- last_end_time = segment_end
- result.append(segment)
-
- # Also include total duration if specified
- if (total_duration is not None and last_end_time < total_duration):
-            delta = total_duration - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } )
-
- return result
-
- # Expand the end time of each segment to the start of the next segment
- def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- # Expand if the gap actually exists
- if (delta >= 0):
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
-
- result.append(current_segment)
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- if (last_segment['end'] < total_duration):
- last_segment = last_segment.copy()
- last_segment['end'] = total_duration
- result[-1] = last_segment
-
- return result
-
- def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- expanded = False
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- if (max_expand_size is not None and delta <= max_expand_size):
- # Just expand the current segment
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
- expanded = True
-
- result.append(current_segment)
-
- # Add a gap to the next segment if needed
- if (delta >= 0 and not expanded):
- result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } )
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- delta = total_duration - last_segment['end']
-
- if (delta > 0):
- if (max_expand_size is not None and delta <= max_expand_size):
- # Expand the last segment
- last_segment = last_segment.copy()
- last_segment['expand_amount'] = delta
- last_segment['end'] = total_duration
- result[-1] = last_segment
- else:
- result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } )
-
- return result
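
An illustrative sketch (not from the original file) of the `fill_gaps` behaviour: with a maximum expand size, small inter-segment gaps are absorbed into the previous segment, while larger ones become explicit `{'gap': True}` entries.

```python
segments = [{'start': 0.0, 'end': 1.0}, {'start': 1.25, 'end': 2.0}, {'start': 5.0, 'end': 6.0}]
max_expand_size = 0.5

result = []
for current, nxt in zip(segments, segments[1:]):
    delta = nxt['start'] - current['end']
    if delta <= max_expand_size:
        # Small gap: stretch the current segment up to the next one
        result.append({**current, 'end': nxt['start'], 'expand_amount': delta})
    else:
        # Large gap: keep the segment as-is and record the gap separately
        result.append(current)
        result.append({'start': current['end'], 'end': nxt['start'], 'gap': True})
result.append(segments[-1])

print(result)
# [{'start': 0.0, 'end': 1.25, 'expand_amount': 0.25}, {'start': 1.25, 'end': 2.0},
#  {'start': 2.0, 'end': 5.0, 'gap': True}, {'start': 5.0, 'end': 6.0}]
```
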
-
- def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None):
- result = []
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- # Filter segments?
- if (max_source_time is not None):
- if (segment_start > max_source_time):
- continue
- segment_end = min(max_source_time, segment_end)
-
- new_segment = segment.copy()
-
- # Add to start and end
- new_segment['start'] = segment_start + adjust_seconds
- new_segment['end'] = segment_end + adjust_seconds
- result.append(new_segment)
- return result
-
- def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float):
- result = []
-
- for entry in timestamps:
- start = entry['start']
- end = entry['end']
-
- result.append({
- 'start': start * factor,
- 'end': end * factor
- })
- return result
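
A small sketch (not from the original file) of what `multiply_timestamps` is used for above: Silero VAD reports timestamps in sample indices, and multiplying by `1 / sampling_rate` converts them to seconds.

```python
sampling_rate = 16000
sample_timestamps = [{'start': 8000, 'end': 48000}, {'start': 64000, 'end': 80000}]
seconds_timestamps = [
    {'start': t['start'] / sampling_rate, 'end': t['end'] / sampling_rate}
    for t in sample_timestamps
]
print(seconds_timestamps)  # [{'start': 0.5, 'end': 3.0}, {'start': 4.0, 'end': 5.0}]
```
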
-
-
-class VadSileroTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None):
- super().__init__(sampling_rate=sampling_rate)
- self.model = None
- self.cache = cache
- self._initialize_model()
-
- def _initialize_model(self):
- if (self.cache is not None):
- model_key = "VadSileroTranscription"
- self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model)
- print("Loaded Silerio model from cache.")
- else:
- self.model, self.get_speech_timestamps = self._create_model()
- print("Created Silerio model")
-
- def _create_model(self):
- model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
-
- # Silero does not benefit from multi-threading
- torch.set_num_threads(1) # JIT
- (get_speech_timestamps, _, _, _, _) = utils
-
- return model, get_speech_timestamps
-
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
- result = []
-
- print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time))
- perf_start_time = time.perf_counter()
-
- # Divide processing of the audio into chunks
- chunk_start = start_time
-
- while (chunk_start < end_time):
- chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK)
-
- print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration)))
- wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration))
-
- sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD)
- seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate)
- adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration)
-
- #pprint(adjusted)
-
- result.extend(adjusted)
- chunk_start += chunk_duration
-
- perf_end_time = time.perf_counter()
- print("VAD processing took {} seconds".format(perf_end_time - perf_start_time))
-
- return result
-
- def __getstate__(self):
- # We only need the sampling rate
- return { 'sampling_rate': self.sampling_rate }
-
- def __setstate__(self, state):
- self.sampling_rate = state['sampling_rate']
- self.model = None
- # Use the global cache
- self.cache = GLOBAL_MODEL_CACHE
- self._initialize_model()
-
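
A minimal standalone sketch of the same Silero VAD flow, assuming the `snakers4/silero-vad` torch.hub API (model plus utils tuple) as documented in that repository; the input file name is hypothetical.

```python
import torch

model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
(get_speech_timestamps, _, read_audio, _, _) = utils

wav = read_audio('example.wav', sampling_rate=16000)  # hypothetical input file
speech_timestamps = get_speech_timestamps(wav, model, sampling_rate=16000, threshold=0.3)
print(speech_timestamps)  # e.g. [{'start': 8000, 'end': 48000}, ...] in sample indices
```
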
-# A very simple VAD that just marks every N seconds as speech
-class VadPeriodicTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000):
- super().__init__(sampling_rate=sampling_rate)
-
- def is_transcribe_timestamps_fast(self):
- # This is a very fast VAD - no need to parallelize it
- return True
-
- def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float):
- result = []
-
- # Generate a timestamp every N seconds
- start_timestamp = start_time
-
- while (start_timestamp < end_time):
- end_timestamp = min(start_timestamp + config.periodic_duration, end_time)
- segment_duration = end_timestamp - start_timestamp
-
- # Minimum duration is 1 second
- if (segment_duration >= 1):
- result.append( { 'start': start_timestamp, 'end': end_timestamp } )
-
- start_timestamp = end_timestamp
-
- return result
-
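
An illustrative sketch (not from the original file) of the periodic "VAD": it simply marks fixed-length windows as speech, skipping sub-second tails.

```python
periodic_duration = 30.0
start_time, end_time = 0.0, 95.0

timestamps = []
chunk_start = start_time
while chunk_start < end_time:
    chunk_end = min(chunk_start + periodic_duration, end_time)
    if chunk_end - chunk_start >= 1:  # minimum duration is 1 second, as above
        timestamps.append({'start': chunk_start, 'end': chunk_end})
    chunk_start = chunk_end

print(timestamps)
# [{'start': 0.0, 'end': 30.0}, {'start': 30.0, 'end': 60.0},
#  {'start': 60.0, 'end': 90.0}, {'start': 90.0, 'end': 95.0}]
```
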
-def get_audio_duration(file: str):
- return float(ffmpeg.probe(file)["format"]["duration"])
-
-def load_audio(file: str, sample_rate: int = 16000,
- start_time: str = None, duration: str = None):
- """
- Open an audio file and read as mono waveform, resampling as necessary
-
- Parameters
- ----------
- file: str
- The audio file to open
-
- sample_rate: int
- The sample rate to resample the audio if necessary
-
- start_time: str
- The start time, using the standard FFMPEG time duration syntax, or None to disable.
-
- duration: str
- The duration, using the standard FFMPEG time duration syntax, or None to disable.
-
- Returns
- -------
- A NumPy array containing the audio waveform, in float32 dtype.
- """
- try:
- inputArgs = {'threads': 0}
-
- if (start_time is not None):
- inputArgs['ss'] = start_time
- if (duration is not None):
- inputArgs['t'] = duration
-
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
- out, _ = (
- ffmpeg.input(file, **inputArgs)
- .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate)
- .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True)
- )
- except ffmpeg.Error as e:
- raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}")
-
- return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0
\ No newline at end of file
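
A usage sketch of the same ffmpeg-python decoding pattern (the file name is hypothetical): trim 30 seconds of audio starting at 00:01:00 and decode it to a 16 kHz mono float32 array.

```python
import ffmpeg
import numpy as np

out, _ = (
    ffmpeg.input("example.mp3", ss="00:01:00", t="30", threads=0)
    .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=16000)
    .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True)
)
audio = np.frombuffer(out, np.int16).astype(np.float32) / 32768.0
print(audio.shape)  # roughly (30 * 16000,) samples
```
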
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/sam.py b/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/sam.py
deleted file mode 100644
index 67837d93b58745114ea1b32fcae4a9826aa2a37a..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/sam.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import os
-from typing import List
-
-import numpy as np
-import onnxruntime as ort
-import pooch
-from PIL import Image
-from PIL.Image import Image as PILImage
-
-from .base import BaseSession
-
-
-def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int):
- scale = long_side_length * 1.0 / max(oldh, oldw)
- newh, neww = oldh * scale, oldw * scale
- neww = int(neww + 0.5)
- newh = int(newh + 0.5)
- return (newh, neww)
-
-
-def apply_coords(coords: np.ndarray, original_size, target_length) -> np.ndarray:
- old_h, old_w = original_size
- new_h, new_w = get_preprocess_shape(
- original_size[0], original_size[1], target_length
- )
- coords = coords.copy().astype(float)
- coords[..., 0] = coords[..., 0] * (new_w / old_w)
- coords[..., 1] = coords[..., 1] * (new_h / old_h)
- return coords
-
-
-def resize_longest_side(img: PILImage, size=1024):
- w, h = img.size
- if h > w:
- new_h, new_w = size, int(w * size / h)
- else:
- new_h, new_w = int(h * size / w), size
-
- return img.resize((new_w, new_h))
-
-
-def pad_to_square(img: np.ndarray, size=1024):
- h, w = img.shape[:2]
- padh = size - h
- padw = size - w
- img = np.pad(img, ((0, padh), (0, padw), (0, 0)), mode="constant")
- img = img.astype(np.float32)
- return img
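
An illustrative sketch (not from the original file) of the SAM encoder preprocessing chain used below: resize the longest side to 1024, normalize with the fixed pixel statistics, pad to a 1024x1024 square, and move channels first.

```python
import numpy as np
from PIL import Image

img = Image.new("RGB", (640, 480), color=(128, 128, 128))  # stand-in input image

# Resize so the longest side becomes 1024
w, h = img.size
scale = 1024 / max(w, h)
resized = img.resize((int(w * scale), int(h * scale)))

# Normalize with the same mean/std as the class below
x = np.array(resized).astype(np.float32)
x = (x - np.array([123.675, 116.28, 103.53])) / np.array([58.395, 57.12, 57.375])

# Pad bottom/right to a square and switch to NCHW for the ONNX encoder
pad_h, pad_w = 1024 - x.shape[0], 1024 - x.shape[1]
x = np.pad(x, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant").astype(np.float32)
x = x.transpose(2, 0, 1)[None]
print(x.shape)  # (1, 3, 1024, 1024)
```
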
-
-
-class SamSession(BaseSession):
- def __init__(self, model_name: str, sess_opts: ort.SessionOptions, *args, **kwargs):
- self.model_name = model_name
- paths = self.__class__.download_models()
- self.encoder = ort.InferenceSession(
- str(paths[0]),
- providers=ort.get_available_providers(),
- sess_options=sess_opts,
- )
- self.decoder = ort.InferenceSession(
- str(paths[1]),
- providers=ort.get_available_providers(),
- sess_options=sess_opts,
- )
-
- def normalize(
- self,
- img: np.ndarray,
- mean=(123.675, 116.28, 103.53),
- std=(58.395, 57.12, 57.375),
- size=(1024, 1024),
- *args,
- **kwargs,
- ):
- pixel_mean = np.array([*mean]).reshape(1, 1, -1)
- pixel_std = np.array([*std]).reshape(1, 1, -1)
- x = (img - pixel_mean) / pixel_std
- return x
-
- def predict(
- self,
- img: PILImage,
- *args,
- **kwargs,
- ) -> List[PILImage]:
- # Preprocess image
- image = resize_longest_side(img)
- image = np.array(image)
- image = self.normalize(image)
- image = pad_to_square(image)
-
- input_labels = kwargs.get("input_labels")
- input_points = kwargs.get("input_points")
-
- if input_labels is None:
- raise ValueError("input_labels is required")
- if input_points is None:
- raise ValueError("input_points is required")
-
- # Transpose
- image = image.transpose(2, 0, 1)[None, :, :, :]
- # Run encoder (Image embedding)
- encoded = self.encoder.run(None, {"x": image})
- image_embedding = encoded[0]
-
- # Add a batch index, concatenate a padding point, and transform.
- onnx_coord = np.concatenate([input_points, np.array([[0.0, 0.0]])], axis=0)[
- None, :, :
- ]
- onnx_label = np.concatenate([input_labels, np.array([-1])], axis=0)[
- None, :
- ].astype(np.float32)
- onnx_coord = apply_coords(onnx_coord, img.size[::-1], 1024).astype(np.float32)  # (h, w) order expected by apply_coords
-
- # Create an empty mask input and an indicator for no mask.
- onnx_mask_input = np.zeros((1, 1, 256, 256), dtype=np.float32)
- onnx_has_mask_input = np.zeros(1, dtype=np.float32)
-
- decoder_inputs = {
- "image_embeddings": image_embedding,
- "point_coords": onnx_coord,
- "point_labels": onnx_label,
- "mask_input": onnx_mask_input,
- "has_mask_input": onnx_has_mask_input,
- "orig_im_size": np.array(img.size[::-1], dtype=np.float32),
- }
-
- masks, _, low_res_logits = self.decoder.run(None, decoder_inputs)
- masks = masks > 0.0
- masks = [
- Image.fromarray((masks[i, 0] * 255).astype(np.uint8))
- for i in range(masks.shape[0])
- ]
-
- return masks
-
- @classmethod
- def download_models(cls, *args, **kwargs):
- fname_encoder = f"{cls.name()}_encoder.onnx"
- fname_decoder = f"{cls.name()}_decoder.onnx"
-
- pooch.retrieve(
- "https://github.com/danielgatis/rembg/releases/download/v0.0.0/vit_b-encoder-quant.onnx",
- "md5:13d97c5c79ab13ef86d67cbde5f1b250",
- fname=fname_encoder,
- path=cls.u2net_home(),
- progressbar=True,
- )
-
- pooch.retrieve(
- "https://github.com/danielgatis/rembg/releases/download/v0.0.0/vit_b-decoder-quant.onnx",
- "md5:fa3d1c36a3187d3de1c8deebf33dd127",
- fname=fname_decoder,
- path=cls.u2net_home(),
- progressbar=True,
- )
-
- return (
- os.path.join(cls.u2net_home(), fname_encoder),
- os.path.join(cls.u2net_home(), fname_decoder),
- )
-
- @classmethod
- def name(cls, *args, **kwargs):
- return "sam"
diff --git a/spaces/MichaelT8093/Mandarin-TTS/attentions.py b/spaces/MichaelT8093/Mandarin-TTS/attentions.py
deleted file mode 100644
index 84759e83a75dccbf4d9e84c7d4c4141725ba462a..0000000000000000000000000000000000000000
--- a/spaces/MichaelT8093/Mandarin-TTS/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=4,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along the column dimension
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
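
A small sketch (not from the original file) of what the proximal bias looks like: it simply penalizes attention to distant positions by adding `-log(1 + |i - j|)` to the scores.

```python
import torch

r = torch.arange(4, dtype=torch.float32)
diff = r.unsqueeze(0) - r.unsqueeze(1)
bias = -torch.log1p(diff.abs())
# diagonal is 0; entries one/two/three positions away get roughly
# -0.693, -1.099 and -1.386 added to the attention scores
print(bias)
```
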
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
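
An illustrative sketch (not from the original file) of the only difference between the causal and non-causal FFN: where the convolution padding goes.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 10)          # [batch, channels, time], kernel_size = 3
causal = F.pad(x, (2, 0))          # all padding on the left: no future leakage
same = F.pad(x, (1, 1))            # symmetric padding: output aligned with input
print(causal.shape, same.shape)    # torch.Size([1, 8, 12]) torch.Size([1, 8, 12])
```
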
diff --git a/spaces/Mishyface/image-to-video-film-3-kazuk-hugorowan-mishyface/README.md b/spaces/Mishyface/image-to-video-film-3-kazuk-hugorowan-mishyface/README.md
deleted file mode 100644
index c12ba36b26da9a2e9af6ec3fd3425a104c56bf1d..0000000000000000000000000000000000000000
--- a/spaces/Mishyface/image-to-video-film-3-kazuk-hugorowan-mishyface/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Images to Video
-emoji: 👁
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
-license: unknown
-duplicated_from: Hugorowan/image-to-video-film-2-og-by-kazuk
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MohamedSherif/Skin_Cancer_detection/app.py b/spaces/MohamedSherif/Skin_Cancer_detection/app.py
deleted file mode 100644
index cac10694d925787a875bdb89232092be79121f23..0000000000000000000000000000000000000000
--- a/spaces/MohamedSherif/Skin_Cancer_detection/app.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import gradio as gr
-import numpy as np
-from skimage.transform import resize
-from tensorflow.keras.models import Sequential, load_model
-from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Dense, Flatten, BatchNormalization
-
-class SkinCancer :
- def __init__ (self):
- self.model = self.load_model()
-
- def build_model (self) :
- model = Sequential()
- model.add(Conv2D(filters = 128, kernel_size = (4,4), input_shape = (32, 32, 3), activation = 'relu'))
- model.add(MaxPool2D(pool_size = (4,4)))
- model.add(Conv2D(filters = 64, kernel_size = (2,2), activation = 'relu'))
- model.add(MaxPool2D(pool_size = (2,2)))
- model.add(BatchNormalization())
- #model.add(GlobalAveragePooling2D())
- model.add(Flatten())
- model.add(Dense(128, activation = 'relu'))
- model.add(Dropout(0.2))
- model.add(Dense(2, activation = 'sigmoid')) # sigmoid is better for binary classification
- #model.summary()
- return model
-
- def load_model(self):
- model = self.build_model()
- model = load_model("Normal_skin_cancer_model.h5")
- return model
-
- def preprocess_image(self,img):
- img = resize(img, (32,32))
- img = img.reshape(1,32,32,3)
- return img
-
- def predict(self,img):
- real_labels = ["benign", "malignant"]
- img = self.preprocess_image(img)
- res = np.argmax(self.model.predict(img))
- return real_labels[res]
-
-def Test(img):
- model_new = SkinCancer()
- res = model_new.predict(img)
- return res
-#interface
-interface = gr.Interface(fn = Test,
- inputs = gr.inputs.Image(shape=(200,200)),
- outputs=["text"],
- title="Skin Cancer detection")
-interface.launch()
\ No newline at end of file
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/necks/fpnf.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/necks/fpnf.py
deleted file mode 100644
index 17887e66b8c74b1f60383479e5df8f01b528a40b..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/necks/fpnf.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, List, Optional, Union
-
-import torch
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-from mmengine.model import BaseModule, ModuleList
-from torch import Tensor
-
-from mmocr.registry import MODELS
-
-
-@MODELS.register_module()
-class FPNF(BaseModule):
- """FPN-like fusion module in Shape Robust Text Detection with Progressive
- Scale Expansion Network.
-
- Args:
- in_channels (list[int]): A list of number of input channels.
- Defaults to [256, 512, 1024, 2048].
- out_channels (int): The number of output channels.
- Defaults to 256.
- fusion_type (str): Type of the final feature fusion layer. Available
- options are "concat" and "add". Defaults to "concat".
- init_cfg (dict or list[dict], optional): Initialization configs.
- Defaults to
- dict(type='Xavier', layer='Conv2d', distribution='uniform')
- """
-
- def __init__(
- self,
- in_channels: List[int] = [256, 512, 1024, 2048],
- out_channels: int = 256,
- fusion_type: str = 'concat',
- init_cfg: Optional[Union[Dict, List[Dict]]] = dict(
- type='Xavier', layer='Conv2d', distribution='uniform')
- ) -> None:
- super().__init__(init_cfg=init_cfg)
- conv_cfg = None
- norm_cfg = dict(type='BN')
- act_cfg = dict(type='ReLU')
-
- self.in_channels = in_channels
- self.out_channels = out_channels
-
- self.lateral_convs = ModuleList()
- self.fpn_convs = ModuleList()
- self.backbone_end_level = len(in_channels)
- for i in range(self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.lateral_convs.append(l_conv)
-
- if i < self.backbone_end_level - 1:
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(fpn_conv)
-
- self.fusion_type = fusion_type
-
- if self.fusion_type == 'concat':
- feature_channels = 1024
- elif self.fusion_type == 'add':
- feature_channels = 256
- else:
- raise NotImplementedError
-
- self.output_convs = ConvModule(
- feature_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=None,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- def forward(self, inputs: List[Tensor]) -> Tensor:
- """
- Args:
- inputs (list[Tensor]): Each tensor has the shape of
- :math:`(N, C_i, H_i, W_i)`. It usually expects 4 tensors
- (C2-C5 features) from ResNet.
-
- Returns:
- Tensor: A tensor of shape :math:`(N, C_{out}, H_0, W_0)` where
- :math:`C_{out}` is ``out_channels``.
- """
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # step 1: upsample to level i-1 size and add level i-1
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] = laterals[i - 1] + F.interpolate(
- laterals[i], size=prev_shape, mode='nearest')
- # step 2: smooth level i-1
- laterals[i - 1] = self.fpn_convs[i - 1](laterals[i - 1])
-
- # upsample and cat
- bottom_shape = laterals[0].shape[2:]
- for i in range(1, used_backbone_levels):
- laterals[i] = F.interpolate(
- laterals[i], size=bottom_shape, mode='nearest')
-
- if self.fusion_type == 'concat':
- out = torch.cat(laterals, 1)
- elif self.fusion_type == 'add':
- out = laterals[0]
- for i in range(1, used_backbone_levels):
- out += laterals[i]
- else:
- raise NotImplementedError
- out = self.output_convs(out)
-
- return out
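
An illustrative sketch (not from the original file) of why 'concat' fusion needs a 1024-channel output conv: four 256-channel lateral maps are upsampled to the C2 resolution and concatenated along the channel axis.

```python
import torch
import torch.nn.functional as F

laterals = [torch.randn(1, 256, s, s) for s in (64, 32, 16, 8)]  # after 1x1 lateral convs
bottom = laterals[0].shape[2:]
upsampled = [laterals[0]] + [
    F.interpolate(l, size=bottom, mode="nearest") for l in laterals[1:]
]
fused = torch.cat(upsampled, dim=1)
print(fused.shape)  # torch.Size([1, 1024, 64, 64]); 'add' fusion would keep 256 channels
```
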
diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/div_utils.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/div_utils.py
deleted file mode 100644
index a757eb7b2184767f8ea2351b30cce6601a45be78..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/div_utils.py
+++ /dev/null
@@ -1,38 +0,0 @@
-from random import uniform
-import numpy as np
-from collections import OrderedDict, defaultdict
-from itertools import tee
-import time
-
-# -----------------------------------------------
-def find_ngrams(input_list, n):
- return zip(*[input_list[i:] for i in range(n)])
-
-def compute_div_n(caps,n=1):
- aggr_div = []
- for k in caps:
- all_ngrams = set()
- lenT = 0.
- for c in caps[k]:
- tkns = c.split()
- lenT += len(tkns)
- ng = find_ngrams(tkns, n)
- all_ngrams.update(ng)
- aggr_div.append(float(len(all_ngrams))/ (1e-6 + float(lenT)))
- return np.array(aggr_div).mean(), np.array(aggr_div)
-
-def compute_global_div_n(caps,n=1):
- aggr_div = []
- all_ngrams = set()
- lenT = 0.
- for k in caps:
- for c in caps[k]:
- tkns = c.split()
- lenT += len(tkns)
- ng = find_ngrams(tkns, n)
- all_ngrams.update(ng)
- if n == 1:
- aggr_div.append(float(len(all_ngrams)))
- else:
- aggr_div.append(float(len(all_ngrams))/ (1e-6 + float(lenT)))
- return aggr_div[0], np.repeat(np.array(aggr_div),len(caps))
\ No newline at end of file
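
A minimal sketch (not from the original file) of the per-set diversity score computed above: the number of distinct n-grams divided by the total token count.

```python
def distinct_n(captions, n=1):
    ngrams, total_tokens = set(), 0
    for caption in captions:
        tokens = caption.split()
        total_tokens += len(tokens)
        ngrams.update(zip(*[tokens[i:] for i in range(n)]))
    return len(ngrams) / (1e-6 + total_tokens)

caps = ["a dog runs on the grass", "a dog is running on grass"]
print(round(distinct_n(caps, n=1), 3))  # 8 unique unigrams / 12 tokens ≈ 0.667
```
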
diff --git a/spaces/NATSpeech/DiffSpeech/inference/tts/ds.py b/spaces/NATSpeech/DiffSpeech/inference/tts/ds.py
deleted file mode 100644
index 04b5b4925bfcbfc0e05732054fd3746f1e89bf02..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/DiffSpeech/inference/tts/ds.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import torch
-# from inference.tts.fs import FastSpeechInfer
-# from modules.tts.fs2_orig import FastSpeech2Orig
-from inference.tts.base_tts_infer import BaseTTSInfer
-from modules.tts.diffspeech.shallow_diffusion_tts import GaussianDiffusion
-from utils.commons.ckpt_utils import load_ckpt
-from utils.commons.hparams import hparams
-
-
-class DiffSpeechInfer(BaseTTSInfer):
- def build_model(self):
- dict_size = len(self.ph_encoder)
- model = GaussianDiffusion(dict_size, self.hparams)
- model.eval()
- load_ckpt(model, hparams['work_dir'], 'model')
- return model
-
- def forward_model(self, inp):
- sample = self.input_to_batch(inp)
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- spk_id = sample.get('spk_ids')
- with torch.no_grad():
- output = self.model(txt_tokens, spk_id=spk_id, ref_mels=None, infer=True)
- mel_out = output['mel_out']
- wav_out = self.run_vocoder(mel_out)
- wav_out = wav_out.cpu().numpy()
- return wav_out[0]
-
-if __name__ == '__main__':
- DiffSpeechInfer.example_run()
diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/adversarial_attack.py b/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/adversarial_attack.py
deleted file mode 100644
index 804bd64bcf4444007638f9802a83973ee68eb3cf..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_logit_pairing/adversarial_attack.py
+++ /dev/null
@@ -1,219 +0,0 @@
-# Copyright 2018 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Library with adversarial attacks.
-
-This library is designed to be self-contained, with no dependencies other
-than TensorFlow. It only contains PGD / Iterative FGSM attacks,
-see https://arxiv.org/abs/1706.06083 and https://arxiv.org/abs/1607.02533
-for details.
-
-For wider set of adversarial attacks refer to Cleverhans library:
-https://github.com/tensorflow/cleverhans
-"""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import tensorflow as tf
-
-
-def generate_pgd_common(x,
- bounds,
- model_fn,
- attack_params,
- one_hot_labels,
- perturbation_multiplier):
- """Common code for generating PGD adversarial examples.
-
- Args:
- x: original examples.
- bounds: tuple with bounds of image values, bounds[0] < bounds[1].
- model_fn: model function with signature model_fn(images).
- attack_params: parameters of the attack.
- one_hot_labels: one hot label vector to use in the loss.
- perturbation_multiplier: multiplier of adversarial perturbation,
- either +1.0 or -1.0.
-
- Returns:
- Tensor with adversarial examples.
-
- Raises:
- ValueError: if attack parameters are invalid.
- """
- # parse attack_params
- # Format of attack_params: 'EPS_STEP_NITER'
- # where EPS - epsilon, STEP - step size, NITER - number of iterations
- params_list = attack_params.split('_')
- if len(params_list) != 3:
- raise ValueError('Invalid parameters of PGD attack: %s' % attack_params)
- epsilon = int(params_list[0])
- step_size = int(params_list[1])
- niter = int(params_list[2])
-
- # rescale epsilon and step size to image bounds
- epsilon = float(epsilon) / 255.0 * (bounds[1] - bounds[0])
- step_size = float(step_size) / 255.0 * (bounds[1] - bounds[0])
-
- # clipping boundaries
- clip_min = tf.maximum(x - epsilon, bounds[0])
- clip_max = tf.minimum(x + epsilon, bounds[1])
-
- # compute starting point
- start_x = x + tf.random_uniform(tf.shape(x), -epsilon, epsilon)
- start_x = tf.clip_by_value(start_x, clip_min, clip_max)
-
- # main iteration of PGD
- loop_vars = [0, start_x]
-
- def loop_cond(index, _):
- return index < niter
-
- def loop_body(index, adv_images):
- logits = model_fn(adv_images)
- loss = tf.reduce_sum(
- tf.nn.softmax_cross_entropy_with_logits_v2(
- labels=one_hot_labels,
- logits=logits))
- perturbation = step_size * tf.sign(tf.gradients(loss, adv_images)[0])
- new_adv_images = adv_images + perturbation_multiplier * perturbation
- new_adv_images = tf.clip_by_value(new_adv_images, clip_min, clip_max)
- return index + 1, new_adv_images
-
- with tf.control_dependencies([start_x]):
- _, result = tf.while_loop(
- loop_cond,
- loop_body,
- loop_vars,
- back_prop=False,
- parallel_iterations=1)
- return result
-
-
-def generate_pgd_ll(x, bounds, model_fn, attack_params):
- # pylint: disable=g-doc-args
- """Generats targeted PGD adversarial examples with least likely target class.
-
- See generate_pgd_common for description of arguments.
-
- Returns:
- Tensor with adversarial examples.
- """
- # pylint: enable=g-doc-args
-
- # compute one hot least likely class
- logits = model_fn(x)
- num_classes = tf.shape(logits)[1]
- one_hot_labels = tf.one_hot(tf.argmin(model_fn(x), axis=1), num_classes)
-
- return generate_pgd_common(x, bounds, model_fn, attack_params,
- one_hot_labels=one_hot_labels,
- perturbation_multiplier=-1.0)
-
-
-def generate_pgd_rand(x, bounds, model_fn, attack_params):
- # pylint: disable=g-doc-args
- """Generats targeted PGD adversarial examples with random target class.
-
- See generate_pgd_common for description of arguments.
-
- Returns:
- Tensor with adversarial examples.
- """
- # pylint: enable=g-doc-args
-
- # compute one hot random class
- logits = model_fn(x)
- batch_size = tf.shape(logits)[0]
- num_classes = tf.shape(logits)[1]
- random_labels = tf.random_uniform(shape=[batch_size],
- minval=0,
- maxval=num_classes,
- dtype=tf.int32)
- one_hot_labels = tf.one_hot(random_labels, num_classes)
-
- return generate_pgd_common(x, bounds, model_fn, attack_params,
- one_hot_labels=one_hot_labels,
- perturbation_multiplier=-1.0)
-
-
-def generate_pgd(x, bounds, model_fn, attack_params):
- # pylint: disable=g-doc-args
- """Generats non-targeted PGD adversarial examples.
-
- See generate_pgd_common for description of arguments.
-
- Returns:
- tensor with adversarial examples.
- """
- # pylint: enable=g-doc-args
-
- # compute one hot predicted class
- logits = model_fn(x)
- num_classes = tf.shape(logits)[1]
- one_hot_labels = tf.one_hot(tf.argmax(model_fn(x), axis=1), num_classes)
-
- return generate_pgd_common(x, bounds, model_fn, attack_params,
- one_hot_labels=one_hot_labels,
- perturbation_multiplier=1.0)
-
-
-def generate_adversarial_examples(x, bounds, model_fn, attack_description):
- """Generates adversarial examples.
-
- Args:
- x: original examples.
- bounds: tuple with bounds of image values, bounds[0] < bounds[1]
- model_fn: model function with signature model_fn(images).
- attack_description: string which describes an attack, see notes below for
- details.
-
- Returns:
- Tensor with adversarial examples.
-
- Raises:
- ValueError: if attack description is invalid.
-
-
- Attack description could be one of the following strings:
- - "clean" - no attack, return original images.
- - "pgd_EPS_STEP_NITER" - non-targeted PGD attack.
- - "pgdll_EPS_STEP_NITER" - tageted PGD attack with least likely target class.
- - "pgdrnd_EPS_STEP_NITER" - targetd PGD attack with random target class.
-
- Meaning of attack parameters is following:
- - EPS - maximum size of adversarial perturbation, between 0 and 255.
- - STEP - step size of one iteration of PGD, between 0 and 255.
- - NITER - number of iterations.
- """
- if attack_description == 'clean':
- return x
- idx = attack_description.find('_')
- if idx < 0:
- raise ValueError('Invalid value of attack description %s'
- % attack_description)
- attack_name = attack_description[:idx]
- attack_params = attack_description[idx+1:]
- if attack_name == 'pgdll':
- return generate_pgd_ll(x, bounds, model_fn, attack_params)
- elif attack_name == 'pgdrnd':
- return generate_pgd_rand(x, bounds, model_fn, attack_params)
- elif attack_name == 'pgd':
- return generate_pgd(x, bounds, model_fn, attack_params)
- else:
- raise ValueError('Invalid value of attack description %s'
- % attack_description)
-
diff --git a/spaces/NCTCMumbai/NCTC/models/research/autoencoder/AutoencoderRunner.py b/spaces/NCTCMumbai/NCTC/models/research/autoencoder/AutoencoderRunner.py
deleted file mode 100644
index 7f1ab2ecd5a91c12960714ea79a864631e634f8c..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/autoencoder/AutoencoderRunner.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import numpy as np
-import sklearn.preprocessing as prep
-import tensorflow as tf
-from tensorflow.examples.tutorials.mnist import input_data
-
-from autoencoder_models.Autoencoder import Autoencoder
-
-mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
-
-
-def standard_scale(X_train, X_test):
- preprocessor = prep.StandardScaler().fit(X_train)
- X_train = preprocessor.transform(X_train)
- X_test = preprocessor.transform(X_test)
- return X_train, X_test
-
-
-def get_random_block_from_data(data, batch_size):
- start_index = np.random.randint(0, len(data) - batch_size)
- return data[start_index:(start_index + batch_size)]
-
-
-X_train, X_test = standard_scale(mnist.train.images, mnist.test.images)
-
-n_samples = int(mnist.train.num_examples)
-training_epochs = 20
-batch_size = 128
-display_step = 1
-
-autoencoder = Autoencoder(n_layers=[784, 200],
- transfer_function = tf.nn.softplus,
- optimizer = tf.train.AdamOptimizer(learning_rate = 0.001))
-
-for epoch in range(training_epochs):
- avg_cost = 0.
- total_batch = int(n_samples / batch_size)
- # Loop over all batches
- for i in range(total_batch):
- batch_xs = get_random_block_from_data(X_train, batch_size)
-
- # Fit training using batch data
- cost = autoencoder.partial_fit(batch_xs)
- # Compute average loss
- avg_cost += cost / n_samples * batch_size
-
- # Display logs per epoch step
- if epoch % display_step == 0:
- print("Epoch:", '%d,' % (epoch + 1),
- "Cost:", "{:.9f}".format(avg_cost))
-
-print("Total cost: " + str(autoencoder.calc_total_cost(X_test)))
diff --git a/spaces/Nicholaspei/LangChain-ChatLLM/chinese_text_splitter.py b/spaces/Nicholaspei/LangChain-ChatLLM/chinese_text_splitter.py
deleted file mode 100644
index 47f4f87678e0b76e3b26010f9856b487cc3261eb..0000000000000000000000000000000000000000
--- a/spaces/Nicholaspei/LangChain-ChatLLM/chinese_text_splitter.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import re
-from typing import List
-
-from langchain.text_splitter import CharacterTextSplitter
-
-
-class ChineseTextSplitter(CharacterTextSplitter):
- def __init__(self, pdf: bool = False, **kwargs):
- super().__init__(**kwargs)
- self.pdf = pdf
-
- def split_text(self, text: str) -> List[str]:
- if self.pdf:
- text = re.sub(r"\n{3,}", "\n", text)
- text = re.sub(r"\s", " ", text)
- text = text.replace("\n\n", "")
- sent_sep_pattern = re.compile('([﹒﹔﹖﹗.。!?]["’”」』]{0,2}|(?=["‘“「『]{1,2}|$))') # del :;
- sent_list = []
- for ele in sent_sep_pattern.split(text):
- if sent_sep_pattern.match(ele) and sent_list:
- sent_list[-1] += ele
- elif ele:
- sent_list.append(ele)
- return sent_list
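
A usage sketch with hypothetical input text, assuming the langchain `CharacterTextSplitter` base class imported above is installed; the splitter breaks Chinese text at sentence-ending punctuation.

```python
splitter = ChineseTextSplitter(pdf=False)
text = "今天天气不错。我们出去走走吧!好的?"
print(splitter.split_text(text))
# expected: ['今天天气不错。', '我们出去走走吧!', '好的?']
```
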
diff --git a/spaces/OAOA/DifFace/datapipe/__init__.py b/spaces/OAOA/DifFace/datapipe/__init__.py
deleted file mode 100644
index 4603be1efff8ba58257a1b0658a36dfe828c487b..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/datapipe/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-#!/usr/bin/env python
-# -*- coding:utf-8 -*-
-# Power by Zongsheng Yue 2022-06-07 17:27:22
-
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/__init__.py
deleted file mode 100644
index 4dbf46a1cb31ce65c4224ae79cbc2d7cf9e4d111..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/__init__.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""isort:skip_file"""
-
-import importlib
-import os
-
-from fairseq import registry
-from fairseq.criterions.fairseq_criterion import ( # noqa
- FairseqCriterion,
- LegacyFairseqCriterion,
-)
-from omegaconf import DictConfig
-
-
-(
- build_criterion_,
- register_criterion,
- CRITERION_REGISTRY,
- CRITERION_DATACLASS_REGISTRY,
-) = registry.setup_registry(
- "--criterion", base_class=FairseqCriterion, default="cross_entropy"
-)
-
-
-def build_criterion(cfg: DictConfig, task):
- return build_criterion_(cfg, task)
-
-
-# automatically import any Python files in the criterions/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- file_name = file[: file.find(".py")]
- importlib.import_module("fairseq.criterions." + file_name)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/model_parallel/models/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/model_parallel/models/__init__.py
deleted file mode 100644
index 3532479e52a0e1f1ba204c6f5d51c71c98ee5df0..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/model_parallel/models/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-import os
-
-
-# automatically import any Python files in the models/ directory
-models_dir = os.path.dirname(__file__)
-for file in os.listdir(models_dir):
- path = os.path.join(models_dir, file)
- if (
- not file.startswith("_")
- and not file.startswith(".")
- and (file.endswith(".py") or os.path.isdir(path))
- ):
- model_name = file[: file.find(".py")] if file.endswith(".py") else file
- module = importlib.import_module("fairseq.model_parallel.models." + model_name)
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/backtranslation/extract_bt_data.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/backtranslation/extract_bt_data.py
deleted file mode 100644
index e766391e873d0d9a9561d67d5864934b2fad0681..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/backtranslation/extract_bt_data.py
+++ /dev/null
@@ -1,72 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import fileinput
-
-from tqdm import tqdm
-
-
-def main():
- parser = argparse.ArgumentParser(
- description=(
- "Extract back-translations from the stdout of fairseq-generate. "
- "If there are multiply hypotheses for a source, we only keep the first one. "
- )
- )
- parser.add_argument("--output", required=True, help="output prefix")
- parser.add_argument(
- "--srclang", required=True, help="source language (extracted from H-* lines)"
- )
- parser.add_argument(
- "--tgtlang", required=True, help="target language (extracted from S-* lines)"
- )
- parser.add_argument("--minlen", type=int, help="min length filter")
- parser.add_argument("--maxlen", type=int, help="max length filter")
- parser.add_argument("--ratio", type=float, help="ratio filter")
- parser.add_argument("files", nargs="*", help="input files")
- args = parser.parse_args()
-
- def validate(src, tgt):
- srclen = len(src.split(" ")) if src != "" else 0
- tgtlen = len(tgt.split(" ")) if tgt != "" else 0
- if (
- (args.minlen is not None and (srclen < args.minlen or tgtlen < args.minlen))
- or (
- args.maxlen is not None
- and (srclen > args.maxlen or tgtlen > args.maxlen)
- )
- or (
- args.ratio is not None
- and (max(srclen, tgtlen) / float(min(srclen, tgtlen)) > args.ratio)
- )
- ):
- return False
- return True
-
- def safe_index(toks, index, default):
- try:
- return toks[index]
- except IndexError:
- return default
-
- with open(args.output + "." + args.srclang, "w") as src_h, open(
- args.output + "." + args.tgtlang, "w"
- ) as tgt_h:
- for line in tqdm(fileinput.input(args.files)):
- if line.startswith("S-"):
- tgt = safe_index(line.rstrip().split("\t"), 1, "")
- elif line.startswith("H-"):
- if tgt is not None:
- src = safe_index(line.rstrip().split("\t"), 2, "")
- if validate(src, tgt):
- print(src, file=src_h)
- print(tgt, file=tgt_h)
- tgt = None
-
-
-if __name__ == "__main__":
- main()
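
An illustrative sketch (not from the original file) of the same length/ratio filter applied to a single back-translated sentence pair; the default thresholds here are hypothetical.

```python
def keep_pair(src, tgt, minlen=1, maxlen=250, ratio=1.5):
    srclen = len(src.split()) if src else 0
    tgtlen = len(tgt.split()) if tgt else 0
    if srclen < minlen or tgtlen < minlen:
        return False
    if srclen > maxlen or tgtlen > maxlen:
        return False
    # Reject pairs whose lengths are too far apart
    return max(srclen, tgtlen) / max(min(srclen, tgtlen), 1) <= ratio

print(keep_pair("ein kleiner Test", "a small test sentence"))            # True (ratio 4/3)
print(keep_pair("kurz", "a much longer back-translated sentence"))       # False (ratio 6)
```
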
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/tokenizers/tokenize_thai.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/tokenizers/tokenize_thai.py
deleted file mode 100644
index 9c72cb89056f6fc92a8963415e5f3a1e61b33a5b..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/tokenizers/tokenize_thai.py
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-from pythainlp import word_tokenize
-
-
-for line in sys.stdin:
- print(" ".join(word_tokenize(line.strip())))
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/binarizer.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/binarizer.py
deleted file mode 100644
index ae4d02a6dbbb523b76eb8684e87e38c74fe7c4a1..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/binarizer.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import Counter
-from typing import Dict
-
-import torch
-
-from fairseq.file_chunker_utils import Chunker
-from fairseq.file_io import PathManager
-from fairseq.tokenizer import tokenize_line
-
-
-class Binarizer:
- @staticmethod
- def binarize(
- filename,
- dict,
- consumer,
- tokenize=tokenize_line,
- append_eos=True,
- reverse_order=False,
- offset=0,
- end=-1,
- already_numberized=False,
- ) -> Dict[str, int]:
- nseq, ntok = 0, 0
- replaced = Counter()
-
- def replaced_consumer(word, idx):
- if idx == dict.unk_index and word != dict.unk_word:
- replaced.update([word])
-
- with Chunker(
- PathManager.get_local_path(filename), offset, end
- ) as line_iterator:
- for line in line_iterator:
- if already_numberized:
- id_strings = line.strip().split()
- id_list = [int(id_string) for id_string in id_strings]
- if reverse_order:
- id_list.reverse()
- if append_eos:
- id_list.append(dict.eos())
- ids = torch.IntTensor(id_list)
- else:
- ids = dict.encode_line(
- line=line,
- line_tokenizer=tokenize,
- add_if_not_exist=False,
- consumer=replaced_consumer,
- append_eos=append_eos,
- reverse_order=reverse_order,
- )
- nseq += 1
- ntok += len(ids)
- consumer(ids)
- return {
- "nseq": nseq,
- "nunk": sum(replaced.values()),
- "ntok": ntok,
- "replaced": replaced,
- }
-
- @staticmethod
- def binarize_alignments(
- filename, alignment_parser, consumer, offset=0, end=-1
- ) -> Dict[str, int]:
- nseq = 0
-
- with Chunker(
- PathManager.get_local_path(filename), offset, end
- ) as line_iterator:
- for line in line_iterator:
- ids = alignment_parser(line)
- nseq += 1
- consumer(ids)
- return {"nseq": nseq}
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py
deleted file mode 100644
index eb5d819da5121a243e345b3812292ef0b13ccf98..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py
+++ /dev/null
@@ -1,664 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from argparse import Namespace
-import contextlib
-import copy
-import math
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from dataclasses import dataclass, field
-from omegaconf import MISSING, II, open_dict
-from typing import Any, Optional
-
-from fairseq import checkpoint_utils, tasks, utils
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.tasks import FairseqTask
-from fairseq.models import (
- BaseFairseqModel,
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- FairseqIncrementalDecoder,
- register_model,
-)
-from fairseq.models.wav2vec.wav2vec2 import MASKING_DISTRIBUTION_CHOICES
-from fairseq.modules import (
- LayerNorm,
- PositionalEmbedding,
- TransformerDecoderLayer,
-)
-
-
-@dataclass
-class Wav2Vec2AsrConfig(FairseqDataclass):
- w2v_path: str = field(
- default=MISSING, metadata={"help": "path to wav2vec 2.0 model"}
- )
- no_pretrained_weights: bool = field(
- default=False, metadata={"help": "if true, does not load pretrained weights"}
- )
- dropout_input: float = field(
- default=0.0,
- metadata={"help": "dropout to apply to the input (after feat extr)"},
- )
- final_dropout: float = field(
- default=0.0,
- metadata={"help": "dropout after transformer and before final projection"},
- )
- dropout: float = field(
- default=0.0, metadata={"help": "dropout probability inside wav2vec 2.0 model"}
- )
- attention_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability for attention weights inside wav2vec 2.0 model"
- },
- )
- activation_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability after activation in FFN inside wav2vec 2.0 model"
- },
- )
- conv_feature_layers: Optional[str] = field(
- default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]",
- metadata={
- "help": (
- "string describing convolutional feature extraction "
- "layers in form of a python list that contains "
- "[(dim, kernel_size, stride), ...]"
- ),
- },
- )
- encoder_embed_dim: Optional[int] = field(
- default=768, metadata={"help": "encoder embedding dimension"}
- )
-
- # masking
- apply_mask: bool = field(
- default=False, metadata={"help": "apply masking during fine-tuning"}
- )
- mask_length: int = field(
- default=10, metadata={"help": "repeat the mask indices multiple times"}
- )
- mask_prob: float = field(
- default=0.5,
- metadata={
- "help": "probability of replacing a token with mask (normalized by length)"
- },
- )
- mask_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static", metadata={"help": "how to choose masks"}
- )
- mask_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument (used for more complex distributions), "
- "see help in compute_mask_indices"
- },
- )
- no_mask_overlap: bool = field(
- default=False, metadata={"help": "whether to allow masks to overlap"}
- )
- mask_min_space: Optional[int] = field(
- default=1,
- metadata={"help": "min space between spans (if no overlap is enabled)"},
- )
-
- # channel masking
- mask_channel_length: int = field(
- default=10, metadata={"help": "length of the mask for features (channels)"}
- )
- mask_channel_prob: float = field(
- default=0.0, metadata={"help": "probability of replacing a feature with 0"}
- )
- mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static",
- metadata={"help": "how to choose mask length for channel masking"},
- )
- mask_channel_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument (used for more complex distributions), "
- "see help in compute_mask_indicesh"
- },
- )
- no_mask_channel_overlap: bool = field(
- default=False, metadata={"help": "whether to allow channel masks to overlap"}
- )
- freeze_finetune_updates: int = field(
- default=0, metadata={"help": "dont finetune wav2vec for this many updates"}
- )
- feature_grad_mult: float = field(
- default=0.0, metadata={"help": "reset feature grad mult in wav2vec 2.0 to this"}
- )
- layerdrop: float = field(
- default=0.0, metadata={"help": "probability of dropping a layer in wav2vec 2.0"}
- )
- mask_channel_min_space: Optional[int] = field(
- default=1,
- metadata={"help": "min space between spans (if no overlap is enabled)"},
- )
- mask_channel_before: bool = False
- normalize: bool = II("task.normalize")
- data: str = II("task.data")
- # this holds the loaded wav2vec args
- w2v_args: Any = None
-
-
-@dataclass
-class Wav2Vec2CtcConfig(Wav2Vec2AsrConfig):
- blank_weight: float = 0
- blank_mode: str = "add"
-
-
-@register_model("wav2vec_ctc", dataclass=Wav2Vec2CtcConfig)
-class Wav2VecCtc(BaseFairseqModel):
- def __init__(self, cfg: Wav2Vec2CtcConfig, w2v_encoder: BaseFairseqModel):
- super().__init__()
- self.cfg = cfg
- self.w2v_encoder = w2v_encoder
- self.blank_weight = cfg.blank_weight
- self.blank_mode = cfg.blank_mode
-
- def upgrade_state_dict_named(self, state_dict, name):
- super().upgrade_state_dict_named(state_dict, name)
- return state_dict
-
- @classmethod
- def build_model(cls, cfg: Wav2Vec2CtcConfig, task: FairseqTask):
- """Build a new model instance."""
- w2v_encoder = Wav2VecEncoder(cfg, len(task.target_dictionary))
- return cls(cfg, w2v_encoder)
-
- def get_logits(self, net_output, normalize=False):
- logits = net_output["encoder_out"]
- if self.blank_weight != 0:
- if self.blank_mode == "add":
- logits[..., 0] += self.blank_weight
- elif self.blank_mode == "set":
- logits[..., 0] = self.blank_weight
- else:
- raise Exception(f"invalid blank mode {self.blank_mode}")
-
- if net_output["padding_mask"] is not None and net_output["padding_mask"].any():
- logits[net_output["padding_mask"].T][..., 0] = float("inf")
- logits[net_output["padding_mask"].T][..., 1:] = float("-inf")
-
- if normalize:
- logits = utils.log_softmax(logits.float(), dim=-1)
-
- return logits
-
- def get_normalized_probs(self, net_output, log_probs):
- """Get normalized probabilities (or log probs) from a net's output."""
-
- logits = self.get_logits(net_output)
-
- if log_probs:
- return utils.log_softmax(logits.float(), dim=-1)
- else:
- return utils.softmax(logits.float(), dim=-1)
-
- def forward(self, **kwargs):
- x = self.w2v_encoder(**kwargs)
- return x
-
-
-@dataclass
-class Wav2Vec2Seq2SeqConfig(Wav2Vec2AsrConfig):
- decoder_embed_dim: int = field(
- default=768, metadata={"help": "decoder embedding dimension"}
- )
- decoder_ffn_embed_dim: int = field(
- default=3072, metadata={"help": "decoder embedding dimension for FFN"}
- )
- decoder_layers: int = field(default=6, metadata={"help": "num of decoder layers"})
- decoder_layerdrop: float = field(
- default=0.0, metadata={"help": "decoder layerdrop chance"}
- )
- decoder_attention_heads: int = field(
- default=4, metadata={"help": "num decoder attention heads"}
- )
- decoder_learned_pos: bool = field(
- default=False,
- metadata={"help": "use learned positional embeddings in the decoder"},
- )
- decoder_normalize_before: bool = field(
- default=False, metadata={"help": "apply layernorm before each decoder block"}
- )
- no_token_positional_embeddings: bool = field(
- default=False,
- metadata={
- "help": "if set, disables positional embeddings (outside self attention)"
- },
- )
- decoder_dropout: float = field(
- default=0.0, metadata={"help": "dropout probability in the decoder"}
- )
- decoder_attention_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability for attention weights inside the decoder"
- },
- )
- decoder_activation_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability after activation in FFN inside the decoder"
- },
- )
- max_target_positions: int = field(
- default=2048, metadata={"help": "max target positions"}
- )
- share_decoder_input_output_embed: bool = field(
- default=False, metadata={"help": "share decoder input and output embeddings"}
- )
- autoregressive: bool = II("task.autoregressive")
-
-
-@register_model("wav2vec_seq2seq", dataclass=Wav2Vec2Seq2SeqConfig)
-class Wav2Vec2Seq2SeqModel(FairseqEncoderDecoderModel):
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @classmethod
- def build_model(cls, cfg: Wav2Vec2Seq2SeqConfig, task: FairseqTask):
- """Build a new model instance."""
-
- assert (
- cfg.autoregressive
- ), "Please set task.autoregressive=true for seq2seq asr models"
-
- src_dict, tgt_dict = task.source_dictionary, task.target_dictionary
-
- def build_embedding(dictionary, embed_dim):
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- emb = Embedding(num_embeddings, embed_dim, padding_idx)
- return emb
-
- decoder_embed_tokens = build_embedding(tgt_dict, cfg.decoder_embed_dim)
-
- encoder = cls.build_encoder(cfg)
- decoder = cls.build_decoder(cfg, tgt_dict, decoder_embed_tokens)
-
- return Wav2Vec2Seq2SeqModel(encoder, decoder)
-
- @classmethod
- def build_encoder(cls, cfg: Wav2Vec2AsrConfig):
- return Wav2VecEncoder(cfg)
-
- @classmethod
- def build_decoder(cls, cfg: Wav2Vec2Seq2SeqConfig, tgt_dict, embed_tokens):
- return TransformerDecoder(cfg, tgt_dict, embed_tokens)
-
- def forward(self, **kwargs):
- encoder_out = self.encoder(**kwargs)
- decoder_out = self.decoder(encoder_out=encoder_out, **kwargs)
- return decoder_out
-
- def upgrade_state_dict_named(self, state_dict, name):
- super().upgrade_state_dict_named(state_dict, name)
- return state_dict
-
-
-class Wav2VecEncoder(FairseqEncoder):
- def __init__(self, cfg: Wav2Vec2AsrConfig, output_size=None):
- self.apply_mask = cfg.apply_mask
-
- arg_overrides = {
- "dropout": cfg.dropout,
- "activation_dropout": cfg.activation_dropout,
- "dropout_input": cfg.dropout_input,
- "attention_dropout": cfg.attention_dropout,
- "mask_length": cfg.mask_length,
- "mask_prob": cfg.mask_prob,
- "mask_selection": cfg.mask_selection,
- "mask_other": cfg.mask_other,
- "no_mask_overlap": cfg.no_mask_overlap,
- "mask_channel_length": cfg.mask_channel_length,
- "mask_channel_prob": cfg.mask_channel_prob,
- "mask_channel_before": cfg.mask_channel_before,
- "mask_channel_selection": cfg.mask_channel_selection,
- "mask_channel_other": cfg.mask_channel_other,
- "no_mask_channel_overlap": cfg.no_mask_channel_overlap,
- "encoder_layerdrop": cfg.layerdrop,
- "feature_grad_mult": cfg.feature_grad_mult,
- }
-
- if cfg.w2v_args is None:
- state = checkpoint_utils.load_checkpoint_to_cpu(cfg.w2v_path, arg_overrides)
- w2v_args = state.get("cfg", None)
- if w2v_args is None:
- w2v_args = convert_namespace_to_omegaconf(state["args"])
- w2v_args.criterion = None
- w2v_args.lr_scheduler = None
- cfg.w2v_args = w2v_args
- else:
- state = None
- w2v_args = cfg.w2v_args
- if isinstance(w2v_args, Namespace):
- cfg.w2v_args = w2v_args = convert_namespace_to_omegaconf(w2v_args)
-
- assert cfg.normalize == w2v_args.task.normalize, (
- "Fine-tuning works best when data normalization is the same. "
- "Please check that --normalize is set or unset for both pre-training and here"
- )
-
- w2v_args.task.data = cfg.data
- task = tasks.setup_task(w2v_args.task)
- model = task.build_model(w2v_args.model)
-
- if state is not None and not cfg.no_pretrained_weights:
- model.load_state_dict(state["model"], strict=True)
-
- model.remove_pretraining_modules()
-
- super().__init__(task.source_dictionary)
-
- d = w2v_args.model.encoder_embed_dim
-
- self.w2v_model = model
-
- self.final_dropout = nn.Dropout(cfg.final_dropout)
- self.freeze_finetune_updates = cfg.freeze_finetune_updates
- self.num_updates = 0
-
- targ_d = None
- self.proj = None
-
- if output_size is not None:
- targ_d = output_size
- elif getattr(cfg, "decoder_embed_dim", d) != d:
- targ_d = cfg.decoder_embed_dim
-
- if targ_d is not None:
- self.proj = Linear(d, targ_d)
-
- def set_num_updates(self, num_updates):
- """Set the number of parameters updates."""
- super().set_num_updates(num_updates)
- self.num_updates = num_updates
-
- def forward(self, source, padding_mask, **kwargs):
-
- w2v_args = {
- "source": source,
- "padding_mask": padding_mask,
- "mask": self.apply_mask and self.training,
- }
-
- ft = self.freeze_finetune_updates <= self.num_updates
-
- with torch.no_grad() if not ft else contextlib.ExitStack():
- res = self.w2v_model.extract_features(**w2v_args)
-
- x = res["x"]
- padding_mask = res["padding_mask"]
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- x = self.final_dropout(x)
-
- if self.proj:
- x = self.proj(x)
-
- return {
- "encoder_out": x, # T x B x C
- "padding_mask": padding_mask, # B x T,
- "layer_results": res["layer_results"],
- }
-
- def forward_torchscript(self, net_input):
- if torch.jit.is_scripting():
- return self.forward(net_input["source"], net_input["padding_mask"])
- else:
- return self.forward_non_torchscript(net_input)
-
- def reorder_encoder_out(self, encoder_out, new_order):
- if encoder_out["encoder_out"] is not None:
- encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
- 1, new_order
- )
- if encoder_out["padding_mask"] is not None:
- encoder_out["padding_mask"] = encoder_out[
- "padding_mask"
- ].index_select(0, new_order)
- return encoder_out
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
- return None
-
- def upgrade_state_dict_named(self, state_dict, name):
- return state_dict
-
-
-class TransformerDecoder(FairseqIncrementalDecoder):
- """
- Transformer decoder consisting of *args.decoder_layers* layers. Each layer
- is a :class:`TransformerDecoderLayer`.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- dictionary (~fairseq.data.Dictionary): decoding dictionary
- embed_tokens (torch.nn.Embedding): output embedding
- no_encoder_attn (bool, optional): whether to attend to encoder outputs
- (default: False).
- """
-
- def __init__(
- self,
- cfg: Wav2Vec2Seq2SeqConfig,
- dictionary,
- embed_tokens,
- no_encoder_attn=False,
- ):
- super().__init__(dictionary)
-
- self.dropout = cfg.decoder_dropout
- self.share_input_output_embed = cfg.share_decoder_input_output_embed
-
- input_embed_dim = embed_tokens.embedding_dim
- embed_dim = cfg.decoder_embed_dim
- self.output_embed_dim = cfg.decoder_embed_dim
-
- self.layerdrop = cfg.decoder_layerdrop
-
- self.padding_idx = embed_tokens.padding_idx
- self.max_target_positions = cfg.max_target_positions
-
- self.embed_tokens = embed_tokens
- self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim
-
- self.project_in_dim = (
- Linear(input_embed_dim, embed_dim, bias=False)
- if embed_dim != input_embed_dim
- else None
- )
-
- self.embed_positions = (
- PositionalEmbedding(
- cfg.max_target_positions,
- embed_dim,
- self.padding_idx,
- learned=cfg.decoder_learned_pos,
- )
- if not cfg.no_token_positional_embeddings
- else None
- )
-
- # TODO: update this when transformer gets converted to dataclass configs
- transformer_cfg = copy.deepcopy(cfg)
- with open_dict(transformer_cfg):
- transformer_cfg.dropout = transformer_cfg.decoder_dropout
- transformer_cfg.attention_dropout = (
- transformer_cfg.decoder_attention_dropout
- )
- transformer_cfg.activation_dropout = (
- transformer_cfg.decoder_activation_dropout
- )
-
- self.layers = nn.ModuleList([])
- self.layers.extend(
- [
- TransformerDecoderLayer(transformer_cfg, no_encoder_attn)
- for _ in range(transformer_cfg.decoder_layers)
- ]
- )
-
- if not self.share_input_output_embed:
- self.embed_out = nn.Parameter(
- torch.Tensor(len(dictionary), self.output_embed_dim)
- )
- nn.init.normal_(self.embed_out, mean=0, std=self.output_embed_dim ** -0.5)
-
- if transformer_cfg.decoder_normalize_before:
- self.layer_norm = LayerNorm(embed_dim)
- else:
- self.layer_norm = None
-
- def forward(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **unused
- ):
- """
- Args:
- prev_output_tokens (LongTensor): previous decoder outputs of shape
- `(batch, tgt_len)`, for teacher forcing
- encoder_out (Tensor, optional): output from the encoder, used for
- encoder-side attention
- incremental_state (dict): dictionary used for storing state during
- :ref:`Incremental decoding`
-
- Returns:
- tuple:
- - the decoder's output of shape `(batch, tgt_len, vocab)`
- - a dictionary with any model-specific outputs
- """
- prev_output_tokens = prev_output_tokens.long()
- x, extra = self.extract_features(
- prev_output_tokens, encoder_out, incremental_state
- )
- x = self.output_layer(x)
- return x, extra
-
- def extract_features(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **unused
- ):
- """
- Similar to *forward* but only return features.
-
- Returns:
- tuple:
- - the decoder's features of shape `(batch, tgt_len, embed_dim)`
- - a dictionary with any model-specific outputs
- """
-
- # embed positions
- positions = (
- self.embed_positions(
- prev_output_tokens, incremental_state=incremental_state
- )
- if self.embed_positions is not None
- else None
- )
-
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- if positions is not None:
- positions = positions[:, -1:]
-
- # embed tokens and positions
- x = self.embed_scale * self.embed_tokens(prev_output_tokens)
-
- if self.project_in_dim is not None:
- x = self.project_in_dim(x)
-
- if positions is not None:
- x += positions
- x = F.dropout(x, p=self.dropout, training=self.training)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
- attn = None
-
- inner_states = [x]
-
- # decoder layers
- self_attn_padding_mask = None
- if prev_output_tokens.eq(self.padding_idx).any():
- self_attn_padding_mask = prev_output_tokens.eq(self.padding_idx)
- for layer in self.layers:
- dropout_probability = np.random.random()
- if not self.training or (dropout_probability > self.layerdrop):
- x, attn, _ = layer(
- x,
- encoder_out["encoder_out"] if encoder_out is not None else None,
- encoder_out["padding_mask"] if encoder_out is not None else None,
- incremental_state,
- self_attn_mask=self.buffered_future_mask(x)
- if incremental_state is None
- else None,
- self_attn_padding_mask=self_attn_padding_mask
- )
- inner_states.append(x)
-
- if self.layer_norm:
- x = self.layer_norm(x)
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- return x, {"attn": attn, "inner_states": inner_states}
-
- def output_layer(self, features, **kwargs):
- """Project features to the vocabulary size."""
- # project back to size of vocabulary
- if self.share_input_output_embed:
- return F.linear(features, self.embed_tokens.weight)
- else:
- return F.linear(features, self.embed_out)
-
- def max_positions(self):
- """Maximum output length supported by the decoder."""
- if self.embed_positions is None:
- return self.max_target_positions
- return min(self.max_target_positions, self.embed_positions.max_positions)
-
- def buffered_future_mask(self, tensor):
- dim = tensor.size(0)
- if (
- not hasattr(self, "_future_mask")
- or self._future_mask is None
- or self._future_mask.device != tensor.device
- or self._future_mask.size(0) < dim
- ):
- self._future_mask = torch.triu(
- utils.fill_with_neg_inf(tensor.new(dim, dim)), 1
- )
- return self._future_mask[:dim, :dim]
-
- def upgrade_state_dict_named(self, state_dict, name):
- return state_dict
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(m.weight[padding_idx], 0)
- return m
-
-
-def Linear(in_features, out_features, bias=True):
- m = nn.Linear(in_features, out_features, bias)
- nn.init.xavier_uniform_(m.weight)
- if bias:
- nn.init.constant_(m.bias, 0.0)
- return m
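
The `get_logits` method of `Wav2VecCtc` above biases the CTC blank class before decoding: `add` mode shifts the blank logit by a constant, `set` mode overwrites it. A minimal NumPy sketch of that biasing step (the toy logits below are illustrative, not taken from any checkpoint):

```python
import numpy as np

def bias_blank(logits: np.ndarray, blank_weight: float, blank_mode: str) -> np.ndarray:
    """Toy restatement of the blank biasing in Wav2VecCtc.get_logits.

    logits: (time, batch, vocab); index 0 is assumed to be the CTC blank.
    """
    out = logits.copy()
    if blank_mode == "add":
        out[..., 0] += blank_weight   # nudge the blank score up or down
    elif blank_mode == "set":
        out[..., 0] = blank_weight    # pin the blank score to a constant
    else:
        raise ValueError(f"invalid blank mode {blank_mode}")
    return out

toy = np.array([[[0.1, 1.2, 0.3, 0.4]], [[0.2, 0.1, 2.0, 0.3]]])  # 2 frames, 1 utterance, 4 symbols
print(bias_blank(toy, blank_weight=-1.0, blank_mode="add")[..., 0])  # [[-0.9], [-0.8]]
```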
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/wer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/wer.py
deleted file mode 100644
index 613ab50d39019f6edf67c56c2353646be2a2f17d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/wer.py
+++ /dev/null
@@ -1,82 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Implement unsupervised metric for decoding hyperparameter selection:
-    $$ alpha * LM_PPL + Viterbi_UER(%) * 100 $$
-"""
-import argparse
-import logging
-import sys
-
-import editdistance
-
-logging.root.setLevel(logging.INFO)
-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-
-def get_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument("-s", "--hypo", help="hypo transcription", required=True)
- parser.add_argument(
- "-r", "--reference", help="reference transcription", required=True
- )
- return parser
-
-
-def compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p):
- d_cnt = 0
- w_cnt = 0
- w_cnt_h = 0
- for uid in hyp_uid_to_tra:
- ref = ref_uid_to_tra[uid].split()
- if g2p is not None:
- hyp = g2p(hyp_uid_to_tra[uid])
- hyp = [p for p in hyp if p != "'" and p != " "]
- hyp = [p[:-1] if p[-1].isnumeric() else p for p in hyp]
- else:
- hyp = hyp_uid_to_tra[uid].split()
- d_cnt += editdistance.eval(ref, hyp)
- w_cnt += len(ref)
- w_cnt_h += len(hyp)
- wer = float(d_cnt) / w_cnt
- logger.debug(
- (
- f"wer = {wer * 100:.2f}%; num. of ref words = {w_cnt}; "
- f"num. of hyp words = {w_cnt_h}; num. of sentences = {len(ref_uid_to_tra)}"
- )
- )
- return wer
-
-
-def main():
- args = get_parser().parse_args()
-
- errs = 0
- count = 0
- with open(args.hypo, "r") as hf, open(args.reference, "r") as rf:
- for h, r in zip(hf, rf):
- h = h.rstrip().split()
- r = r.rstrip().split()
- errs += editdistance.eval(r, h)
- count += len(r)
-
- logger.info(f"UER: {errs / count * 100:.2f}%")
-
-
-if __name__ == "__main__":
- main()
-
-
-def load_tra(tra_path):
- with open(tra_path, "r") as f:
- uid_to_tra = {}
- for line in f:
- uid, tra = line.split(None, 1)
- uid_to_tra[uid] = tra
- logger.debug(f"loaded {len(uid_to_tra)} utterances from {tra_path}")
- return uid_to_tra
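
The `main` routine of `wer.py` above boils down to summing token-level edit distances over paired hypothesis/reference lines and dividing by the number of reference tokens. A self-contained sketch of that computation with two made-up sentence pairs:

```python
import editdistance

refs = ["the cat sat on the mat", "hello world"]
hyps = ["the cat sat mat", "hello word"]

errs = 0
count = 0
for r, h in zip(refs, hyps):
    r_toks, h_toks = r.split(), h.split()
    errs += editdistance.eval(r_toks, h_toks)  # substitutions + insertions + deletions
    count += len(r_toks)

print(f"UER: {errs / count * 100:.2f}%")  # 2 + 1 errors over 8 reference tokens -> 37.50%
```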
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated_cpu.cpp b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated_cpu.cpp
deleted file mode 100644
index d7556e645b604aa83d86cc702b783fd8ecedffcc..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated_cpu.cpp
+++ /dev/null
@@ -1,75 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#include "../box_iou_rotated/box_iou_rotated_utils.h"
-#include "nms_rotated.h"
-
-namespace detectron2 {
-
-template <typename scalar_t>
-at::Tensor nms_rotated_cpu_kernel(
- const at::Tensor& dets,
- const at::Tensor& scores,
- const double iou_threshold) {
- // nms_rotated_cpu_kernel is modified from torchvision's nms_cpu_kernel,
- // however, the code in this function is much shorter because
- // we delegate the IoU computation for rotated boxes to
- // the single_box_iou_rotated function in box_iou_rotated_utils.h
- AT_ASSERTM(dets.device().is_cpu(), "dets must be a CPU tensor");
- AT_ASSERTM(scores.device().is_cpu(), "scores must be a CPU tensor");
- AT_ASSERTM(
- dets.scalar_type() == scores.scalar_type(),
- "dets should have the same type as scores");
-
- if (dets.numel() == 0) {
- return at::empty({0}, dets.options().dtype(at::kLong));
- }
-
- auto order_t = std::get<1>(scores.sort(0, /* descending=*/true));
-
- auto ndets = dets.size(0);
- at::Tensor suppressed_t = at::zeros({ndets}, dets.options().dtype(at::kByte));
- at::Tensor keep_t = at::zeros({ndets}, dets.options().dtype(at::kLong));
-
-  auto suppressed = suppressed_t.data_ptr<uint8_t>();
-  auto keep = keep_t.data_ptr<int64_t>();
-  auto order = order_t.data_ptr<int64_t>();
-
- int64_t num_to_keep = 0;
-
- for (int64_t _i = 0; _i < ndets; _i++) {
- auto i = order[_i];
- if (suppressed[i] == 1) {
- continue;
- }
-
- keep[num_to_keep++] = i;
-
- for (int64_t _j = _i + 1; _j < ndets; _j++) {
- auto j = order[_j];
- if (suppressed[j] == 1) {
- continue;
- }
-
-      auto ovr = single_box_iou_rotated<scalar_t>(
-          dets[i].data_ptr<scalar_t>(), dets[j].data_ptr<scalar_t>());
- if (ovr >= iou_threshold) {
- suppressed[j] = 1;
- }
- }
- }
- return keep_t.narrow(/*dim=*/0, /*start=*/0, /*length=*/num_to_keep);
-}
-
-at::Tensor nms_rotated_cpu(
- // input must be contiguous
- const at::Tensor& dets,
- const at::Tensor& scores,
- const double iou_threshold) {
- auto result = at::empty({0}, dets.options());
-
- AT_DISPATCH_FLOATING_TYPES(dets.scalar_type(), "nms_rotated", [&] {
-    result = nms_rotated_cpu_kernel<scalar_t>(dets, scores, iou_threshold);
- });
- return result;
-}
-
-} // namespace detectron2
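
The C++ kernel above is the standard greedy suppression loop; the only rotated-box-specific part is the overlap test, which is delegated to `single_box_iou_rotated`. A NumPy sketch of the same loop, using plain axis-aligned IoU as a stand-in for the rotated IoU helper:

```python
import numpy as np

def iou_xyxy(a, b):
    """Axis-aligned IoU; stands in for the rotated-box IoU used in the C++ kernel."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_threshold):
    order = np.argsort(-scores)            # highest score first
    suppressed = np.zeros(len(boxes), dtype=bool)
    keep = []
    for idx_i, i in enumerate(order):
        if suppressed[i]:
            continue
        keep.append(int(i))
        for j in order[idx_i + 1:]:        # suppress lower-scored overlapping boxes
            if not suppressed[j] and iou_xyxy(boxes[i], boxes[j]) >= iou_threshold:
                suppressed[j] = True
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores, iou_threshold=0.5))  # [0, 2]: box 1 is suppressed by box 0
```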
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/__init__.py
deleted file mode 100644
index bc8709d92c610b36e0bcbd7da20c1eb41dc8cfcf..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : __init__.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d
-from .replicate import DataParallelWithCallback, patch_replication_callback
diff --git a/spaces/OpenGVLab/VideoChatGPT/utils/config.py b/spaces/OpenGVLab/VideoChatGPT/utils/config.py
deleted file mode 100644
index 63f9ef375b37daa6926f2259502913e38f22e6e2..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/VideoChatGPT/utils/config.py
+++ /dev/null
@@ -1,281 +0,0 @@
-from __future__ import annotations
-
-import argparse
-import ast
-import json
-import os
-import os.path as osp
-import re
-import shutil
-import sys
-import tempfile
-from copy import deepcopy
-from importlib import import_module
-
-import yaml
-
-from .easydict import EasyDict
-
-__all__ = ["Config", "pretty_text"]
-
-
-BASE_KEY = "_base_"
-# BASE_CONFIG = {"OUTPUT_DIR": "./workspace", "SESSION": "base", "LOG_FILE": "log.txt"}
-BASE_CONFIG = {}
-
-cfg = None
-
-
-class Config(object):
- """config"""
-
- @classmethod
- def pretty_text(cls, cfg: dict, indent=2) -> str:
- """format dict to a string
-
- Args:
- cfg (EasyDict): the params.
-
- Returns: The string to display.
-
- """
- msg = "{\n"
- for i, (k, v) in enumerate(cfg.items()):
- if isinstance(v, dict):
- v = cls.pretty_text(v, indent + 4)
- spaces = " " * indent
- msg += spaces + "{}: {}".format(k, v)
- if i == len(cfg) - 1:
- msg += " }"
- else:
- msg += "\n"
- return msg
-
- @classmethod
- def dump(cls, cfg, savepath=None):
- """dump cfg to `json` file.
-
- Args:
- cfg (dict): The dict to dump.
- savepath (str): The filepath to save the dumped dict.
-
- Returns: TODO
-
- """
- if savepath is None:
- savepath = osp.join(cfg.WORKSPACE, "config.json")
- json.dump(cfg, open(savepath, "w"), indent=2)
-
- @classmethod
- def get_config(cls, default_config: dict = None):
- """get a `Config` instance.
-
- Args:
-            default_config (dict): The default config. `default_config` will be overridden
-                by the config file `--cfg`, and `--cfg` will be overridden by commandline args.
-
- Returns: an EasyDict.
- """
- global cfg
- if cfg is not None:
- return cfg
-
- # define arg parser.
- parser = argparse.ArgumentParser()
- # parser.add_argument("--cfg", help="load configs from yaml file", default="", type=str)
- parser.add_argument(
- "config_file", help="the configuration file to load. support: .yaml, .json, .py"
- )
- parser.add_argument(
- "opts",
- default=None,
- nargs="*",
- help="overrided configs. List. Format: 'key1 name1 key2 name2'",
- )
- args = parser.parse_args()
-
- cfg = EasyDict(BASE_CONFIG)
- if osp.isfile(args.config_file):
- cfg_from_file = cls.from_file(args.config_file)
- cfg = merge_a_into_b(cfg_from_file, cfg)
- cfg = cls.merge_list(cfg, args.opts)
- cfg = eval_dict_leaf(cfg)
-
- # update some keys to make them show at the last
- for k in BASE_CONFIG:
- cfg[k] = cfg.pop(k)
- return cfg
-
- @classmethod
- def from_file(cls, filepath: str) -> EasyDict:
- """Build config from file. Supported filetypes: `.py`,`.yaml`,`.json`.
-
- Args:
- filepath (str): The config file path.
-
- Returns: TODO
-
- """
- filepath = osp.abspath(osp.expanduser(filepath))
- if not osp.isfile(filepath):
- raise IOError(f"File does not exist: {filepath}")
- if filepath.endswith(".py"):
- with tempfile.TemporaryDirectory() as temp_config_dir:
-
- shutil.copytree(osp.dirname(filepath), osp.join(temp_config_dir, "tmp_config"))
- sys.path.insert(0, temp_config_dir)
- mod = import_module("tmp_config." + osp.splitext(osp.basename(filepath))[0])
- # mod = import_module(temp_module_name)
- sys.path.pop(0)
- cfg_dict = {
- name: value
- for name, value in mod.__dict__.items()
- if not name.startswith("__")
- }
- for k in list(sys.modules.keys()):
- if "tmp_config" in k:
- del sys.modules[k]
- elif filepath.endswith((".yml", ".yaml")):
- cfg_dict = yaml.load(open(filepath, "r"), Loader=yaml.Loader)
- elif filepath.endswith(".json"):
- cfg_dict = json.load(open(filepath, "r"))
- else:
- raise IOError("Only py/yml/yaml/json type are supported now!")
-
- cfg_text = filepath + "\n"
- with open(filepath, "r") as f:
- cfg_text += f.read()
-
- if BASE_KEY in cfg_dict: # load configs in `BASE_KEY`
- cfg_dir = osp.dirname(filepath)
- base_filename = cfg_dict.pop(BASE_KEY)
- base_filename = (
- base_filename if isinstance(base_filename, list) else [base_filename]
- )
-
- cfg_dict_list = list()
- for f in base_filename:
- _cfg_dict = Config.from_file(osp.join(cfg_dir, f))
- cfg_dict_list.append(_cfg_dict)
-
- base_cfg_dict = dict()
- for c in cfg_dict_list:
- if len(base_cfg_dict.keys() & c.keys()) > 0:
- raise KeyError("Duplicate key is not allowed among bases")
- base_cfg_dict.update(c)
-
- cfg_dict = merge_a_into_b(cfg_dict, base_cfg_dict)
-
- return EasyDict(cfg_dict)
-
- @classmethod
- def merge_list(cls, cfg, opts: list):
- """merge commandline opts.
-
- Args:
- cfg: (dict): The config to be merged.
- opts (list): The list to merge. Format: [key1, name1, key2, name2,...].
- The keys can be nested. For example, ["a.b", v] will be considered
- as `dict(a=dict(b=v))`.
-
- Returns: dict.
-
- """
- assert len(opts) % 2 == 0, f"length of opts must be even. Got: {opts}"
- for i in range(0, len(opts), 2):
- full_k, v = opts[i], opts[i + 1]
- keys = full_k.split(".")
- sub_d = cfg
- for i, k in enumerate(keys):
- if not hasattr(sub_d, k):
- raise ValueError(f"The key {k} not exist in the config. Full key:{full_k}")
- if i != len(keys) - 1:
- sub_d = sub_d[k]
- else:
- sub_d[k] = v
- return cfg
-
-
-def merge_a_into_b(a, b, inplace=False):
- """The values in a will override values in b.
-
- Args:
- a (dict): source dict.
- b (dict): target dict.
-
- Returns: dict. recursively merge dict a into dict b.
-
- """
- if not inplace:
- b = deepcopy(b)
- for key in a:
- if key in b:
- if isinstance(a[key], dict) and isinstance(b[key], dict):
- b[key] = merge_a_into_b(a[key], b[key], inplace=True)
- else:
- b[key] = a[key]
- else:
- b[key] = a[key]
- return b
-
-
-def eval_dict_leaf(d, orig_dict=None):
- """eval values of dict leaf.
-
- Args:
- d (dict): The dict to eval.
-
- Returns: dict.
-
- """
- if orig_dict is None:
- orig_dict = d
- for k, v in d.items():
- if not isinstance(v, dict):
- d[k] = eval_string(v, orig_dict)
- else:
- eval_dict_leaf(v, orig_dict)
- return d
-
-
-def eval_string(string, d):
- """automatically evaluate string to corresponding types.
-
- For example:
- not a string -> return the original input
- '0' -> 0
- '0.2' -> 0.2
- '[0, 1, 2]' -> [0,1,2]
- 'eval(1+2)' -> 3
-        'eval(range(5))' -> range(0, 5)
- '${a}' -> d.a
-
-
-
- Args:
- string (str): The value to evaluate.
-        d (dict): The dict used to resolve `${key}` references.
-
- Returns: the corresponding type
-
- """
- if not isinstance(string, str):
- return string
- # if len(string) > 1 and string[0] == "[" and string[-1] == "]":
- # return eval(string)
- if string[0:5] == "eval(":
- return eval(string[5:-1])
-
- s0 = string
- s1 = re.sub(r"\${(.*)}", r"d.\1", s0)
- if s1 != s0:
- while s1 != s0:
- s0 = s1
- s1 = re.sub(r"\${(.*)}", r"d.\1", s0)
- return eval(s1)
-
- try:
- v = ast.literal_eval(string)
- except:
- v = string
- return v
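
`eval_string` above auto-types leaf values: plain literals go through `ast.literal_eval`, `eval(...)` strings are evaluated, and `${key}` references are rewritten to attribute lookups on the config and then evaluated. A compact sketch of just the `${...}` rewrite, with an illustrative namespace in place of the real EasyDict:

```python
import re
from types import SimpleNamespace

d = SimpleNamespace(batch_size=32, model=SimpleNamespace(dim=512))  # illustrative config

def resolve(value: str):
    """Rewrite '${a.b}' into 'd.a.b' and evaluate it, mirroring eval_string."""
    rewritten = re.sub(r"\$\{(.*)\}", r"d.\1", value)
    return eval(rewritten) if rewritten != value else value

print(resolve("${batch_size}"))   # 32
print(resolve("${model.dim}"))    # 512
print(resolve("adam"))            # plain strings pass through unchanged
```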
diff --git a/spaces/Owechada/roopfaceswapr/roop/face_analyser.py b/spaces/Owechada/roopfaceswapr/roop/face_analyser.py
deleted file mode 100644
index 9c0afe458763edb22dc2332f527dfdba48575b1d..0000000000000000000000000000000000000000
--- a/spaces/Owechada/roopfaceswapr/roop/face_analyser.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import threading
-from typing import Any
-import insightface
-
-import roop.globals
-from roop.typing import Frame
-
-FACE_ANALYSER = None
-THREAD_LOCK = threading.Lock()
-
-
-def get_face_analyser() -> Any:
- global FACE_ANALYSER
-
- with THREAD_LOCK:
- if FACE_ANALYSER is None:
- FACE_ANALYSER = insightface.app.FaceAnalysis(name='buffalo_l', providers=roop.globals.execution_providers)
- FACE_ANALYSER.prepare(ctx_id=0, det_size=(640, 640))
- return FACE_ANALYSER
-
-
-def get_one_face(frame: Frame) -> Any:
- face = get_face_analyser().get(frame)
- try:
- return min(face, key=lambda x: x.bbox[0])
- except ValueError:
- return None
-
-
-def get_many_faces(frame: Frame) -> Any:
- try:
- return get_face_analyser().get(frame)
- except IndexError:
- return None
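
`get_one_face` above selects the left-most detection by minimizing the bounding-box x coordinate and returns `None` when the detector finds nothing (an empty list makes `min` raise `ValueError`). A stand-alone sketch of that selection rule with a stubbed `Face` type (not the insightface class):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Face:
    bbox: List[float]  # [x1, y1, x2, y2]

def pick_leftmost(faces: List[Face]) -> Optional[Face]:
    try:
        return min(faces, key=lambda f: f.bbox[0])  # smallest x1 = left-most face
    except ValueError:  # empty detection list
        return None

faces = [Face([120.0, 30.0, 200.0, 110.0]), Face([15.0, 40.0, 90.0, 120.0])]
print(pick_leftmost(faces).bbox)  # [15.0, 40.0, 90.0, 120.0]
print(pick_leftmost([]))          # None
```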
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_cityscapes_panoptic.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_cityscapes_panoptic.py
deleted file mode 100644
index 07ecb23ba6422ac24e4a21aa6bb3125b07f71f33..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_cityscapes_panoptic.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/datasets/cityscapes_panoptic.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-import json
-import logging
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES
-from detectron2.utils.file_io import PathManager
-
-"""
-This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog.
-"""
-
-
-logger = logging.getLogger(__name__)
-
-
-def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info):
- files = []
- # scan through the directory
- cities = PathManager.ls(image_dir)
- logger.info(f"{len(cities)} cities found in '{image_dir}'.")
- image_dict = {}
- for city in cities:
- city_img_dir = os.path.join(image_dir, city)
- for basename in PathManager.ls(city_img_dir):
- image_file = os.path.join(city_img_dir, basename)
-
- suffix = "_leftImg8bit.png"
- assert basename.endswith(suffix), basename
- basename = os.path.basename(basename)[: -len(suffix)]
-
- image_dict[basename] = image_file
-
- for ann in json_info["annotations"]:
- image_file = image_dict.get(ann["image_id"], None)
- assert image_file is not None, "No image {} found for annotation {}".format(
- ann["image_id"], ann["file_name"]
- )
- label_file = os.path.join(gt_dir, ann["file_name"])
- segments_info = ann["segments_info"]
- files.append((image_file, label_file, segments_info))
-
- assert len(files), "No images found in {}".format(image_dir)
- assert PathManager.isfile(files[0][0]), files[0][0]
- assert PathManager.isfile(files[0][1]), files[0][1]
- return files
-
-
-def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta):
- """
- Args:
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
- gt_dir (str): path to the raw annotations. e.g.,
- "~/cityscapes/gtFine/cityscapes_panoptic_train".
- gt_json (str): path to the json file. e.g.,
- "~/cityscapes/gtFine/cityscapes_panoptic_train.json".
- meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id"
- and "stuff_dataset_id_to_contiguous_id" to map category ids to
- contiguous ids for training.
-
- Returns:
- list[dict]: a list of dicts in Detectron2 standard format. (See
-        `Using Custom Datasets </tutorials/datasets.html>`_ )
- """
-
- def _convert_category_id(segment_info, meta):
- if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]:
- segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][
- segment_info["category_id"]
- ]
- else:
- segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][
- segment_info["category_id"]
- ]
- return segment_info
-
- assert os.path.exists(
- gt_json
- ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa
-
-
- with open(gt_json) as f:
- json_info = json.load(f)
-
- files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info)
- ret = []
- for image_file, label_file, segments_info in files:
- sem_label_file = (
- image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png"
- )
- segments_info = [_convert_category_id(x, meta) for x in segments_info]
- ret.append(
- {
- "file_name": image_file,
- "image_id": "_".join(
- os.path.splitext(os.path.basename(image_file))[0].split("_")[:3]
- ),
- "sem_seg_file_name": sem_label_file,
- "pan_seg_file_name": label_file,
- "segments_info": segments_info,
- }
- )
- assert len(ret), f"No images found in {image_dir}!"
- assert PathManager.isfile(
- ret[0]["sem_seg_file_name"]
- ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa
- assert PathManager.isfile(
- ret[0]["pan_seg_file_name"]
- ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa
- return ret
-
-
-_RAW_CITYSCAPES_PANOPTIC_SPLITS = {
- "cityscapes_fine_panoptic_train": (
- "cityscapes/leftImg8bit/train",
- "cityscapes/gtFine/cityscapes_panoptic_train",
- "cityscapes/gtFine/cityscapes_panoptic_train.json",
- ),
- "cityscapes_fine_panoptic_val": (
- "cityscapes/leftImg8bit/val",
- "cityscapes/gtFine/cityscapes_panoptic_val",
- "cityscapes/gtFine/cityscapes_panoptic_val.json",
- ),
- # "cityscapes_fine_panoptic_test": not supported yet
-}
-
-
-def register_all_cityscapes_panoptic(root):
- meta = {}
- # The following metadata maps contiguous id from [0, #thing categories +
-    # #stuff categories) to their names and colors. We keep two replicas of the
-    # same name and color under "thing_*" and "stuff_*" because the current
-    # visualization function in D2 handles thing and stuff classes differently
- # due to some heuristic used in Panoptic FPN. We keep the same naming to
- # enable reusing existing visualization functions.
- thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES]
- thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES]
- stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES]
- stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES]
-
- meta["thing_classes"] = thing_classes
- meta["thing_colors"] = thing_colors
- meta["stuff_classes"] = stuff_classes
- meta["stuff_colors"] = stuff_colors
-
- # There are three types of ids in cityscapes panoptic segmentation:
- # (1) category id: like semantic segmentation, it is the class id for each
- # pixel. Since there are some classes not used in evaluation, the category
-    # id is not always contiguous and thus we have two sets of category ids:
- # - original category id: category id in the original dataset, mainly
- # used for evaluation.
- # - contiguous category id: [0, #classes), in order to train the classifier
- # (2) instance id: this id is used to differentiate different instances from
- # the same category. For "stuff" classes, the instance id is always 0; for
- # "thing" classes, the instance id starts from 1 and 0 is reserved for
- # ignored instances (e.g. crowd annotation).
- # (3) panoptic id: this is the compact id that encode both category and
- # instance id by: category_id * 1000 + instance_id.
- thing_dataset_id_to_contiguous_id = {}
- stuff_dataset_id_to_contiguous_id = {}
-
- for k in CITYSCAPES_CATEGORIES:
- if k["isthing"] == 1:
- thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"]
- else:
- stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"]
-
- meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id
- meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id
-
- for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items():
- image_dir = os.path.join(root, image_dir)
- gt_dir = os.path.join(root, gt_dir)
- gt_json = os.path.join(root, gt_json)
-
- if key in DatasetCatalog.list():
- DatasetCatalog.remove(key)
-
- DatasetCatalog.register(
- key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta)
- )
- MetadataCatalog.get(key).set(
- panoptic_root=gt_dir,
- image_root=image_dir,
- panoptic_json=gt_json,
- gt_dir=gt_dir.replace("cityscapes_panoptic_", ""),
- evaluator_type="cityscapes_panoptic_seg",
- ignore_label=255,
- label_divisor=1000,
- **meta,
- )
-
-_root = os.getenv("DETECTRON2_DATASETS", "datasets")
-register_all_cityscapes_panoptic(_root)
\ No newline at end of file
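
The comment block above defines the compact panoptic id as `category_id * 1000 + instance_id`, with the matching `label_divisor=1000` registered in the metadata. A short sketch of encoding and decoding that id:

```python
LABEL_DIVISOR = 1000  # matches the label_divisor registered above

def encode_panoptic(category_id: int, instance_id: int) -> int:
    return category_id * LABEL_DIVISOR + instance_id

def decode_panoptic(panoptic_id: int) -> tuple:
    return panoptic_id // LABEL_DIVISOR, panoptic_id % LABEL_DIVISOR

pid = encode_panoptic(11, 7)   # some "thing" category, instance 7
print(pid)                     # 11007
print(decode_panoptic(pid))    # (11, 7)
```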
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/lexer.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/lexer.go
deleted file mode 100644
index 2b0431d0409574a7b3e23f732815957d76c9af6a..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/lexer.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/arithmetic/fixnums.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/arithmetic/fixnums.go
deleted file mode 100644
index 6640e05e6bef0cc2439cce39f2c46d9c162e7719..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/arithmetic/fixnums.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/io/simple.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/io/simple.go
deleted file mode 100644
index 0c173880fa5b52ed4af67987089309f0641f1320..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/io/simple.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/De-limiter/app.py b/spaces/PeepDaSlan9/De-limiter/app.py
deleted file mode 100644
index 44442e3cddef902e77eed6e165f767fb3536e5cc..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/De-limiter/app.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import os
-import json
-import argparse
-import copy
-
-import numpy as np
-import matplotlib.pyplot as plt
-import torch
-import tqdm
-import librosa
-import librosa.display
-import soundfile as sf
-import pyloudnorm as pyln
-from dotmap import DotMap
-import gradio as gr
-
-from models import load_model_with_args
-from separate_func import (
- conv_tasnet_separate,
-)
-from utils import db2linear
-
-
-tqdm.monitor_interval = 0
-
-
-def separate_track_with_model(
- args, model, device, track_audio, track_name, meter, augmented_gain
-):
- with torch.no_grad():
- if (
- args.model_loss_params.architecture == "conv_tasnet_mask_on_output"
- or args.model_loss_params.architecture == "conv_tasnet"
- ):
- estimates = conv_tasnet_separate(
- args,
- model,
- device,
- track_audio,
- track_name,
- meter=meter,
- augmented_gain=augmented_gain,
- )
-
- return estimates
-
-
-def parallel_mix(input, output, mix_coefficient):
- sr = 44100
- return sr, input[1] * mix_coefficient + output[1] * (1 - mix_coefficient)
-
-
-def int16_to_float32(wav):
- X = wav / 32768
- return X
-
-
-def waveform_plot(input, output, prl_mix_ouptut, figsize_x=20, figsize_y=9):
- sr = 44100
- fig, ax = plt.subplots(
- nrows=3, sharex=True, sharey=True, figsize=(figsize_x, figsize_y)
- )
- librosa.display.waveshow(int16_to_float32(input[1]).T, sr=sr, ax=ax[0])
- ax[0].set(title="Loudness Normalized Input")
- ax[0].label_outer()
- librosa.display.waveshow(int16_to_float32(output[1]).T, sr=sr, ax=ax[1])
- ax[1].set(title="De-limiter Output")
- ax[1].label_outer()
- librosa.display.waveshow(int16_to_float32(prl_mix_ouptut[1]).T, sr=sr, ax=ax[2])
- ax[2].set(title="Parallel Mix of the Input and its De-limiter Output")
- ax[2].label_outer()
- return fig
-
-
-def main(input, mix_coefficient):
- parser = argparse.ArgumentParser(description="model test.py")
- parser.add_argument("--target", type=str, default="all")
- parser.add_argument("--weight_directory", type=str, default="weight")
- parser.add_argument("--output_directory", type=str, default="output")
- parser.add_argument("--use_gpu", type=bool, default=True)
- parser.add_argument("--save_name_as_target", type=bool, default=False)
- parser.add_argument(
- "--loudnorm_input_lufs",
- type=float,
- default=None,
- help="If you want to use loudnorm for input",
- )
- parser.add_argument(
- "--save_output_loudnorm",
- type=float,
- default=-14.0,
- help="Save loudness normalized outputs or not. If you want to save, input target loudness",
- )
- parser.add_argument(
- "--save_mixed_output",
- type=float,
- default=True,
- help="Save original+delimited-estimation mixed output with a ratio of default 0.5 (orginal) and 1 - 0.5 (estimation)",
- )
- parser.add_argument(
- "--save_16k_mono",
- type=bool,
- default=False,
- help="Save 16k mono wav files for FAD evaluation.",
- )
- parser.add_argument(
- "--save_histogram",
- type=bool,
- default=False,
- help="Save histogram of the output. Only valid when the task is 'delimit'",
- )
- parser.add_argument(
- "--use_singletrackset",
- type=bool,
- default=False,
- help="Use SingleTrackSet if input data is too long.",
- )
-
- args, _ = parser.parse_known_args()
-
- with open(f"{args.weight_directory}/{args.target}.json", "r") as f:
- args_dict = json.load(f)
- args_dict = DotMap(args_dict)
-
- for key, value in args_dict["args"].items():
- if key in list(vars(args).keys()):
- pass
- else:
- setattr(args, key, value)
-
- args.test_output_dir = f"{args.output_directory}"
- os.makedirs(args.test_output_dir, exist_ok=True)
-
- device = torch.device(
- "cuda" if torch.cuda.is_available() and args.use_gpu else "cpu"
- )
-
- ###################### Define Models ######################
- our_model = load_model_with_args(args)
- our_model = our_model.to(device)
-
- target_model_path = f"{args.weight_directory}/{args.target}.pth"
- checkpoint = torch.load(target_model_path, map_location=device)
- our_model.load_state_dict(checkpoint)
-
- our_model.eval()
-
- meter = pyln.Meter(44100)
-
- sr, track_audio = input
- orig_sr = copy.deepcopy(sr)
- track_audio = track_audio.T
- track_name = "gradio_demo"
-
- if sr != 44100:
- track_audio = int16_to_float32(track_audio)
- track_audio = librosa.resample(
- track_audio, orig_sr=sr, target_sr=44100, res_type="soxr_vhq"
- )
- sr = 44100
-
- orig_audio = track_audio.copy()
-
- augmented_gain = None
-
- if args.loudnorm_input_lufs: # If you want to use loud-normalized input
- track_lufs = meter.integrated_loudness(track_audio.T)
- augmented_gain = args.loudnorm_input_lufs - track_lufs
- track_audio = track_audio * db2linear(augmented_gain, eps=0.0)
-
- track_audio = (
- torch.as_tensor(track_audio, dtype=torch.float32).unsqueeze(0).to(device)
- )
-
- estimates = separate_track_with_model(
- args, our_model, device, track_audio, track_name, meter, augmented_gain
- )
-
- if orig_sr == 44100:
- orig_audio = int16_to_float32(orig_audio)
- track_lufs = meter.integrated_loudness(orig_audio.T)
- augmented_gain = args.save_output_loudnorm - track_lufs
- orig_audio = orig_audio * db2linear(augmented_gain, eps=0.0)
-
- prl_mix_out = orig_audio.T * mix_coefficient + estimates.T * (1 - mix_coefficient)
- prl_mix_out = prl_mix_out * 32768
- prl_mix_out = prl_mix_out.astype(np.int16)
- estimates = estimates.T * 32768
- estimates = estimates.astype(np.int16)
- orig_audio = orig_audio.T * 32768
- orig_audio = orig_audio.astype(np.int16)
-
- return (
- (sr, estimates),
- (sr, orig_audio),
- (sr, prl_mix_out),
- )
-
-
-with gr.Blocks() as demo:
- gr.HTML(
- """
-
-
-
- Music De-limiter
-
-
-
- A demo for "Music De-limiter via Sample-wise Gain Inversion" to appear in WASPAA 2023.
- Upload a stereo music (tested with .wav, .mp3, .m4a) file and then press "De-limit" button to apply the De-limiter.
- The processing is based on 44.1kHz sample rate. Other sample rate will be automatically resampled to 44.1kHz.
- Since we use a CPU instead of a GPU, it may require a few seconds to minutes.
- Then, you can apply a Parallel Mix technique, which is a simple linear mixing technique of "loudness normalized input" and the "de-limiter output", similar to Parallel Compression.
- If the coefficient is 0.3 then the output will be the "loudness_normalized_input * 0.3 + de-limiter_output * 0.7"
- Check our Paper [arXiv]
- Codes [GitHub]
- Audio samples [Notion]
- Please let me know any issues or comments on vinyne@snu.ac.kr or the "Community" page (the upper right section of this page).
-
- """
- )
- with gr.Row().style(mobile_collapse=False, equal_height=True):
- with gr.Column():
- with gr.Box():
- input_audio = gr.Audio(source="upload", label="De-limiter Input")
- btn = gr.Button("De-limit")
- with gr.Column():
- with gr.Box():
- loud_norm_input = gr.Audio(
- label="Loudness Normalized Input (-14LUFS)",
- show_download_button=True,
- )
- with gr.Box():
- output_audio = gr.Audio(
- label="De-limiter Output",
- show_download_button=True,
- )
- with gr.Box():
- output_audio_parallel = gr.Audio(
- label="Parallel Mix of the Input and its De-limiter Output",
- show_download_button=True,
- )
- slider = gr.Slider(
- minimum=0,
- maximum=1,
- step=0.1,
- value=0.5,
- label="Parallel Mix Coefficient",
- )
- btn.click(
- main,
- inputs=[input_audio, slider],
- outputs=[output_audio, loud_norm_input, output_audio_parallel],
- )
- slider.release(
- parallel_mix,
- inputs=[loud_norm_input, output_audio, slider],
- outputs=output_audio_parallel,
- )
- with gr.Row().style(mobile_collapse=False, equal_height=True):
- with gr.Column():
- with gr.Box():
- plot = gr.Plot(label="Plots")
- btn2 = gr.Button("Show Plots")
- slider_plot_x = gr.Slider(
- minimum=1,
- maximum=100,
- step=1,
- value=20,
- label="Plot X-axis size",
- )
- slider_plot_y = gr.Slider(
- minimum=1,
- maximum=30,
- step=1,
- value=9,
- label="Plot Y-axis size",
- )
- btn2.click(
- waveform_plot,
- inputs=[
- loud_norm_input,
- output_audio,
- output_audio_parallel,
- slider_plot_x,
- slider_plot_y,
- ],
- outputs=plot,
- )
-
-if __name__ == "__main__":
- demo.launch(debug=True)
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/flickr/__init__.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/flickr/__init__.py
deleted file mode 100644
index b0af8d130af70b22791535c67f9dcf34baf5e528..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/flickr/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .flickr_eval import FlickrEvaluator
diff --git a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/transforms.py b/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
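
In the inverse branch above, the spline solves a quadratic `a*theta^2 + b*theta + c = 0` and takes the root as `2c / (-b - sqrt(b^2 - 4ac))`, an algebraically equivalent form that avoids catastrophic cancellation when `b` dominates. A quick numerical illustration with made-up coefficients:

```python
import math

a, b, c = 1.0, 1e9, 1.0            # b dominates, so -b + sqrt(b^2 - 4ac) cancels badly

disc = math.sqrt(b * b - 4 * a * c)
naive = (-b + disc) / (2 * a)      # textbook formula: 0.0 after cancellation
stable = (2 * c) / (-b - disc)     # form used in rational_quadratic_spline: ~ -1e-09

print(naive, stable)
```

The stable form keeps the small root accurate, which is why the spline inverse uses it before mapping the root back through the bin widths.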
diff --git a/spaces/PrismaticAI/MangaMaker/README.md b/spaces/PrismaticAI/MangaMaker/README.md
deleted file mode 100644
index 5f9f0bec3d0a5bb2a84a9a9e5510c9a99dcc7e09..0000000000000000000000000000000000000000
--- a/spaces/PrismaticAI/MangaMaker/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: MangaMaker
-emoji: 🏢
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.13.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/QuophyDzifa/Sepsis-prediction-App/Dockerfile b/spaces/QuophyDzifa/Sepsis-prediction-App/Dockerfile
deleted file mode 100644
index 37be5617bee4c033691306e5ba4297f93f40b976..0000000000000000000000000000000000000000
--- a/spaces/QuophyDzifa/Sepsis-prediction-App/Dockerfile
+++ /dev/null
@@ -1,13 +0,0 @@
-FROM python:3.9-slim
-
-WORKDIR /app
-
-COPY ./requirements.txt /app
-
-RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
-
-EXPOSE 7860
-
-COPY . .
-
-CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/mbcsgroupprober.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/mbcsgroupprober.py
deleted file mode 100644
index 94488360c4b15db311fdf88ef42e064d7d7b2c43..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/mbcsgroupprober.py
+++ /dev/null
@@ -1,56 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Universal charset detector code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 2001
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-# Shy Shalom - original C code
-# Proofpoint, Inc.
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .big5prober import Big5Prober
-from .charsetgroupprober import CharSetGroupProber
-from .cp949prober import CP949Prober
-from .eucjpprober import EUCJPProber
-from .euckrprober import EUCKRProber
-from .euctwprober import EUCTWProber
-from .gb2312prober import GB2312Prober
-from .johabprober import JOHABProber
-from .sjisprober import SJISProber
-from .utf8prober import UTF8Prober
-
-
-class MBCSGroupProber(CharSetGroupProber):
- def __init__(self, lang_filter=None):
- super().__init__(lang_filter=lang_filter)
- self.probers = [
- UTF8Prober(),
- SJISProber(),
- EUCJPProber(),
- GB2312Prober(),
- EUCKRProber(),
- CP949Prober(),
- Big5Prober(),
- EUCTWProber(),
- JOHABProber(),
- ]
- self.reset()
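The prober group above is the part of chardet that handles multi-byte encodings; it is normally reached through the library's public `detect` helper rather than instantiated directly. A minimal sketch, assuming the standalone `chardet` package is installed (the vendored copy under `pip._vendor` behaves the same but is not meant for direct import):

```python
import chardet

# Multi-byte encoded bytes; detection routes through MBCSGroupProber internally.
raw = "こんにちは、世界".encode("shift_jis")

result = chardet.detect(raw)
print(result["encoding"], result["confidence"])  # e.g. a Shift_JIS guess plus a confidence score
```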
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/diagnose.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/diagnose.py
deleted file mode 100644
index ad36183898eddb11e33ccb7623c0291ccc0f091d..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/diagnose.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import os
-import platform
-
-from pip._vendor.rich import inspect
-from pip._vendor.rich.console import Console, get_windows_console_features
-from pip._vendor.rich.panel import Panel
-from pip._vendor.rich.pretty import Pretty
-
-
-def report() -> None: # pragma: no cover
- """Print a report to the terminal with debugging information"""
- console = Console()
- inspect(console)
- features = get_windows_console_features()
- inspect(features)
-
- env_names = (
- "TERM",
- "COLORTERM",
- "CLICOLOR",
- "NO_COLOR",
- "TERM_PROGRAM",
- "COLUMNS",
- "LINES",
- "JUPYTER_COLUMNS",
- "JUPYTER_LINES",
- "JPY_PARENT_PID",
- "VSCODE_VERBOSE_LOGGING",
- )
- env = {name: os.getenv(name) for name in env_names}
- console.print(Panel.fit((Pretty(env)), title="[b]Environment Variables"))
-
- console.print(f'platform="{platform.system()}"')
-
-
-if __name__ == "__main__": # pragma: no cover
- report()
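For reference, the non-vendored `rich` package ships the same diagnostic entry point, so the report can be reproduced outside pip's vendored tree. A small sketch, assuming the standalone `rich` package is installed:

```python
from rich.diagnose import report

# Prints the Console introspection, Windows console features and the
# terminal-related environment variables listed above.
report()
```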
diff --git a/spaces/Realcat/image-matching-webui/third_party/SuperGluePretrainedNetwork/models/__init__.py b/spaces/Realcat/image-matching-webui/third_party/SuperGluePretrainedNetwork/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/masked_conv.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/masked_conv.py
deleted file mode 100644
index cd514cc204c1d571ea5dc7e74b038c0f477a008b..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/masked_conv.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-import torch
-import torch.nn as nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['masked_im2col_forward', 'masked_col2im_forward'])
-
-
-class MaskedConv2dFunction(Function):
-
- @staticmethod
- def symbolic(g, features, mask, weight, bias, padding, stride):
- return g.op(
- 'mmcv::MMCVMaskedConv2d',
- features,
- mask,
- weight,
- bias,
- padding_i=padding,
- stride_i=stride)
-
- @staticmethod
- def forward(ctx, features, mask, weight, bias, padding=0, stride=1):
- assert mask.dim() == 3 and mask.size(0) == 1
- assert features.dim() == 4 and features.size(0) == 1
- assert features.size()[2:] == mask.size()[1:]
- pad_h, pad_w = _pair(padding)
- stride_h, stride_w = _pair(stride)
- if stride_h != 1 or stride_w != 1:
- raise ValueError(
-                'Stride can only be 1 in masked_conv2d currently.')
- out_channel, in_channel, kernel_h, kernel_w = weight.size()
-
- batch_size = features.size(0)
- out_h = int(
- math.floor((features.size(2) + 2 * pad_h -
- (kernel_h - 1) - 1) / stride_h + 1))
- out_w = int(
- math.floor((features.size(3) + 2 * pad_w -
-                        (kernel_w - 1) - 1) / stride_w + 1))
- mask_inds = torch.nonzero(mask[0] > 0, as_tuple=False)
- output = features.new_zeros(batch_size, out_channel, out_h, out_w)
- if mask_inds.numel() > 0:
- mask_h_idx = mask_inds[:, 0].contiguous()
- mask_w_idx = mask_inds[:, 1].contiguous()
- data_col = features.new_zeros(in_channel * kernel_h * kernel_w,
- mask_inds.size(0))
- ext_module.masked_im2col_forward(
- features,
- mask_h_idx,
- mask_w_idx,
- data_col,
- kernel_h=kernel_h,
- kernel_w=kernel_w,
- pad_h=pad_h,
- pad_w=pad_w)
-
-            masked_output = torch.addmm(bias[:, None],
- weight.view(out_channel, -1), data_col)
- ext_module.masked_col2im_forward(
- masked_output,
- mask_h_idx,
- mask_w_idx,
- output,
- height=out_h,
- width=out_w,
- channels=out_channel)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- return (None, ) * 5
-
-
-masked_conv2d = MaskedConv2dFunction.apply
-
-
-class MaskedConv2d(nn.Conv2d):
- """A MaskedConv2d which inherits the official Conv2d.
-
-    The masked forward doesn't implement the backward function and currently
-    only supports a stride of 1.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- bias=True):
- super(MaskedConv2d,
- self).__init__(in_channels, out_channels, kernel_size, stride,
- padding, dilation, groups, bias)
-
- def forward(self, input, mask=None):
- if mask is None: # fallback to the normal Conv2d
- return super(MaskedConv2d, self).forward(input)
- else:
- return masked_conv2d(input, mask, self.weight, self.bias,
- self.padding)
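Typical use of the layer above looks roughly like the sketch below. This is a hedged example: it assumes an `mmcv-full` build with the compiled `masked_im2col`/`masked_col2im` CUDA ops and a GPU, and the shapes follow the assertions in `MaskedConv2dFunction.forward` (batch size 1, mask of shape `(1, H, W)` matching the feature map).

```python
import torch
from mmcv.ops import MaskedConv2d  # assumes mmcv-full with compiled ops

conv = MaskedConv2d(16, 32, kernel_size=3, padding=1).cuda()
x = torch.randn(1, 16, 64, 64).cuda()                # batch size must be 1
mask = (torch.rand(1, 64, 64) > 0.5).float().cuda()  # (1, H, W), same spatial size as x

out_sparse = conv(x, mask)  # sparse forward via the masked im2col/col2im kernels
out_dense = conv(x)         # mask=None falls back to the ordinary Conv2d forward
```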
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/iou_balanced_neg_sampler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/iou_balanced_neg_sampler.py
deleted file mode 100644
index f275e430d1b57c4d9df57387b8f3ae6f0ff68cf1..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/iou_balanced_neg_sampler.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import numpy as np
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .random_sampler import RandomSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class IoUBalancedNegSampler(RandomSampler):
- """IoU Balanced Sampling.
-
- arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)
-
-    Sampling proposals according to their IoU. `floor_fraction` of the needed
-    RoIs are sampled randomly from proposals whose IoU is lower than
-    `floor_thr`. The rest are sampled from proposals whose IoU is higher than
-    `floor_thr`; these are drawn evenly from `num_bins` bins that split the
-    IoU range evenly.
-
- Args:
- num (int): number of proposals.
- pos_fraction (float): fraction of positive proposals.
- floor_thr (float): threshold (minimum) IoU for IoU balanced sampling,
-        floor_thr (float): threshold (minimum) IoU for IoU balanced sampling;
-            set to -1 to apply IoU balanced sampling to all negatives.
- num_bins (int): number of bins in IoU balanced sampling.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- floor_thr=-1,
- floor_fraction=0,
- num_bins=3,
- **kwargs):
- super(IoUBalancedNegSampler, self).__init__(num, pos_fraction,
- **kwargs)
- assert floor_thr >= 0 or floor_thr == -1
- assert 0 <= floor_fraction <= 1
- assert num_bins >= 1
-
- self.floor_thr = floor_thr
- self.floor_fraction = floor_fraction
- self.num_bins = num_bins
-
- def sample_via_interval(self, max_overlaps, full_set, num_expected):
- """Sample according to the iou interval.
-
- Args:
- max_overlaps (torch.Tensor): IoU between bounding boxes and ground
- truth boxes.
-            full_set (set(int)): A full set of indices of boxes.
-            num_expected (int): Number of expected samples.
-
- Returns:
- np.ndarray: Indices of samples
- """
- max_iou = max_overlaps.max()
- iou_interval = (max_iou - self.floor_thr) / self.num_bins
- per_num_expected = int(num_expected / self.num_bins)
-
- sampled_inds = []
- for i in range(self.num_bins):
- start_iou = self.floor_thr + i * iou_interval
- end_iou = self.floor_thr + (i + 1) * iou_interval
- tmp_set = set(
- np.where(
- np.logical_and(max_overlaps >= start_iou,
- max_overlaps < end_iou))[0])
- tmp_inds = list(tmp_set & full_set)
- if len(tmp_inds) > per_num_expected:
- tmp_sampled_set = self.random_choice(tmp_inds,
- per_num_expected)
- else:
-                tmp_sampled_set = np.array(tmp_inds, dtype=int)
- sampled_inds.append(tmp_sampled_set)
-
- sampled_inds = np.concatenate(sampled_inds)
- if len(sampled_inds) < num_expected:
- num_extra = num_expected - len(sampled_inds)
- extra_inds = np.array(list(full_set - set(sampled_inds)))
- if len(extra_inds) > num_extra:
- extra_inds = self.random_choice(extra_inds, num_extra)
- sampled_inds = np.concatenate([sampled_inds, extra_inds])
-
- return sampled_inds
-
- def _sample_neg(self, assign_result, num_expected, **kwargs):
- """Sample negative boxes.
-
- Args:
- assign_result (:obj:`AssignResult`): The assigned results of boxes.
- num_expected (int): The number of expected negative samples
-
- Returns:
- Tensor or ndarray: sampled indices.
- """
- neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False)
- if neg_inds.numel() != 0:
- neg_inds = neg_inds.squeeze(1)
- if len(neg_inds) <= num_expected:
- return neg_inds
- else:
- max_overlaps = assign_result.max_overlaps.cpu().numpy()
- # balance sampling for negative samples
- neg_set = set(neg_inds.cpu().numpy())
-
- if self.floor_thr > 0:
- floor_set = set(
- np.where(
- np.logical_and(max_overlaps >= 0,
- max_overlaps < self.floor_thr))[0])
- iou_sampling_set = set(
- np.where(max_overlaps >= self.floor_thr)[0])
- elif self.floor_thr == 0:
- floor_set = set(np.where(max_overlaps == 0)[0])
- iou_sampling_set = set(
- np.where(max_overlaps > self.floor_thr)[0])
- else:
- floor_set = set()
- iou_sampling_set = set(
- np.where(max_overlaps > self.floor_thr)[0])
- # for sampling interval calculation
- self.floor_thr = 0
-
- floor_neg_inds = list(floor_set & neg_set)
- iou_sampling_neg_inds = list(iou_sampling_set & neg_set)
- num_expected_iou_sampling = int(num_expected *
- (1 - self.floor_fraction))
- if len(iou_sampling_neg_inds) > num_expected_iou_sampling:
- if self.num_bins >= 2:
- iou_sampled_inds = self.sample_via_interval(
- max_overlaps, set(iou_sampling_neg_inds),
- num_expected_iou_sampling)
- else:
- iou_sampled_inds = self.random_choice(
- iou_sampling_neg_inds, num_expected_iou_sampling)
- else:
- iou_sampled_inds = np.array(
-                        iou_sampling_neg_inds, dtype=int)
- num_expected_floor = num_expected - len(iou_sampled_inds)
- if len(floor_neg_inds) > num_expected_floor:
- sampled_floor_inds = self.random_choice(
- floor_neg_inds, num_expected_floor)
- else:
-                    sampled_floor_inds = np.array(floor_neg_inds, dtype=int)
- sampled_inds = np.concatenate(
- (sampled_floor_inds, iou_sampled_inds))
- if len(sampled_inds) < num_expected:
- num_extra = num_expected - len(sampled_inds)
- extra_inds = np.array(list(neg_set - set(sampled_inds)))
- if len(extra_inds) > num_extra:
- extra_inds = self.random_choice(extra_inds, num_extra)
- sampled_inds = np.concatenate((sampled_inds, extra_inds))
- sampled_inds = torch.from_numpy(sampled_inds).long().to(
- assign_result.gt_inds.device)
- return sampled_inds
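Because the class registers itself with `BBOX_SAMPLERS`, it is normally selected from a config dict rather than constructed by hand. A hedged sketch of such a config (field names follow the Libra R-CNN style configs; exact keys can differ between mmdet versions):

```python
sampler = dict(
    type='IoUBalancedNegSampler',  # the class registered above
    num=512,                   # RoIs sampled per image
    pos_fraction=0.25,         # fraction of positives among the sampled RoIs
    floor_thr=-1,              # -1: IoU balanced sampling for all negatives
    floor_fraction=0,          # share of negatives drawn from below floor_thr
    num_bins=3,                # IoU bins for the remaining negatives
    neg_pos_ub=-1,
    add_gt_as_proposals=True)
```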
diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/legacy.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/legacy.py
deleted file mode 100644
index a22d32c80f313f6dead3ba2887caab5bb8cf7e23..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/legacy.py
+++ /dev/null
@@ -1,323 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import click
-import pickle
-import re
-import copy
-import numpy as np
-import torch
-import dnnlib
-from torch_utils import misc
-
-#----------------------------------------------------------------------------
-
-def load_network_pkl(f, force_fp16=False):
- data = _LegacyUnpickler(f).load()
-
- # Legacy TensorFlow pickle => convert.
- if isinstance(data, tuple) and len(data) == 3 and all(isinstance(net, _TFNetworkStub) for net in data):
- tf_G, tf_D, tf_Gs = data
- G = convert_tf_generator(tf_G)
- D = convert_tf_discriminator(tf_D)
- G_ema = convert_tf_generator(tf_Gs)
- data = dict(G=G, D=D, G_ema=G_ema)
-
- # Add missing fields.
- if 'training_set_kwargs' not in data:
- data['training_set_kwargs'] = None
- if 'augment_pipe' not in data:
- data['augment_pipe'] = None
-
- # Validate contents.
- assert isinstance(data['G'], torch.nn.Module)
- assert isinstance(data['D'], torch.nn.Module)
- assert isinstance(data['G_ema'], torch.nn.Module)
- assert isinstance(data['training_set_kwargs'], (dict, type(None)))
- assert isinstance(data['augment_pipe'], (torch.nn.Module, type(None)))
-
- # Force FP16.
- if force_fp16:
- for key in ['G', 'D', 'G_ema']:
- old = data[key]
- kwargs = copy.deepcopy(old.init_kwargs)
- if key.startswith('G'):
- kwargs.synthesis_kwargs = dnnlib.EasyDict(kwargs.get('synthesis_kwargs', {}))
- kwargs.synthesis_kwargs.num_fp16_res = 4
- kwargs.synthesis_kwargs.conv_clamp = 256
- if key.startswith('D'):
- kwargs.num_fp16_res = 4
- kwargs.conv_clamp = 256
- if kwargs != old.init_kwargs:
- new = type(old)(**kwargs).eval().requires_grad_(False)
- misc.copy_params_and_buffers(old, new, require_all=True)
- data[key] = new
- return data
-
-#----------------------------------------------------------------------------
-
-class _TFNetworkStub(dnnlib.EasyDict):
- pass
-
-class _LegacyUnpickler(pickle.Unpickler):
- def find_class(self, module, name):
- if module == 'torch.storage' and name == '_load_from_bytes':
- import io
- return lambda b: torch.load(io.BytesIO(b), map_location='cpu')
- if module == 'dnnlib.tflib.network' and name == 'Network':
- return _TFNetworkStub
- return super().find_class(module, name)
-
-#----------------------------------------------------------------------------
-
-def _collect_tf_params(tf_net):
- # pylint: disable=protected-access
- tf_params = dict()
- def recurse(prefix, tf_net):
- for name, value in tf_net.variables:
- tf_params[prefix + name] = value
- for name, comp in tf_net.components.items():
- recurse(prefix + name + '/', comp)
- recurse('', tf_net)
- return tf_params
-
-#----------------------------------------------------------------------------
-
-def _populate_module_params(module, *patterns):
- for name, tensor in misc.named_params_and_buffers(module):
- found = False
- value = None
- for pattern, value_fn in zip(patterns[0::2], patterns[1::2]):
- match = re.fullmatch(pattern, name)
- if match:
- found = True
- if value_fn is not None:
- value = value_fn(*match.groups())
- break
- try:
- assert found
- if value is not None:
- tensor.copy_(torch.from_numpy(np.array(value)))
- except:
- print(name, list(tensor.shape))
- raise
-
-#----------------------------------------------------------------------------
-
-def convert_tf_generator(tf_G):
- if tf_G.version < 4:
- raise ValueError('TensorFlow pickle version too low')
-
- # Collect kwargs.
- tf_kwargs = tf_G.static_kwargs
- known_kwargs = set()
- def kwarg(tf_name, default=None, none=None):
- known_kwargs.add(tf_name)
- val = tf_kwargs.get(tf_name, default)
- return val if val is not None else none
-
- # Convert kwargs.
- kwargs = dnnlib.EasyDict(
- z_dim = kwarg('latent_size', 512),
- c_dim = kwarg('label_size', 0),
- w_dim = kwarg('dlatent_size', 512),
- img_resolution = kwarg('resolution', 1024),
- img_channels = kwarg('num_channels', 3),
- mapping_kwargs = dnnlib.EasyDict(
- num_layers = kwarg('mapping_layers', 8),
- embed_features = kwarg('label_fmaps', None),
- layer_features = kwarg('mapping_fmaps', None),
- activation = kwarg('mapping_nonlinearity', 'lrelu'),
- lr_multiplier = kwarg('mapping_lrmul', 0.01),
- w_avg_beta = kwarg('w_avg_beta', 0.995, none=1),
- ),
- synthesis_kwargs = dnnlib.EasyDict(
- channel_base = kwarg('fmap_base', 16384) * 2,
- channel_max = kwarg('fmap_max', 512),
- num_fp16_res = kwarg('num_fp16_res', 0),
- conv_clamp = kwarg('conv_clamp', None),
- architecture = kwarg('architecture', 'skip'),
- resample_filter = kwarg('resample_kernel', [1,3,3,1]),
- use_noise = kwarg('use_noise', True),
- activation = kwarg('nonlinearity', 'lrelu'),
- ),
- )
-
- # Check for unknown kwargs.
- kwarg('truncation_psi')
- kwarg('truncation_cutoff')
- kwarg('style_mixing_prob')
- kwarg('structure')
- unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs)
- if len(unknown_kwargs) > 0:
- raise ValueError('Unknown TensorFlow kwarg', unknown_kwargs[0])
-
- # Collect params.
- tf_params = _collect_tf_params(tf_G)
- for name, value in list(tf_params.items()):
- match = re.fullmatch(r'ToRGB_lod(\d+)/(.*)', name)
- if match:
- r = kwargs.img_resolution // (2 ** int(match.group(1)))
- tf_params[f'{r}x{r}/ToRGB/{match.group(2)}'] = value
-            kwargs.synthesis_kwargs.architecture = 'orig'
- #for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}')
-
- # Convert params.
- from training import networks
- G = networks.Generator(**kwargs).eval().requires_grad_(False)
- # pylint: disable=unnecessary-lambda
- _populate_module_params(G,
- r'mapping\.w_avg', lambda: tf_params[f'dlatent_avg'],
- r'mapping\.embed\.weight', lambda: tf_params[f'mapping/LabelEmbed/weight'].transpose(),
- r'mapping\.embed\.bias', lambda: tf_params[f'mapping/LabelEmbed/bias'],
- r'mapping\.fc(\d+)\.weight', lambda i: tf_params[f'mapping/Dense{i}/weight'].transpose(),
- r'mapping\.fc(\d+)\.bias', lambda i: tf_params[f'mapping/Dense{i}/bias'],
- r'synthesis\.b4\.const', lambda: tf_params[f'synthesis/4x4/Const/const'][0],
- r'synthesis\.b4\.conv1\.weight', lambda: tf_params[f'synthesis/4x4/Conv/weight'].transpose(3, 2, 0, 1),
- r'synthesis\.b4\.conv1\.bias', lambda: tf_params[f'synthesis/4x4/Conv/bias'],
- r'synthesis\.b4\.conv1\.noise_const', lambda: tf_params[f'synthesis/noise0'][0, 0],
- r'synthesis\.b4\.conv1\.noise_strength', lambda: tf_params[f'synthesis/4x4/Conv/noise_strength'],
- r'synthesis\.b4\.conv1\.affine\.weight', lambda: tf_params[f'synthesis/4x4/Conv/mod_weight'].transpose(),
- r'synthesis\.b4\.conv1\.affine\.bias', lambda: tf_params[f'synthesis/4x4/Conv/mod_bias'] + 1,
- r'synthesis\.b(\d+)\.conv0\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/weight'][::-1, ::-1].transpose(3, 2, 0, 1),
- r'synthesis\.b(\d+)\.conv0\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/bias'],
- r'synthesis\.b(\d+)\.conv0\.noise_const', lambda r: tf_params[f'synthesis/noise{int(np.log2(int(r)))*2-5}'][0, 0],
- r'synthesis\.b(\d+)\.conv0\.noise_strength', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/noise_strength'],
- r'synthesis\.b(\d+)\.conv0\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/mod_weight'].transpose(),
- r'synthesis\.b(\d+)\.conv0\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/mod_bias'] + 1,
- r'synthesis\.b(\d+)\.conv1\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/weight'].transpose(3, 2, 0, 1),
- r'synthesis\.b(\d+)\.conv1\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/bias'],
- r'synthesis\.b(\d+)\.conv1\.noise_const', lambda r: tf_params[f'synthesis/noise{int(np.log2(int(r)))*2-4}'][0, 0],
- r'synthesis\.b(\d+)\.conv1\.noise_strength', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/noise_strength'],
- r'synthesis\.b(\d+)\.conv1\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/mod_weight'].transpose(),
- r'synthesis\.b(\d+)\.conv1\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/mod_bias'] + 1,
- r'synthesis\.b(\d+)\.torgb\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/weight'].transpose(3, 2, 0, 1),
- r'synthesis\.b(\d+)\.torgb\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/bias'],
- r'synthesis\.b(\d+)\.torgb\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/mod_weight'].transpose(),
- r'synthesis\.b(\d+)\.torgb\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/mod_bias'] + 1,
- r'synthesis\.b(\d+)\.skip\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Skip/weight'][::-1, ::-1].transpose(3, 2, 0, 1),
- r'.*\.resample_filter', None,
- )
- return G
-
-#----------------------------------------------------------------------------
-
-def convert_tf_discriminator(tf_D):
- if tf_D.version < 4:
- raise ValueError('TensorFlow pickle version too low')
-
- # Collect kwargs.
- tf_kwargs = tf_D.static_kwargs
- known_kwargs = set()
- def kwarg(tf_name, default=None):
- known_kwargs.add(tf_name)
- return tf_kwargs.get(tf_name, default)
-
- # Convert kwargs.
- kwargs = dnnlib.EasyDict(
- c_dim = kwarg('label_size', 0),
- img_resolution = kwarg('resolution', 1024),
- img_channels = kwarg('num_channels', 3),
- architecture = kwarg('architecture', 'resnet'),
- channel_base = kwarg('fmap_base', 16384) * 2,
- channel_max = kwarg('fmap_max', 512),
- num_fp16_res = kwarg('num_fp16_res', 0),
- conv_clamp = kwarg('conv_clamp', None),
- cmap_dim = kwarg('mapping_fmaps', None),
- block_kwargs = dnnlib.EasyDict(
- activation = kwarg('nonlinearity', 'lrelu'),
- resample_filter = kwarg('resample_kernel', [1,3,3,1]),
- freeze_layers = kwarg('freeze_layers', 0),
- ),
- mapping_kwargs = dnnlib.EasyDict(
- num_layers = kwarg('mapping_layers', 0),
- embed_features = kwarg('mapping_fmaps', None),
- layer_features = kwarg('mapping_fmaps', None),
- activation = kwarg('nonlinearity', 'lrelu'),
- lr_multiplier = kwarg('mapping_lrmul', 0.1),
- ),
- epilogue_kwargs = dnnlib.EasyDict(
- mbstd_group_size = kwarg('mbstd_group_size', None),
- mbstd_num_channels = kwarg('mbstd_num_features', 1),
- activation = kwarg('nonlinearity', 'lrelu'),
- ),
- )
-
- # Check for unknown kwargs.
- kwarg('structure')
- unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs)
- if len(unknown_kwargs) > 0:
- raise ValueError('Unknown TensorFlow kwarg', unknown_kwargs[0])
-
- # Collect params.
- tf_params = _collect_tf_params(tf_D)
- for name, value in list(tf_params.items()):
- match = re.fullmatch(r'FromRGB_lod(\d+)/(.*)', name)
- if match:
- r = kwargs.img_resolution // (2 ** int(match.group(1)))
- tf_params[f'{r}x{r}/FromRGB/{match.group(2)}'] = value
- kwargs.architecture = 'orig'
- #for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}')
-
- # Convert params.
- from training import networks
- D = networks.Discriminator(**kwargs).eval().requires_grad_(False)
- # pylint: disable=unnecessary-lambda
- _populate_module_params(D,
- r'b(\d+)\.fromrgb\.weight', lambda r: tf_params[f'{r}x{r}/FromRGB/weight'].transpose(3, 2, 0, 1),
- r'b(\d+)\.fromrgb\.bias', lambda r: tf_params[f'{r}x{r}/FromRGB/bias'],
- r'b(\d+)\.conv(\d+)\.weight', lambda r, i: tf_params[f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/weight'].transpose(3, 2, 0, 1),
- r'b(\d+)\.conv(\d+)\.bias', lambda r, i: tf_params[f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/bias'],
- r'b(\d+)\.skip\.weight', lambda r: tf_params[f'{r}x{r}/Skip/weight'].transpose(3, 2, 0, 1),
- r'mapping\.embed\.weight', lambda: tf_params[f'LabelEmbed/weight'].transpose(),
- r'mapping\.embed\.bias', lambda: tf_params[f'LabelEmbed/bias'],
- r'mapping\.fc(\d+)\.weight', lambda i: tf_params[f'Mapping{i}/weight'].transpose(),
- r'mapping\.fc(\d+)\.bias', lambda i: tf_params[f'Mapping{i}/bias'],
- r'b4\.conv\.weight', lambda: tf_params[f'4x4/Conv/weight'].transpose(3, 2, 0, 1),
- r'b4\.conv\.bias', lambda: tf_params[f'4x4/Conv/bias'],
- r'b4\.fc\.weight', lambda: tf_params[f'4x4/Dense0/weight'].transpose(),
- r'b4\.fc\.bias', lambda: tf_params[f'4x4/Dense0/bias'],
- r'b4\.out\.weight', lambda: tf_params[f'Output/weight'].transpose(),
- r'b4\.out\.bias', lambda: tf_params[f'Output/bias'],
- r'.*\.resample_filter', None,
- )
- return D
-
-#----------------------------------------------------------------------------
-
-@click.command()
-@click.option('--source', help='Input pickle', required=True, metavar='PATH')
-@click.option('--dest', help='Output pickle', required=True, metavar='PATH')
-@click.option('--force-fp16', help='Force the networks to use FP16', type=bool, default=False, metavar='BOOL', show_default=True)
-def convert_network_pickle(source, dest, force_fp16):
- """Convert legacy network pickle into the native PyTorch format.
-
- The tool is able to load the main network configurations exported using the TensorFlow version of StyleGAN2 or StyleGAN2-ADA.
- It does not support e.g. StyleGAN2-ADA comparison methods, StyleGAN2 configs A-D, or StyleGAN1 networks.
-
- Example:
-
- \b
- python legacy.py \\
- --source=https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/stylegan2-cat-config-f.pkl \\
- --dest=stylegan2-cat-config-f.pkl
- """
- print(f'Loading "{source}"...')
- with dnnlib.util.open_url(source) as f:
- data = load_network_pkl(f, force_fp16=force_fp16)
- print(f'Saving "{dest}"...')
- with open(dest, 'wb') as f:
- pickle.dump(data, f)
- print('Done.')
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- convert_network_pickle() # pylint: disable=no-value-for-parameter
-
-#----------------------------------------------------------------------------
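After conversion, the resulting pickle is loaded back through the same helpers. A minimal sketch, assuming the surrounding repository provides `dnnlib` and this `legacy` module on the import path:

```python
import torch
import dnnlib
import legacy

with dnnlib.util.open_url('stylegan2-cat-config-f.pkl') as f:
    data = legacy.load_network_pkl(f)

G = data['G_ema'].eval()       # EMA generator, ready for inference
z = torch.randn([1, G.z_dim])  # latent code
c = torch.zeros([1, G.c_dim])  # class label (ignored when c_dim == 0)
img = G(z, c)                  # NCHW image tensor, roughly in [-1, 1]
```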
diff --git a/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/Parser/element.py b/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/Parser/element.py
deleted file mode 100644
index 07cb9c1ffdf63068b1af24b5010538de61bc9836..0000000000000000000000000000000000000000
--- a/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/Parser/element.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Contains 27 elements.
-elements_list = (["Al", "B", "C", "Co", "Cr", "Cu", "Fe", "Ga", "Ge",
- "Hf", "Li", "Mg", "Mn", "Mo", "N", "Nb","Ni", "Sc",
- "Si", "Sn", "Ta", "Ti", "V", "W", "Y", "Zn", "Zr"])
-
diff --git a/spaces/SIGGRAPH2022/Self-Distilled-StyleGAN/style.css b/spaces/SIGGRAPH2022/Self-Distilled-StyleGAN/style.css
deleted file mode 100644
index 8dd6cf3081735167994093f71d1d0c80d1a7d144..0000000000000000000000000000000000000000
--- a/spaces/SIGGRAPH2022/Self-Distilled-StyleGAN/style.css
+++ /dev/null
@@ -1,11 +0,0 @@
-h1 {
- text-align: center;
-}
-div#result {
- max-width: 600px;
- max-height: 600px;
-}
-img#visitor-badge {
- display: block;
- margin: auto;
-}
diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/ui_util/__init__.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/ui_util/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/google_utils.py b/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/google_utils.py
deleted file mode 100644
index f363408e63981702e63dcda189cbc2099d0a9499..0000000000000000000000000000000000000000
--- a/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/google_utils.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Google utils: https://cloud.google.com/storage/docs/reference/libraries
-
-import os
-import platform
-import subprocess
-import time
-from pathlib import Path
-
-import requests
-import torch
-
-
-def gsutil_getsize(url=''):
- # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
- s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
- return eval(s.split(' ')[0]) if len(s) else 0 # bytes
-
-
-def attempt_download(file, repo='WongKinYiu/yolov7'):
- # Attempt file download if does not exist
- file = Path(str(file).strip().replace("'", '').lower())
-
- if not file.exists():
- try:
- response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api
- assets = [x['name'] for x in response['assets']] # release assets
- tag = response['tag_name'] # i.e. 'v1.0'
- except: # fallback plan
- assets = ['yolov7.pt', 'yolov7-tiny.pt', 'yolov7x.pt', 'yolov7-d6.pt', 'yolov7-e6.pt',
- 'yolov7-e6e.pt', 'yolov7-w6.pt']
- tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
-
- name = file.name
- if name in assets:
- msg = f'{file} missing, try downloading from https://github.com/{repo}/releases/'
- redundant = False # second download option
- try: # GitHub
- url = f'https://github.com/{repo}/releases/download/{tag}/{name}'
- print(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, file)
- assert file.exists() and file.stat().st_size > 1E6 # check
- except Exception as e: # GCP
- print(f'Download error: {e}')
- assert redundant, 'No secondary mirror'
- url = f'https://storage.googleapis.com/{repo}/ckpt/{name}'
- print(f'Downloading {url} to {file}...')
- os.system(f'curl -L {url} -o {file}') # torch.hub.download_url_to_file(url, weights)
- finally:
- if not file.exists() or file.stat().st_size < 1E6: # check
- file.unlink(missing_ok=True) # remove partial downloads
- print(f'ERROR: Download failure: {msg}')
- print('')
- return
-
-
-def gdrive_download(id='', file='tmp.zip'):
- # Downloads a file from Google Drive. from yolov7.utils.google_utils import *; gdrive_download()
- t = time.time()
- file = Path(file)
- cookie = Path('cookie') # gdrive cookie
- print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')
- file.unlink(missing_ok=True) # remove existing file
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Attempt file download
- out = "NUL" if platform.system() == "Windows" else "/dev/null"
- os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}')
- if os.path.exists('cookie'): # large file
- s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}'
- else: # small file
- s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"'
- r = os.system(s) # execute, capture return
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Error check
- if r != 0:
- file.unlink(missing_ok=True) # remove partial
- print('Download error ') # raise Exception('Download error')
- return r
-
- # Unzip if archive
- if file.suffix == '.zip':
- print('unzipping... ', end='')
- os.system(f'unzip -q {file}') # unzip
- file.unlink() # remove zip to free space
-
- print(f'Done ({time.time() - t:.1f}s)')
- return r
-
-
-def get_token(cookie="./cookie"):
- with open(cookie) as f:
- for line in f:
- if "download" in line:
- return line.split()[-1]
- return ""
-
-# def upload_blob(bucket_name, source_file_name, destination_blob_name):
-# # Uploads a file to a bucket
-# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
-#
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(destination_blob_name)
-#
-# blob.upload_from_filename(source_file_name)
-#
-# print('File {} uploaded to {}.'.format(
-# source_file_name,
-# destination_blob_name))
-#
-#
-# def download_blob(bucket_name, source_blob_name, destination_file_name):
-# # Uploads a blob from a bucket
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(source_blob_name)
-#
-# blob.download_to_filename(destination_file_name)
-#
-# print('Blob {} downloaded to {}.'.format(
-# source_blob_name,
-# destination_file_name))
diff --git a/spaces/Saffy/minipets/app.py b/spaces/Saffy/minipets/app.py
deleted file mode 100644
index 5e23bc4076bba55b531dd7417074288e202b2e16..0000000000000000000000000000000000000000
--- a/spaces/Saffy/minipets/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from fastai.vision.all import *
-import gradio as gr
-
-def is_cat(x): return x[0].isupper()
-
-learn = load_learner('pet_model.pkl')
-
-categories = ('Dog', 'Cat')
-
-def classify_image(img):
- pred,idx,probs = learn.predict(img)
- return dict(zip(categories, map(float,probs)))
-
-image = gr.inputs.Image(shape=(192, 192))
-label = gr.outputs.Label()
-examples = ['dog.jpg', 'cat.jpg', 'cat1.jpg', 'dog1.jpg']
-
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
-intf.launch(inline=False)
diff --git a/spaces/SaintPepe/google-ddpm-church-256/app.py b/spaces/SaintPepe/google-ddpm-church-256/app.py
deleted file mode 100644
index eae6f0d1eea6816da67d4a7f8e237465046db036..0000000000000000000000000000000000000000
--- a/spaces/SaintPepe/google-ddpm-church-256/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/google/ddpm-church-256").launch()
\ No newline at end of file
diff --git a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/app.py b/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/app.py
deleted file mode 100644
index 0922ba565c4e51338ba37758b7fd9683bc0f8c84..0000000000000000000000000000000000000000
--- a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/app.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import os
-import json
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-import translators.server as tss
-
-import commons
-import utils
-from data_utils import TextAudioLoader, TextAudioCollate, TextAudioSpeakerLoader, TextAudioSpeakerCollate
-from models import SynthesizerTrn
-from text.symbols import symbols
-from text import text_to_sequence
-import gradio as gr
-
-from scipy.io.wavfile import write
-
-
-def get_text(text, hps):
- text_norm = text_to_sequence(text, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-hps = utils.get_hparams_from_file("./configs/uma87.json")
-net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model)
-_ = net_g.eval()
-
-_ = utils.load_checkpoint("pretrained_models/G_1153000.pth", net_g, None)
-
-title = "Umamusume voice synthesizer \n 赛马娘语音合成器"
-description = """
-This synthesizer is based on the VITS model (https://arxiv.org/abs/2106.06103), trained on voice data extracted from the mobile game Uma Musume: Pretty Derby.\n
-这个合成器是基于VITS文本到语音模型,在从手游《賽馬娘:Pretty Derby》解包的语音数据上训练得到。
-"""
-article = """
-If your input language is not Japanese, it will be translated to Japanese by Google Translate, but accuracy is not guaranteed.\n
-如果您的输入语言不是日语,则会由谷歌翻译自动翻译为日语,但是准确性不能保证。
-"""
-def infer(text, character, language):
- if language == '日本語':
- pass
- elif language == '简体中文':
- text = tss.google(text, from_language='zh', to_language='ja')
- elif language == 'English':
- text = tss.google(text, from_language='en', to_language='ja')
- char_id = int(character.split(':')[0])
- stn_tst = get_text(text, hps)
- with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
- sid = torch.LongTensor([char_id])
- audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy()
- return (text,(22050, audio))
-
-# We instantiate the Textbox class
-textbox = gr.Textbox(label="Text", placeholder="Type your sentence here", lines=2)
-# select character
-char_dropdown = gr.Dropdown(['0:特别周','1:无声铃鹿','2:东海帝王','3:丸善斯基',
- '4:富士奇迹','5:小栗帽','6:黄金船','7:伏特加',
- '8:大和赤骥','9:大树快车','10:草上飞','11:菱亚马逊',
- '12:目白麦昆','13:神鹰','14:好歌剧','15:成田白仁',
- '16:鲁道夫象征','17:气槽','18:爱丽数码','19:青云天空',
- '20:玉藻十字','21:美妙姿势','22:琵琶晨光','23:重炮',
- '24:曼城茶座','25:美普波旁','26:目白雷恩','27:菱曙',
- '28:雪之美人','29:米浴','30:艾尼斯风神','31:爱丽速子',
- '32:爱慕织姬','33:稻荷一','34:胜利奖券','35:空中神宫',
- '36:荣进闪耀','37:真机伶','38:川上公主','39:黄金城市',
- '40:樱花进王','41:采珠','42:新光风','43:东商变革',
- '44:超级小溪','45:醒目飞鹰','46:荒漠英雄','47:东瀛佐敦',
- '48:中山庆典','49:成田大进','50:西野花','51:春乌拉拉',
- '52:青竹回忆','53:微光飞驹','54:美丽周日','55:待兼福来',
- '56:Mr.C.B','57:名将怒涛','58:目白多伯','59:优秀素质',
- '60:帝王光环','61:待兼诗歌剧','62:生野狄杜斯','63:目白善信',
- '64:大拓太阳神','65:双涡轮','66:里见光钻','67:北部玄驹',
- '68:樱花千代王','69:天狼星象征','70:目白阿尔丹','71:八重无敌',
- '72:鹤丸刚志','73:目白光明','74:樱花桂冠','75:成田路',
- '76:也文摄辉','77:吉兆','78:谷野美酒','79:第一红宝石',
- '80:真弓快车','81:骏川手纲','82:凯斯奇迹','83:小林历奇',
- '84:北港火山','85:奇锐骏','86:秋川理事长'])
-language_dropdown = gr.Dropdown(['日本語','简体中文','English'])
-examples = [['お疲れ様です,トレーナーさん。', '1:无声铃鹿', '日本語'],
- ['張り切っていこう!', '67:北部玄驹', '日本語'],
- ['何でこんなに慣れでんのよ,私のほが先に好きだっだのに。', '10:草上飞','日本語'],
- ['授業中に出しだら,学校生活終わるですわ。', '12:目白麦昆','日本語'],
- ['お帰りなさい,お兄様!', '29:米浴','日本語'],
- ['私の処女をもらっでください!', '29:米浴','日本語']]
-gr.Interface(fn=infer, inputs=[textbox, char_dropdown, language_dropdown], outputs=["text","audio"],
- title=title, description=description, article=article, examples = examples).launch()
\ No newline at end of file
diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/pPose_nms.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/pPose_nms.py
deleted file mode 100644
index 3c041e145445cc2ccd2ea6034a37fa2c4bb3b24e..0000000000000000000000000000000000000000
--- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/pPose_nms.py
+++ /dev/null
@@ -1,363 +0,0 @@
-# -*- coding: utf-8 -*-
-import torch
-import json
-import os
-import zipfile
-import time
-from multiprocessing.dummy import Pool as ThreadPool
-import numpy as np
-from opt import opt
-
-''' Constant Configuration '''
-delta1 = 1
-mu = 1.7
-delta2 = 2.65
-gamma = 22.48
-scoreThreds = 0.3
-matchThreds = 5
-areaThres = 0  # 40 * 40.5
-alpha = 0.1
-#pool = ThreadPool(4)
-
-
-def pose_nms(bboxes, bbox_scores, pose_preds, pose_scores):
- '''
- Parametric Pose NMS algorithm
- bboxes: bbox locations list (n, 4)
- bbox_scores: bbox scores list (n,)
- pose_preds: pose locations list (n, 17, 2)
- pose_scores: pose scores list (n, 17, 1)
- '''
- #global ori_pose_preds, ori_pose_scores, ref_dists
-
- pose_scores[pose_scores == 0] = 1e-5
-
- final_result = []
-
- ori_bbox_scores = bbox_scores.clone()
- ori_pose_preds = pose_preds.clone()
- ori_pose_scores = pose_scores.clone()
-
- xmax = bboxes[:, 2]
- xmin = bboxes[:, 0]
- ymax = bboxes[:, 3]
- ymin = bboxes[:, 1]
-
- widths = xmax - xmin
- heights = ymax - ymin
- ref_dists = alpha * np.maximum(widths, heights)
-
- nsamples = bboxes.shape[0]
- human_scores = pose_scores.mean(dim=1)
-
- human_ids = np.arange(nsamples)
- # Do pPose-NMS
- pick = []
- merge_ids = []
- while(human_scores.shape[0] != 0):
- # Pick the one with highest score
- pick_id = torch.argmax(human_scores)
- pick.append(human_ids[pick_id])
- # num_visPart = torch.sum(pose_scores[pick_id] > 0.2)
-
- # Get numbers of match keypoints by calling PCK_match
- ref_dist = ref_dists[human_ids[pick_id]]
- simi = get_parametric_distance(pick_id, pose_preds, pose_scores, ref_dist)
- num_match_keypoints = PCK_match(pose_preds[pick_id], pose_preds, ref_dist)
-
-        # Delete humans who have at least matchThreds overlapping keypoints or high similarity
- delete_ids = torch.from_numpy(np.arange(human_scores.shape[0]))[(simi > gamma) | (num_match_keypoints >= matchThreds)]
-
- if delete_ids.shape[0] == 0:
- delete_ids = pick_id
- #else:
- # delete_ids = torch.from_numpy(delete_ids)
-
- merge_ids.append(human_ids[delete_ids])
- pose_preds = np.delete(pose_preds, delete_ids, axis=0)
- pose_scores = np.delete(pose_scores, delete_ids, axis=0)
- human_ids = np.delete(human_ids, delete_ids)
- human_scores = np.delete(human_scores, delete_ids, axis=0)
- bbox_scores = np.delete(bbox_scores, delete_ids, axis=0)
-
- assert len(merge_ids) == len(pick)
- preds_pick = ori_pose_preds[pick]
- scores_pick = ori_pose_scores[pick]
- bbox_scores_pick = ori_bbox_scores[pick]
- #final_result = pool.map(filter_result, zip(scores_pick, merge_ids, preds_pick, pick, bbox_scores_pick))
- #final_result = [item for item in final_result if item is not None]
-
- for j in range(len(pick)):
- ids = np.arange(17)
- max_score = torch.max(scores_pick[j, ids, 0])
-
- if max_score < scoreThreds:
- continue
-
- # Merge poses
- merge_id = merge_ids[j]
- merge_pose, merge_score = p_merge_fast(
- preds_pick[j], ori_pose_preds[merge_id], ori_pose_scores[merge_id], ref_dists[pick[j]])
-
- max_score = torch.max(merge_score[ids])
- if max_score < scoreThreds:
- continue
-
- xmax = max(merge_pose[:, 0])
- xmin = min(merge_pose[:, 0])
- ymax = max(merge_pose[:, 1])
- ymin = min(merge_pose[:, 1])
-
- if (1.5 ** 2 * (xmax - xmin) * (ymax - ymin) < areaThres):
- continue
-
- final_result.append({
- 'keypoints': merge_pose - 0.3,
- 'kp_score': merge_score,
- 'proposal_score': torch.mean(merge_score) + bbox_scores_pick[j] + 1.25 * max(merge_score)
- })
-
- return final_result
-
-
-def filter_result(args):
- score_pick, merge_id, pred_pick, pick, bbox_score_pick = args
- global ori_pose_preds, ori_pose_scores, ref_dists
- ids = np.arange(17)
- max_score = torch.max(score_pick[ids, 0])
-
- if max_score < scoreThreds:
- return None
-
- # Merge poses
- merge_pose, merge_score = p_merge_fast(
- pred_pick, ori_pose_preds[merge_id], ori_pose_scores[merge_id], ref_dists[pick])
-
- max_score = torch.max(merge_score[ids])
- if max_score < scoreThreds:
- return None
-
- xmax = max(merge_pose[:, 0])
- xmin = min(merge_pose[:, 0])
- ymax = max(merge_pose[:, 1])
- ymin = min(merge_pose[:, 1])
-
- if (1.5 ** 2 * (xmax - xmin) * (ymax - ymin) < 40 * 40.5):
- return None
-
- return {
- 'keypoints': merge_pose - 0.3,
- 'kp_score': merge_score,
- 'proposal_score': torch.mean(merge_score) + bbox_score_pick + 1.25 * max(merge_score)
- }
-
-
-def p_merge(ref_pose, cluster_preds, cluster_scores, ref_dist):
- '''
- Score-weighted pose merging
- INPUT:
- ref_pose: reference pose -- [17, 2]
- cluster_preds: redundant poses -- [n, 17, 2]
- cluster_scores: redundant poses score -- [n, 17, 1]
- ref_dist: reference scale -- Constant
- OUTPUT:
- final_pose: merged pose -- [17, 2]
- final_score: merged score -- [17]
- '''
- dist = torch.sqrt(torch.sum(
- torch.pow(ref_pose[np.newaxis, :] - cluster_preds, 2),
- dim=2
- )) # [n, 17]
-
- kp_num = 17
- ref_dist = min(ref_dist, 15)
-
- mask = (dist <= ref_dist)
- final_pose = torch.zeros(kp_num, 2)
- final_score = torch.zeros(kp_num)
-
- if cluster_preds.dim() == 2:
- cluster_preds.unsqueeze_(0)
- cluster_scores.unsqueeze_(0)
- if mask.dim() == 1:
- mask.unsqueeze_(0)
-
- for i in range(kp_num):
- cluster_joint_scores = cluster_scores[:, i][mask[:, i]] # [k, 1]
- cluster_joint_location = cluster_preds[:, i, :][mask[:, i].unsqueeze(
- -1).repeat(1, 2)].view((torch.sum(mask[:, i]), -1))
-
-        # Get a normalized score
- normed_scores = cluster_joint_scores / torch.sum(cluster_joint_scores)
-
- # Merge poses by a weighted sum
- final_pose[i, 0] = torch.dot(cluster_joint_location[:, 0], normed_scores.squeeze(-1))
- final_pose[i, 1] = torch.dot(cluster_joint_location[:, 1], normed_scores.squeeze(-1))
-
- final_score[i] = torch.dot(cluster_joint_scores.transpose(0, 1).squeeze(0), normed_scores.squeeze(-1))
-
- return final_pose, final_score
-
-
-def p_merge_fast(ref_pose, cluster_preds, cluster_scores, ref_dist):
- '''
- Score-weighted pose merging
- INPUT:
- ref_pose: reference pose -- [17, 2]
- cluster_preds: redundant poses -- [n, 17, 2]
- cluster_scores: redundant poses score -- [n, 17, 1]
- ref_dist: reference scale -- Constant
- OUTPUT:
- final_pose: merged pose -- [17, 2]
- final_score: merged score -- [17]
- '''
- dist = torch.sqrt(torch.sum(
- torch.pow(ref_pose[np.newaxis, :] - cluster_preds, 2),
- dim=2
- ))
-
- kp_num = 17
- ref_dist = min(ref_dist, 15)
-
- mask = (dist <= ref_dist)
- final_pose = torch.zeros(kp_num, 2)
- final_score = torch.zeros(kp_num)
-
- if cluster_preds.dim() == 2:
- cluster_preds.unsqueeze_(0)
- cluster_scores.unsqueeze_(0)
- if mask.dim() == 1:
- mask.unsqueeze_(0)
-
- # Weighted Merge
- masked_scores = cluster_scores.mul(mask.float().unsqueeze(-1))
- normed_scores = masked_scores / torch.sum(masked_scores, dim=0)
-
- final_pose = torch.mul(cluster_preds, normed_scores.repeat(1, 1, 2)).sum(dim=0)
- final_score = torch.mul(masked_scores, normed_scores).sum(dim=0)
- return final_pose, final_score
-
-
-def get_parametric_distance(i, all_preds, keypoint_scores, ref_dist):
- pick_preds = all_preds[i]
- pred_scores = keypoint_scores[i]
- dist = torch.sqrt(torch.sum(
- torch.pow(pick_preds[np.newaxis, :] - all_preds, 2),
- dim=2
- ))
- mask = (dist <= 1)
-
- # Define a keypoints distance
- score_dists = torch.zeros(all_preds.shape[0], 17)
- keypoint_scores.squeeze_()
- if keypoint_scores.dim() == 1:
- keypoint_scores.unsqueeze_(0)
- if pred_scores.dim() == 1:
- pred_scores.unsqueeze_(1)
-    # The predicted scores are repeated so they broadcast against all_preds
- pred_scores = pred_scores.repeat(1, all_preds.shape[0]).transpose(0, 1)
-
- score_dists[mask] = torch.tanh(pred_scores[mask] / delta1) * torch.tanh(keypoint_scores[mask] / delta1)
-
- point_dist = torch.exp((-1) * dist / delta2)
- final_dist = torch.sum(score_dists, dim=1) + mu * torch.sum(point_dist, dim=1)
-
- return final_dist
-
-
-def PCK_match(pick_pred, all_preds, ref_dist):
- dist = torch.sqrt(torch.sum(
- torch.pow(pick_pred[np.newaxis, :] - all_preds, 2),
- dim=2
- ))
- ref_dist = min(ref_dist, 7)
- num_match_keypoints = torch.sum(
- dist / ref_dist <= 1,
- dim=1
- )
-
- return num_match_keypoints
-
-
-def write_json(all_results, outputpath, for_eval=False):
- '''
-    all_results: list of prediction result dicts
- outputpath: output directory
- '''
- form = opt.format
- json_results = []
- json_results_cmu = {}
- for im_res in all_results:
- im_name = im_res['imgname']
- for human in im_res['result']:
- keypoints = []
- result = {}
- if for_eval:
- result['image_id'] = int(im_name.split('/')[-1].split('.')[0].split('_')[-1])
- else:
- result['image_id'] = im_name.split('/')[-1]
- result['category_id'] = 1
-
- kp_preds = human['keypoints']
- kp_scores = human['kp_score']
- pro_scores = human['proposal_score']
- for n in range(kp_scores.shape[0]):
- keypoints.append(float(kp_preds[n, 0]))
- keypoints.append(float(kp_preds[n, 1]))
- keypoints.append(float(kp_scores[n]))
- result['keypoints'] = keypoints
- result['score'] = float(pro_scores)
-
- if form == 'cmu': # the form of CMU-Pose
- if result['image_id'] not in json_results_cmu.keys():
- json_results_cmu[result['image_id']]={}
- json_results_cmu[result['image_id']]['version']="AlphaPose v0.2"
- json_results_cmu[result['image_id']]['bodies']=[]
- tmp={'joints':[]}
- result['keypoints'].append((result['keypoints'][15]+result['keypoints'][18])/2)
- result['keypoints'].append((result['keypoints'][16]+result['keypoints'][19])/2)
- result['keypoints'].append((result['keypoints'][17]+result['keypoints'][20])/2)
- indexarr=[0,51,18,24,30,15,21,27,36,42,48,33,39,45,6,3,12,9]
- for i in indexarr:
- tmp['joints'].append(result['keypoints'][i])
- tmp['joints'].append(result['keypoints'][i+1])
- tmp['joints'].append(result['keypoints'][i+2])
- json_results_cmu[result['image_id']]['bodies'].append(tmp)
- elif form == 'open': # the form of OpenPose
- if result['image_id'] not in json_results_cmu.keys():
- json_results_cmu[result['image_id']]={}
- json_results_cmu[result['image_id']]['version']="AlphaPose v0.2"
- json_results_cmu[result['image_id']]['people']=[]
- tmp={'pose_keypoints_2d':[]}
- result['keypoints'].append((result['keypoints'][15]+result['keypoints'][18])/2)
- result['keypoints'].append((result['keypoints'][16]+result['keypoints'][19])/2)
- result['keypoints'].append((result['keypoints'][17]+result['keypoints'][20])/2)
- indexarr=[0,51,18,24,30,15,21,27,36,42,48,33,39,45,6,3,12,9]
- for i in indexarr:
- tmp['pose_keypoints_2d'].append(result['keypoints'][i])
- tmp['pose_keypoints_2d'].append(result['keypoints'][i+1])
- tmp['pose_keypoints_2d'].append(result['keypoints'][i+2])
- json_results_cmu[result['image_id']]['people'].append(tmp)
- else:
- json_results.append(result)
-
- if form == 'cmu': # the form of CMU-Pose
- with open(os.path.join(outputpath,'alphapose-results.json'), 'w') as json_file:
- json_file.write(json.dumps(json_results_cmu))
- if not os.path.exists(os.path.join(outputpath,'sep-json')):
- os.mkdir(os.path.join(outputpath,'sep-json'))
- for name in json_results_cmu.keys():
- with open(os.path.join(outputpath,'sep-json',name.split('.')[0]+'.json'),'w') as json_file:
- json_file.write(json.dumps(json_results_cmu[name]))
- elif form == 'open': # the form of OpenPose
- with open(os.path.join(outputpath,'alphapose-results.json'), 'w') as json_file:
- json_file.write(json.dumps(json_results_cmu))
- if not os.path.exists(os.path.join(outputpath,'sep-json')):
- os.mkdir(os.path.join(outputpath,'sep-json'))
- for name in json_results_cmu.keys():
- with open(os.path.join(outputpath,'sep-json',name.split('.')[0]+'.json'),'w') as json_file:
- json_file.write(json.dumps(json_results_cmu[name]))
- else:
- with open(os.path.join(outputpath,'alphapose-results.json'), 'w') as json_file:
- json_file.write(json.dumps(json_results))
-
diff --git a/spaces/Sentdex/StableBeluga-7B-Chat/README.md b/spaces/Sentdex/StableBeluga-7B-Chat/README.md
deleted file mode 100644
index 7b442112c35e080f51afbdcf25581debc55cd58d..0000000000000000000000000000000000000000
--- a/spaces/Sentdex/StableBeluga-7B-Chat/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: StableBeluga 7B Chat
-emoji: 🦀
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/ui.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/ui.py
deleted file mode 100644
index 68fcbe0af257bdbaad767708843b545064d9b219..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/ui.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from pathlib import Path
-
-import gradio as gr
-import torch
-
-refresh_symbol = '\U0001f504' # 🔄
-
-class ToolButton(gr.Button, gr.components.IOComponent):
- """Small button with single emoji as text, fits inside gradio forms"""
-
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def get_block_name(self):
- return "button"
-
-
-def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_class):
- def refresh():
- refresh_method()
- args = refreshed_args() if callable(refreshed_args) else refreshed_args
-
- for k, v in args.items():
- setattr(refresh_component, k, v)
-
- return gr.update(**(args or {}))
-
- refresh_button = ToolButton(value=refresh_symbol, elem_classes=elem_class, scale=1, size="sm", container=False)
- refresh_button.click(
- fn=refresh,
- inputs=[],
- outputs=[refresh_component]
- )
- return refresh_button
\ No newline at end of file
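A hedged sketch of how `create_refresh_button` is meant to be wired next to a dropdown inside `gr.Blocks`. It assumes a gradio 3.x environment (where the `ToolButton` subclass above is constructible) and that this module is importable as `audiocraft.utils.ui`:

```python
import gradio as gr
from audiocraft.utils.ui import create_refresh_button

models = ['small', 'medium', 'large']

def rescan_models():
    # stand-in for re-reading available checkpoints from disk
    models.append(f'custom-{len(models)}')

with gr.Blocks() as demo:
    with gr.Row():
        model_dd = gr.Dropdown(choices=models, value=models[0], label='Model')
        create_refresh_button(model_dd, rescan_models,
                              lambda: {'choices': models}, 'refresh-button')

demo.launch()
```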
diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_activations.py b/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_activations.py
deleted file mode 100644
index 24e30d4cd87683430488bfa442e098b34229a5ee..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_activations.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from torch import nn
-
-from audiocraft.modules.activations import CustomGLU
-
-
-class TestActivations:
- def test_custom_glu_calculation(self):
-
- activation = CustomGLU(nn.Identity())
-
- initial_shape = (4, 8, 8)
-
- part_a = torch.ones(initial_shape) * 2
- part_b = torch.ones(initial_shape) * -1
- input = torch.cat((part_a, part_b), dim=-1)
-
- output = activation(input)
-
- # ensure all dimensions match initial shape
- assert output.shape == initial_shape
- # ensure the gating was calculated correctly a * f(b)
- assert torch.all(output == -2).item()
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/prompts.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/prompts.py
deleted file mode 100644
index 7fd218d37ae97b7d4c051ad3ac7feffff15df66c..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/prompts.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Being removed
-"""
-
-class LazyEvaluate(object):
- """This is used for formatting strings with values that need to be updated
-    at format time, such as the current time or working directory."""
- def __init__(self, func, *args, **kwargs):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- def __call__(self, **kwargs):
- self.kwargs.update(kwargs)
- return self.func(*self.args, **self.kwargs)
-
- def __str__(self):
- return str(self())
-
- def __format__(self, format_spec):
- return format(self(), format_spec)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/shortcuts/auto_suggest.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/shortcuts/auto_suggest.py
deleted file mode 100644
index 65f91577ce9898a2cee36ae63529a5e0986bd8dc..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/shortcuts/auto_suggest.py
+++ /dev/null
@@ -1,401 +0,0 @@
-import re
-import tokenize
-from io import StringIO
-from typing import Callable, List, Optional, Union, Generator, Tuple
-import warnings
-
-from prompt_toolkit.buffer import Buffer
-from prompt_toolkit.key_binding import KeyPressEvent
-from prompt_toolkit.key_binding.bindings import named_commands as nc
-from prompt_toolkit.auto_suggest import AutoSuggestFromHistory, Suggestion
-from prompt_toolkit.document import Document
-from prompt_toolkit.history import History
-from prompt_toolkit.shortcuts import PromptSession
-from prompt_toolkit.layout.processors import (
- Processor,
- Transformation,
- TransformationInput,
-)
-
-from IPython.core.getipython import get_ipython
-from IPython.utils.tokenutil import generate_tokens
-
-from .filters import pass_through
-
-
-def _get_query(document: Document):
- return document.lines[document.cursor_position_row]
-
-
-class AppendAutoSuggestionInAnyLine(Processor):
- """
- Append the auto suggestion to lines other than the last (appending to the
- last line is natively supported by the prompt toolkit).
- """
-
- def __init__(self, style: str = "class:auto-suggestion") -> None:
- self.style = style
-
- def apply_transformation(self, ti: TransformationInput) -> Transformation:
- is_last_line = ti.lineno == ti.document.line_count - 1
- is_active_line = ti.lineno == ti.document.cursor_position_row
-
- if not is_last_line and is_active_line:
- buffer = ti.buffer_control.buffer
-
- if buffer.suggestion and ti.document.is_cursor_at_the_end_of_line:
- suggestion = buffer.suggestion.text
- else:
- suggestion = ""
-
- return Transformation(fragments=ti.fragments + [(self.style, suggestion)])
- else:
- return Transformation(fragments=ti.fragments)
-
-
-class NavigableAutoSuggestFromHistory(AutoSuggestFromHistory):
- """
-    A subclass of AutoSuggestFromHistory that allows navigation to the
-    next/previous suggestion from history. To do so it remembers the current
-    position, but its state needs to be carefully cleared on the right events.
- """
-
- def __init__(
- self,
- ):
- self.skip_lines = 0
- self._connected_apps = []
-
- def reset_history_position(self, _: Buffer):
- self.skip_lines = 0
-
- def disconnect(self):
- for pt_app in self._connected_apps:
- text_insert_event = pt_app.default_buffer.on_text_insert
- text_insert_event.remove_handler(self.reset_history_position)
-
- def connect(self, pt_app: PromptSession):
- self._connected_apps.append(pt_app)
- # note: `on_text_changed` could be used for a bit different behaviour
-        # on character deletion (i.e. resetting history position on backspace)
- pt_app.default_buffer.on_text_insert.add_handler(self.reset_history_position)
- pt_app.default_buffer.on_cursor_position_changed.add_handler(self._dismiss)
-
- def get_suggestion(
- self, buffer: Buffer, document: Document
- ) -> Optional[Suggestion]:
- text = _get_query(document)
-
- if text.strip():
- for suggestion, _ in self._find_next_match(
- text, self.skip_lines, buffer.history
- ):
- return Suggestion(suggestion)
-
- return None
-
- def _dismiss(self, buffer, *args, **kwargs):
- buffer.suggestion = None
-
- def _find_match(
- self, text: str, skip_lines: float, history: History, previous: bool
- ) -> Generator[Tuple[str, float], None, None]:
- """
-        text : str
-            Text content to find a match for; the user cursor is usually
-            at the end of this text.
-        skip_lines : float
-            Number of items to skip in the search; this indicates how far in
-            the list the user has navigated by pressing up or down.
-            The float type is used because the base value is +inf.
- history : History
- prompt_toolkit History instance to fetch previous entries from.
- previous : bool
- Direction of the search, whether we are looking previous match
- (True), or next match (False).
-
- Yields
- ------
- Tuple with:
- str:
- current suggestion.
- float:
- will actually yield only ints, which is passed back via skip_lines,
- which may be a +inf (float)
-
-
- """
- line_number = -1
- for string in reversed(list(history.get_strings())):
- for line in reversed(string.splitlines()):
- line_number += 1
- if not previous and line_number < skip_lines:
- continue
- # do not return empty suggestions as these
- # close the auto-suggestion overlay (and are useless)
- if line.startswith(text) and len(line) > len(text):
- yield line[len(text) :], line_number
- if previous and line_number >= skip_lines:
- return
-
- def _find_next_match(
- self, text: str, skip_lines: float, history: History
- ) -> Generator[Tuple[str, float], None, None]:
- return self._find_match(text, skip_lines, history, previous=False)
-
- def _find_previous_match(self, text: str, skip_lines: float, history: History):
- return reversed(
- list(self._find_match(text, skip_lines, history, previous=True))
- )
-
- def up(self, query: str, other_than: str, history: History) -> None:
- for suggestion, line_number in self._find_next_match(
- query, self.skip_lines, history
- ):
- # if user has history ['very.a', 'very', 'very.b'] and typed 'very'
- # we want to switch from 'very.b' to 'very.a' because a) if the
- # suggestion equals current text, prompt-toolkit aborts suggesting
- # b) user likely would not be interested in 'very' anyways (they
- # already typed it).
- if query + suggestion != other_than:
- self.skip_lines = line_number
- break
- else:
- # no matches found, cycle back to beginning
- self.skip_lines = 0
-
- def down(self, query: str, other_than: str, history: History) -> None:
- for suggestion, line_number in self._find_previous_match(
- query, self.skip_lines, history
- ):
- if query + suggestion != other_than:
- self.skip_lines = line_number
- break
- else:
- # no matches found, cycle to end
- for suggestion, line_number in self._find_previous_match(
- query, float("Inf"), history
- ):
- if query + suggestion != other_than:
- self.skip_lines = line_number
- break
-
-
-def accept_or_jump_to_end(event: KeyPressEvent):
- """Apply autosuggestion or jump to end of line."""
- buffer = event.current_buffer
- d = buffer.document
- after_cursor = d.text[d.cursor_position :]
- lines = after_cursor.split("\n")
- end_of_current_line = lines[0].strip()
- suggestion = buffer.suggestion
- if (suggestion is not None) and (suggestion.text) and (end_of_current_line == ""):
- buffer.insert_text(suggestion.text)
- else:
- nc.end_of_line(event)
-
-
-def _deprected_accept_in_vi_insert_mode(event: KeyPressEvent):
- """Accept autosuggestion or jump to end of line.
-
- .. deprecated:: 8.12
- Use `accept_or_jump_to_end` instead.
- """
- return accept_or_jump_to_end(event)
-
-
-def accept(event: KeyPressEvent):
- """Accept autosuggestion"""
- buffer = event.current_buffer
- suggestion = buffer.suggestion
- if suggestion:
- buffer.insert_text(suggestion.text)
- else:
- nc.forward_char(event)
-
-
-def discard(event: KeyPressEvent):
- """Discard autosuggestion"""
- buffer = event.current_buffer
- buffer.suggestion = None
-
-
-def accept_word(event: KeyPressEvent):
- """Fill partial autosuggestion by word"""
- buffer = event.current_buffer
- suggestion = buffer.suggestion
- if suggestion:
- t = re.split(r"(\S+\s+)", suggestion.text)
- buffer.insert_text(next((x for x in t if x), ""))
- else:
- nc.forward_word(event)
-
-
-def accept_character(event: KeyPressEvent):
- """Fill partial autosuggestion by character"""
- b = event.current_buffer
- suggestion = b.suggestion
- if suggestion and suggestion.text:
- b.insert_text(suggestion.text[0])
-
-
-def accept_and_keep_cursor(event: KeyPressEvent):
- """Accept autosuggestion and keep cursor in place"""
- buffer = event.current_buffer
- old_position = buffer.cursor_position
- suggestion = buffer.suggestion
- if suggestion:
- buffer.insert_text(suggestion.text)
- buffer.cursor_position = old_position
-
-
-def accept_and_move_cursor_left(event: KeyPressEvent):
- """Accept autosuggestion and move cursor left in place"""
- accept_and_keep_cursor(event)
- nc.backward_char(event)
-
-
-def _update_hint(buffer: Buffer):
- if buffer.auto_suggest:
- suggestion = buffer.auto_suggest.get_suggestion(buffer, buffer.document)
- buffer.suggestion = suggestion
-
-
-def backspace_and_resume_hint(event: KeyPressEvent):
- """Resume autosuggestions after deleting last character"""
- nc.backward_delete_char(event)
- _update_hint(event.current_buffer)
-
-
-def resume_hinting(event: KeyPressEvent):
- """Resume autosuggestions"""
- pass_through.reply(event)
- # Order matters: if the update happened first and the event reply second, the
- # suggestion would be auto-accepted if both actions are bound to the same key.
- _update_hint(event.current_buffer)
-
-
-def up_and_update_hint(event: KeyPressEvent):
- """Go up and update hint"""
- current_buffer = event.current_buffer
-
- current_buffer.auto_up(count=event.arg)
- _update_hint(current_buffer)
-
-
-def down_and_update_hint(event: KeyPressEvent):
- """Go down and update hint"""
- current_buffer = event.current_buffer
-
- current_buffer.auto_down(count=event.arg)
- _update_hint(current_buffer)
-
-
-def accept_token(event: KeyPressEvent):
- """Fill partial autosuggestion by token"""
- b = event.current_buffer
- suggestion = b.suggestion
-
- if suggestion:
- prefix = _get_query(b.document)
- text = prefix + suggestion.text
-
- tokens: List[Optional[str]] = [None, None, None]
- substrings = [""]
- i = 0
-
- for token in generate_tokens(StringIO(text).readline):
- if token.type == tokenize.NEWLINE:
- index = len(text)
- else:
- index = text.index(token[1], len(substrings[-1]))
- substrings.append(text[:index])
- tokenized_so_far = substrings[-1]
- if tokenized_so_far.startswith(prefix):
- if i == 0 and len(tokenized_so_far) > len(prefix):
- tokens[0] = tokenized_so_far[len(prefix) :]
- substrings.append(tokenized_so_far)
- i += 1
- tokens[i] = token[1]
- if i == 2:
- break
- i += 1
-
- if tokens[0]:
- to_insert: str
- insert_text = substrings[-2]
- if tokens[1] and len(tokens[1]) == 1:
- insert_text = substrings[-1]
- to_insert = insert_text[len(prefix) :]
- b.insert_text(to_insert)
- return
-
- nc.forward_word(event)
-
-
-Provider = Union[AutoSuggestFromHistory, NavigableAutoSuggestFromHistory, None]
-
-
-def _swap_autosuggestion(
- buffer: Buffer,
- provider: NavigableAutoSuggestFromHistory,
- direction_method: Callable,
-):
- """
- We skip most recent history entry (in either direction) if it equals the
- current autosuggestion because if user cycles when auto-suggestion is shown
- they most likely want something else than what was suggested (otherwise
- they would have accepted the suggestion).
- """
- suggestion = buffer.suggestion
- if not suggestion:
- return
-
- query = _get_query(buffer.document)
- current = query + suggestion.text
-
- direction_method(query=query, other_than=current, history=buffer.history)
-
- new_suggestion = provider.get_suggestion(buffer, buffer.document)
- buffer.suggestion = new_suggestion
-
-
-def swap_autosuggestion_up(event: KeyPressEvent):
- """Get next autosuggestion from history."""
- shell = get_ipython()
- provider = shell.auto_suggest
-
- if not isinstance(provider, NavigableAutoSuggestFromHistory):
- return
-
- return _swap_autosuggestion(
- buffer=event.current_buffer, provider=provider, direction_method=provider.up
- )
-
-
-def swap_autosuggestion_down(event: KeyPressEvent):
- """Get previous autosuggestion from history."""
- shell = get_ipython()
- provider = shell.auto_suggest
-
- if not isinstance(provider, NavigableAutoSuggestFromHistory):
- return
-
- return _swap_autosuggestion(
- buffer=event.current_buffer,
- provider=provider,
- direction_method=provider.down,
- )
-
-
-def __getattr__(key):
- if key == "accept_in_vi_insert_mode":
- warnings.warn(
- "`accept_in_vi_insert_mode` is deprecated since IPython 8.12 and "
- "renamed to `accept_or_jump_to_end`. Please update your configuration "
- "accordingly",
- DeprecationWarning,
- stacklevel=2,
- )
- return _deprected_accept_in_vi_insert_mode
- raise AttributeError
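For reference, a minimal self-contained sketch (not part of the patch) of the prefix-matching search that NavigableAutoSuggestFromHistory performs over history entries; the function name and the toy history below are illustrative only.

from typing import Iterator, List, Tuple

def find_suffix_matches(text: str, history: List[str], skip_lines: int = 0) -> Iterator[Tuple[str, int]]:
    # Scan history from most recent to oldest, line by line, and yield the part of
    # each matching line that extends the typed text, together with its position.
    line_number = -1
    for entry in reversed(history):
        for line in reversed(entry.splitlines()):
            line_number += 1
            if line_number < skip_lines:
                continue
            # empty suggestions are skipped; they would close the suggestion overlay
            if line.startswith(text) and len(line) > len(text):
                yield line[len(text):], line_number

print(next(find_suffix_matches("pri", ["print('a')", "import os", "print('b')"])))
# -> ("nt('b')", 0)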
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/wildcard.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/wildcard.py
deleted file mode 100644
index cbef8c5175b1560de52a3c707098685cac6c35fa..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/wildcard.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Support for wildcard pattern matching in object inspection.
-
-Authors
--------
-- Jörgen Stenarson
-- Thomas Kluyver
-"""
-
-#*****************************************************************************
-# Copyright (C) 2005 Jörgen Stenarson
-#
-# Distributed under the terms of the BSD License. The full license is in
-# the file COPYING, distributed as part of this software.
-#*****************************************************************************
-
-import re
-import types
-
-from IPython.utils.dir2 import dir2
-
-def create_typestr2type_dicts(dont_include_in_type2typestr=["lambda"]):
- """Return dictionaries mapping lower case typename (e.g. 'tuple') to type
- objects from the types package, and vice versa."""
- typenamelist = [tname for tname in dir(types) if tname.endswith("Type")]
- typestr2type, type2typestr = {}, {}
-
- for tname in typenamelist:
- name = tname[:-4].lower() # Cut 'Type' off the end of the name
- obj = getattr(types, tname)
- typestr2type[name] = obj
- if name not in dont_include_in_type2typestr:
- type2typestr[obj] = name
- return typestr2type, type2typestr
-
-typestr2type, type2typestr = create_typestr2type_dicts()
-
-def is_type(obj, typestr_or_type):
- """is_type(obj, typestr_or_type) verifies if obj is of a certain type. It
- can take strings or actual python types for the second argument, i.e.
- 'tuple'<->TupleType. 'all' matches all types.
-
- TODO: Should be extended for choosing more than one type."""
- if typestr_or_type == "all":
- return True
- if type(typestr_or_type) == type:
- test_type = typestr_or_type
- else:
- test_type = typestr2type.get(typestr_or_type, False)
- if test_type:
- return isinstance(obj, test_type)
- return False
-
-def show_hidden(str, show_all=False):
- """Return true for strings starting with single _ if show_all is true."""
- return show_all or str.startswith("__") or not str.startswith("_")
-
-def dict_dir(obj):
- """Produce a dictionary of an object's attributes. Builds on dir2 by
- checking that a getattr() call actually succeeds."""
- ns = {}
- for key in dir2(obj):
- # This seemingly unnecessary try/except is actually needed
- # because there is code out there with metaclasses that
- # create 'write only' attributes, where a getattr() call
- # will fail even if the attribute appears listed in the
- # object's dictionary. Properties can actually do the same
- # thing. In particular, Traits use this pattern
- try:
- ns[key] = getattr(obj, key)
- except AttributeError:
- pass
- return ns
-
-def filter_ns(ns, name_pattern="*", type_pattern="all", ignore_case=True,
- show_all=True):
- """Filter a namespace dictionary by name pattern and item type."""
- pattern = name_pattern.replace("*",".*").replace("?",".")
- if ignore_case:
- reg = re.compile(pattern+"$", re.I)
- else:
- reg = re.compile(pattern+"$")
-
- # Check each one matches regex; shouldn't be hidden; of correct type.
- return dict((key,obj) for key, obj in ns.items() if reg.match(key) \
- and show_hidden(key, show_all) \
- and is_type(obj, type_pattern) )
-
-def list_namespace(namespace, type_pattern, filter, ignore_case=False, show_all=False):
- """Return dictionary of all objects in a namespace dictionary that match
- type_pattern and filter."""
- pattern_list=filter.split(".")
- if len(pattern_list) == 1:
- return filter_ns(namespace, name_pattern=pattern_list[0],
- type_pattern=type_pattern,
- ignore_case=ignore_case, show_all=show_all)
- else:
- # This is where we can change if all objects should be searched or
- # only modules. Just change the type_pattern to module to search only
- # modules
- filtered = filter_ns(namespace, name_pattern=pattern_list[0],
- type_pattern="all",
- ignore_case=ignore_case, show_all=show_all)
- results = {}
- for name, obj in filtered.items():
- ns = list_namespace(dict_dir(obj), type_pattern,
- ".".join(pattern_list[1:]),
- ignore_case=ignore_case, show_all=show_all)
- for inner_name, inner_obj in ns.items():
- results["%s.%s"%(name,inner_name)] = inner_obj
- return results
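As a rough illustration (not part of the patch), the core of filter_ns — translating a shell-style wildcard into an anchored regex and filtering a namespace dictionary — can be reduced to a few lines; the helper name below is made up.

import re

def filter_names(ns: dict, name_pattern: str = "*", ignore_case: bool = True) -> dict:
    # '*' matches any run of characters, '?' matches a single character
    pattern = name_pattern.replace("*", ".*").replace("?", ".") + "$"
    reg = re.compile(pattern, re.I if ignore_case else 0)
    return {key: obj for key, obj in ns.items() if reg.match(key)}

print(filter_names({"my_list": [], "my_dict": {}, "other": 1}, "my_*"))
# -> {'my_list': [], 'my_dict': {}}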
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/EpsImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/EpsImagePlugin.py
deleted file mode 100644
index 1c88d22c749f3786f8b0ad5e3f02841028bec1ee..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/EpsImagePlugin.py
+++ /dev/null
@@ -1,460 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# EPS file handling
-#
-# History:
-# 1995-09-01 fl Created (0.1)
-# 1996-05-18 fl Don't choke on "atend" fields, Ghostscript interface (0.2)
-# 1996-08-22 fl Don't choke on floating point BoundingBox values
-# 1996-08-23 fl Handle files from Macintosh (0.3)
-# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.4)
-# 2003-09-07 fl Check gs.close status (from Federico Di Gregorio) (0.5)
-# 2014-05-07 e Handling of EPS with binary preview and fixed resolution
-# resizing
-#
-# Copyright (c) 1997-2003 by Secret Labs AB.
-# Copyright (c) 1995-2003 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import io
-import os
-import re
-import subprocess
-import sys
-import tempfile
-
-from . import Image, ImageFile
-from ._binary import i32le as i32
-from ._deprecate import deprecate
-
-# --------------------------------------------------------------------
-
-
-split = re.compile(r"^%%([^:]*):[ \t]*(.*)[ \t]*$")
-field = re.compile(r"^%[%!\w]([^:]*)[ \t]*$")
-
-gs_windows_binary = None
-if sys.platform.startswith("win"):
- import shutil
-
- for binary in ("gswin32c", "gswin64c", "gs"):
- if shutil.which(binary) is not None:
- gs_windows_binary = binary
- break
- else:
- gs_windows_binary = False
-
-
-def has_ghostscript():
- if gs_windows_binary:
- return True
- if not sys.platform.startswith("win"):
- try:
- subprocess.check_call(["gs", "--version"], stdout=subprocess.DEVNULL)
- return True
- except OSError:
- # No Ghostscript
- pass
- return False
-
-
-def Ghostscript(tile, size, fp, scale=1, transparency=False):
- """Render an image using Ghostscript"""
-
- # Unpack decoder tile
- decoder, tile, offset, data = tile[0]
- length, bbox = data
-
- # Hack to support hi-res rendering
- scale = int(scale) or 1
- # orig_size = size
- # orig_bbox = bbox
- size = (size[0] * scale, size[1] * scale)
- # resolution is dependent on bbox and size
- res = (
- 72.0 * size[0] / (bbox[2] - bbox[0]),
- 72.0 * size[1] / (bbox[3] - bbox[1]),
- )
-
- out_fd, outfile = tempfile.mkstemp()
- os.close(out_fd)
-
- infile_temp = None
- if hasattr(fp, "name") and os.path.exists(fp.name):
- infile = fp.name
- else:
- in_fd, infile_temp = tempfile.mkstemp()
- os.close(in_fd)
- infile = infile_temp
-
- # Ignore length and offset!
- # Ghostscript can read it
- # Copy whole file to read in Ghostscript
- with open(infile_temp, "wb") as f:
- # fetch length of fp
- fp.seek(0, io.SEEK_END)
- fsize = fp.tell()
- # ensure start position
- # go back
- fp.seek(0)
- lengthfile = fsize
- while lengthfile > 0:
- s = fp.read(min(lengthfile, 100 * 1024))
- if not s:
- break
- lengthfile -= len(s)
- f.write(s)
-
- device = "pngalpha" if transparency else "ppmraw"
-
- # Build Ghostscript command
- command = [
- "gs",
- "-q", # quiet mode
- "-g%dx%d" % size, # set output geometry (pixels)
- "-r%fx%f" % res, # set input DPI (dots per inch)
- "-dBATCH", # exit after processing
- "-dNOPAUSE", # don't pause between pages
- "-dSAFER", # safe mode
- f"-sDEVICE={device}",
- f"-sOutputFile={outfile}", # output file
- # adjust for image origin
- "-c",
- f"{-bbox[0]} {-bbox[1]} translate",
- "-f",
- infile, # input file
- # showpage (see https://bugs.ghostscript.com/show_bug.cgi?id=698272)
- "-c",
- "showpage",
- ]
-
- if gs_windows_binary is not None:
- if not gs_windows_binary:
- msg = "Unable to locate Ghostscript on paths"
- raise OSError(msg)
- command[0] = gs_windows_binary
-
- # push data through Ghostscript
- try:
- startupinfo = None
- if sys.platform.startswith("win"):
- startupinfo = subprocess.STARTUPINFO()
- startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
- subprocess.check_call(command, startupinfo=startupinfo)
- out_im = Image.open(outfile)
- out_im.load()
- finally:
- try:
- os.unlink(outfile)
- if infile_temp:
- os.unlink(infile_temp)
- except OSError:
- pass
-
- im = out_im.im.copy()
- out_im.close()
- return im
-
-
-class PSFile:
- """
- Wrapper for bytesio object that treats either CR or LF as end of line.
- This class is no longer used internally, but kept for backwards compatibility.
- """
-
- def __init__(self, fp):
- deprecate(
- "PSFile",
- 11,
- action="If you need the functionality of this class "
- "you will need to implement it yourself.",
- )
- self.fp = fp
- self.char = None
-
- def seek(self, offset, whence=io.SEEK_SET):
- self.char = None
- self.fp.seek(offset, whence)
-
- def readline(self):
- s = [self.char or b""]
- self.char = None
-
- c = self.fp.read(1)
- while (c not in b"\r\n") and len(c):
- s.append(c)
- c = self.fp.read(1)
-
- self.char = self.fp.read(1)
- # line endings can be 1 or 2 of \r \n, in either order
- if self.char in b"\r\n":
- self.char = None
-
- return b"".join(s).decode("latin-1")
-
-
-def _accept(prefix):
- return prefix[:4] == b"%!PS" or (len(prefix) >= 4 and i32(prefix) == 0xC6D3D0C5)
-
-
-##
-# Image plugin for Encapsulated PostScript. This plugin supports only
-# a few variants of this format.
-
-
-class EpsImageFile(ImageFile.ImageFile):
- """EPS File Parser for the Python Imaging Library"""
-
- format = "EPS"
- format_description = "Encapsulated Postscript"
-
- mode_map = {1: "L", 2: "LAB", 3: "RGB", 4: "CMYK"}
-
- def _open(self):
- (length, offset) = self._find_offset(self.fp)
-
- # go to offset - start of "%!PS"
- self.fp.seek(offset)
-
- self.mode = "RGB"
- self._size = None
-
- byte_arr = bytearray(255)
- bytes_mv = memoryview(byte_arr)
- bytes_read = 0
- reading_comments = True
-
- def check_required_header_comments():
- if "PS-Adobe" not in self.info:
- msg = 'EPS header missing "%!PS-Adobe" comment'
- raise SyntaxError(msg)
- if "BoundingBox" not in self.info:
- msg = 'EPS header missing "%%BoundingBox" comment'
- raise SyntaxError(msg)
-
- while True:
- byte = self.fp.read(1)
- if byte == b"":
- # if we didn't read a byte we must be at the end of the file
- if bytes_read == 0:
- break
- elif byte in b"\r\n":
- # if we read a line ending character, ignore it and parse what
- # we have already read. if we haven't read any other characters,
- # continue reading
- if bytes_read == 0:
- continue
- else:
- # ASCII/hexadecimal lines in an EPS file must not exceed
- # 255 characters, not including line ending characters
- if bytes_read >= 255:
- # only enforce this for lines starting with a "%",
- # otherwise assume it's binary data
- if byte_arr[0] == ord("%"):
- msg = "not an EPS file"
- raise SyntaxError(msg)
- else:
- if reading_comments:
- check_required_header_comments()
- reading_comments = False
- # reset bytes_read so we can keep reading
- # data until the end of the line
- bytes_read = 0
- byte_arr[bytes_read] = byte[0]
- bytes_read += 1
- continue
-
- if reading_comments:
- # Load EPS header
-
- # if this line doesn't start with a "%",
- # or does start with "%%EndComments",
- # then we've reached the end of the header/comments
- if byte_arr[0] != ord("%") or bytes_mv[:13] == b"%%EndComments":
- check_required_header_comments()
- reading_comments = False
- continue
-
- s = str(bytes_mv[:bytes_read], "latin-1")
-
- try:
- m = split.match(s)
- except re.error as e:
- msg = "not an EPS file"
- raise SyntaxError(msg) from e
-
- if m:
- k, v = m.group(1, 2)
- self.info[k] = v
- if k == "BoundingBox":
- try:
- # Note: The DSC spec says that BoundingBox
- # fields should be integers, but some drivers
- # put floating point values there anyway.
- box = [int(float(i)) for i in v.split()]
- self._size = box[2] - box[0], box[3] - box[1]
- self.tile = [
- ("eps", (0, 0) + self.size, offset, (length, box))
- ]
- except Exception:
- pass
- else:
- m = field.match(s)
- if m:
- k = m.group(1)
- if k[:8] == "PS-Adobe":
- self.info["PS-Adobe"] = k[9:]
- else:
- self.info[k] = ""
- elif s[0] == "%":
- # handle non-DSC PostScript comments that some
- # tools mistakenly put in the Comments section
- pass
- else:
- msg = "bad EPS header"
- raise OSError(msg)
- elif bytes_mv[:11] == b"%ImageData:":
- # Check for an "ImageData" descriptor
- # https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577413_pgfId-1035096
-
- # Values:
- # columns
- # rows
- # bit depth (1 or 8)
- # mode (1: L, 2: LAB, 3: RGB, 4: CMYK)
- # number of padding channels
- # block size (number of bytes per row per channel)
- # binary/ascii (1: binary, 2: ascii)
- # data start identifier (the image data follows after a single line
- # consisting only of this quoted value)
- image_data_values = byte_arr[11:bytes_read].split(None, 7)
- columns, rows, bit_depth, mode_id = [
- int(value) for value in image_data_values[:4]
- ]
-
- if bit_depth == 1:
- self.mode = "1"
- elif bit_depth == 8:
- try:
- self.mode = self.mode_map[mode_id]
- except KeyError:
- break
- else:
- break
-
- self._size = columns, rows
- return
-
- bytes_read = 0
-
- check_required_header_comments()
-
- if not self._size:
- self._size = 1, 1 # errors if this isn't set. why (1,1)?
- msg = "cannot determine EPS bounding box"
- raise OSError(msg)
-
- def _find_offset(self, fp):
- s = fp.read(4)
-
- if s == b"%!PS":
- # for HEAD without binary preview
- fp.seek(0, io.SEEK_END)
- length = fp.tell()
- offset = 0
- elif i32(s) == 0xC6D3D0C5:
- # FIX for: Some EPS file not handled correctly / issue #302
- # EPS can contain binary data
- # or start directly with latin coding
- # more info see:
- # https://web.archive.org/web/20160528181353/http://partners.adobe.com/public/developer/en/ps/5002.EPSF_Spec.pdf
- s = fp.read(8)
- offset = i32(s)
- length = i32(s, 4)
- else:
- msg = "not an EPS file"
- raise SyntaxError(msg)
-
- return length, offset
-
- def load(self, scale=1, transparency=False):
- # Load EPS via Ghostscript
- if self.tile:
- self.im = Ghostscript(self.tile, self.size, self.fp, scale, transparency)
- self.mode = self.im.mode
- self._size = self.im.size
- self.tile = []
- return Image.Image.load(self)
-
- def load_seek(self, *args, **kwargs):
- # we can't incrementally load, so force ImageFile.parser to
- # use our custom load method by defining this method.
- pass
-
-
-# --------------------------------------------------------------------
-
-
-def _save(im, fp, filename, eps=1):
- """EPS Writer for the Python Imaging Library."""
-
- # make sure image data is available
- im.load()
-
- # determine PostScript image mode
- if im.mode == "L":
- operator = (8, 1, b"image")
- elif im.mode == "RGB":
- operator = (8, 3, b"false 3 colorimage")
- elif im.mode == "CMYK":
- operator = (8, 4, b"false 4 colorimage")
- else:
- msg = "image mode is not supported"
- raise ValueError(msg)
-
- if eps:
- # write EPS header
- fp.write(b"%!PS-Adobe-3.0 EPSF-3.0\n")
- fp.write(b"%%Creator: PIL 0.1 EpsEncode\n")
- # fp.write("%%CreationDate: %s"...)
- fp.write(b"%%%%BoundingBox: 0 0 %d %d\n" % im.size)
- fp.write(b"%%Pages: 1\n")
- fp.write(b"%%EndComments\n")
- fp.write(b"%%Page: 1 1\n")
- fp.write(b"%%ImageData: %d %d " % im.size)
- fp.write(b'%d %d 0 1 1 "%s"\n' % operator)
-
- # image header
- fp.write(b"gsave\n")
- fp.write(b"10 dict begin\n")
- fp.write(b"/buf %d string def\n" % (im.size[0] * operator[1]))
- fp.write(b"%d %d scale\n" % im.size)
- fp.write(b"%d %d 8\n" % im.size) # <= bits
- fp.write(b"[%d 0 0 -%d 0 %d]\n" % (im.size[0], im.size[1], im.size[1]))
- fp.write(b"{ currentfile buf readhexstring pop } bind\n")
- fp.write(operator[2] + b"\n")
- if hasattr(fp, "flush"):
- fp.flush()
-
- ImageFile._save(im, fp, [("eps", (0, 0) + im.size, 0, None)])
-
- fp.write(b"\n%%%%EndBinary\n")
- fp.write(b"grestore end\n")
- if hasattr(fp, "flush"):
- fp.flush()
-
-
-# --------------------------------------------------------------------
-
-
-Image.register_open(EpsImageFile.format, EpsImageFile, _accept)
-
-Image.register_save(EpsImageFile.format, _save)
-
-Image.register_extensions(EpsImageFile.format, [".ps", ".eps"])
-
-Image.register_mime(EpsImageFile.format, "application/postscript")
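A brief usage sketch (not part of the patch, and assuming Ghostscript is available on PATH): this plugin lets Pillow rasterise EPS files on load, with an optional scale factor; the file names are placeholders.

from PIL import Image

with Image.open("figure.eps") as im:   # parsed by EpsImageFile._open
    im.load(scale=2)                   # rendered via Ghostscript at 2x the BoundingBox size
    im.save("figure.png")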
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/__init__.py
deleted file mode 100644
index f987a5367fdfaa4f17cd4bf700d56f4b50992368..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/__init__.py
+++ /dev/null
@@ -1,222 +0,0 @@
-# don't import any costly modules
-import sys
-import os
-
-
-is_pypy = '__pypy__' in sys.builtin_module_names
-
-
-def warn_distutils_present():
- if 'distutils' not in sys.modules:
- return
- if is_pypy and sys.version_info < (3, 7):
- # PyPy for 3.6 unconditionally imports distutils, so bypass the warning
- # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250
- return
- import warnings
-
- warnings.warn(
- "Distutils was imported before Setuptools, but importing Setuptools "
- "also replaces the `distutils` module in `sys.modules`. This may lead "
- "to undesirable behaviors or errors. To avoid these issues, avoid "
- "using distutils directly, ensure that setuptools is installed in the "
- "traditional way (e.g. not an editable install), and/or make sure "
- "that setuptools is always imported before distutils."
- )
-
-
-def clear_distutils():
- if 'distutils' not in sys.modules:
- return
- import warnings
-
- warnings.warn("Setuptools is replacing distutils.")
- mods = [
- name
- for name in sys.modules
- if name == "distutils" or name.startswith("distutils.")
- ]
- for name in mods:
- del sys.modules[name]
-
-
-def enabled():
- """
- Allow selection of distutils by environment variable.
- """
- which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'local')
- return which == 'local'
-
-
-def ensure_local_distutils():
- import importlib
-
- clear_distutils()
-
- # With the DistutilsMetaFinder in place,
- # perform an import to cause distutils to be
- # loaded from setuptools._distutils. Ref #2906.
- with shim():
- importlib.import_module('distutils')
-
- # check that submodules load as expected
- core = importlib.import_module('distutils.core')
- assert '_distutils' in core.__file__, core.__file__
- assert 'setuptools._distutils.log' not in sys.modules
-
-
-def do_override():
- """
- Ensure that the local copy of distutils is preferred over stdlib.
-
- See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401
- for more motivation.
- """
- if enabled():
- warn_distutils_present()
- ensure_local_distutils()
-
-
-class _TrivialRe:
- def __init__(self, *patterns):
- self._patterns = patterns
-
- def match(self, string):
- return all(pat in string for pat in self._patterns)
-
-
-class DistutilsMetaFinder:
- def find_spec(self, fullname, path, target=None):
- # optimization: only consider top level modules and those
- # found in the CPython test suite.
- if path is not None and not fullname.startswith('test.'):
- return
-
- method_name = 'spec_for_{fullname}'.format(**locals())
- method = getattr(self, method_name, lambda: None)
- return method()
-
- def spec_for_distutils(self):
- if self.is_cpython():
- return
-
- import importlib
- import importlib.abc
- import importlib.util
-
- try:
- mod = importlib.import_module('setuptools._distutils')
- except Exception:
- # There are a couple of cases where setuptools._distutils
- # may not be present:
- # - An older Setuptools without a local distutils is
- # taking precedence. Ref #2957.
- # - Path manipulation during sitecustomize removes
- # setuptools from the path but only after the hook
- # has been loaded. Ref #2980.
- # In either case, fall back to stdlib behavior.
- return
-
- class DistutilsLoader(importlib.abc.Loader):
- def create_module(self, spec):
- mod.__name__ = 'distutils'
- return mod
-
- def exec_module(self, module):
- pass
-
- return importlib.util.spec_from_loader(
- 'distutils', DistutilsLoader(), origin=mod.__file__
- )
-
- @staticmethod
- def is_cpython():
- """
- Suppress supplying distutils for CPython (build and tests).
- Ref #2965 and #3007.
- """
- return os.path.isfile('pybuilddir.txt')
-
- def spec_for_pip(self):
- """
- Ensure stdlib distutils when running under pip.
- See pypa/pip#8761 for rationale.
- """
- if self.pip_imported_during_build():
- return
- clear_distutils()
- self.spec_for_distutils = lambda: None
-
- @classmethod
- def pip_imported_during_build(cls):
- """
- Detect if pip is being imported in a build script. Ref #2355.
- """
- import traceback
-
- return any(
- cls.frame_file_is_setup(frame) for frame, line in traceback.walk_stack(None)
- )
-
- @staticmethod
- def frame_file_is_setup(frame):
- """
- Return True if the indicated frame suggests a setup.py file.
- """
- # some frames may not have __file__ (#2940)
- return frame.f_globals.get('__file__', '').endswith('setup.py')
-
- def spec_for_sensitive_tests(self):
- """
- Ensure stdlib distutils when running select tests under CPython.
-
- python/cpython#91169
- """
- clear_distutils()
- self.spec_for_distutils = lambda: None
-
- sensitive_tests = (
- [
- 'test.test_distutils',
- 'test.test_peg_generator',
- 'test.test_importlib',
- ]
- if sys.version_info < (3, 10)
- else [
- 'test.test_distutils',
- ]
- )
-
-
-for name in DistutilsMetaFinder.sensitive_tests:
- setattr(
- DistutilsMetaFinder,
- f'spec_for_{name}',
- DistutilsMetaFinder.spec_for_sensitive_tests,
- )
-
-
-DISTUTILS_FINDER = DistutilsMetaFinder()
-
-
-def add_shim():
- DISTUTILS_FINDER in sys.meta_path or insert_shim()
-
-
-class shim:
- def __enter__(self):
- insert_shim()
-
- def __exit__(self, exc, value, tb):
- remove_shim()
-
-
-def insert_shim():
- sys.meta_path.insert(0, DISTUTILS_FINDER)
-
-
-def remove_shim():
- try:
- sys.meta_path.remove(DISTUTILS_FINDER)
- except ValueError:
- pass
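The hook above works by installing a finder on sys.meta_path that serves distutils from setuptools._distutils. A stripped-down sketch of the same mechanism (not part of the patch; the module name "foo" is invented):

import importlib.abc
import importlib.util
import sys
import types

substitute = types.ModuleType("foo")
substitute.answer = 42

class SubstituteLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return substitute          # hand back the prepared module object
    def exec_module(self, module):
        pass                       # nothing to execute; the module is already populated

class RedirectingFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path=None, target=None):
        if fullname == "foo":
            return importlib.util.spec_from_loader("foo", SubstituteLoader())
        return None

sys.meta_path.insert(0, RedirectingFinder())
import foo
print(foo.answer)  # -> 42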
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/chase_db1.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/chase_db1.py
deleted file mode 100644
index 298594ea925f87f22b37094a2ec50e370aec96a0..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/chase_db1.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# dataset settings
-dataset_type = 'ChaseDB1Dataset'
-data_root = 'data/CHASE_DB1'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-img_scale = (960, 999)
-crop_size = (128, 128)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg'])
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale,
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img'])
- ])
-]
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type='RepeatDataset',
- times=40000,
- dataset=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline)),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
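For context (not part of the patch): a dataset config like this is normally loaded through mmcv's Config utility. The sketch below assumes an mmcv 1.x install and the config path from the diff header.

from mmcv import Config

cfg = Config.fromfile("configs/_base_/datasets/chase_db1.py")
print(cfg.data.train.dataset.type)   # -> 'ChaseDB1Dataset'
print(cfg.data.samples_per_gpu)      # -> 4
print(cfg.crop_size)                 # -> (128, 128)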
diff --git a/spaces/TIMBOVILL/RVC-Noobie/run.sh b/spaces/TIMBOVILL/RVC-Noobie/run.sh
deleted file mode 100644
index 31d0be013006e9130e7b3b24d479272dd01c8acd..0000000000000000000000000000000000000000
--- a/spaces/TIMBOVILL/RVC-Noobie/run.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-# Install Debian packages
-sudo apt-get update
-sudo apt-get install -qq -y build-essential ffmpeg aria2
-
-# Upgrade pip and setuptools
-pip install --upgrade pip
-pip install --upgrade setuptools
-
-# Install wheel package (built-package format for Python)
-pip install wheel
-
-# Install Python packages using pip
-pip install -r requirements.txt
-
-# Run application locally at http://127.0.0.1:7860
-python app.py
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/euckrprober.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/euckrprober.py
deleted file mode 100644
index 1fc5de0462cd9a09472cece4087cafe699da4fa7..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/euckrprober.py
+++ /dev/null
@@ -1,47 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is mozilla.org code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .chardistribution import EUCKRDistributionAnalysis
-from .codingstatemachine import CodingStateMachine
-from .mbcharsetprober import MultiByteCharSetProber
-from .mbcssm import EUCKR_SM_MODEL
-
-
-class EUCKRProber(MultiByteCharSetProber):
- def __init__(self) -> None:
- super().__init__()
- self.coding_sm = CodingStateMachine(EUCKR_SM_MODEL)
- self.distribution_analyzer = EUCKRDistributionAnalysis()
- self.reset()
-
- @property
- def charset_name(self) -> str:
- return "EUC-KR"
-
- @property
- def language(self) -> str:
- return "Korean"
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/sbcsgroupprober.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/sbcsgroupprober.py
deleted file mode 100644
index 890ae8465c5b0ad2a5f99464fe5f5c0be49809f1..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/sbcsgroupprober.py
+++ /dev/null
@@ -1,88 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Universal charset detector code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 2001
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-# Shy Shalom - original C code
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .charsetgroupprober import CharSetGroupProber
-from .hebrewprober import HebrewProber
-from .langbulgarianmodel import ISO_8859_5_BULGARIAN_MODEL, WINDOWS_1251_BULGARIAN_MODEL
-from .langgreekmodel import ISO_8859_7_GREEK_MODEL, WINDOWS_1253_GREEK_MODEL
-from .langhebrewmodel import WINDOWS_1255_HEBREW_MODEL
-
-# from .langhungarianmodel import (ISO_8859_2_HUNGARIAN_MODEL,
-# WINDOWS_1250_HUNGARIAN_MODEL)
-from .langrussianmodel import (
- IBM855_RUSSIAN_MODEL,
- IBM866_RUSSIAN_MODEL,
- ISO_8859_5_RUSSIAN_MODEL,
- KOI8_R_RUSSIAN_MODEL,
- MACCYRILLIC_RUSSIAN_MODEL,
- WINDOWS_1251_RUSSIAN_MODEL,
-)
-from .langthaimodel import TIS_620_THAI_MODEL
-from .langturkishmodel import ISO_8859_9_TURKISH_MODEL
-from .sbcharsetprober import SingleByteCharSetProber
-
-
-class SBCSGroupProber(CharSetGroupProber):
- def __init__(self) -> None:
- super().__init__()
- hebrew_prober = HebrewProber()
- logical_hebrew_prober = SingleByteCharSetProber(
- WINDOWS_1255_HEBREW_MODEL, is_reversed=False, name_prober=hebrew_prober
- )
- # TODO: See if using ISO-8859-8 Hebrew model works better here, since
- # it's actually the visual one
- visual_hebrew_prober = SingleByteCharSetProber(
- WINDOWS_1255_HEBREW_MODEL, is_reversed=True, name_prober=hebrew_prober
- )
- hebrew_prober.set_model_probers(logical_hebrew_prober, visual_hebrew_prober)
- # TODO: ORDER MATTERS HERE. I changed the order vs what was in master
- # and several tests failed that did not before. Some thought
- # should be put into the ordering, and we should consider making
- # order not matter here, because that is very counter-intuitive.
- self.probers = [
- SingleByteCharSetProber(WINDOWS_1251_RUSSIAN_MODEL),
- SingleByteCharSetProber(KOI8_R_RUSSIAN_MODEL),
- SingleByteCharSetProber(ISO_8859_5_RUSSIAN_MODEL),
- SingleByteCharSetProber(MACCYRILLIC_RUSSIAN_MODEL),
- SingleByteCharSetProber(IBM866_RUSSIAN_MODEL),
- SingleByteCharSetProber(IBM855_RUSSIAN_MODEL),
- SingleByteCharSetProber(ISO_8859_7_GREEK_MODEL),
- SingleByteCharSetProber(WINDOWS_1253_GREEK_MODEL),
- SingleByteCharSetProber(ISO_8859_5_BULGARIAN_MODEL),
- SingleByteCharSetProber(WINDOWS_1251_BULGARIAN_MODEL),
- # TODO: Restore Hungarian encodings (iso-8859-2 and windows-1250)
- # after we retrain model.
- # SingleByteCharSetProber(ISO_8859_2_HUNGARIAN_MODEL),
- # SingleByteCharSetProber(WINDOWS_1250_HUNGARIAN_MODEL),
- SingleByteCharSetProber(TIS_620_THAI_MODEL),
- SingleByteCharSetProber(ISO_8859_9_TURKISH_MODEL),
- hebrew_prober,
- logical_hebrew_prober,
- visual_hebrew_prober,
- ]
- self.reset()
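These probers are not used directly; they are driven through chardet's top-level detection API. A minimal usage sketch (not part of the patch; the reported confidence is indicative only):

import chardet

raw = "안녕하세요, 반갑습니다".encode("euc-kr")
print(chardet.detect(raw))
# e.g. {'encoding': 'EUC-KR', 'confidence': 0.99, 'language': 'Korean'}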
diff --git a/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/hand.py b/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/hand.py
deleted file mode 100644
index 3d0bf17165ad7eb225332b51f4a2aa16718664b2..0000000000000000000000000000000000000000
--- a/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/hand.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import cv2
-import json
-import numpy as np
-import math
-import time
-from scipy.ndimage.filters import gaussian_filter
-import matplotlib.pyplot as plt
-import matplotlib
-import torch
-from skimage.measure import label
-
-from .model import handpose_model
-from . import util
-
-class Hand(object):
- def __init__(self, model_path):
- self.model = handpose_model()
- if torch.cuda.is_available():
- self.model = self.model.cuda()
- print('cuda')
- model_dict = util.transfer(self.model, torch.load(model_path))
- self.model.load_state_dict(model_dict)
- self.model.eval()
-
- def __call__(self, oriImg):
- scale_search = [0.5, 1.0, 1.5, 2.0]
- # scale_search = [0.5]
- boxsize = 368
- stride = 8
- padValue = 128
- thre = 0.05
- multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
- heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 22))
- # paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))
-
- for m in range(len(multiplier)):
- scale = multiplier[m]
- imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
- imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue)
- im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5
- im = np.ascontiguousarray(im)
-
- data = torch.from_numpy(im).float()
- if torch.cuda.is_available():
- data = data.cuda()
- # data = data.permute([2, 0, 1]).unsqueeze(0).float()
- with torch.no_grad():
- output = self.model(data).cpu().numpy()
- # output = self.model(data).numpy()
-
- # extract outputs, resize, and remove padding
- heatmap = np.transpose(np.squeeze(output), (1, 2, 0)) # output 1 is heatmaps
- heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
- heatmap_avg += heatmap / len(multiplier)
-
- all_peaks = []
- for part in range(21):
- map_ori = heatmap_avg[:, :, part]
- one_heatmap = gaussian_filter(map_ori, sigma=3)
- binary = np.ascontiguousarray(one_heatmap > thre, dtype=np.uint8)
- # all values are below the threshold
- if np.sum(binary) == 0:
- all_peaks.append([0, 0])
- continue
- label_img, label_numbers = label(binary, return_num=True, connectivity=binary.ndim)
- max_index = np.argmax([np.sum(map_ori[label_img == i]) for i in range(1, label_numbers + 1)]) + 1
- label_img[label_img != max_index] = 0
- map_ori[label_img == 0] = 0
-
- y, x = util.npmax(map_ori)
- all_peaks.append([x, y])
- return np.array(all_peaks)
-
-if __name__ == "__main__":
- hand_estimation = Hand('../model/hand_pose_model.pth')
-
- # test_image = '../images/hand.jpg'
- test_image = '../images/hand.jpg'
- oriImg = cv2.imread(test_image) # B,G,R order
- peaks = hand_estimation(oriImg)
- canvas = util.draw_handpose(oriImg, peaks, True)
- cv2.imshow('', canvas)
- cv2.waitKey(0)
\ No newline at end of file
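To make the per-part loop above easier to follow, here is a self-contained sketch (not part of the patch) of the single-heatmap peak extraction it performs: smooth, threshold, keep the strongest connected blob, then take its argmax.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import label

def heatmap_peak(heatmap: np.ndarray, thre: float = 0.05):
    smoothed = gaussian_filter(heatmap, sigma=3)
    binary = (smoothed > thre).astype(np.uint8)
    if binary.sum() == 0:
        return [0, 0]                                   # nothing above the threshold
    labels, n = label(binary, return_num=True, connectivity=binary.ndim)
    # keep only the connected blob with the largest total response
    best = np.argmax([heatmap[labels == i].sum() for i in range(1, n + 1)]) + 1
    masked = np.where(labels == best, heatmap, 0.0)
    y, x = np.unravel_index(np.argmax(masked), masked.shape)
    return [int(x), int(y)]

print(heatmap_peak(np.random.rand(64, 64)))             # e.g. [37, 12]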
diff --git a/spaces/Theivaprakasham/yolov6/configs/yolov6n_finetune.py b/spaces/Theivaprakasham/yolov6/configs/yolov6n_finetune.py
deleted file mode 100644
index 7d1fab5a2c4946eb9fa1986b210af8ad98a5700c..0000000000000000000000000000000000000000
--- a/spaces/Theivaprakasham/yolov6/configs/yolov6n_finetune.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# YOLOv6n model
-model = dict(
- type='YOLOv6n',
- pretrained='./weights/yolov6n.pt',
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPAN',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- iou_type='ciou'
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243
-)
diff --git a/spaces/WRH/wrhwang_foodvision_mini/app.py b/spaces/WRH/wrhwang_foodvision_mini/app.py
deleted file mode 100644
index 19b9932eb3188a87f994cf8dd66c89771bb3c0d7..0000000000000000000000000000000000000000
--- a/spaces/WRH/wrhwang_foodvision_mini/app.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Created on Tue Dec 6 21:33:32 2022
-
-@author: WR
-"""
-### 1. Imports and class names setup ###
-import gradio as gr
-import os
-import torch
-
-from model import create_effnetb2_model
-from timeit import default_timer as timer
-from typing import Tuple, Dict
-
-# Setup class names
-class_names = ["pizza", "steak", "sushi"]
-
-### 2. Model and transforms preparation ###
-
-# Create EffNetB2 model
-effnetb2, effnetb2_transforms = create_effnetb2_model(
- num_classes=3, # len(class_names) would also work
-)
-
-# Load saved weights
-effnetb2.load_state_dict(
- torch.load(
- f="09_pretrained_effnetb2_feature_extractor_pizza_steak_sushi_20_percent.pth",
- map_location=torch.device("cpu"), # load to CPU
- )
-)
-
-### 3. Predict function ###
-
-# Create predict function
-def predict(img) -> Tuple[Dict, float]:
- """Transforms and performs a prediction on img and returns prediction and time taken.
- """
- # Start the timer
- start_time = timer()
-
- # Transform the target image and add a batch dimension
- img = effnetb2_transforms(img).unsqueeze(0)
-
- # Put model into evaluation mode and turn on inference mode
- effnetb2.eval()
- with torch.inference_mode():
- # Pass the transformed image through the model and turn the prediction logits into prediction probabilities
- pred_probs = torch.softmax(effnetb2(img), dim=1)
-
- # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter)
- pred_labels_and_probs = {class_names[i]: float(pred_probs[0][i]) for i in range(len(class_names))}
-
- # Calculate the prediction time
- pred_time = round(timer() - start_time, 5)
-
- # Return the prediction dictionary and prediction time
- return pred_labels_and_probs, pred_time
-
-### 4. Gradio app ###
-
-# Create title, description and article strings
-title = "FoodVision Mini 🍕🥩🍣"
-description = "An EfficientNetB2 feature extractor computer vision model to classify images of food as pizza, steak or sushi."
-article = "Created at [09. PyTorch Model Deployment](https://www.learnpytorch.io/09_pytorch_model_deployment/)."
-
-# Create examples list from "examples/" directory
-example_list = [["examples/" + example] for example in os.listdir("examples")]
-
-# Create the Gradio demo
-demo = gr.Interface(fn=predict, # mapping function from input to output
- inputs=gr.Image(type="pil"), # what are the inputs?
- outputs=[gr.Label(num_top_classes=3, label="Predictions"), # what are the outputs?
- gr.Number(label="Prediction time (s)")], # our fn has two outputs, therefore we have two outputs
- # Create examples list from "examples/" directory
- examples=example_list,
- title=title,
- description=description,
- article=article)
-
-# Launch the demo!
-demo.launch()
-
-
diff --git a/spaces/XuebaoDingZhen/YOLOv50.0.1/README.md b/spaces/XuebaoDingZhen/YOLOv50.0.1/README.md
deleted file mode 100644
index 51b2e19a9ac7b62cac7f108ed7b6d187c24db5c4..0000000000000000000000000000000000000000
--- a/spaces/XuebaoDingZhen/YOLOv50.0.1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: YOLOv50.0.1
-emoji: 📈
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/XzJosh/Jianmo-Bert-VITS2/monotonic_align/core.py b/spaces/XzJosh/Jianmo-Bert-VITS2/monotonic_align/core.py
deleted file mode 100644
index dddc688d76172b880054e544b7a217acd013f14f..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Jianmo-Bert-VITS2/monotonic_align/core.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:,:,::1], numba.float32[:,:,::1], numba.int32[::1], numba.int32[::1]), nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val=-1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y-1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y-1, x-1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- index = index - 1
diff --git a/spaces/Yuliang/ECON/lib/pymafx/models/transformers/tokenlearner.py b/spaces/Yuliang/ECON/lib/pymafx/models/transformers/tokenlearner.py
deleted file mode 100644
index 441b361a721f685f481e764c19b624b593124c1b..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ECON/lib/pymafx/models/transformers/tokenlearner.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class SpatialAttention(nn.Module):
- def __init__(self) -> None:
- super().__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(2, 1, kernel_size=(1, 1), stride=1), nn.BatchNorm2d(1), nn.ReLU()
- )
-
- self.sgap = nn.AvgPool2d(2)
-
- def forward(self, x):
- B, H, W, C = x.shape
- x = x.reshape(B, C, H, W)
-
- mx = torch.max(x, 1)[0].unsqueeze(1)
- avg = torch.mean(x, 1).unsqueeze(1)
- combined = torch.cat([mx, avg], dim=1)
- fmap = self.conv(combined)
- weight_map = torch.sigmoid(fmap)
- out = (x * weight_map).mean(dim=(-2, -1))
-
- return out, x * weight_map
-
-
-class TokenLearner(nn.Module):
- def __init__(self, S) -> None:
- super().__init__()
- self.S = S
- self.tokenizers = nn.ModuleList([SpatialAttention() for _ in range(S)])
-
- def forward(self, x):
- B, _, _, C = x.shape
- Z = torch.Tensor(B, self.S, C).to(x)
- for i in range(self.S):
- Ai, _ = self.tokenizers[i](x) # [B, C]
- Z[:, i, :] = Ai
- return Z
-
-
-class TokenFuser(nn.Module):
- def __init__(self, H, W, C, S) -> None:
- super().__init__()
- self.projection = nn.Linear(S, S, bias=False)
- self.Bi = nn.Linear(C, S)
- self.spatial_attn = SpatialAttention()
- self.S = S
-
- def forward(self, y, x):
- B, S, C = y.shape
- B, H, W, C = x.shape
-
- Y = self.projection(y.reshape(B, C, S)).reshape(B, S, C)
- Bw = torch.sigmoid(self.Bi(x)).reshape(B, H * W, S) # [B, HW, S]
- BwY = torch.matmul(Bw, Y)
-
- _, xj = self.spatial_attn(x)
- xj = xj.reshape(B, H * W, C)
-
- out = (BwY + xj).reshape(B, H, W, C)
-
- return out
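A quick shape check (not part of the patch), assuming the module above is importable as `tokenlearner`: the learner compresses a [B, H, W, C] feature map into S tokens, and the fuser maps them back onto the spatial grid.

import torch
from tokenlearner import TokenLearner, TokenFuser  # hypothetical local import of the module above

x = torch.randn(2, 14, 14, 256)                 # [B, H, W, C] feature map
learner = TokenLearner(S=8)
tokens = learner(x)                             # -> [2, 8, 256]
fuser = TokenFuser(H=14, W=14, C=256, S=8)
out = fuser(tokens, x)                          # -> [2, 14, 14, 256]
print(tokens.shape, out.shape)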
diff --git a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/__init__.py b/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/__init__.py
deleted file mode 100644
index 42dcd7aa19e499d4ac240deb5d7e68bcf33795ed..0000000000000000000000000000000000000000
--- a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from poetry_diacritizer import predict
\ No newline at end of file
diff --git a/spaces/aadnk/faster-whisper-webui/src/prompts/abstractPromptStrategy.py b/spaces/aadnk/faster-whisper-webui/src/prompts/abstractPromptStrategy.py
deleted file mode 100644
index 41e8cba49fdbcc294ea216fffcafee89b07ed4df..0000000000000000000000000000000000000000
--- a/spaces/aadnk/faster-whisper-webui/src/prompts/abstractPromptStrategy.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import abc
-
-
-class AbstractPromptStrategy:
- """
- Represents a strategy for generating prompts for a given audio segment.
-
- Note that the strategy must be picklable, as it will be serialized and sent to the workers.
- """
-
- @abc.abstractmethod
- def get_segment_prompt(self, segment_index: int, whisper_prompt: str, detected_language: str) -> str:
- """
- Retrieves the prompt for a given segment.
-
- Parameters
- ----------
- segment_index: int
- The index of the segment.
- whisper_prompt: str
- The prompt for the segment generated by Whisper. This is typically concatenated with the initial prompt.
- detected_language: str
- The language detected for the segment.
- """
- pass
-
- @abc.abstractmethod
- def on_segment_finished(self, segment_index: int, whisper_prompt: str, detected_language: str, result: dict):
- """
- Called when a segment has finished processing.
-
- Parameters
- ----------
- segment_index: int
- The index of the segment.
- whisper_prompt: str
- The prompt for the segment generated by Whisper. This is typically concatenated with the initial prompt.
- detected_language: str
- The language detected for the segment.
- result: dict
- The result of the segment. It has the following format:
- {
- "text": str,
- "segments": [
- {
- "text": str,
- "start": float,
- "end": float,
- "words": [words],
- }
- ],
- "language": str,
- }
- """
- pass
-
- def _concat_prompt(self, prompt1, prompt2):
- """
- Concatenates two prompts.
-
- Parameters
- ----------
- prompt1: str
- The first prompt.
- prompt2: str
- The second prompt.
- """
- if (prompt1 is None):
- return prompt2
- elif (prompt2 is None):
- return prompt1
- else:
- return prompt1 + " " + prompt2
\ No newline at end of file
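For illustration (not part of the patch), a minimal concrete strategy that always prepends a fixed initial prompt might look like the sketch below; the import path assumes the project layout from the diff header and the class name is invented.

from src.prompts.abstractPromptStrategy import AbstractPromptStrategy

class FixedPromptStrategy(AbstractPromptStrategy):
    """Prepend the same initial prompt to every segment."""

    def __init__(self, initial_prompt: str):
        self.initial_prompt = initial_prompt

    def get_segment_prompt(self, segment_index: int, whisper_prompt: str, detected_language: str) -> str:
        # combine the fixed prompt with whatever Whisper carried over from earlier segments
        return self._concat_prompt(self.initial_prompt, whisper_prompt)

    def on_segment_finished(self, segment_index: int, whisper_prompt: str, detected_language: str, result: dict):
        pass  # nothing to track for a fixed prompt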
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/merge_cells.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/merge_cells.py
deleted file mode 100644
index 48ca8cc0a8aca8432835bd760c0403a3c35b34cf..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/merge_cells.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from abc import abstractmethod
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..cnn import ConvModule
-
-
-class BaseMergeCell(nn.Module):
- """The basic class for cells used in NAS-FPN and NAS-FCOS.
-
- BaseMergeCell takes 2 inputs. After applying convolution
- on them, they are resized to the target size. Then,
- they go through binary_op, which depends on the type of cell.
- If with_out_conv is True, the result of output will go through
- another convolution layer.
-
- Args:
- in_channels (int): number of input channels in out_conv layer.
- out_channels (int): number of output channels in out_conv layer.
- with_out_conv (bool): Whether to use out_conv layer
- out_conv_cfg (dict): Config dict for convolution layer, which should
- contain "groups", "kernel_size", "padding", "bias" to build
- out_conv layer.
- out_norm_cfg (dict): Config dict for normalization layer in out_conv.
- out_conv_order (tuple): The order of conv/norm/activation layers in
- out_conv.
- with_input1_conv (bool): Whether to use convolution on input1.
- with_input2_conv (bool): Whether to use convolution on input2.
- input_conv_cfg (dict): Config dict for building input1_conv layer and
- input2_conv layer, which is expected to contain the type of
- convolution.
- Default: None, which means using conv2d.
- input_norm_cfg (dict): Config dict for normalization layer in
- input1_conv and input2_conv layer. Default: None.
- upsample_mode (str): Interpolation method used to resize the output
- of input1_conv and input2_conv to target size. Currently, we
- support ['nearest', 'bilinear']. Default: 'nearest'.
- """
-
- def __init__(self,
- fused_channels=256,
- out_channels=256,
- with_out_conv=True,
- out_conv_cfg=dict(
- groups=1, kernel_size=3, padding=1, bias=True),
- out_norm_cfg=None,
- out_conv_order=('act', 'conv', 'norm'),
- with_input1_conv=False,
- with_input2_conv=False,
- input_conv_cfg=None,
- input_norm_cfg=None,
- upsample_mode='nearest'):
- super(BaseMergeCell, self).__init__()
- assert upsample_mode in ['nearest', 'bilinear']
- self.with_out_conv = with_out_conv
- self.with_input1_conv = with_input1_conv
- self.with_input2_conv = with_input2_conv
- self.upsample_mode = upsample_mode
-
- if self.with_out_conv:
- self.out_conv = ConvModule(
- fused_channels,
- out_channels,
- **out_conv_cfg,
- norm_cfg=out_norm_cfg,
- order=out_conv_order)
-
- self.input1_conv = self._build_input_conv(
- out_channels, input_conv_cfg,
- input_norm_cfg) if with_input1_conv else nn.Sequential()
- self.input2_conv = self._build_input_conv(
- out_channels, input_conv_cfg,
- input_norm_cfg) if with_input2_conv else nn.Sequential()
-
- def _build_input_conv(self, channel, conv_cfg, norm_cfg):
- return ConvModule(
- channel,
- channel,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- bias=True)
-
- @abstractmethod
- def _binary_op(self, x1, x2):
- pass
-
- def _resize(self, x, size):
- if x.shape[-2:] == size:
- return x
- elif x.shape[-2:] < size:
- return F.interpolate(x, size=size, mode=self.upsample_mode)
- else:
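- # input is larger than the target: downsample with max pooling; the
- # spatial dims must be integer multiples of the target size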
- assert x.shape[-2] % size[-2] == 0 and x.shape[-1] % size[-1] == 0
- kernel_size = x.shape[-1] // size[-1]
- x = F.max_pool2d(x, kernel_size=kernel_size, stride=kernel_size)
- return x
-
- def forward(self, x1, x2, out_size=None):
- assert x1.shape[:2] == x2.shape[:2]
- assert out_size is None or len(out_size) == 2
- if out_size is None: # resize to larger one
- out_size = max(x1.size()[2:], x2.size()[2:])
-
- x1 = self.input1_conv(x1)
- x2 = self.input2_conv(x2)
-
- x1 = self._resize(x1, out_size)
- x2 = self._resize(x2, out_size)
-
- x = self._binary_op(x1, x2)
- if self.with_out_conv:
- x = self.out_conv(x)
- return x
-
-
-class SumCell(BaseMergeCell):
-
- def __init__(self, in_channels, out_channels, **kwargs):
- super(SumCell, self).__init__(in_channels, out_channels, **kwargs)
-
- def _binary_op(self, x1, x2):
- return x1 + x2
-
-
-class ConcatCell(BaseMergeCell):
-
- def __init__(self, in_channels, out_channels, **kwargs):
- super(ConcatCell, self).__init__(in_channels * 2, out_channels,
- **kwargs)
-
- def _binary_op(self, x1, x2):
- ret = torch.cat([x1, x2], dim=1)
- return ret
-
-
-class GlobalPoolingCell(BaseMergeCell):
-
- def __init__(self, in_channels=None, out_channels=None, **kwargs):
- super().__init__(in_channels, out_channels, **kwargs)
- self.global_pool = nn.AdaptiveAvgPool2d((1, 1))
-
- def _binary_op(self, x1, x2):
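- # gate x1 with the sigmoid of globally average-pooled x2 (a per-channel
- # attention weight), then add x2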
- x2_att = self.global_pool(x2).sigmoid()
- return x2 + x2_att * x1
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/file_client.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/file_client.py
deleted file mode 100644
index 950f0c1aeab14b8e308a7455ccd64a95b5d98add..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/file_client.py
+++ /dev/null
@@ -1,1148 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import inspect
-import os
-import os.path as osp
-import re
-import tempfile
-import warnings
-from abc import ABCMeta, abstractmethod
-from contextlib import contextmanager
-from pathlib import Path
-from typing import Iterable, Iterator, Optional, Tuple, Union
-from urllib.request import urlopen
-
-import annotator.uniformer.mmcv as mmcv
-from annotator.uniformer.mmcv.utils.misc import has_method
-from annotator.uniformer.mmcv.utils.path import is_filepath
-
-
-class BaseStorageBackend(metaclass=ABCMeta):
- """Abstract class of storage backends.
-
- All backends need to implement two apis: ``get()`` and ``get_text()``.
- ``get()`` reads the file as a byte stream and ``get_text()`` reads the file
- as texts.
- """
-
- # a flag to indicate whether the backend can create a symlink for a file
- _allow_symlink = False
-
- @property
- def name(self):
- return self.__class__.__name__
-
- @property
- def allow_symlink(self):
- return self._allow_symlink
-
- @abstractmethod
- def get(self, filepath):
- pass
-
- @abstractmethod
- def get_text(self, filepath):
- pass
-
-
-class CephBackend(BaseStorageBackend):
- """Ceph storage backend (for internal use).
-
- Args:
- path_mapping (dict|None): path mapping dict from local path to Petrel
- path. When ``path_mapping={'src': 'dst'}``, ``src`` in ``filepath``
- will be replaced by ``dst``. Default: None.
-
- .. warning::
- :class:`mmcv.fileio.file_client.CephBackend` will be deprecated,
- please use :class:`mmcv.fileio.file_client.PetrelBackend` instead.
- """
-
- def __init__(self, path_mapping=None):
- try:
- import ceph
- except ImportError:
- raise ImportError('Please install ceph to enable CephBackend.')
-
- warnings.warn(
- 'CephBackend will be deprecated, please use PetrelBackend instead')
- self._client = ceph.S3Client()
- assert isinstance(path_mapping, dict) or path_mapping is None
- self.path_mapping = path_mapping
-
- def get(self, filepath):
- filepath = str(filepath)
- if self.path_mapping is not None:
- for k, v in self.path_mapping.items():
- filepath = filepath.replace(k, v)
- value = self._client.Get(filepath)
- value_buf = memoryview(value)
- return value_buf
-
- def get_text(self, filepath, encoding=None):
- raise NotImplementedError
-
-
-class PetrelBackend(BaseStorageBackend):
- """Petrel storage backend (for internal use).
-
- PetrelBackend supports reading and writing data to multiple clusters.
- If the file path contains the cluster name, PetrelBackend will read data
- from specified cluster or write data to it. Otherwise, PetrelBackend will
- access the default cluster.
-
- Args:
- path_mapping (dict, optional): Path mapping dict from local path to
- Petrel path. When ``path_mapping={'src': 'dst'}``, ``src`` in
- ``filepath`` will be replaced by ``dst``. Default: None.
- enable_mc (bool, optional): Whether to enable memcached support.
- Default: True.
-
- Examples:
- >>> filepath1 = 's3://path/of/file'
- >>> filepath2 = 'cluster-name:s3://path/of/file'
- >>> client = PetrelBackend()
- >>> client.get(filepath1) # get data from default cluster
- >>> client.get(filepath2) # get data from 'cluster-name' cluster
- """
-
- def __init__(self,
- path_mapping: Optional[dict] = None,
- enable_mc: bool = True):
- try:
- from petrel_client import client
- except ImportError:
- raise ImportError('Please install petrel_client to enable '
- 'PetrelBackend.')
-
- self._client = client.Client(enable_mc=enable_mc)
- assert isinstance(path_mapping, dict) or path_mapping is None
- self.path_mapping = path_mapping
-
- def _map_path(self, filepath: Union[str, Path]) -> str:
- """Map ``filepath`` to a string path whose prefix will be replaced by
- :attr:`self.path_mapping`.
-
- Args:
- filepath (str): Path to be mapped.
- """
- filepath = str(filepath)
- if self.path_mapping is not None:
- for k, v in self.path_mapping.items():
- filepath = filepath.replace(k, v)
- return filepath
-
- def _format_path(self, filepath: str) -> str:
- """Convert a ``filepath`` to standard format of petrel oss.
-
- If the ``filepath`` is concatenated by ``os.path.join``, in a Windows
- environment, the ``filepath`` will be in the format of
- 's3://bucket_name\\image.jpg'. By invoking :meth:`_format_path`, the
- above ``filepath`` will be converted to 's3://bucket_name/image.jpg'.
-
- Args:
- filepath (str): Path to be formatted.
- """
- return re.sub(r'\\+', '/', filepath)
-
- def get(self, filepath: Union[str, Path]) -> memoryview:
- """Read data from a given ``filepath`` with 'rb' mode.
-
- Args:
- filepath (str or Path): Path to read data.
-
- Returns:
- memoryview: A memory view of expected bytes object to avoid
- copying. The memoryview object can be converted to bytes by
- ``value_buf.tobytes()``.
- """
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- value = self._client.Get(filepath)
- value_buf = memoryview(value)
- return value_buf
-
- def get_text(self,
- filepath: Union[str, Path],
- encoding: str = 'utf-8') -> str:
- """Read data from a given ``filepath`` with 'r' mode.
-
- Args:
- filepath (str or Path): Path to read data.
- encoding (str): The encoding format used to open the ``filepath``.
- Default: 'utf-8'.
-
- Returns:
- str: Expected text reading from ``filepath``.
- """
- return str(self.get(filepath), encoding=encoding)
-
- def put(self, obj: bytes, filepath: Union[str, Path]) -> None:
- """Save data to a given ``filepath``.
-
- Args:
- obj (bytes): Data to be saved.
- filepath (str or Path): Path to write data.
- """
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- self._client.put(filepath, obj)
-
- def put_text(self,
- obj: str,
- filepath: Union[str, Path],
- encoding: str = 'utf-8') -> None:
- """Save data to a given ``filepath``.
-
- Args:
- obj (str): Data to be written.
- filepath (str or Path): Path to write data.
- encoding (str): The encoding format used to encode the ``obj``.
- Default: 'utf-8'.
- """
- self.put(bytes(obj, encoding=encoding), filepath)
-
- def remove(self, filepath: Union[str, Path]) -> None:
- """Remove a file.
-
- Args:
- filepath (str or Path): Path to be removed.
- """
- if not has_method(self._client, 'delete'):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `delete` method, please use a higher version or dev'
- ' branch instead.'))
-
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- self._client.delete(filepath)
-
- def exists(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path exists.
-
- Args:
- filepath (str or Path): Path to be checked whether exists.
-
- Returns:
- bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise.
- """
- if not (has_method(self._client, 'contains')
- and has_method(self._client, 'isdir')):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `contains` and `isdir` methods, please use a higher'
- 'version or dev branch instead.'))
-
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- return self._client.contains(filepath) or self._client.isdir(filepath)
-
- def isdir(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a directory.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a
- directory.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a directory,
- ``False`` otherwise.
- """
- if not has_method(self._client, 'isdir'):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `isdir` method, please use a higher version or dev'
- ' branch instead.'))
-
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- return self._client.isdir(filepath)
-
- def isfile(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a file.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a file.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a file, ``False``
- otherwise.
- """
- if not has_method(self._client, 'contains'):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `contains` method, please use a higher version or '
- 'dev branch instead.'))
-
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- return self._client.contains(filepath)
-
- def join_path(self, filepath: Union[str, Path],
- *filepaths: Union[str, Path]) -> str:
- """Concatenate all file paths.
-
- Args:
- filepath (str or Path): Path to be concatenated.
-
- Returns:
- str: The result after concatenation.
- """
- filepath = self._format_path(self._map_path(filepath))
- if filepath.endswith('/'):
- filepath = filepath[:-1]
- formatted_paths = [filepath]
- for path in filepaths:
- formatted_paths.append(self._format_path(self._map_path(path)))
- return '/'.join(formatted_paths)
-
- @contextmanager
- def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]:
- """Download a file from ``filepath`` and return a temporary path.
-
- ``get_local_path`` is decorated by :meth:`contextlib.contextmanager`. It
- can be called with a ``with`` statement, and when exiting from the
- ``with`` statement, the temporary path will be released.
-
- Args:
- filepath (str | Path): Download a file from ``filepath``.
-
- Examples:
- >>> client = PetrelBackend()
- >>> # After exiting from the ``with`` clause,
- >>> # the path will be removed
- >>> with client.get_local_path('s3://path/of/your/file') as path:
- ... # do something here
-
- Yields:
- Iterable[str]: Only yield one temporary path.
- """
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- assert self.isfile(filepath)
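- # write the remote object to a named temporary file and yield its path;
- # the file is removed when the context exits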
- try:
- f = tempfile.NamedTemporaryFile(delete=False)
- f.write(self.get(filepath))
- f.close()
- yield f.name
- finally:
- os.remove(f.name)
-
- def list_dir_or_file(self,
- dir_path: Union[str, Path],
- list_dir: bool = True,
- list_file: bool = True,
- suffix: Optional[Union[str, Tuple[str]]] = None,
- recursive: bool = False) -> Iterator[str]:
- """Scan a directory to find the interested directories or files in
- arbitrary order.
-
- Note:
- Petrel has no concept of directories but it simulates the directory
- hierarchy in the filesystem through public prefixes. In addition,
- if the returned path ends with '/', it means the path is a public
- prefix which is a logical directory.
-
- Note:
- :meth:`list_dir_or_file` returns the path relative to ``dir_path``.
- In addition, the returned directory path will not contain the
- suffix '/', which is consistent with other backends.
-
- Args:
- dir_path (str | Path): Path of the directory.
- list_dir (bool): List the directories. Default: True.
- list_file (bool): List the path of files. Default: True.
- suffix (str or tuple[str], optional): File suffix
- that we are interested in. Default: None.
- recursive (bool): If set to True, recursively scan the
- directory. Default: False.
-
- Yields:
- Iterable[str]: A relative path to ``dir_path``.
- """
- if not has_method(self._client, 'list'):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `list` method, please use a higher version or dev'
- ' branch instead.'))
-
- dir_path = self._map_path(dir_path)
- dir_path = self._format_path(dir_path)
- if list_dir and suffix is not None:
- raise TypeError(
- '`list_dir` should be False when `suffix` is not None')
-
- if (suffix is not None) and not isinstance(suffix, (str, tuple)):
- raise TypeError('`suffix` must be a string or tuple of strings')
-
- # Petrel's simulated directory hierarchy assumes that directory paths
- # should end with `/`
- if not dir_path.endswith('/'):
- dir_path += '/'
-
- root = dir_path
-
- def _list_dir_or_file(dir_path, list_dir, list_file, suffix,
- recursive):
- for path in self._client.list(dir_path):
- # the `self.isdir` is not used here to determine whether path
- # is a directory, because `self.isdir` relies on
- # `self._client.list`
- if path.endswith('/'): # a directory path
- next_dir_path = self.join_path(dir_path, path)
- if list_dir:
- # get the relative path and exclude the last
- # character '/'
- rel_dir = next_dir_path[len(root):-1]
- yield rel_dir
- if recursive:
- yield from _list_dir_or_file(next_dir_path, list_dir,
- list_file, suffix,
- recursive)
- else: # a file path
- absolute_path = self.join_path(dir_path, path)
- rel_path = absolute_path[len(root):]
- if (suffix is None
- or rel_path.endswith(suffix)) and list_file:
- yield rel_path
-
- return _list_dir_or_file(dir_path, list_dir, list_file, suffix,
- recursive)
-
-
-class MemcachedBackend(BaseStorageBackend):
- """Memcached storage backend.
-
- Attributes:
- server_list_cfg (str): Config file for memcached server list.
- client_cfg (str): Config file for memcached client.
- sys_path (str | None): Additional path to be appended to `sys.path`.
- Default: None.
- """
-
- def __init__(self, server_list_cfg, client_cfg, sys_path=None):
- if sys_path is not None:
- import sys
- sys.path.append(sys_path)
- try:
- import mc
- except ImportError:
- raise ImportError(
- 'Please install memcached to enable MemcachedBackend.')
-
- self.server_list_cfg = server_list_cfg
- self.client_cfg = client_cfg
- self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg,
- self.client_cfg)
- # mc.pyvector serves as a pointer to a memory cache
- self._mc_buffer = mc.pyvector()
-
- def get(self, filepath):
- filepath = str(filepath)
- import mc
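- # fetch the value into the shared memcached buffer, then convert it to a readable buffer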
- self._client.Get(filepath, self._mc_buffer)
- value_buf = mc.ConvertBuffer(self._mc_buffer)
- return value_buf
-
- def get_text(self, filepath, encoding=None):
- raise NotImplementedError
-
-
-class LmdbBackend(BaseStorageBackend):
- """Lmdb storage backend.
-
- Args:
- db_path (str): Lmdb database path.
- readonly (bool, optional): Lmdb environment parameter. If True,
- disallow any write operations. Default: True.
- lock (bool, optional): Lmdb environment parameter. If False, when
- concurrent access occurs, do not lock the database. Default: False.
- readahead (bool, optional): Lmdb environment parameter. If False,
- disable the OS filesystem readahead mechanism, which may improve
- random read performance when a database is larger than RAM.
- Default: False.
-
- Attributes:
- db_path (str): Lmdb database path.
- """
-
- def __init__(self,
- db_path,
- readonly=True,
- lock=False,
- readahead=False,
- **kwargs):
- try:
- import lmdb
- except ImportError:
- raise ImportError('Please install lmdb to enable LmdbBackend.')
-
- self.db_path = str(db_path)
- self._client = lmdb.open(
- self.db_path,
- readonly=readonly,
- lock=lock,
- readahead=readahead,
- **kwargs)
-
- def get(self, filepath):
- """Get values according to the filepath.
-
- Args:
- filepath (str | obj:`Path`): Here, filepath is the lmdb key.
- """
- filepath = str(filepath)
- with self._client.begin(write=False) as txn:
- value_buf = txn.get(filepath.encode('ascii'))
- return value_buf
-
- def get_text(self, filepath, encoding=None):
- raise NotImplementedError
-
-
-class HardDiskBackend(BaseStorageBackend):
- """Raw hard disks storage backend."""
-
- _allow_symlink = True
-
- def get(self, filepath: Union[str, Path]) -> bytes:
- """Read data from a given ``filepath`` with 'rb' mode.
-
- Args:
- filepath (str or Path): Path to read data.
-
- Returns:
- bytes: Expected bytes object.
- """
- with open(filepath, 'rb') as f:
- value_buf = f.read()
- return value_buf
-
- def get_text(self,
- filepath: Union[str, Path],
- encoding: str = 'utf-8') -> str:
- """Read data from a given ``filepath`` with 'r' mode.
-
- Args:
- filepath (str or Path): Path to read data.
- encoding (str): The encoding format used to open the ``filepath``.
- Default: 'utf-8'.
-
- Returns:
- str: Expected text reading from ``filepath``.
- """
- with open(filepath, 'r', encoding=encoding) as f:
- value_buf = f.read()
- return value_buf
-
- def put(self, obj: bytes, filepath: Union[str, Path]) -> None:
- """Write data to a given ``filepath`` with 'wb' mode.
-
- Note:
- ``put`` will create a directory if the directory of ``filepath``
- does not exist.
-
- Args:
- obj (bytes): Data to be written.
- filepath (str or Path): Path to write data.
- """
- mmcv.mkdir_or_exist(osp.dirname(filepath))
- with open(filepath, 'wb') as f:
- f.write(obj)
-
- def put_text(self,
- obj: str,
- filepath: Union[str, Path],
- encoding: str = 'utf-8') -> None:
- """Write data to a given ``filepath`` with 'w' mode.
-
- Note:
- ``put_text`` will create a directory if the directory of
- ``filepath`` does not exist.
-
- Args:
- obj (str): Data to be written.
- filepath (str or Path): Path to write data.
- encoding (str): The encoding format used to open the ``filepath``.
- Default: 'utf-8'.
- """
- mmcv.mkdir_or_exist(osp.dirname(filepath))
- with open(filepath, 'w', encoding=encoding) as f:
- f.write(obj)
-
- def remove(self, filepath: Union[str, Path]) -> None:
- """Remove a file.
-
- Args:
- filepath (str or Path): Path to be removed.
- """
- os.remove(filepath)
-
- def exists(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path exists.
-
- Args:
- filepath (str or Path): Path to be checked whether exists.
-
- Returns:
- bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise.
- """
- return osp.exists(filepath)
-
- def isdir(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a directory.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a
- directory.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a directory,
- ``False`` otherwise.
- """
- return osp.isdir(filepath)
-
- def isfile(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a file.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a file.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a file, ``False``
- otherwise.
- """
- return osp.isfile(filepath)
-
- def join_path(self, filepath: Union[str, Path],
- *filepaths: Union[str, Path]) -> str:
- """Concatenate all file paths.
-
- Join one or more filepath components intelligently. The return value
- is the concatenation of filepath and any members of *filepaths.
-
- Args:
- filepath (str or Path): Path to be concatenated.
-
- Returns:
- str: The result of concatenation.
- """
- return osp.join(filepath, *filepaths)
-
- @contextmanager
- def get_local_path(
- self, filepath: Union[str, Path]) -> Iterable[Union[str, Path]]:
- """Only for unified API and do nothing."""
- yield filepath
-
- def list_dir_or_file(self,
- dir_path: Union[str, Path],
- list_dir: bool = True,
- list_file: bool = True,
- suffix: Optional[Union[str, Tuple[str]]] = None,
- recursive: bool = False) -> Iterator[str]:
- """Scan a directory to find the interested directories or files in
- arbitrary order.
-
- Note:
- :meth:`list_dir_or_file` returns the path relative to ``dir_path``.
-
- Args:
- dir_path (str | Path): Path of the directory.
- list_dir (bool): List the directories. Default: True.
- list_file (bool): List the path of files. Default: True.
- suffix (str or tuple[str], optional): File suffix
- that we are interested in. Default: None.
- recursive (bool): If set to True, recursively scan the
- directory. Default: False.
-
- Yields:
- Iterable[str]: A relative path to ``dir_path``.
- """
- if list_dir and suffix is not None:
- raise TypeError('`suffix` should be None when `list_dir` is True')
-
- if (suffix is not None) and not isinstance(suffix, (str, tuple)):
- raise TypeError('`suffix` must be a string or tuple of strings')
-
- root = dir_path
-
- def _list_dir_or_file(dir_path, list_dir, list_file, suffix,
- recursive):
- for entry in os.scandir(dir_path):
- if not entry.name.startswith('.') and entry.is_file():
- rel_path = osp.relpath(entry.path, root)
- if (suffix is None
- or rel_path.endswith(suffix)) and list_file:
- yield rel_path
- elif osp.isdir(entry.path):
- if list_dir:
- rel_dir = osp.relpath(entry.path, root)
- yield rel_dir
- if recursive:
- yield from _list_dir_or_file(entry.path, list_dir,
- list_file, suffix,
- recursive)
-
- return _list_dir_or_file(dir_path, list_dir, list_file, suffix,
- recursive)
-
-
-class HTTPBackend(BaseStorageBackend):
- """HTTP and HTTPS storage bachend."""
-
- def get(self, filepath):
- value_buf = urlopen(filepath).read()
- return value_buf
-
- def get_text(self, filepath, encoding='utf-8'):
- value_buf = urlopen(filepath).read()
- return value_buf.decode(encoding)
-
- @contextmanager
- def get_local_path(self, filepath: str) -> Iterable[str]:
- """Download a file from ``filepath``.
-
- ``get_local_path`` is decorated by :meth:`contextlib.contextmanager`. It
- can be called with a ``with`` statement, and when exiting from the
- ``with`` statement, the temporary path will be released.
-
- Args:
- filepath (str): Download a file from ``filepath``.
-
- Examples:
- >>> client = HTTPBackend()
- >>> # After exiting from the ``with`` clause,
- >>> # the path will be removed
- >>> with client.get_local_path('http://path/of/your/file') as path:
- ... # do something here
- """
- try:
- f = tempfile.NamedTemporaryFile(delete=False)
- f.write(self.get(filepath))
- f.close()
- yield f.name
- finally:
- os.remove(f.name)
-
-
-class FileClient:
- """A general file client to access files in different backends.
-
- The client loads a file or text in a specified backend from its path
- and returns it as a binary or text file. There are two ways to choose a
- backend: the name of the backend and the prefix of the path. Although both
- can be used to choose a storage backend, ``backend`` has the higher priority,
- that is, if both are set, the storage backend will be chosen by the
- backend argument. If both are `None`, the disk backend will be chosen.
- Note that it can also register other backend accessors with a given name,
- prefixes, and backend class. In addition, we use the singleton pattern to
- avoid repeated object creation. If the arguments are the same, the same
- object will be returned.
-
- Args:
- backend (str, optional): The storage backend type. Options are "disk",
- "ceph", "memcached", "lmdb", "http" and "petrel". Default: None.
- prefix (str, optional): The prefix of the registered storage backend.
- Options are "s3", "http", "https". Default: None.
-
- Examples:
- >>> # only set backend
- >>> file_client = FileClient(backend='petrel')
- >>> # only set prefix
- >>> file_client = FileClient(prefix='s3')
- >>> # set both backend and prefix but use backend to choose client
- >>> file_client = FileClient(backend='petrel', prefix='s3')
- >>> # if the arguments are the same, the same object is returned
- >>> file_client1 = FileClient(backend='petrel')
- >>> file_client1 is file_client
- True
-
- Attributes:
- client (:obj:`BaseStorageBackend`): The backend object.
- """
-
- _backends = {
- 'disk': HardDiskBackend,
- 'ceph': CephBackend,
- 'memcached': MemcachedBackend,
- 'lmdb': LmdbBackend,
- 'petrel': PetrelBackend,
- 'http': HTTPBackend,
- }
- # This collection is used to record the overridden backends, and when a
- # backend appears in the collection, the singleton pattern is disabled for
- # that backend, because if the singleton pattern is used, then the object
- # returned will be the backend before overwriting
- _overridden_backends = set()
- _prefix_to_backends = {
- 's3': PetrelBackend,
- 'http': HTTPBackend,
- 'https': HTTPBackend,
- }
- _overridden_prefixes = set()
-
- _instances = {}
-
- def __new__(cls, backend=None, prefix=None, **kwargs):
- if backend is None and prefix is None:
- backend = 'disk'
- if backend is not None and backend not in cls._backends:
- raise ValueError(
- f'Backend {backend} is not supported. Currently supported ones'
- f' are {list(cls._backends.keys())}')
- if prefix is not None and prefix not in cls._prefix_to_backends:
- raise ValueError(
- f'prefix {prefix} is not supported. Currently supported ones '
- f'are {list(cls._prefix_to_backends.keys())}')
-
- # concatenate the arguments to a unique key for determining whether
- # objects with the same arguments were created
- arg_key = f'{backend}:{prefix}'
- for key, value in kwargs.items():
- arg_key += f':{key}:{value}'
-
- # if a backend was overridden, it will create a new object
- if (arg_key in cls._instances
- and backend not in cls._overridden_backends
- and prefix not in cls._overridden_prefixes):
- _instance = cls._instances[arg_key]
- else:
- # create a new object and put it to _instance
- _instance = super().__new__(cls)
- if backend is not None:
- _instance.client = cls._backends[backend](**kwargs)
- else:
- _instance.client = cls._prefix_to_backends[prefix](**kwargs)
-
- cls._instances[arg_key] = _instance
-
- return _instance
-
- @property
- def name(self):
- return self.client.name
-
- @property
- def allow_symlink(self):
- return self.client.allow_symlink
-
- @staticmethod
- def parse_uri_prefix(uri: Union[str, Path]) -> Optional[str]:
- """Parse the prefix of a uri.
-
- Args:
- uri (str | Path): Uri to be parsed that contains the file prefix.
-
- Examples:
- >>> FileClient.parse_uri_prefix('s3://path/of/your/file')
- 's3'
-
- Returns:
- str | None: Return the prefix of uri if the uri contains '://'
- else ``None``.
- """
- assert is_filepath(uri)
- uri = str(uri)
- if '://' not in uri:
- return None
- else:
- prefix, _ = uri.split('://')
- # In the case of PetrelBackend, the prefix may contain the cluster
- # name like clusterName:s3
- if ':' in prefix:
- _, prefix = prefix.split(':')
- return prefix
-
- @classmethod
- def infer_client(cls,
- file_client_args: Optional[dict] = None,
- uri: Optional[Union[str, Path]] = None) -> 'FileClient':
- """Infer a suitable file client based on the URI and arguments.
-
- Args:
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. Default: None.
- uri (str | Path, optional): Uri to be parsed that contains the file
- prefix. Default: None.
-
- Examples:
- >>> uri = 's3://path/of/your/file'
- >>> file_client = FileClient.infer_client(uri=uri)
- >>> file_client_args = {'backend': 'petrel'}
- >>> file_client = FileClient.infer_client(file_client_args)
-
- Returns:
- FileClient: Instantiated FileClient object.
- """
- assert file_client_args is not None or uri is not None
- if file_client_args is None:
- file_prefix = cls.parse_uri_prefix(uri) # type: ignore
- return cls(prefix=file_prefix)
- else:
- return cls(**file_client_args)
-
- @classmethod
- def _register_backend(cls, name, backend, force=False, prefixes=None):
- if not isinstance(name, str):
- raise TypeError('the backend name should be a string, '
- f'but got {type(name)}')
- if not inspect.isclass(backend):
- raise TypeError(
- f'backend should be a class but got {type(backend)}')
- if not issubclass(backend, BaseStorageBackend):
- raise TypeError(
- f'backend {backend} is not a subclass of BaseStorageBackend')
- if not force and name in cls._backends:
- raise KeyError(
- f'{name} is already registered as a storage backend, '
- 'add "force=True" if you want to override it')
-
- if name in cls._backends and force:
- cls._overridden_backends.add(name)
- cls._backends[name] = backend
-
- if prefixes is not None:
- if isinstance(prefixes, str):
- prefixes = [prefixes]
- else:
- assert isinstance(prefixes, (list, tuple))
- for prefix in prefixes:
- if prefix not in cls._prefix_to_backends:
- cls._prefix_to_backends[prefix] = backend
- elif (prefix in cls._prefix_to_backends) and force:
- cls._overridden_prefixes.add(prefix)
- cls._prefix_to_backends[prefix] = backend
- else:
- raise KeyError(
- f'{prefix} is already registered as a storage backend,'
- ' add "force=True" if you want to override it')
-
- @classmethod
- def register_backend(cls, name, backend=None, force=False, prefixes=None):
- """Register a backend to FileClient.
-
- This method can be used as a normal class method or a decorator.
-
- .. code-block:: python
-
- class NewBackend(BaseStorageBackend):
-
- def get(self, filepath):
- return filepath
-
- def get_text(self, filepath):
- return filepath
-
- FileClient.register_backend('new', NewBackend)
-
- or
-
- .. code-block:: python
-
- @FileClient.register_backend('new')
- class NewBackend(BaseStorageBackend):
-
- def get(self, filepath):
- return filepath
-
- def get_text(self, filepath):
- return filepath
-
- Args:
- name (str): The name of the registered backend.
- backend (class, optional): The backend class to be registered,
- which must be a subclass of :class:`BaseStorageBackend`.
- When this method is used as a decorator, backend is None.
- Defaults to None.
- force (bool, optional): Whether to override the backend if the name
- has already been registered. Defaults to False.
- prefixes (str or list[str] or tuple[str], optional): The prefixes
- of the registered storage backend. Default: None.
- `New in version 1.3.15.`
- """
- if backend is not None:
- cls._register_backend(
- name, backend, force=force, prefixes=prefixes)
- return
-
- def _register(backend_cls):
- cls._register_backend(
- name, backend_cls, force=force, prefixes=prefixes)
- return backend_cls
-
- return _register
-
- def get(self, filepath: Union[str, Path]) -> Union[bytes, memoryview]:
- """Read data from a given ``filepath`` with 'rb' mode.
-
- Note:
- There are two types of return values for ``get``, one is ``bytes``
- and the other is ``memoryview``. The advantage of using memoryview
- is that you can avoid copying, and if you want to convert it to
- ``bytes``, you can use ``.tobytes()``.
-
- Args:
- filepath (str or Path): Path to read data.
-
- Returns:
- bytes | memoryview: Expected bytes object or a memory view of the
- bytes object.
- """
- return self.client.get(filepath)
-
- def get_text(self, filepath: Union[str, Path], encoding='utf-8') -> str:
- """Read data from a given ``filepath`` with 'r' mode.
-
- Args:
- filepath (str or Path): Path to read data.
- encoding (str): The encoding format used to open the ``filepath``.
- Default: 'utf-8'.
-
- Returns:
- str: Expected text reading from ``filepath``.
- """
- return self.client.get_text(filepath, encoding)
-
- def put(self, obj: bytes, filepath: Union[str, Path]) -> None:
- """Write data to a given ``filepath`` with 'wb' mode.
-
- Note:
- ``put`` should create a directory if the directory of ``filepath``
- does not exist.
-
- Args:
- obj (bytes): Data to be written.
- filepath (str or Path): Path to write data.
- """
- self.client.put(obj, filepath)
-
- def put_text(self, obj: str, filepath: Union[str, Path]) -> None:
- """Write data to a given ``filepath`` with 'w' mode.
-
- Note:
- ``put_text`` should create a directory if the directory of
- ``filepath`` does not exist.
-
- Args:
- obj (str): Data to be written.
- filepath (str or Path): Path to write data.
- encoding (str, optional): The encoding format used to open the
- `filepath`. Default: 'utf-8'.
- """
- self.client.put_text(obj, filepath)
-
- def remove(self, filepath: Union[str, Path]) -> None:
- """Remove a file.
-
- Args:
- filepath (str, Path): Path to be removed.
- """
- self.client.remove(filepath)
-
- def exists(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path exists.
-
- Args:
- filepath (str or Path): Path to be checked whether exists.
-
- Returns:
- bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise.
- """
- return self.client.exists(filepath)
-
- def isdir(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a directory.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a
- directory.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a directory,
- ``False`` otherwise.
- """
- return self.client.isdir(filepath)
-
- def isfile(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a file.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a file.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a file, ``False``
- otherwise.
- """
- return self.client.isfile(filepath)
-
- def join_path(self, filepath: Union[str, Path],
- *filepaths: Union[str, Path]) -> str:
- """Concatenate all file paths.
-
- Join one or more filepath components intelligently. The return value
- is the concatenation of filepath and any members of *filepaths.
-
- Args:
- filepath (str or Path): Path to be concatenated.
-
- Returns:
- str: The result of concatenation.
- """
- return self.client.join_path(filepath, *filepaths)
-
- @contextmanager
- def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]:
- """Download data from ``filepath`` and write the data to local path.
-
- ``get_local_path`` is decorated by :meth:`contextlib.contextmanager`. It
- can be called with a ``with`` statement, and when exiting from the
- ``with`` statement, the temporary path will be released.
-
- Note:
- If the ``filepath`` is a local path, just return itself.
-
- .. warning::
- ``get_local_path`` is an experimental interface that may change in
- the future.
-
- Args:
- filepath (str or Path): Path to be read data.
-
- Examples:
- >>> file_client = FileClient(prefix='s3')
- >>> with file_client.get_local_path('s3://bucket/abc.jpg') as path:
- ... # do something here
-
- Yields:
- Iterable[str]: Only yield one path.
- """
- with self.client.get_local_path(str(filepath)) as local_path:
- yield local_path
-
- def list_dir_or_file(self,
- dir_path: Union[str, Path],
- list_dir: bool = True,
- list_file: bool = True,
- suffix: Optional[Union[str, Tuple[str]]] = None,
- recursive: bool = False) -> Iterator[str]:
- """Scan a directory to find the interested directories or files in
- arbitrary order.
-
- Note:
- :meth:`list_dir_or_file` returns the path relative to ``dir_path``.
-
- Args:
- dir_path (str | Path): Path of the directory.
- list_dir (bool): List the directories. Default: True.
- list_file (bool): List the path of files. Default: True.
- suffix (str or tuple[str], optional): File suffix
- that we are interested in. Default: None.
- recursive (bool): If set to True, recursively scan the
- directory. Default: False.
-
- Yields:
- Iterable[str]: A relative path to ``dir_path``.
- """
- yield from self.client.list_dir_or_file(dir_path, list_dir, list_file,
- suffix, recursive)
diff --git a/spaces/adirik/stylemc-demo/encoder4editing/options/train_options.py b/spaces/adirik/stylemc-demo/encoder4editing/options/train_options.py
deleted file mode 100644
index 583ea1423fdc9a649cd7044d74d554bf0ac2bf51..0000000000000000000000000000000000000000
--- a/spaces/adirik/stylemc-demo/encoder4editing/options/train_options.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from argparse import ArgumentParser
-from configs.paths_config import model_paths
-
-
-class TrainOptions:
-
- def __init__(self):
- self.parser = ArgumentParser()
- self.initialize()
-
- def initialize(self):
- self.parser.add_argument('--exp_dir', type=str, help='Path to experiment output directory')
- self.parser.add_argument('--dataset_type', default='ffhq_encode', type=str,
- help='Type of dataset/experiment to run')
- self.parser.add_argument('--encoder_type', default='Encoder4Editing', type=str, help='Which encoder to use')
-
- self.parser.add_argument('--batch_size', default=4, type=int, help='Batch size for training')
- self.parser.add_argument('--test_batch_size', default=2, type=int, help='Batch size for testing and inference')
- self.parser.add_argument('--workers', default=4, type=int, help='Number of train dataloader workers')
- self.parser.add_argument('--test_workers', default=2, type=int,
- help='Number of test/inference dataloader workers')
-
- self.parser.add_argument('--learning_rate', default=0.0001, type=float, help='Optimizer learning rate')
- self.parser.add_argument('--optim_name', default='ranger', type=str, help='Which optimizer to use')
- self.parser.add_argument('--train_decoder', default=False, type=bool, help='Whether to train the decoder model')
- self.parser.add_argument('--start_from_latent_avg', action='store_true',
- help='Whether to add average latent vector to generate codes from encoder.')
- self.parser.add_argument('--lpips_type', default='alex', type=str, help='LPIPS backbone')
-
- self.parser.add_argument('--lpips_lambda', default=0.8, type=float, help='LPIPS loss multiplier factor')
- self.parser.add_argument('--id_lambda', default=0.1, type=float, help='ID loss multiplier factor')
- self.parser.add_argument('--l2_lambda', default=1.0, type=float, help='L2 loss multiplier factor')
-
- self.parser.add_argument('--stylegan_weights', default=model_paths['stylegan_ffhq'], type=str,
- help='Path to StyleGAN model weights')
- self.parser.add_argument('--stylegan_size', default=1024, type=int,
- help='size of pretrained StyleGAN Generator')
- self.parser.add_argument('--checkpoint_path', default=None, type=str, help='Path to pSp model checkpoint')
-
- self.parser.add_argument('--max_steps', default=500000, type=int, help='Maximum number of training steps')
- self.parser.add_argument('--image_interval', default=100, type=int,
- help='Interval for logging train images during training')
- self.parser.add_argument('--board_interval', default=50, type=int,
- help='Interval for logging metrics to tensorboard')
- self.parser.add_argument('--val_interval', default=1000, type=int, help='Validation interval')
- self.parser.add_argument('--save_interval', default=None, type=int, help='Model checkpoint interval')
-
- # Discriminator flags
- self.parser.add_argument('--w_discriminator_lambda', default=0, type=float, help='Dw loss multiplier')
- self.parser.add_argument('--w_discriminator_lr', default=2e-5, type=float, help='Dw learning rate')
- self.parser.add_argument("--r1", type=float, default=10, help="weight of the r1 regularization")
- self.parser.add_argument("--d_reg_every", type=int, default=16,
- help="interval for applying r1 regularization")
- self.parser.add_argument('--use_w_pool', action='store_true',
- help='Whether to store a latent codes pool for the discriminator\'s training')
- self.parser.add_argument("--w_pool_size", type=int, default=50,
- help="W\'s pool size, depends on --use_w_pool")
-
- # e4e specific
- self.parser.add_argument('--delta_norm', type=int, default=2, help="norm type of the deltas")
- self.parser.add_argument('--delta_norm_lambda', type=float, default=2e-4, help="lambda for delta norm loss")
-
- # Progressive training
- self.parser.add_argument('--progressive_steps', nargs='+', type=int, default=None,
- help="The training steps of training new deltas. steps[i] starts the delta_i training")
- self.parser.add_argument('--progressive_start', type=int, default=None,
- help="The training step to start training the deltas, overrides progressive_steps")
- self.parser.add_argument('--progressive_step_every', type=int, default=2_000,
- help="Amount of training steps for each progressive step")
-
- # Save additional training info to enable future training continuation from produced checkpoints
- self.parser.add_argument('--save_training_data', action='store_true',
- help='Save intermediate training data to resume training from the checkpoint')
- self.parser.add_argument('--sub_exp_dir', default=None, type=str, help='Name of sub experiment directory')
- self.parser.add_argument('--keep_optimizer', action='store_true',
- help='Whether to continue from the checkpoint\'s optimizer')
- self.parser.add_argument('--resume_training_from_ckpt', default=None, type=str,
- help='Path to training checkpoint, works when --save_training_data was set to True')
- self.parser.add_argument('--update_param_list', nargs='+', type=str, default=None,
- help="Name of training parameters to update the loaded training checkpoint")
-
- def parse(self):
- opts = self.parser.parse_args()
- return opts
diff --git a/spaces/ahmedriad1/vehicle-identifier/README.md b/spaces/ahmedriad1/vehicle-identifier/README.md
deleted file mode 100644
index 4c3637978c8115032a83d0aae039b7084369dd33..0000000000000000000000000000000000000000
--- a/spaces/ahmedriad1/vehicle-identifier/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Vehicle Identifier
-emoji: ⚡
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/dataset_creation/pack_audios_to_hdf5s/__init__.py b/spaces/akhaliq/Music_Source_Separation/bytesep/dataset_creation/pack_audios_to_hdf5s/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/infinibatch/closablequeue.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/infinibatch/closablequeue.py
deleted file mode 100644
index 08a2a29690f9ebacae8576f78edd4a9132413ad1..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/infinibatch/closablequeue.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT license.
-
-from collections import deque
-from threading import Condition, Lock, Thread
-
-
-class ClosedException(Exception):
- pass
-
-
-class ClosableQueue:
- """
- A thread-safe queue that can be closed
-
- As long as the queue is not closed, it behaves just like a thread-safe queue with a capacity limit:
- - put blocks until the item can be added
- - get blocks until there is an item to be returned
-
- Once the queue is closed, no more items can be added but existing items can be removed:
- - put always raises a ClosedException
- - get returns an item if the queue is not empty and otherwise raises a ClosedException
- """
-
- def __init__(self, maxsize: int = 1000):
- self._maxsize = maxsize
- self._queue = deque()
- self._mutex = Lock()
- self._not_empty = Condition(self._mutex)
- self._not_full = Condition(self._mutex)
- self._closed = False
-
- def put(self, item):
- with self._not_full:
- if self._closed:
- raise ClosedException(
- "This queue has been closed, no more items can be added."
- )
- while len(self._queue) >= self._maxsize:
- self._not_full.wait()
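- # re-check after waking up: the queue may have been closed while this thread was blocked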
- if self._closed:
- raise ClosedException(
- "This queue has been closed, no more items can be added."
- )
- self._queue.append(item)
- self._not_empty.notify()
-
- def get(self):
- with self._not_empty:
- if self._closed and len(self._queue) == 0:
- raise ClosedException(
- "This queue has been closed and is empty, no more items can be retrieved."
- )
- while len(self._queue) == 0:
- self._not_empty.wait()
- if self._closed and len(self._queue) == 0:
- raise ClosedException(
- "This queue has been closed and is empty, no more items can be retrieved."
- )
- item = self._queue.popleft()
- self._not_full.notify()
- return item
-
- def close(self):
- with self._mutex:
- self._closed = True
- self._not_empty.notify_all()
- self._not_full.notify_all()
diff --git a/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/layers_537238KB.py b/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/layers_537238KB.py
deleted file mode 100644
index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000
--- a/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/layers_537238KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
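- # upsample by 2x; if a skip connection is given, center-crop it to match
- # and concatenate along the channel dimension before the conv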
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
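- # run the parallel branches (image-level pooling, 1x1 conv, dilated
- # separable convs), concatenate them and apply the bottleneck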
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/allknowingroger/Image-Models-Test188/app.py b/spaces/allknowingroger/Image-Models-Test188/app.py
deleted file mode 100644
index 19b80db34d95053ab94ec9097a16e8b96b2ab5bc..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test188/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "niyala/my-pet-dog-xzg",
- "papanton/lora-trained-xl-colab",
- "ShilpaManaji/dreambooth-project",
- "frankmoire/nyjha",
- "joachimsallstrom/aether-bubbles-foam-lora-for-sdxl",
- "Devil2D2/Model_anything_diffusers",
- "Yntec/AbsoluteRemix",
- "ranajithore/stable-diffusion-v2-1-trained-for-plant-cell-structure-diagram-without-captions-new",
- "Abhinandpv/dog",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
- output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt)
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
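- # watchdog polled every second: 60 seconds after a run starts, reset the
- # timer and flip tog_box so any still-pending runs are cancelled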
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
- # gr.Markdown("""- Primary prompt: what you want to draw (English words such as a cat; adding English commas works better; click the Improve button to refine it)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test76/README.md b/spaces/allknowingroger/Image-Models-Test76/README.md
deleted file mode 100644
index bafe734e7fc813c7e4e636049efdf882aeb183a5..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test76/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test75
----
-
-
\ No newline at end of file
diff --git a/spaces/altafalam3/Text-Summarizer/utils.py b/spaces/altafalam3/Text-Summarizer/utils.py
deleted file mode 100644
index e16c95418891126bd4f3d5573cbbc96a24c6b85b..0000000000000000000000000000000000000000
--- a/spaces/altafalam3/Text-Summarizer/utils.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import re
-import requests
-import docx2txt
-from io import StringIO
-from PyPDF2 import PdfFileReader
-
-from bs4 import BeautifulSoup
-from nltk.tokenize import sent_tokenize
-
-emoji_pattern = re.compile(
- "["
- u"\U0001F600-\U0001F64F" # emoticons
- u"\U0001F300-\U0001F5FF" # symbols & pictographs
- u"\U0001F680-\U0001F6FF" # transport & map symbols
- u"\U0001F1E0-\U0001F1FF" # flags (iOS)
- u"\U00002702-\U000027B0"
- u"\U000024C2-\U0001F251"
- "]+",
- flags=re.UNICODE,
-)
-
-
-def clean_text(x):
- # x = x.lower() # lowercase
- x = x.encode("ascii", "ignore").decode() # unicode
- x = re.sub(r"https*\S+", " ", x) # url
- x = re.sub(r"@\S+", " ", x) # mentions
- x = re.sub(r"#\S+", " ", x) # hashtags
- # x = x.replace("'", "") # remove ticks
- # x = re.sub("[%s]" % re.escape(string.punctuation), " ", x) # punctuation
- # x = re.sub(r"\w*\d+\w*", "", x) # numbers
- x = re.sub(r"\s{2,}", " ", x) # over spaces
- x = emoji_pattern.sub(r"", x) # emojis
- x = re.sub("[^.,!?A-Za-z0-9]+", " ", x) # special characters except .,!?
-
- return x
-
-
-def fetch_article_text(url: str):
-
- r = requests.get(url)
- soup = BeautifulSoup(r.text, "html.parser")
- results = soup.find_all(["h1", "p"])
- text = [result.text for result in results]
- ARTICLE = " ".join(text)
- ARTICLE = ARTICLE.replace(".", ".<eos>")
- ARTICLE = ARTICLE.replace("!", "!<eos>")
- ARTICLE = ARTICLE.replace("?", "?<eos>")
- sentences = ARTICLE.split("<eos>")
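- # greedily pack sentences into chunks of at most 500 words for downstream summarization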
- current_chunk = 0
- chunks = []
- for sentence in sentences:
- if len(chunks) == current_chunk + 1:
- if len(chunks[current_chunk]) + len(sentence.split(" ")) <= 500:
- chunks[current_chunk].extend(sentence.split(" "))
- else:
- current_chunk += 1
- chunks.append(sentence.split(" "))
- else:
- print(current_chunk)
- chunks.append(sentence.split(" "))
-
- for chunk_id in range(len(chunks)):
- chunks[chunk_id] = " ".join(chunks[chunk_id])
-
- return ARTICLE, chunks
-
-
-def preprocess_text_for_abstractive_summarization(tokenizer, text):
- sentences = sent_tokenize(text)
-
- # initialize
- length = 0
- chunk = ""
- chunks = []
- count = -1
- for sentence in sentences:
- count += 1
- combined_length = (
- len(tokenizer.tokenize(sentence)) + length
- ) # add the no. of sentence tokens to the length counter
-
- if combined_length <= tokenizer.max_len_single_sentence: # if it doesn't exceed
- chunk += sentence + " " # add the sentence to the chunk
- length = combined_length # update the length counter
-
- # if it is the last sentence
- if count == len(sentences) - 1:
- chunks.append(chunk.strip()) # save the chunk
-
- else:
- chunks.append(chunk.strip()) # save the chunk
-
- # reset
- length = 0
- chunk = ""
-
- # take care of the overflow sentence
- chunk += sentence + " "
- length = len(tokenizer.tokenize(sentence))
-
- return chunks
-
-
-def read_pdf(file):
- pdfReader = PdfFileReader(file)
- count = pdfReader.numPages
- all_page_text = ""
- for i in range(count):
- page = pdfReader.getPage(i)
- all_page_text += page.extractText()
-
- return all_page_text
-
-
-def read_text_from_file(file):
-
- # read text file
- if file.type == "text/plain":
- # To convert to a string based IO:
- stringio = StringIO(file.getvalue().decode("utf-8"))
-
- # To read file as string:
- file_content = stringio.read()
-
- # read pdf file
- elif file.type == "application/pdf":
- file_content = read_pdf(file)
-
- # read docx file
- elif (
- file.type
- == "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
- ):
- file_content = docx2txt.process(file)
-
- return file_content
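
The helpers in the deleted `utils.py` above clean scraped text, split it into model-sized chunks (word-based in `fetch_article_text`, token-aware in `preprocess_text_for_abstractive_summarization`), and read uploaded files. A minimal sketch of how they might be wired into a summarization run is shown below; the `t5-small` checkpoint and the `summarization` pipeline are assumptions for illustration, not part of the original Space.

```python
# Hypothetical usage sketch for the helpers above; the model choice is an assumption.
import nltk
from transformers import AutoTokenizer, pipeline

from utils import clean_text, fetch_article_text, preprocess_text_for_abstractive_summarization

nltk.download("punkt")  # sent_tokenize inside utils.py needs the punkt data

tokenizer = AutoTokenizer.from_pretrained("t5-small")
summarizer = pipeline("summarization", model="t5-small", tokenizer=tokenizer)

article, _word_chunks = fetch_article_text("https://example.com/article")  # placeholder URL
cleaned = clean_text(article)

# Token-aware chunks that stay under the tokenizer's single-sequence limit.
chunks = preprocess_text_for_abstractive_summarization(tokenizer, cleaned)
summaries = [summarizer(c, max_length=120, min_length=30)[0]["summary_text"] for c in chunks]
print(" ".join(summaries))
```
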
diff --git a/spaces/alvin888/GeoGenie/app.py b/spaces/alvin888/GeoGenie/app.py
deleted file mode 100644
index 5772c3f9587e2518c3cbf761f6f10b2945e57fad..0000000000000000000000000000000000000000
--- a/spaces/alvin888/GeoGenie/app.py
+++ /dev/null
@@ -1,488 +0,0 @@
-import os
-import random
-import time
-from pathlib import Path
-import re
-# from types import SimpleNamespace
-import gradio as gr
-import psutil
-from about_time import about_time
-from loguru import logger
-import torch
-import json
-from transformers import AutoTokenizer, BitsAndBytesConfig, AutoModelForCausalLM, TextStreamer, TextIteratorStreamer, \
- pipeline
-import librosa
-import numpy as np  # used by transcribe() below
-from threading import Thread
-import sys
-from functions import *
-from sentence_transformers import SentenceTransformer, util
-
-runtimeFlag = device = "cuda:0" if torch.cuda.is_available() else "cpu"
-runtime = "gpu"
-prompt_template = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
-
-### Instruction: {user_prompt}
-
-### Response:
-"""
-
-prompt_template = """System: You are a helpful,
-respectful and honest assistant. Always answer as
-helpfully as possible, while being safe. Your answers
-should not include any harmful, unethical, racist,
-sexist, toxic, dangerous, or illegal content. Please
-ensure that your responses are socially unbiased and
-positive in nature. If a question does not make any
-sense, or is not factually coherent, explain why instead
-of answering something not correct. If you don't know
-the answer to a question, please don't share false
-information.
-User: {prompt}
-Assistant: """
-
-prompt_template = """System: You are a helpful assistant.
-User: {prompt}
-Assistant: """
-
-prompt_template = """Question: {question}
-Answer: Let's work this out in a step by step way to be sure we have the right answer."""
-
-prompt_template = """[INST] <>
-You are a helpful, respectful and honest assistant. Always answer as helpfully as possible assistant. Think step by step.
-<>
-
-What NFL team won the Super Bowl in the year Justin Bieber was born?
-[/INST]"""
-
-prompt_template = """[INST] <>
-You are an helpful assistant. Always answer as helpfully as possible. Think step by step. <>
-
-{question} [/INST]
-"""
-
-# prompt_template = """[INST] <>
-# You are a helpful assistant.
-# <>
-
-# {question} [/INST]
-# """
-
-cache_dir = "huggingface_cache"
-model_id = "Trelis/Llama-2-7b-chat-hf-function-calling-v2"
-
-# Load the model in 4-bit
-bnb_config = BitsAndBytesConfig(
- load_in_4bit=True,
- bnb_4bit_use_double_quant=True, # adds speed with minimal loss of quality.
- bnb_4bit_quant_type="nf4",
- bnb_4bit_compute_dtype=torch.bfloat16,
- # bnb_4bit_compute_dtype=torch.float16,
-
-)
-model = AutoModelForCausalLM.from_pretrained(
- model_id,
- quantization_config=bnb_config,
- device_map='auto', # for inference use 'auto', for training use device_map={"":0}
- # device_map=runtimeFlag,
- trust_remote_code=True,
- # rope_scaling = {"type": "dynamic", "factor": 2.0}, # allows for a max sequence length of 8192 tokens
- cache_dir=cache_dir)
-
-# model = None
-
-tokenizer = AutoTokenizer.from_pretrained(model_id, cache_dir=cache_dir, use_fast=True)
-
-model_st = SentenceTransformer('sentence-transformers/all-mpnet-base-v2', device=device)
-
-# transcribe
-transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en", device=device)
-# transcriber = None
-
-# load functions
-with open("functions.json", 'r') as file:
- funcs = json.load(file)
-
-format_output = {
- "name": "function_name",
- "paprameters": {
- "property 1": "property_value",
- "property 2": "property_value",
- "property 3": "property_value",
- },
-}
-# get all sentences
-sentences = [func["name"].replace("_", " ") + ". " + func["description"] for func in funcs]
-categories = ["all"] + list(set([cat for func in funcs for cat in func["categories"]]))
-
-
-def stream(user_prompt, category="all"):
- # Define the roles and markers
- B_INST, E_INST = "[INST]\n\n", "[/INST]\n\n"
-    B_FUNC, E_FUNC = "<FUNCTIONS>", "</FUNCTIONS>\n\n"
-    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
- top_k = 3
- if category == "all":
- category_funcs = funcs
- top_k = 5
- else:
- category_funcs = [item for item in funcs if category in item['categories']]
- sentences = [func["name"].replace("_", " ") + ". " + func["description"] for func in category_funcs]
- corpus_embeddings = util.normalize_embeddings(model_st.encode(sentences, convert_to_tensor=True).to(device))
- query_embeddings = util.normalize_embeddings(model_st.encode([user_prompt], convert_to_tensor=True).to(device))
- hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=top_k, score_function=util.dot_score)
- topn_func = [category_funcs[hit["corpus_id"]] for hit in hits[0]]
-
- functionList = ''
- for func in topn_func:
- functionList += json.dumps(func, indent=4, separators=(',', ': '))
- format_output_promt = json.dumps(format_output, indent=4, separators=(',', ': '))
- system_prompt = "To call a function, respond - immediately and only - with a JSON object of must have the following format:\n" + format_output_promt + "\nThe answer must contain only properties.\nThe answer must contain only words from the question.\nHint: The values in parameters are usually nouns."
-
- prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST}{B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()}{E_INST}\n\n"
- inputs = tokenizer([prompt], return_tensors="pt").to(runtimeFlag)
- streamer = TextStreamer(tokenizer)
- result = model.generate(**inputs, streamer=streamer, max_new_tokens=500)
- return result, topn_func
-
-
-def stream2(user_prompt, response_func):
- B_INST, E_INST = "[INST]\n\n", "[/INST]\n\n"
-    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
-
- format_output_promt = json.dumps(response_func, indent=4, separators=(',', ': '))
-
- system_prompt = f"You are a OneMap wayfinding assistant. Based on the answer in the JSON data available: {format_output_promt}, please provide concise answers to the question in writing, not in JSON format.\n Only include information found in the results and don't add any additional information.\n If the result returned from the JSON is not found then respond to {{object}} not found"
-
- prompt = f"{B_INST}{B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()}{E_INST}\n\n"
- print(prompt)
- inputs = tokenizer([prompt], return_tensors="pt").to(runtimeFlag)
- streamer = TextIteratorStreamer(tokenizer)
- generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=5000)
- thread = Thread(target=model.generate, kwargs=generation_kwargs)
- thread.start()
- return streamer
-
-
-def generate(
- question: str,
- category: str,
-):
- """Run model inference, will return a Generator if streaming is true."""
- # Call the model with functions
- result, topn_func = stream(question, category)
- response_func = json.loads(tokenizer.decode(result[0]).split("[/INST]\n\n")[-1][:-4])
- all_name_topn_func = [func["name"] for func in topn_func]
- name_func = response_func.get("name", None)
- if (name_func is None) or (name_func not in all_name_topn_func):
- name_func = all_name_topn_func[0]
- print("update name_func")
-
- # call API
- if name_func == "get_mariral_status":
- answer = get_mariral_status(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_language_literate":
- answer = get_language_literate(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_economic_status":
- answer = get_economic_status(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"],
- gender=response_func["parameters"]["gender"])
- elif name_func == "get_education_attending":
- answer = get_education_attending(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_ethnic_distribution":
- answer = get_ethnic_distribution(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"],
- gender=response_func["parameters"]["gender"])
- elif name_func == "get_household_monthly_income_work":
- answer = get_household_monthly_income_work(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_household_size":
- answer = get_household_size(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_household_structure":
- answer = get_household_structure(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_income_from_work":
- answer = get_income_from_work(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_industry":
- answer = get_industry(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_mode_of_transport_school":
- answer = get_mode_of_transport_school(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_mode_of_transport_work":
- answer = get_mode_of_transport_work(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_population_age_group":
- answer = get_population_age_group(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"],
- gender=response_func["parameters"]["gender"])
- elif name_func == "get_religion":
- answer = get_religion(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_spoken_at_home":
- answer = get_spoken_at_home(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_tenancy":
- answer = get_tenancy(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_type_of_dwelling_household":
- answer = get_type_of_dwelling_household(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "get_type_of_dwelling_population":
- answer = get_type_of_dwelling_population(planingArea=response_func["parameters"]["planningArea"],
- year=response_func["parameters"]["year"])
- elif name_func == "search":
- answer = search(searchVal=response_func["parameters"]["searchVal"],
- returnGeom=response_func["parameters"]["returnGeom"],
- getAddrDetails=response_func["parameters"]["getAddrDetails"],
- pageNum=response_func["parameters"]["pageNum"])
- elif name_func == "revgeocodexy":
- answer = revgeocodexy(latitude=response_func["parameters"]["latitude"],
- longtitude=response_func["parameters"]["longtitude"],
- buffer=response_func["parameters"]["buffer"],
- addressType=response_func["parameters"]["addressType"],
- otherFeatures=response_func["parameters"]["otherFeatures"])
- elif name_func == "get_location_from_planning_area":
- answer = get_the_near_location(currentLocation=response_func["parameters"]["planningArea"],
- targetLocation=response_func["parameters"]["targetLocation"])
- elif name_func == "get_suitable_location_from_list_of_locations":
- answer = get_suitable_location(targetLocation=response_func["parameters"]["targetLocation"],
- featureLocations=response_func["parameters"]["featureLocations"],
- distance_requirement=8)
- elif name_func == "get_location_from_language":
- answer = get_location_from_language(location=response_func["parameters"]["targetLocation"],
- language=response_func["parameters"]["language"],
- year=response_func["parameters"]["year"])
- # Recall model with API result
- return stream2(question, str(answer))
-
- # prompt = prompt_template.format(question=question)
- # inputs = tokenizer([prompt], return_tensors="pt").to(runtimeFlag)
- # streamer = TextIteratorStreamer(tokenizer)
- # generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=5000)
- # thread = Thread(target=model.generate, kwargs=generation_kwargs)
- # thread.start()
- # # generated_text =""
- # # for new_text in streamer:
- # # generated_text += new_text
- # return streamer
-
-
-def transcribe(audio):
- y, sr = librosa.load(audio, sr=16000)
- y = y.astype(np.float32)
- y /= np.max(np.abs(y))
- text = transcriber({"sampling_rate": sr, "raw": y})["text"]
- return text
-
-
-def user(user_message, input_audio_mic, history):
- # return user_message, history + [[user_message, None]]
- if user_message == "":
- user_message = transcribe(input_audio_mic)
- history.append([user_message, None])
- return user_message, history # keep user_message
-
-
-def user1(user_message, input_audio_mic, history):
- # return user_message, history + [[user_message, None]]
- if user_message == "":
- user_message = transcribe(input_audio_mic)
- history.append([user_message, None])
- return "", history # clear user_message
-
-
-def bot_(history):
- user_message = history[-1][0]
- resp = random.choice(["How are you?", "I love you", "I'm very hungry"])
- bot_message = user_message + ": " + resp
- history[-1][1] = ""
- for character in bot_message:
- history[-1][1] += character
- time.sleep(0.02)
- yield history
-
- history[-1][1] = resp
- yield history
-
-
-def bot(history, category):
- user_message = history[-1][0]
- response = []
-
- logger.debug(f"{user_message=}")
-
- with about_time() as atime: # type: ignore
- flag = idx = 1
- prefix = ""
- then = time.time()
-
- logger.debug("about to generate")
- pattern = r'\{.*?\}|\{|\}'
-
- for elm in generate(user_message, category):
- if idx == 1:
- idx = 0
- continue
- if flag == 1:
- logger.debug("in the loop")
- prefix = f"({time.time() - then:.2f}s) "
- flag = 0
- print(prefix, end="", flush=True)
- logger.debug(f"{prefix=}")
- elm = re.sub(pattern, '', elm)
- print(elm, end="", flush=True)
- # logger.debug(f"{elm}")
- response.append(elm)
- history[-1][1] = prefix + "".join(response)
- yield history
-
- _ = (
- f"(time elapsed: {atime.duration_human}, " # type: ignore
- f"{atime.duration / len(''.join(response)):.2f}s/char)" # type: ignore
- )
-
- history[-1][1] = "".join(response) + f"\n{_}"
- yield history
-
-
-css = """
- .importantButton {
- background: linear-gradient(45deg, #7e0570,#5d1c99, #6e00ff) !important;
- border: none !important;
- }
- .importantButton:hover {
- background: linear-gradient(45deg, #ff00e0,#8500ff, #6e00ff) !important;
- border: none !important;
- }
- .disclaimer {font-variant-caps: all-small-caps; font-size: xx-small;}
- .xsmall {font-size: x-small;}
-
-"""
-etext = """In America, where cars are an important part of the national psyche, a decade ago people had suddenly started to drive less, which had not happened since the oil shocks of the 1970s. """
-examples_list = [
- ["I'm sitting at Bedok. Are there any hotel nearby?"],
- ["Tell me the monthly family income of Bedok in 2020"],
- ["How many males were married in Bedok in 2020?"],
- ["How many females live in bedok in 2020 ?"],
- ["Please indicate the most used means of transport in Bedok in 2015 ?"],
- ["I want to find a flat, that is near the feature locations: gym, childcare center, supermarket, and MRT station"],
- ["Find a flat where people speak Mandarin."],
-
-]
-
-logger.info("start block")
-
-with gr.Blocks(
- title="GeoGenie",
- theme=gr.themes.Base(),
- css=css,
-) as block:
- title = """
-GeoGenie: OneMap Multimodal ChatBot Demo
-"""
- gr.Markdown(title)
- # chatbot = gr.Chatbot().style(height=500) # 500
- chatbot = gr.Chatbot(height=500)
-
- # buff = gr.Textbox(show_label=False, visible=True)
-
- with gr.Row():
- with gr.Column(scale=5):
- with gr.Row():
- with gr.Column(scale=5):
- input_audio_mic = gr.Audio(
- label="Input speech",
- type="filepath",
- # source="microphone",
- # streaming=True
- )
- transcribe_button = gr.Button("Transcribe", elem_classes="xsmall")
-
- msg = gr.Textbox(
- label="Chat Message Box",
- placeholder="Ask me anything (press Shift+Enter or click Submit to send)",
- show_label=False,
- # container=False,
- lines=3,
- max_lines=20,
- show_copy_button=True,
- # ).style(container=False)
- )
-
- with gr.Column(scale=1, min_width=50):
- with gr.Row():
- category = gr.Dropdown(
- label="category",
- choices=sorted(categories),
- interactive=True,
- value="all",
- )
- submit = gr.Button("Submit", elem_classes="xsmall")
- stop = gr.Button("Stop", visible=True)
- clear = gr.Button("Clear History", visible=True)
-
- with gr.Row(visible=False):
- with gr.Accordion("Advanced Options:", open=False):
- with gr.Row():
- with gr.Column(scale=2):
- system = gr.Textbox(
- label="System Prompt",
- value=prompt_template,
- show_label=False,
- container=False,
- # ).style(container=False)
- )
- with gr.Column():
- with gr.Row():
- change = gr.Button("Change System Prompt")
- reset = gr.Button("Reset System Prompt")
-
- transcribe_button.click(fn=transcribe,
- inputs=[input_audio_mic],
- outputs=[msg],
- queue=False)
-
- with gr.Accordion("Example Inputs", open=True):
- examples = gr.Examples(
- examples=examples_list,
- inputs=[msg],
- examples_per_page=40,
- )
-
- msg_submit_event = msg.submit(
- # fn=conversation.user_turn,
- fn=user,
- inputs=[msg, input_audio_mic, chatbot],
- outputs=[msg, chatbot],
- queue=True,
- show_progress="full",
- # api_name=None,
- ).then(bot, [chatbot, category], [chatbot], queue=True)
- submit_click_event = submit.click(
- # fn=lambda x, y: ("",) + user(x, y)[1:], # clear msg
- fn=user1, # clear msg
- inputs=[msg, input_audio_mic, chatbot],
- outputs=[msg, chatbot],
- queue=True,
- # queue=False,
- show_progress="full",
- # api_name=None,
- ).then(bot, [chatbot, category], [chatbot], queue=True)
- stop.click(
- fn=None,
- inputs=None,
- outputs=None,
- cancels=[msg_submit_event, submit_click_event],
- queue=False,
- )
- clear.click(lambda: None, None, chatbot, queue=False)
-
-concurrency_count = 1
-logger.info(f"{concurrency_count=}")
-
-# block.queue(concurrency_count=concurrency_count, max_size=5).launch(debug=False, share=True)
-block.queue(max_size=5).launch(debug=False)
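
Before calling the function-calling model, `stream()` above narrows the candidate functions by embedding each entry's name and description with sentence-transformers and running a normalized dot-product search against the user prompt. A hedged sketch of just that retrieval step follows; the three function entries are invented placeholders, not the Space's real `functions.json`.

```python
# Sketch of the retrieval step used in stream(); the function entries are placeholders.
from sentence_transformers import SentenceTransformer, util

funcs = [
    {"name": "get_population_age_group", "description": "Population by age group for a planning area."},
    {"name": "get_mode_of_transport_work", "description": "How residents of a planning area travel to work."},
    {"name": "search", "description": "Free-text search for addresses and places."},
]

model_st = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
sentences = [f["name"].replace("_", " ") + ". " + f["description"] for f in funcs]

corpus = util.normalize_embeddings(model_st.encode(sentences, convert_to_tensor=True))
query = util.normalize_embeddings(model_st.encode(["How do people get to work in Bedok?"], convert_to_tensor=True))

hits = util.semantic_search(query, corpus, top_k=2, score_function=util.dot_score)
print([funcs[hit["corpus_id"]]["name"] for hit in hits[0]])  # the transport function should rank first
```
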
diff --git a/spaces/anasanchezf/cloome/src/training/datasets.py b/spaces/anasanchezf/cloome/src/training/datasets.py
deleted file mode 100644
index 19428330265cd30408f3b3f6dca1706aac5f0894..0000000000000000000000000000000000000000
--- a/spaces/anasanchezf/cloome/src/training/datasets.py
+++ /dev/null
@@ -1,240 +0,0 @@
-import os
-import torch
-import numpy as np
-import pandas as pd
-from pathlib import Path
-from scipy.io import mmread
-from torchvision.transforms import Compose
-from torch.utils.data import Dataset
-
-class CellPainting(Dataset):
- def __init__(self, sample_index_file: str, image_directory_path: str = None, molecule_file: str = None, label_matrix_file: str = None,
- label_row_index_file: str = None, label_col_index_file: str = None, auxiliary_labels=None,
- transforms=None, group_views: bool = False,
- subset: float = 1., num_classes: int = None, verbose: bool = False):
- """ Read samples from cellpainting dataset."""
- self.verbose = verbose
- self.molecules = False
- self.images = False
-
- assert (os.path.exists(sample_index_file))
- print(image_directory_path)
- print(molecule_file)
- # Read sample index
- sample_index = pd.read_csv(sample_index_file, sep=",", header=0)
- sample_index.set_index(["SAMPLE_KEY"])
-
- # read auxiliary labels if provided
- if auxiliary_labels is not None:
- pddata = pd.read_csv(auxiliary_labels, sep=",", header=0)
-            self.auxiliary_data = pddata.values[:, 2:].astype(np.float32)
- # threshold
- self.auxiliary_data[self.auxiliary_data < 0.75] = -1
- self.auxiliary_data[self.auxiliary_data >= 0.75] = 1
- self.auxiliary_assays = list(pddata)[2:]
- self.n_auxiliary_classes = len(self.auxiliary_assays)
- self.auxiliary_smiles = pddata["SMILES"].tolist()
- else:
- self.n_auxiliary_classes = 0
-
- if image_directory_path:
- self.images = True
- assert (os.path.exists(image_directory_path))
-
- if group_views:
- sample_groups = sample_index.groupby(['PLATE_ID', 'WELL_POSITION'])
- sample_keys = list(sample_groups.groups.keys())
- sample_index = sample_groups
- self.sample_to_smiles = None # TODO
- else:
- sample_keys = sample_index['SAMPLE_KEY'].tolist()
-
- if auxiliary_labels is not None:
- self.sample_to_smiles = dict(zip(sample_index.SAMPLE_KEY, [self.auxiliary_smiles.index(s) for s in sample_index.SMILES]))
- else:
- self.sample_to_smiles = None
-
- if molecule_file:
- self.molecules = True
-
- assert (os.path.exists(molecule_file))
-
- molecule_df = pd.read_hdf(molecule_file, key="df")
- #molecule_objs = {index: row.values for index, row in molecule_df.iterrows()}
-
- #keys = list(set(sample_keys) & set(list(molecule_df.index.values)))
- mol_keys = list(molecule_df.index.values)
-
- if self.images and self.molecules:
- keys = list(set(sample_keys) & set(list(molecule_df.index.values)))
- elif self.images:
- keys = sample_keys
- elif self.molecules:
- keys = mol_keys
-
-
- if len(keys) == 0:
- raise Exception("Empty dataset!")
- else:
- self.log("Found {} samples".format(len(keys)))
-
- if subset != 1.:
- sample_keys = sample_keys[:int(len(sample_keys) * subset)]
-
- # Read Label Matrix if specified
- if label_matrix_file is not None:
- assert (os.path.exists(label_matrix_file))
-
- assert (os.path.exists(label_row_index_file))
-
- assert (os.path.exists(label_col_index_file))
-
-
- if label_row_index_file is not None and label_col_index_file is not None:
- col_index = pd.read_csv(label_col_index_file, sep=",", header=0)
- row_index = pd.read_csv(label_row_index_file, sep=",", header=0)
- label_matrix = mmread(label_matrix_file).tocsr()
- # --
- self.label_matrix = label_matrix
- self.row_index = row_index
- self.col_index = col_index
- if group_views:
- self.label_dict = dict(
- (key, sample_groups.get_group(key).iloc[0].ROW_NR_LABEL_MAT) for key in sample_keys)
- else:
- self.label_dict = dict(zip(sample_index.SAMPLE_KEY, sample_index.ROW_NR_LABEL_MAT))
- self.n_classes = label_matrix.shape[1]
- else:
- raise Exception("If label is specified index files must be passed!")
- else:
- self.label_matrix = None
- self.row_index = None
- self.col_index = None
- self.label_dict = None
- self.n_classes = num_classes
-
- if auxiliary_labels is not None:
- self.n_classes += self.n_auxiliary_classes
-
- # expose everything important
- self.data_directory = image_directory_path
- self.sample_index = sample_index
- if self.molecules:
- self.molecule_objs = molecule_df
- self.keys = keys
- self.n_samples = len(keys)
- self.sample_keys = list(keys)
- self.group_views = group_views
- self.transforms = transforms
-
- # load first sample and check shape
- i = 0
-
- sample = self[i][0] if self.molecules else self[i] #getitem returns tuple of img and fp
-
-
- # while sample["input"] is np.nan and i < len(self):
- # sample = self[i][0] if self.molecules else self[i]
- # i += 1
- #
- # if sample["input"] is not None and not np.nan:
- # self.data_shape = sample["input"].shape
- # else:
- # self.data_shape = "Unknown"
- # self.log("Discovered {} samples (subset={}) with shape {}".format(self.n_samples, subset, self.data_shape))
-
-
- def __len__(self):
- return len(self.keys)
-
-## TODO: Clean!
- def __getitem__(self, idx):
- sample_key = self.keys[idx]
-
-
- if self.molecules and self.images:
- mol = self.molecule_objs.loc[sample_key].values
- img = self.read_img(sample_key)
- # mol = list(self.molecule_objs.loc[sample_key].values)
- return img, mol
- elif self.images:
- img = self.read_img(sample_key)
- return img
- elif self.molecules:
- mol = self.molecule_objs.loc[sample_key].values
- return mol
-
-
- @property
- def shape(self):
- return self.data_shape
-
- @property
- def num_classes(self):
- return self.n_classes
-
- def log(self, message):
- if self.verbose:
- print(message)
-
-
- def read_img(self, key):
- if self.group_views:
- X = self.load_view_group(key)
- else:
- filepath = os.path.join(self.data_directory, "{}.npz".format(key))
- if os.path.exists(filepath):
- X = self.load_view(filepath=filepath)
-
- index = int(np.where(self.sample_index["SAMPLE_KEY"]==key)[0])
-
- #cpd = str(self.sample_index["CPD_NAME"])
-
- else:
- #print("ERROR: Missing sample '{}'".format(key))
- return dict(input=np.nan, ID=key)
-
- if self.transforms:
- X = self.transforms(X)
-
- # get label
- if self.label_dict is not None:
- label_idx = self.label_dict[key]
- y = self.label_matrix[label_idx].toarray()[0].astype(np.float32)
- if self.sample_to_smiles is not None and key in self.sample_to_smiles:
- y = np.concatenate([y, self.auxiliary_data[self.sample_to_smiles[key], :]])
-
- return dict(input=X, target=y, ID=key)
- else:
- return dict(input=X, row_id=index, ID=key)
-
-
- def get_sample_keys(self):
- return self.sample_keys.copy()
-
- def load_view(self, filepath):
- """Load all channels for one sample"""
- npz = np.load(filepath, allow_pickle=True)
- if "sample" in npz:
- image = npz["sample"].astype(np.float32)
- #image_reshaped = np.transpose(image, (2, 0, 1))
- # for c in range(image.shape[-1]):
- # image[:, :, c] = (image[:, :, c] - image[:, :, c].mean()) / image[:, :, c].std()
- # image[:, :, c] = ((image[:, :, c] - image[:, :, c].mean()) / image[:, :, c].std() * 255).astype(np.uint8)
- # image = (image - image.mean()) / image.std()
- return image
-
- return None
-
- def load_view_group(self, groupkey):
- result = np.empty((1040, 2088 - 12, 5), dtype=np.uint8)
- viewgroup = self.sample_index.get_group(groupkey)
- for i, view in enumerate(viewgroup.sort_values("SITE", ascending=True).iterrows()):
- corner = (0 if int(i / 3) == 0 else 520, i % 3 * 692)
- filepath = os.path.join(self.data_directory, "{}.npz".format(view[1].SAMPLE_KEY))
- v = self.load_view(filepath=filepath)[:, 4:, :]
- # for j in range(v.shape[-1]):
- # plt.imshow(v[:, :, j])
- # plt.savefig("{}-{}-{}-{}.png".format(groupkey[0], groupkey[1], i, j))
- result[corner[0]:corner[0] + 520, corner[1]:corner[1] + 692, :] = v
- return result
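
For orientation, here is a hedged sketch of how the `CellPainting` dataset above might be instantiated and batched. Every path is a placeholder and the transform is an assumption; the real index, NPZ, and HDF files belong to the original training setup.

```python
# Hypothetical instantiation of CellPainting; all file paths below are placeholders.
import torch
from torch.utils.data import DataLoader

from datasets import CellPainting  # the module defined above

dataset = CellPainting(
    sample_index_file="index/train_index.csv",      # placeholder path
    image_directory_path="images/npzs",             # placeholder path
    molecule_file="molecules/fingerprints.hdf",     # placeholder path
    transforms=lambda img: torch.from_numpy(img).permute(2, 0, 1),  # HWC float32 array -> CHW tensor
    verbose=True,
)

loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=0)
img_batch, mol_batch = next(iter(loader))
# With both modalities set, __getitem__ returns (image dict, fingerprint row),
# so img_batch is a dict with batched "input"/"row_id"/"ID" entries.
```
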
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/preprocess.py b/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/preprocess.py
deleted file mode 100644
index e1902115c97a076ace06e07f3a2e94085cb707cf..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/preprocess.py
+++ /dev/null
@@ -1,230 +0,0 @@
-import os
-from PIL import Image, ImageOps
-import math
-import platform
-import sys
-import tqdm
-import time
-
-from modules import paths, shared, images, deepbooru
-from modules.shared import opts, cmd_opts
-from modules.textual_inversion import autocrop
-
-
-def preprocess(id_task, process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru=False, split_threshold=0.5, overlap_ratio=0.2, process_focal_crop=False, process_focal_crop_face_weight=0.9, process_focal_crop_entropy_weight=0.3, process_focal_crop_edges_weight=0.5, process_focal_crop_debug=False, process_multicrop=None, process_multicrop_mindim=None, process_multicrop_maxdim=None, process_multicrop_minarea=None, process_multicrop_maxarea=None, process_multicrop_objective=None, process_multicrop_threshold=None):
- try:
- if process_caption:
- shared.interrogator.load()
-
- if process_caption_deepbooru:
- deepbooru.model.start()
-
- preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru, split_threshold, overlap_ratio, process_focal_crop, process_focal_crop_face_weight, process_focal_crop_entropy_weight, process_focal_crop_edges_weight, process_focal_crop_debug, process_multicrop, process_multicrop_mindim, process_multicrop_maxdim, process_multicrop_minarea, process_multicrop_maxarea, process_multicrop_objective, process_multicrop_threshold)
-
- finally:
-
- if process_caption:
- shared.interrogator.send_blip_to_ram()
-
- if process_caption_deepbooru:
- deepbooru.model.stop()
-
-
-def listfiles(dirname):
- return os.listdir(dirname)
-
-
-class PreprocessParams:
- src = None
- dstdir = None
- subindex = 0
- flip = False
- process_caption = False
- process_caption_deepbooru = False
- preprocess_txt_action = None
-
-
-def save_pic_with_caption(image, index, params: PreprocessParams, existing_caption=None):
- caption = ""
-
- if params.process_caption:
- caption += shared.interrogator.generate_caption(image)
-
- if params.process_caption_deepbooru:
- if len(caption) > 0:
- caption += ", "
- caption += deepbooru.model.tag_multi(image)
-
- filename_part = params.src
- filename_part = os.path.splitext(filename_part)[0]
- filename_part = os.path.basename(filename_part)
-
- basename = f"{index:05}-{params.subindex}-{filename_part}"
- image.save(os.path.join(params.dstdir, f"{basename}.png"))
-
- if params.preprocess_txt_action == 'prepend' and existing_caption:
- caption = existing_caption + ' ' + caption
- elif params.preprocess_txt_action == 'append' and existing_caption:
- caption = caption + ' ' + existing_caption
- elif params.preprocess_txt_action == 'copy' and existing_caption:
- caption = existing_caption
-
- caption = caption.strip()
-
- if len(caption) > 0:
- with open(os.path.join(params.dstdir, f"{basename}.txt"), "w", encoding="utf8") as file:
- file.write(caption)
-
- params.subindex += 1
-
-
-def save_pic(image, index, params, existing_caption=None):
- save_pic_with_caption(image, index, params, existing_caption=existing_caption)
-
- if params.flip:
- save_pic_with_caption(ImageOps.mirror(image), index, params, existing_caption=existing_caption)
-
-
-def split_pic(image, inverse_xy, width, height, overlap_ratio):
- if inverse_xy:
- from_w, from_h = image.height, image.width
- to_w, to_h = height, width
- else:
- from_w, from_h = image.width, image.height
- to_w, to_h = width, height
- h = from_h * to_w // from_w
- if inverse_xy:
- image = image.resize((h, to_w))
- else:
- image = image.resize((to_w, h))
-
- split_count = math.ceil((h - to_h * overlap_ratio) / (to_h * (1.0 - overlap_ratio)))
- y_step = (h - to_h) / (split_count - 1)
- for i in range(split_count):
- y = int(y_step * i)
- if inverse_xy:
- splitted = image.crop((y, 0, y + to_h, to_w))
- else:
- splitted = image.crop((0, y, to_w, y + to_h))
- yield splitted
-
-# not using torchvision.transforms.CenterCrop because it doesn't allow float regions
-def center_crop(image: Image, w: int, h: int):
- iw, ih = image.size
- if ih / h < iw / w:
- sw = w * ih / h
- box = (iw - sw) / 2, 0, iw - (iw - sw) / 2, ih
- else:
- sh = h * iw / w
- box = 0, (ih - sh) / 2, iw, ih - (ih - sh) / 2
- return image.resize((w, h), Image.Resampling.LANCZOS, box)
-
-
-def multicrop_pic(image: Image, mindim, maxdim, minarea, maxarea, objective, threshold):
- iw, ih = image.size
- err = lambda w, h: 1-(lambda x: x if x < 1 else 1/x)(iw/ih/(w/h))
- wh = max(((w, h) for w in range(mindim, maxdim+1, 64) for h in range(mindim, maxdim+1, 64)
- if minarea <= w * h <= maxarea and err(w, h) <= threshold),
- key= lambda wh: (wh[0]*wh[1], -err(*wh))[::1 if objective=='Maximize area' else -1],
- default=None
- )
- return wh and center_crop(image, *wh)
-
-
-def preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru=False, split_threshold=0.5, overlap_ratio=0.2, process_focal_crop=False, process_focal_crop_face_weight=0.9, process_focal_crop_entropy_weight=0.3, process_focal_crop_edges_weight=0.5, process_focal_crop_debug=False, process_multicrop=None, process_multicrop_mindim=None, process_multicrop_maxdim=None, process_multicrop_minarea=None, process_multicrop_maxarea=None, process_multicrop_objective=None, process_multicrop_threshold=None):
- width = process_width
- height = process_height
- src = os.path.abspath(process_src)
- dst = os.path.abspath(process_dst)
- split_threshold = max(0.0, min(1.0, split_threshold))
- overlap_ratio = max(0.0, min(0.9, overlap_ratio))
-
- assert src != dst, 'same directory specified as source and destination'
-
- os.makedirs(dst, exist_ok=True)
-
- files = listfiles(src)
-
- shared.state.job = "preprocess"
- shared.state.textinfo = "Preprocessing..."
- shared.state.job_count = len(files)
-
- params = PreprocessParams()
- params.dstdir = dst
- params.flip = process_flip
- params.process_caption = process_caption
- params.process_caption_deepbooru = process_caption_deepbooru
- params.preprocess_txt_action = preprocess_txt_action
-
- pbar = tqdm.tqdm(files)
- for index, imagefile in enumerate(pbar):
- params.subindex = 0
- filename = os.path.join(src, imagefile)
- try:
- img = Image.open(filename).convert("RGB")
- except Exception:
- continue
-
- description = f"Preprocessing [Image {index}/{len(files)}]"
- pbar.set_description(description)
- shared.state.textinfo = description
-
- params.src = filename
-
- existing_caption = None
- existing_caption_filename = os.path.splitext(filename)[0] + '.txt'
- if os.path.exists(existing_caption_filename):
- with open(existing_caption_filename, 'r', encoding="utf8") as file:
- existing_caption = file.read()
-
- if shared.state.interrupted:
- break
-
- if img.height > img.width:
- ratio = (img.width * height) / (img.height * width)
- inverse_xy = False
- else:
- ratio = (img.height * width) / (img.width * height)
- inverse_xy = True
-
- process_default_resize = True
-
- if process_split and ratio < 1.0 and ratio <= split_threshold:
- for splitted in split_pic(img, inverse_xy, width, height, overlap_ratio):
- save_pic(splitted, index, params, existing_caption=existing_caption)
- process_default_resize = False
-
- if process_focal_crop and img.height != img.width:
-
- dnn_model_path = None
- try:
- dnn_model_path = autocrop.download_and_cache_models(os.path.join(paths.models_path, "opencv"))
- except Exception as e:
- print("Unable to load face detection model for auto crop selection. Falling back to lower quality haar method.", e)
-
- autocrop_settings = autocrop.Settings(
- crop_width = width,
- crop_height = height,
- face_points_weight = process_focal_crop_face_weight,
- entropy_points_weight = process_focal_crop_entropy_weight,
- corner_points_weight = process_focal_crop_edges_weight,
- annotate_image = process_focal_crop_debug,
- dnn_model_path = dnn_model_path,
- )
- for focal in autocrop.crop_image(img, autocrop_settings):
- save_pic(focal, index, params, existing_caption=existing_caption)
- process_default_resize = False
-
- if process_multicrop:
- cropped = multicrop_pic(img, process_multicrop_mindim, process_multicrop_maxdim, process_multicrop_minarea, process_multicrop_maxarea, process_multicrop_objective, process_multicrop_threshold)
- if cropped is not None:
- save_pic(cropped, index, params, existing_caption=existing_caption)
- else:
- print(f"skipped {img.width}x{img.height} image {filename} (can't find suitable size within error threshold)")
- process_default_resize = False
-
- if process_default_resize:
- img = images.resize_image(1, img, width, height)
- save_pic(img, index, params, existing_caption=existing_caption)
-
- shared.state.nextjob()
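
The `multicrop_pic` helper above scans candidate (width, height) pairs on a 64-pixel grid, keeps those inside the area bounds whose aspect-ratio error stays under `threshold`, and then picks either the largest or the best-fitting one before `center_crop` is applied. A small standalone check of that selection rule, with made-up numbers, might look like this:

```python
# Standalone check of multicrop_pic's (w, h) selection rule; all numbers are illustrative.
iw, ih = 1920, 1080                              # pretend source image size
mindim, maxdim = 512, 1024
minarea, maxarea = 512 * 512, 1024 * 768
threshold, objective = 0.1, "Maximize area"

err = lambda w, h: 1 - (lambda x: x if x < 1 else 1 / x)(iw / ih / (w / h))
candidates = [
    (w, h)
    for w in range(mindim, maxdim + 1, 64)
    for h in range(mindim, maxdim + 1, 64)
    if minarea <= w * h <= maxarea and err(w, h) <= threshold
]
best = max(candidates, key=lambda wh: (wh[0] * wh[1], -err(*wh))[:: 1 if objective == "Maximize area" else -1])
print(best, round(err(*best), 4))                # the crop size the source image would be center-cropped to
```
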
diff --git a/spaces/aphenx/bingo/src/components/chat-scroll-anchor.tsx b/spaces/aphenx/bingo/src/components/chat-scroll-anchor.tsx
deleted file mode 100644
index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000
--- a/spaces/aphenx/bingo/src/components/chat-scroll-anchor.tsx
+++ /dev/null
@@ -1,29 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { useInView } from 'react-intersection-observer'
-
-import { useAtBottom } from '@/lib/hooks/use-at-bottom'
-
-interface ChatScrollAnchorProps {
- trackVisibility?: boolean
-}
-
-export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) {
- const isAtBottom = useAtBottom()
- const { ref, entry, inView } = useInView({
- trackVisibility,
- delay: 100,
- rootMargin: '0px 0px -150px 0px'
- })
-
- React.useEffect(() => {
- if (isAtBottom && trackVisibility && !inView) {
- entry?.target.scrollIntoView({
- block: 'start'
- })
- }
- }, [inView, entry, isAtBottom, trackVisibility])
-
-  return <div ref={ref} className="h-px w-full" />
-}
diff --git a/spaces/aphenx/bingo/src/lib/hooks/use-bing.ts b/spaces/aphenx/bingo/src/lib/hooks/use-bing.ts
deleted file mode 100644
index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000
--- a/spaces/aphenx/bingo/src/lib/hooks/use-bing.ts
+++ /dev/null
@@ -1,173 +0,0 @@
-'use client'
-
-import { useState, useCallback, useEffect, useMemo } from 'react'
-import { useAtom, useAtomValue } from 'jotai'
-import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state'
-import { setConversationMessages } from './chat-history'
-import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types'
-import { nanoid } from '../utils'
-import { TTS } from '../bots/bing/tts'
-
-export function useBing(botId: BotId = 'bing') {
- const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId])
- const [enableTTS] = useAtom(voiceAtom)
- const speaker = useMemo(() => new TTS(), [])
- const [hash, setHash] = useAtom(hashAtom)
- const bingConversationStyle = useAtomValue(bingConversationStyleAtom)
- const [chatState, setChatState] = useAtom(chatAtom)
- const [input, setInput] = useState('')
-  const [attachmentList, setAttachmentList] = useState<FileItem[]>([])
-
- const updateMessage = useCallback(
- (messageId: string, updater: (message: ChatMessageModel) => void) => {
- setChatState((draft) => {
- const message = draft.messages.find((m) => m.id === messageId)
- if (message) {
- updater(message)
- }
- })
- },
- [setChatState],
- )
-
- const sendMessage = useCallback(
- async (input: string, options = {}) => {
- const botMessageId = nanoid()
- const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined
- setChatState((draft) => {
- const text = imageUrl ? `${input}\n\n` : input
- draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' })
- setAttachmentList([])
- })
- const abortController = new AbortController()
- setChatState((draft) => {
- draft.generatingMessageId = botMessageId
- draft.abortController = abortController
- })
- speaker.reset()
- await chatState.bot.sendMessage({
- prompt: input,
- imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? `https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl,
- options: {
- ...options,
- bingConversationStyle,
- },
- signal: abortController.signal,
- onEvent(event) {
- if (event.type === 'UPDATE_ANSWER') {
- updateMessage(botMessageId, (message) => {
- if (event.data.text.length > message.text.length) {
- message.text = event.data.text
- }
-
- if (event.data.spokenText && enableTTS) {
- speaker.speak(event.data.spokenText)
- }
-
- message.throttling = event.data.throttling || message.throttling
- message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions
- message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses
- })
- } else if (event.type === 'ERROR') {
- updateMessage(botMessageId, (message) => {
- message.error = event.error
- })
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- })
- } else if (event.type === 'DONE') {
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- })
- }
- },
- })
- },
- [botId, attachmentList, chatState.bot, setChatState, updateMessage],
- )
-
- const uploadImage = useCallback(async (imgUrl: string) => {
- setAttachmentList([{ url: imgUrl, status: 'loading' }])
- const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle)
- if (response?.blobId) {
- setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }])
- } else {
- setAttachmentList([{ url: imgUrl, status: 'error' }])
- }
- }, [chatState.bot])
-
- const resetConversation = useCallback(() => {
- chatState.bot.resetConversation()
- speaker.abort()
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }]
- draft.conversationId = nanoid()
- })
- }, [chatState.bot, setChatState])
-
- const stopGenerating = useCallback(() => {
- chatState.abortController?.abort()
- if (chatState.generatingMessageId) {
- updateMessage(chatState.generatingMessageId, (message) => {
- if (!message.text && !message.error) {
- message.text = 'Cancelled'
- }
- })
- }
- setChatState((draft) => {
- draft.generatingMessageId = ''
- })
- }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage])
-
- useEffect(() => {
- if (chatState.messages.length) {
- setConversationMessages(botId, chatState.conversationId, chatState.messages)
- }
- }, [botId, chatState.conversationId, chatState.messages])
-
- useEffect(() => {
- if (hash === 'reset') {
- resetConversation()
- setHash('')
- }
- }, [hash, setHash])
-
- const chat = useMemo(
- () => ({
- botId,
- bot: chatState.bot,
- isSpeaking: speaker.isSpeaking,
- messages: chatState.messages,
- sendMessage,
- setInput,
- input,
- resetConversation,
- generating: !!chatState.generatingMessageId,
- stopGenerating,
- uploadImage,
- setAttachmentList,
- attachmentList,
- }),
- [
- botId,
- bingConversationStyle,
- chatState.bot,
- chatState.generatingMessageId,
- chatState.messages,
- speaker.isSpeaking,
- setInput,
- input,
- setAttachmentList,
- attachmentList,
- resetConversation,
- sendMessage,
- stopGenerating,
- ],
- )
-
- return chat
-}
diff --git a/spaces/apsys/hetfit/app.py b/spaces/apsys/hetfit/app.py
deleted file mode 100644
index 047491996a6c9dc63136c0fe5177c760038adcd1..0000000000000000000000000000000000000000
--- a/spaces/apsys/hetfit/app.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import streamlit as st
-
-from nets.envs import SCI
-
-
-st.set_page_config(
- page_title="HET_sci",
- menu_items={
- 'About':'https://advpropsys.github.io'
- }
-)
-
-st.title('HETfit_scientific')
-st.markdown("#### Imagine a package which was engineered primarly for data driven plasma physics devices design, mainly low power hall effect thrusters, yup that's it"
- "\n### :orange[Don't be scared away though, it has much simpler interface than anything you ever used for such designs]")
-st.markdown('### Main concepts:')
-st.markdown( "- Each observational/design session is called an **environment**, for now it can be either RCI or SCI (Real or scaled interface)"
- "\n In this overview we will only touch SCI, since RCI is using PINNs which are different topic"
- "\n- You specify most of the run parameters on this object init, :orange[**including generation of new samples**] via GAN"
- "\n- You may want to generate new features, do it !"
- "\n- Want to select best features for more effctive work? Done!"
- "\n- Compile environment with your model of choice, can be ***any*** torch model or sklearn one"
- "\n- Train !"
- "\n- Plot, inference, save, export to jit/onnx, measure performance - **they all are one liners** "
- )
-st.markdown('### tl;dr \n- Create environment'
- '\n```run = SCI(*args,**kwargs)```'
- '\n - Generate features ```run.feature_gen()``` '
- '\n - Select features ```run.feature_importance()```'
- '\n - Compile env ```run.compile()```'
- '\n - Train model in env ```run.train()```'
- '\n - Inference, plot, performance, ex. ```run.plot3d()```'
-            '\n #### And yes, it will all work even without any additional arguments from the user besides column indexes'
- )
-st.write('Comparison with *arXiv:2206.04440v3*')
-col1, col2 = st.columns(2)
-col1.metric('Geometry accuracy on domain',value='83%',delta='15%')
-col2.metric('$d \mapsto h$ prediction',value='98%',delta='14%')
-
-st.header('Example:')
-
-st.markdown('Remember the indexes and column names in this example: $P$ - 1, $d$ - 3, $h$ - 3, $m_a$ - 6, $T$ - 7')
-st.code('run = SCI(*args,**kwargs)')
-
-run = SCI()
-st.code('run.feature_gen()')
-run.feature_gen()
-st.write('New features (indexes 0-22 are original samples, the rest are GAN-generated):',run.df.iloc[1:,9:].astype(float))
-st.write('Most of the real dataset is from *doi:10.2514/1.B37424*, hence the results mostly agree with it')
-st.code('run.feature_importance(run.df.iloc[1:,1:7].astype(float),run.df.iloc[1:,7]) # Clear and easy example')
-
-st.write(run.feature_importance(run.df.iloc[1:,1:6].astype(float),run.df.iloc[1:,6]))
-st.markdown(' As we can see, only $h$ and $d$ passed for the $m_a$ model; not only was this linear dependency proven experimentally, but now we also get it from a data-driven source')
-st.code('run.compile(idx=(1,3,7))')
-run.compile(idx=(1,3,7))
-st.code('run.train(epochs=10)')
-if st.button('Start Training⏳'):
- run.train(epochs=10)
- st.code('run.plot3d()')
- st.write(run.plot3d())
- st.code('run.performance()')
- st.write(run.performance())
-else:
- st.markdown('#')
-
-st.markdown('---\nTry it out yourself! Select a column from 1 to 10')
-
-
-number = st.number_input('Here',min_value=1, max_value=10, step=1)
-
-if number:
- if st.button('Compile And Train💅'):
- st.code(f'run.compile(idx=(1,3,{number}))')
- run.compile(idx=(1,3,number))
- st.code('run.train(epochs=10)')
- run.train(epochs=10)
- st.code('run.plot3d()')
- st.write(run.plot3d())
-
-
-
-st.markdown('In this intro we covered the simplest user flow of the HETFit package; the resulting data can be used to leverage PINN and analytical models of Hall effect thrusters'
-            '\n #### :orange[To cite, please contact the author at https://github.com/advpropsys]')
\ No newline at end of file
diff --git a/spaces/arcosx/CHO-cytotoxicity/app.py b/spaces/arcosx/CHO-cytotoxicity/app.py
deleted file mode 100644
index 297e488ddfeac164a3fb363392c1deedbfa7b996..0000000000000000000000000000000000000000
--- a/spaces/arcosx/CHO-cytotoxicity/app.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import gradio as gr
-import pandas as pd
-from rdkit import Chem
-from rdkit.Chem import AllChem
-import joblib
-
-model = joblib.load('CHO.pkl')
-
-def predict(smiles):
- if smiles.strip() == "":
- raise gr.Error("SMILES input error")
-
- mol = Chem.MolFromSmiles(smiles)
-    if mol is None:
- raise gr.Error("SMILES input error")
- mol_ECFP4 = list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024).ToBitString())
- preprocess_data = pd.DataFrame([mol_ECFP4])
- result = model.predict(preprocess_data)
- postprocess_data = '{:.2e}'.format(pow(10, result[0]))
- return postprocess_data
-
-
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- inputs=gr.Textbox(lines=2, label="Please enter SMILES for the compound")
- with gr.Row():
- btn = gr.Button(variant="primary",value="submit")
- clear_btn = gr.ClearButton(value="clear")
- with gr.Column():
- outputs=gr.Textbox(lines=1, label="Predicted CHO cytotoxicity of the chemical is:",info="Unit: mol/L")
-
- btn.click(predict, inputs=[inputs], outputs=[outputs])
- clear_btn.add([inputs,outputs])
-
- gr.Examples(
- [["O=C(O)CBr"],["O=CC(Br)(Br)Br"],["IC(Br)Br"]],
- [inputs],
- )
-
-
-demo.launch()
\ No newline at end of file
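
The `predict` function above featurizes a SMILES string as a 1024-bit ECFP4 (Morgan radius-2) fingerprint and back-transforms the model's log10 output into mol/L. A minimal sketch of those two steps, without the Gradio UI and with a stand-in value instead of the bundled `CHO.pkl` regressor, could look like this:

```python
# Sketch of the featurization and back-transform used above; the prediction value is a stand-in.
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem

def ecfp4_row(smiles: str) -> pd.DataFrame:
    """1024-bit ECFP4 fingerprint as a single-row DataFrame, matching the app's preprocessing."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("SMILES input error")
    bits = list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024).ToBitString())
    return pd.DataFrame([bits])

features = ecfp4_row("O=C(O)CBr")               # bromoacetic acid, one of the app's examples
log10_pred = -3.2                                # placeholder for model.predict(features)[0]
print("{:.2e}".format(pow(10, log10_pred)))      # back-transform to mol/L, prints 6.31e-04
```
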
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/xtransformers.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/xtransformers.py
deleted file mode 100644
index 1eb3f77269c0e7b718d350217796ec704543c681..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/xtransformers.py
+++ /dev/null
@@ -1,1259 +0,0 @@
-import math
-from collections import namedtuple
-from functools import partial
-from inspect import isfunction
-
-import torch
-import torch.nn.functional as F
-from einops import rearrange, repeat
-from torch import einsum, nn
-
-DEFAULT_DIM_HEAD = 64
-
-Intermediates = namedtuple("Intermediates", ["pre_softmax_attn", "post_softmax_attn"])
-
-LayerIntermediates = namedtuple(
- "Intermediates",
- [
- "hiddens",
- "attn_intermediates",
- "past_key_values",
- ],
-)
-
-
-# helpers
-
-
-def exists(val):
- return val is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def cast_tuple(val, depth):
- return val if isinstance(val, tuple) else (val,) * depth
-
-
-class always:
- def __init__(self, val):
- self.val = val
-
- def __call__(self, *args, **kwargs):
- return self.val
-
-
-class not_equals:
- def __init__(self, val):
- self.val = val
-
- def __call__(self, x, *args, **kwargs):
- return x != self.val
-
-
-class equals:
- def __init__(self, val):
- self.val = val
-
- def __call__(self, x, *args, **kwargs):
- return x == self.val
-
-
-def max_neg_value(tensor):
- return -torch.finfo(tensor.dtype).max
-
-
-def l2norm(t):
- return F.normalize(t, p=2, dim=-1)
-
-
-# init helpers
-
-
-def init_zero_(layer):
- nn.init.constant_(layer.weight, 0.0)
- if exists(layer.bias):
- nn.init.constant_(layer.bias, 0.0)
-
-
-# keyword argument helpers
-
-
-def pick_and_pop(keys, d):
- values = list(map(lambda key: d.pop(key), keys))
- return dict(zip(keys, values))
-
-
-def group_dict_by_key(cond, d):
- return_val = [dict(), dict()]
- for key in d.keys():
- match = bool(cond(key))
- ind = int(not match)
- return_val[ind][key] = d[key]
- return (*return_val,)
-
-
-def string_begins_with(prefix, str):
- return str.startswith(prefix)
-
-
-def group_by_key_prefix(prefix, d):
- return group_dict_by_key(partial(string_begins_with, prefix), d)
-
-
-def groupby_prefix_and_trim(prefix, d):
- kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d)
- kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix) :], x[1]), tuple(kwargs_with_prefix.items())))
- return kwargs_without_prefix, kwargs
-
-
-# activations
-
-
-class ReluSquared(nn.Module):
- def forward(self, x):
- return F.relu(x) ** 2
-
-
-# positional embeddings
-
-
-class AbsolutePositionalEmbedding(nn.Module):
- def __init__(self, dim, max_seq_len):
- super().__init__()
- self.scale = dim**-0.5
- self.emb = nn.Embedding(max_seq_len, dim)
-
- def forward(self, x):
- n = torch.arange(x.shape[1], device=x.device)
- pos_emb = self.emb(n)
- pos_emb = rearrange(pos_emb, "n d -> () n d")
- return pos_emb * self.scale
-
-
-class FixedPositionalEmbedding(nn.Module):
- def __init__(self, dim):
- super().__init__()
- inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
- self.register_buffer("inv_freq", inv_freq)
-
- def forward(self, x, seq_dim=1, offset=0):
- t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq) + offset
- sinusoid_inp = torch.einsum("i , j -> i j", t, self.inv_freq)
- emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1)
- return rearrange(emb, "n d -> () n d")
-
-
-class RelativePositionBias(nn.Module):
- def __init__(self, scale, causal=False, num_buckets=32, max_distance=128, heads=8):
- super().__init__()
- self.scale = scale
- self.causal = causal
- self.num_buckets = num_buckets
- self.max_distance = max_distance
- self.relative_attention_bias = nn.Embedding(num_buckets, heads)
-
- @staticmethod
- def _relative_position_bucket(relative_position, causal=True, num_buckets=32, max_distance=128):
- ret = 0
- n = -relative_position
- if not causal:
- num_buckets //= 2
- ret += (n < 0).long() * num_buckets
- n = torch.abs(n)
- else:
- n = torch.max(n, torch.zeros_like(n))
-
- max_exact = num_buckets // 2
- is_small = n < max_exact
-
- val_if_large = (
- max_exact
- + (torch.log(n.float() / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact)).long()
- )
- val_if_large = torch.min(val_if_large, torch.full_like(val_if_large, num_buckets - 1))
-
- ret += torch.where(is_small, n, val_if_large)
- return ret
-
- def forward(self, qk_dots):
- i, j, device = *qk_dots.shape[-2:], qk_dots.device
- q_pos = torch.arange(i, dtype=torch.long, device=device)
- k_pos = torch.arange(j, dtype=torch.long, device=device)
- rel_pos = k_pos[None, :] - q_pos[:, None]
- rp_bucket = self._relative_position_bucket(
- rel_pos, causal=self.causal, num_buckets=self.num_buckets, max_distance=self.max_distance
- )
- values = self.relative_attention_bias(rp_bucket)
- bias = rearrange(values, "i j h -> () h i j")
- return qk_dots + (bias * self.scale)
-
-
-class AlibiPositionalBias(nn.Module):
- def __init__(self, heads, **kwargs):
- super().__init__()
- self.heads = heads
- slopes = torch.Tensor(self._get_slopes(heads))
- slopes = rearrange(slopes, "h -> () h () ()")
- self.register_buffer("slopes", slopes, persistent=False)
- self.register_buffer("bias", None, persistent=False)
-
- @staticmethod
- def _get_slopes(heads):
- def get_slopes_power_of_2(n):
- start = 2 ** (-(2 ** -(math.log2(n) - 3)))
- ratio = start
- return [start * ratio**i for i in range(n)]
-
- if math.log2(heads).is_integer():
- return get_slopes_power_of_2(heads)
-
- closest_power_of_2 = 2 ** math.floor(math.log2(heads))
- return (
- get_slopes_power_of_2(closest_power_of_2)
- + get_slopes_power_of_2(2 * closest_power_of_2)[0::2][: heads - closest_power_of_2]
- )
-
- def forward(self, qk_dots):
- h, i, j, device = *qk_dots.shape[-3:], qk_dots.device
-
- if exists(self.bias) and self.bias.shape[-1] >= j:
- return qk_dots + self.bias[..., :j]
-
- bias = torch.arange(j, device=device)
- bias = rearrange(bias, "j -> () () () j")
- bias = bias * self.slopes
-
- num_heads_unalibied = h - bias.shape[1]
- bias = F.pad(bias, (0, 0, 0, 0, 0, num_heads_unalibied))
-
- self.register_buffer("bias", bias, persistent=False)
- return qk_dots + self.bias
-
-
-class LearnedAlibiPositionalBias(AlibiPositionalBias):
- def __init__(self, heads, bidirectional=False):
- super().__init__(heads)
- los_slopes = torch.log(self.slopes)
- self.learned_logslopes = nn.Parameter(los_slopes)
-
- self.bidirectional = bidirectional
- if self.bidirectional:
- self.learned_logslopes_future = nn.Parameter(los_slopes)
-
- def forward(self, qk_dots):
- h, i, j, device = *qk_dots.shape[-3:], qk_dots.device
-
- def get_slopes(param):
- return F.pad(param.exp(), (0, 0, 0, 0, 0, h - param.shape[1]))
-
- if exists(self.bias) and self.bias.shape[-1] >= j:
- bias = self.bias[..., :i, :j]
- else:
- i_arange = torch.arange(i, device=device)
- j_arange = torch.arange(j, device=device)
- bias = rearrange(j_arange, "j -> 1 1 1 j") - rearrange(i_arange, "i -> 1 1 i 1")
- self.register_buffer("bias", bias, persistent=False)
-
- if self.bidirectional:
- past_slopes = get_slopes(self.learned_logslopes)
- future_slopes = get_slopes(self.learned_logslopes_future)
- bias = torch.tril(bias * past_slopes) + torch.triu(bias * future_slopes)
- else:
- slopes = get_slopes(self.learned_logslopes)
- bias = bias * slopes
-
- return qk_dots + bias
-
-
-class RotaryEmbedding(nn.Module):
- def __init__(self, dim):
- super().__init__()
- inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
- self.register_buffer("inv_freq", inv_freq)
-
- def forward(self, max_seq_len, device):
- t = torch.arange(max_seq_len, device=device).type_as(self.inv_freq)
- freqs = torch.einsum("i , j -> i j", t, self.inv_freq)
- emb = torch.cat((freqs, freqs), dim=-1)
- return rearrange(emb, "n d -> () () n d")
-
-
-def rotate_half(x):
- x = rearrange(x, "... (j d) -> ... j d", j=2)
- x1, x2 = x.unbind(dim=-2)
- return torch.cat((-x2, x1), dim=-1)
-
-
-def apply_rotary_pos_emb(t, freqs):
- seq_len = t.shape[-2]
- freqs = freqs[:, :, -seq_len:]
- return (t * freqs.cos()) + (rotate_half(t) * freqs.sin())
-
-
-# norms
-
-
-class Scale(nn.Module):
- def __init__(self, value, fn):
- super().__init__()
- self.value = value
- self.fn = fn
-
- def forward(self, x, **kwargs):
- out = self.fn(x, **kwargs)
- scale_fn = lambda t: t * self.value
-
- if not isinstance(out, tuple):
- return scale_fn(out)
-
- return (scale_fn(out[0]), *out[1:])
-
-
-class Rezero(nn.Module):
- def __init__(self, fn):
- super().__init__()
- self.fn = fn
- self.g = nn.Parameter(torch.zeros(1))
-
- def forward(self, x, **kwargs):
- out = self.fn(x, **kwargs)
- rezero_fn = lambda t: t * self.g
-
- if not isinstance(out, tuple):
- return rezero_fn(out)
-
- return (rezero_fn(out[0]), *out[1:])
-
-
-class ScaleNorm(nn.Module):
- def __init__(self, dim, eps=1e-5):
- super().__init__()
- self.scale = dim**-0.5
- self.eps = eps
- self.g = nn.Parameter(torch.ones(1))
-
- def forward(self, x):
- norm = torch.norm(x, dim=-1, keepdim=True) * self.scale
- return x / norm.clamp(min=self.eps) * self.g
-
-
-class RMSNorm(nn.Module):
- def __init__(self, dim, eps=1e-8):
- super().__init__()
- self.scale = dim**-0.5
- self.eps = eps
- self.g = nn.Parameter(torch.ones(dim))
-
- def forward(self, x):
- norm = torch.norm(x, dim=-1, keepdim=True) * self.scale
- return x / norm.clamp(min=self.eps) * self.g
-
-
-class RMSScaleShiftNorm(nn.Module):
- def __init__(self, dim, eps=1e-8):
- super().__init__()
- self.scale = dim**-0.5
- self.eps = eps
- self.g = nn.Parameter(torch.ones(dim))
- self.scale_shift_process = nn.Linear(dim * 2, dim * 2)
-
- def forward(self, x, norm_scale_shift_inp):
- norm = torch.norm(x, dim=-1, keepdim=True) * self.scale
- norm = x / norm.clamp(min=self.eps) * self.g
-
- ss_emb = self.scale_shift_process(norm_scale_shift_inp)
- scale, shift = torch.chunk(ss_emb, 2, dim=1)
- h = norm * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
- return h
-
-
-# residual and residual gates
-
-
-class Residual(nn.Module):
- def __init__(self, dim, scale_residual=False):
- super().__init__()
- self.residual_scale = nn.Parameter(torch.ones(dim)) if scale_residual else None
-
- def forward(self, x, residual):
- if exists(self.residual_scale):
- residual = residual * self.residual_scale
-
- return x + residual
-
-
-class GRUGating(nn.Module):
- def __init__(self, dim, scale_residual=False):
- super().__init__()
- self.gru = nn.GRUCell(dim, dim)
- self.residual_scale = nn.Parameter(torch.ones(dim)) if scale_residual else None
-
- def forward(self, x, residual):
- if exists(self.residual_scale):
- residual = residual * self.residual_scale
-
- gated_output = self.gru(rearrange(x, "b n d -> (b n) d"), rearrange(residual, "b n d -> (b n) d"))
-
- return gated_output.reshape_as(x)
-
-
-# token shifting
-
-
-def shift(t, amount, mask=None):
- if amount == 0:
- return t
-
- if exists(mask):
- t = t.masked_fill(~mask[..., None], 0.0)
-
- return F.pad(t, (0, 0, amount, -amount), value=0.0)
-
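The shift helper above implements token shifting by padding along the sequence dimension and trimming the overflow with negative padding. A tiny standalone sketch (not part of the deleted file) showing a shift of one position to the right:

import torch
import torch.nn.functional as F

t = torch.arange(1, 5, dtype=torch.float).view(1, 4, 1)  # (batch, seq, feat): sequence [1, 2, 3, 4]
shifted = F.pad(t, (0, 0, 1, -1), value=0.0)              # pad one step in front, trim one at the end
# shifted sequence is [0, 1, 2, 3]: each position now carries the previous token's features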
-
-class ShiftTokens(nn.Module):
- def __init__(self, shifts, fn):
- super().__init__()
- self.fn = fn
- self.shifts = tuple(shifts)
-
- def forward(self, x, **kwargs):
- mask = kwargs.get("mask", None)
- shifts = self.shifts
- segments = len(shifts)
- feats_per_shift = x.shape[-1] // segments
- splitted = x.split(feats_per_shift, dim=-1)
- segments_to_shift, rest = splitted[:segments], splitted[segments:]
- segments_to_shift = list(map(lambda args: shift(*args, mask=mask), zip(segments_to_shift, shifts)))
- x = torch.cat((*segments_to_shift, *rest), dim=-1)
- return self.fn(x, **kwargs)
-
-
-# feedforward
-
-
-class GLU(nn.Module):
- def __init__(self, dim_in, dim_out, activation):
- super().__init__()
- self.act = activation
- self.proj = nn.Linear(dim_in, dim_out * 2)
-
- def forward(self, x):
- x, gate = self.proj(x).chunk(2, dim=-1)
- return x * self.act(gate)
-
-
-class FeedForward(nn.Module):
- def __init__(
- self,
- dim,
- dim_out=None,
- mult=4,
- glu=False,
- relu_squared=False,
- post_act_ln=False,
- dropout=0.0,
- zero_init_output=False,
- ):
- super().__init__()
- inner_dim = int(dim * mult)
- dim_out = default(dim_out, dim)
- activation = ReluSquared() if relu_squared else nn.GELU()
-
- project_in = (
- nn.Sequential(nn.Linear(dim, inner_dim), activation) if not glu else GLU(dim, inner_dim, activation)
- )
-
- self.net = nn.Sequential(
- project_in,
- nn.LayerNorm(inner_dim) if post_act_ln else nn.Identity(),
- nn.Dropout(dropout),
- nn.Linear(inner_dim, dim_out),
- )
-
- # init last linear layer to 0
- if zero_init_output:
- init_zero_(self.net[-1])
-
- def forward(self, x):
- return self.net(x)
-
-
-# attention.
-
-
-class Attention(nn.Module):
- def __init__(
- self,
- dim,
- dim_head=DEFAULT_DIM_HEAD,
- heads=8,
- causal=False,
- talking_heads=False,
- head_scale=False,
- collab_heads=False,
- collab_compression=0.3,
- sparse_topk=None,
- use_entmax15=False,
- num_mem_kv=0,
- dropout=0.0,
- on_attn=False,
- gate_values=False,
- zero_init_output=False,
- max_attend_past=None,
- qk_norm=False,
- scale_init_value=None,
- rel_pos_bias=False,
- rel_pos_num_buckets=32,
- rel_pos_max_distance=128,
- ):
- super().__init__()
- self.scale = dim_head**-0.5
-
- self.heads = heads
- self.causal = causal
- self.max_attend_past = max_attend_past
-
- qk_dim = v_dim = dim_head * heads
-
- # collaborative heads
- self.collab_heads = collab_heads
- if self.collab_heads:
- qk_dim = int(collab_compression * qk_dim)
- self.collab_mixing = nn.Parameter(torch.randn(heads, qk_dim))
-
- self.to_q = nn.Linear(dim, qk_dim, bias=False)
- self.to_k = nn.Linear(dim, qk_dim, bias=False)
- self.to_v = nn.Linear(dim, v_dim, bias=False)
-
- self.dropout = nn.Dropout(dropout)
-
- # add GLU gating for aggregated values, from alphafold2
- self.to_v_gate = None
- if gate_values:
- self.to_v_gate = nn.Linear(dim, v_dim)
- nn.init.constant_(self.to_v_gate.weight, 0)
- nn.init.constant_(self.to_v_gate.bias, 1)
-
- # cosine sim attention
- self.qk_norm = qk_norm
- if qk_norm:
- scale_init_value = default(
- scale_init_value, -3
- ) # if not provided, initialize as though it were sequence length of 1024
- self.scale = nn.Parameter(torch.ones(1, heads, 1, 1) * scale_init_value)
-
- # talking heads
- self.talking_heads = talking_heads
- if talking_heads:
- self.pre_softmax_proj = nn.Parameter(torch.randn(heads, heads))
- self.post_softmax_proj = nn.Parameter(torch.randn(heads, heads))
-
- # head scaling
- self.head_scale = head_scale
- if head_scale:
- self.head_scale_params = nn.Parameter(torch.ones(1, heads, 1, 1))
-
- # explicit topk sparse attention
- self.sparse_topk = sparse_topk
-
- # entmax
- self.attn_fn = F.softmax
-
- # add memory key / values
- self.num_mem_kv = num_mem_kv
- if num_mem_kv > 0:
- self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))
- self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))
-
- # attention on attention
- self.attn_on_attn = on_attn
- self.to_out = nn.Sequential(nn.Linear(v_dim, dim * 2), nn.GLU()) if on_attn else nn.Linear(v_dim, dim)
-
- self.rel_pos_bias = rel_pos_bias
- if rel_pos_bias:
- assert (
- rel_pos_num_buckets <= rel_pos_max_distance
- ), "number of relative position buckets must be less than the relative position max distance"
- self.rel_pos = RelativePositionBias(
- scale=dim_head**0.5,
- causal=causal,
- heads=heads,
- num_buckets=rel_pos_num_buckets,
- max_distance=rel_pos_max_distance,
- )
-
- # init output projection 0
- if zero_init_output:
- init_zero_(self.to_out)
-
- def forward(
- self,
- x,
- context=None,
- mask=None,
- context_mask=None,
- attn_mask=None,
- sinusoidal_emb=None,
- rotary_pos_emb=None,
- prev_attn=None,
- mem=None,
- layer_past=None,
- ):
- b, n, _, h, talking_heads, collab_heads, head_scale, scale, device, has_context = (
- *x.shape,
- self.heads,
- self.talking_heads,
- self.collab_heads,
- self.head_scale,
- self.scale,
- x.device,
- exists(context),
- )
- kv_input = default(context, x)
-
- q_input = x
- k_input = kv_input
- v_input = kv_input
-
- if exists(mem):
- k_input = torch.cat((mem, k_input), dim=-2)
- v_input = torch.cat((mem, v_input), dim=-2)
-
- if exists(sinusoidal_emb):
- # in shortformer, the query would start at a position offset depending on the past cached memory
- offset = k_input.shape[-2] - q_input.shape[-2]
- q_input = q_input + sinusoidal_emb(q_input, offset=offset)
- k_input = k_input + sinusoidal_emb(k_input)
-
- q = self.to_q(q_input)
- k = self.to_k(k_input)
- v = self.to_v(v_input)
-
- if not collab_heads:
- q, k, v = map(lambda t: rearrange(t, "b n (h d) -> b h n d", h=h), (q, k, v))
- else:
- q = einsum("b i d, h d -> b h i d", q, self.collab_mixing)
- k = rearrange(k, "b n d -> b () n d")
- v = rearrange(v, "b n (h d) -> b h n d", h=h)
-
- if layer_past is not None:
- past_key, past_value = layer_past
- k = torch.cat([past_key, k], dim=-2)
- v = torch.cat([past_value, v], dim=-2)
- k_cache = k
- v_cache = v
-
- if exists(rotary_pos_emb) and not has_context:
- l = rotary_pos_emb.shape[-1]
- (ql, qr), (kl, kr), (vl, vr) = map(lambda t: (t[..., :l], t[..., l:]), (q, k, v))
- ql, kl, vl = map(lambda t: apply_rotary_pos_emb(t, rotary_pos_emb), (ql, kl, vl))
- q, k, v = map(lambda t: torch.cat(t, dim=-1), ((ql, qr), (kl, kr), (vl, vr)))
-
- input_mask = None
- if any(map(exists, (mask, context_mask))):
- q_mask = default(mask, lambda: torch.ones((b, n), device=device).bool())
- k_mask = q_mask if not exists(context) else context_mask
- k_mask = default(k_mask, lambda: torch.ones((b, k.shape[-2]), device=device).bool())
- q_mask = rearrange(q_mask, "b i -> b () i ()")
- k_mask = rearrange(k_mask, "b j -> b () () j")
- input_mask = q_mask * k_mask
-
- if self.num_mem_kv > 0:
- mem_k, mem_v = map(lambda t: repeat(t, "h n d -> b h n d", b=b), (self.mem_k, self.mem_v))
- k = torch.cat((mem_k, k), dim=-2)
- v = torch.cat((mem_v, v), dim=-2)
- if exists(input_mask):
- input_mask = F.pad(input_mask, (self.num_mem_kv, 0), value=True)
-
- if collab_heads:
- k = k.expand(-1, h, -1, -1)
-
- if self.qk_norm:
- q, k = map(l2norm, (q, k))
- scale = 1 / (self.scale.exp().clamp(min=1e-2))
-
- dots = einsum("b h i d, b h j d -> b h i j", q, k) * scale
- mask_value = max_neg_value(dots)
-
- if exists(prev_attn):
- dots = dots + prev_attn
-
- pre_softmax_attn = dots.clone()
-
- if talking_heads:
- dots = einsum("b h i j, h k -> b k i j", dots, self.pre_softmax_proj).contiguous()
-
- if self.rel_pos_bias:
- dots = self.rel_pos(dots)
-
- if exists(input_mask):
- dots.masked_fill_(~input_mask, mask_value)
- del input_mask
-
- if exists(attn_mask):
- assert (
- 2 <= attn_mask.ndim <= 4
- ), "attention mask must have greater than 2 dimensions but less than or equal to 4"
- if attn_mask.ndim == 2:
- attn_mask = rearrange(attn_mask, "i j -> () () i j")
- elif attn_mask.ndim == 3:
- attn_mask = rearrange(attn_mask, "h i j -> () h i j")
- dots.masked_fill_(~attn_mask, mask_value)
-
- if exists(self.max_attend_past):
- i, j = dots.shape[-2:]
- range_q = torch.arange(j - i, j, device=device)
- range_k = torch.arange(j, device=device)
- dist = rearrange(range_q, "i -> () () i ()") - rearrange(range_k, "j -> () () () j")
- mask = dist > self.max_attend_past
- dots.masked_fill_(mask, mask_value)
- del mask
-
- if self.causal:
- i, j = dots.shape[-2:]
- r = torch.arange(i, device=device)
- mask = rearrange(r, "i -> () () i ()") < rearrange(r, "j -> () () () j")
- mask = F.pad(mask, (j - i, 0), value=False)
- dots.masked_fill_(mask, mask_value)
- del mask
-
- if exists(self.sparse_topk) and self.sparse_topk < dots.shape[-1]:
- top, _ = dots.topk(self.sparse_topk, dim=-1)
- vk = top[..., -1].unsqueeze(-1).expand_as(dots)
- mask = dots < vk
- dots.masked_fill_(mask, mask_value)
- del mask
-
- attn = self.attn_fn(dots, dim=-1)
- post_softmax_attn = attn.clone()
-
- attn = self.dropout(attn)
-
- if talking_heads:
- attn = einsum("b h i j, h k -> b k i j", attn, self.post_softmax_proj).contiguous()
-
- out = einsum("b h i j, b h j d -> b h i d", attn, v)
-
- if head_scale:
- out = out * self.head_scale_params
-
- out = rearrange(out, "b h n d -> b n (h d)")
-
- if exists(self.to_v_gate):
- gates = self.to_v_gate(x)
- out = out * gates.sigmoid()
-
- intermediates = Intermediates(pre_softmax_attn=pre_softmax_attn, post_softmax_attn=post_softmax_attn)
-
- return self.to_out(out), intermediates, k_cache, v_cache
-
-
-class AttentionLayers(nn.Module):
- def __init__(
- self,
- dim,
- depth,
- heads=8,
- causal=False,
- cross_attend=False,
- only_cross=False,
- use_scalenorm=False,
- use_rms_scaleshift_norm=False,
- use_rmsnorm=False,
- use_rezero=False,
- alibi_pos_bias=False,
- alibi_num_heads=None,
- alibi_learned=False,
- position_infused_attn=False,
- rotary_pos_emb=False,
- rotary_emb_dim=None,
- custom_layers=None,
- sandwich_coef=None,
- par_ratio=None,
- residual_attn=False,
- cross_residual_attn=False,
- macaron=False,
- pre_norm=True,
- gate_residual=False,
- scale_residual=False,
- shift_tokens=0,
- sandwich_norm=False,
- use_qk_norm_attn=False,
- qk_norm_attn_seq_len=None,
- zero_init_branch_output=False,
- **kwargs,
- ):
- super().__init__()
- ff_kwargs, kwargs = groupby_prefix_and_trim("ff_", kwargs)
- attn_kwargs, _ = groupby_prefix_and_trim("attn_", kwargs)
-
- dim_head = attn_kwargs.get("dim_head", DEFAULT_DIM_HEAD)
-
- self.dim = dim
- self.depth = depth
- self.layers = nn.ModuleList([])
- self.causal = causal
-
- rel_pos_bias = "rel_pos_bias" in attn_kwargs
- self.has_pos_emb = position_infused_attn or rel_pos_bias or rotary_pos_emb
- self.pia_pos_emb = FixedPositionalEmbedding(dim) if position_infused_attn else None
-
- rotary_emb_dim = max(default(rotary_emb_dim, dim_head // 2), 32)
- self.rotary_pos_emb = RotaryEmbedding(rotary_emb_dim) if rotary_pos_emb else None
-
- assert not (
- alibi_pos_bias and rel_pos_bias
- ), "you can only choose Alibi positional bias or T5 relative positional bias, not both"
-
- if alibi_pos_bias:
- alibi_num_heads = default(alibi_num_heads, heads)
- assert alibi_num_heads <= heads, "number of ALiBi heads must be less than or equal to the total number of heads"
- alibi_pos_klass = LearnedAlibiPositionalBias if alibi_learned or not causal else AlibiPositionalBias
- self.rel_pos = alibi_pos_klass(heads=alibi_num_heads, bidirectional=not causal)
- else:
- self.rel_pos = None
-
- assert not (not pre_norm and sandwich_norm), "sandwich norm cannot be used when not using prenorm"
- self.pre_norm = pre_norm
- self.sandwich_norm = sandwich_norm
-
- self.residual_attn = residual_attn
- self.cross_residual_attn = cross_residual_attn
- self.cross_attend = cross_attend
-
- norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm
- norm_class = RMSNorm if use_rmsnorm else norm_class
- norm_class = RMSScaleShiftNorm if use_rms_scaleshift_norm else norm_class
- norm_fn = partial(norm_class, dim)
-
- norm_fn = nn.Identity if use_rezero else norm_fn
- branch_fn = Rezero if use_rezero else None
-
- if cross_attend and not only_cross:
- default_block = ("a", "c", "f")
- elif cross_attend and only_cross:
- default_block = ("c", "f")
- else:
- default_block = ("a", "f")
-
- if macaron:
- default_block = ("f",) + default_block
-
- # qk normalization
-
- if use_qk_norm_attn:
- attn_scale_init_value = (
- -math.log(math.log2(qk_norm_attn_seq_len**2 - qk_norm_attn_seq_len))
- if exists(qk_norm_attn_seq_len)
- else None
- )
- attn_kwargs = {**attn_kwargs, "qk_norm": True, "scale_init_value": attn_scale_init_value}
-
- # zero init
-
- if zero_init_branch_output:
- attn_kwargs = {**attn_kwargs, "zero_init_output": True}
- ff_kwargs = {**ff_kwargs, "zero_init_output": True}
-
- # calculate layer block order
-
- if exists(custom_layers):
- layer_types = custom_layers
- elif exists(par_ratio):
- par_depth = depth * len(default_block)
- assert 1 < par_ratio <= par_depth, "par ratio out of range"
- default_block = tuple(filter(not_equals("f"), default_block))
- par_attn = par_depth // par_ratio
- depth_cut = par_depth * 2 // 3 # 2 / 3 attention layer cutoff suggested by PAR paper
- par_width = (depth_cut + depth_cut // par_attn) // par_attn
- assert len(default_block) <= par_width, "default block is too large for par_ratio"
- par_block = default_block + ("f",) * (par_width - len(default_block))
- par_head = par_block * par_attn
- layer_types = par_head + ("f",) * (par_depth - len(par_head))
- elif exists(sandwich_coef):
- assert sandwich_coef > 0 and sandwich_coef <= depth, "sandwich coefficient should be greater than 0 and less than or equal to the depth"
- layer_types = ("a",) * sandwich_coef + default_block * (depth - sandwich_coef) + ("f",) * sandwich_coef
- else:
- layer_types = default_block * depth
-
- self.layer_types = layer_types
- self.num_attn_layers = len(list(filter(equals("a"), layer_types)))
-
- # calculate token shifting
-
- shift_tokens = cast_tuple(shift_tokens, len(layer_types))
-
- # iterate and construct layers
-
- for ind, (layer_type, layer_shift_tokens) in enumerate(zip(self.layer_types, shift_tokens)):
- is_last_layer = ind == (len(self.layer_types) - 1)
-
- if layer_type == "a":
- layer = Attention(dim, heads=heads, causal=causal, **attn_kwargs)
- elif layer_type == "c":
- layer = Attention(dim, heads=heads, **attn_kwargs)
- elif layer_type == "f":
- layer = FeedForward(dim, **ff_kwargs)
- layer = layer if not macaron else Scale(0.5, layer)
- else:
- raise Exception(f"invalid layer type {layer_type}")
-
- if layer_shift_tokens > 0:
- shift_range_upper = layer_shift_tokens + 1
- shift_range_lower = -layer_shift_tokens if not causal else 0
- layer = ShiftTokens(range(shift_range_lower, shift_range_upper), layer)
-
- if exists(branch_fn):
- layer = branch_fn(layer)
-
- residual_fn = GRUGating if gate_residual else Residual
- residual = residual_fn(dim, scale_residual=scale_residual)
-
- layer_uses_qk_norm = use_qk_norm_attn and layer_type in ("a", "c")
-
- pre_branch_norm = norm_fn() if pre_norm and not layer_uses_qk_norm else None
- post_branch_norm = norm_fn() if sandwich_norm or layer_uses_qk_norm else None
- post_main_norm = norm_fn() if not pre_norm and not is_last_layer else None
-
- norms = nn.ModuleList([pre_branch_norm, post_branch_norm, post_main_norm])
-
- self.layers.append(nn.ModuleList([norms, layer, residual]))
-
- def forward(
- self,
- x,
- context=None,
- full_context=None, # for passing a list of hidden states from an encoder
- mask=None,
- context_mask=None,
- attn_mask=None,
- mems=None,
- return_hiddens=False,
- norm_scale_shift_inp=None,
- past_key_values=None,
- expected_seq_len=None,
- ):
- assert not (
- self.cross_attend ^ (exists(context) or exists(full_context))
- ), "context must be passed in if cross_attend is set to True"
- assert context is None or full_context is None, "only one of full_context or context can be provided"
-
- hiddens = []
- intermediates = []
- prev_attn = None
- prev_cross_attn = None
-
- mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers
- norm_args = {}
- if exists(norm_scale_shift_inp):
- norm_args["norm_scale_shift_inp"] = norm_scale_shift_inp
-
- rotary_pos_emb = None
- if exists(self.rotary_pos_emb):
- if not self.training and self.causal:
- assert (
- expected_seq_len is not None
- ), "To decode a transformer with rotary embeddings, you must specify an `expected_seq_len`"
- elif expected_seq_len is None:
- expected_seq_len = 0
- seq_len = x.shape[1]
- if past_key_values is not None:
- seq_len += past_key_values[0][0].shape[-2]
- max_rotary_emb_length = max(
- list(map(lambda m: (m.shape[1] if exists(m) else 0) + seq_len, mems)) + [expected_seq_len]
- )
- rotary_pos_emb = self.rotary_pos_emb(max_rotary_emb_length, x.device)
-
- present_key_values = []
- cross_attn_count = 0
- for ind, (layer_type, (norm, block, residual_fn)) in enumerate(zip(self.layer_types, self.layers)):
- if layer_type == "a":
- layer_mem = mems.pop(0) if mems else None
-
- residual = x
-
- pre_branch_norm, post_branch_norm, post_main_norm = norm
-
- if exists(pre_branch_norm):
- x = pre_branch_norm(x, **norm_args)
-
- if layer_type == "a" or layer_type == "c":
- if past_key_values is not None:
- layer_kv = past_key_values.pop(0)
- layer_past = tuple(s.to(x.device) for s in layer_kv)
- else:
- layer_past = None
-
- if layer_type == "a":
- out, inter, k, v = block(
- x, None, mask, None, attn_mask, self.pia_pos_emb, rotary_pos_emb, prev_attn, layer_mem, layer_past
- )
- elif layer_type == "c":
- if exists(full_context):
- out, inter, k, v = block(
- x,
- full_context[cross_attn_count],
- mask,
- context_mask,
- None,
- None,
- None,
- prev_attn,
- None,
- layer_past,
- )
- else:
- out, inter, k, v = block(
- x, context, mask, context_mask, None, None, None, prev_attn, None, layer_past
- )
- elif layer_type == "f":
- out = block(x)
-
- if layer_type == "a" or layer_type == "c" and present_key_values is not None:
- present_key_values.append((k.detach(), v.detach()))
-
- if exists(post_branch_norm):
- out = post_branch_norm(out, **norm_args)
-
- x = residual_fn(out, residual)
-
- if layer_type in ("a", "c"):
- intermediates.append(inter)
-
- if layer_type == "a" and self.residual_attn:
- prev_attn = inter.pre_softmax_attn
- elif layer_type == "c" and self.cross_residual_attn:
- prev_cross_attn = inter.pre_softmax_attn
-
- if exists(post_main_norm):
- x = post_main_norm(x, **norm_args)
-
- if layer_type == "c":
- cross_attn_count += 1
-
- if layer_type == "f":
- hiddens.append(x)
-
- if return_hiddens:
- intermediates = LayerIntermediates(
- hiddens=hiddens, attn_intermediates=intermediates, past_key_values=present_key_values
- )
-
- return x, intermediates
-
- return x
-
-
-class Encoder(AttentionLayers):
- def __init__(self, **kwargs):
- assert "causal" not in kwargs, "cannot set causality on encoder"
- super().__init__(causal=False, **kwargs)
-
-
-class Decoder(AttentionLayers):
- def __init__(self, **kwargs):
- assert "causal" not in kwargs, "cannot set causality on decoder"
- super().__init__(causal=True, **kwargs)
-
-
-class CrossAttender(AttentionLayers):
- def __init__(self, **kwargs):
- super().__init__(cross_attend=True, only_cross=True, **kwargs)
-
-
-class ViTransformerWrapper(nn.Module):
- def __init__(self, *, image_size, patch_size, attn_layers, num_classes=None, dropout=0.0, emb_dropout=0.0):
- super().__init__()
- assert isinstance(attn_layers, Encoder), "attention layers must be an Encoder"
- assert image_size % patch_size == 0, "image dimensions must be divisible by the patch size"
- dim = attn_layers.dim
- num_patches = (image_size // patch_size) ** 2
- patch_dim = 3 * patch_size**2
-
- self.patch_size = patch_size
-
- self.pos_embedding = nn.Parameter(torch.randn(1, num_patches + 1, dim))
- self.patch_to_embedding = nn.Linear(patch_dim, dim)
- self.cls_token = nn.Parameter(torch.randn(1, 1, dim))
- self.dropout = nn.Dropout(emb_dropout)
-
- self.attn_layers = attn_layers
- self.norm = nn.LayerNorm(dim)
- self.mlp_head = FeedForward(dim, dim_out=num_classes, dropout=dropout) if exists(num_classes) else None
-
- def forward(self, img, return_embeddings=False):
- p = self.patch_size
-
- x = rearrange(img, "b c (h p1) (w p2) -> b (h w) (p1 p2 c)", p1=p, p2=p)
- x = self.patch_to_embedding(x)
- b, n, _ = x.shape
-
- cls_tokens = repeat(self.cls_token, "() n d -> b n d", b=b)
- x = torch.cat((cls_tokens, x), dim=1)
- x = x + self.pos_embedding[:, : (n + 1)]
- x = self.dropout(x)
-
- x = self.attn_layers(x)
- x = self.norm(x)
-
- if not exists(self.mlp_head) or return_embeddings:
- return x
-
- return self.mlp_head(x[:, 0])
-
-
-class TransformerWrapper(nn.Module):
- def __init__(
- self,
- *,
- num_tokens,
- max_seq_len,
- attn_layers,
- emb_dim=None,
- max_mem_len=0.0,
- shift_mem_down=0,
- emb_dropout=0.0,
- num_memory_tokens=None,
- tie_embedding=False,
- use_pos_emb=True,
- ):
- super().__init__()
- assert isinstance(attn_layers, AttentionLayers), "attention layers must be one of Encoder or Decoder"
-
- dim = attn_layers.dim
- emb_dim = default(emb_dim, dim)
-
- self.max_seq_len = max_seq_len
- self.max_mem_len = max_mem_len
- self.shift_mem_down = shift_mem_down
-
- self.token_emb = nn.Embedding(num_tokens, emb_dim)
- self.pos_emb = (
- AbsolutePositionalEmbedding(emb_dim, max_seq_len)
- if (use_pos_emb and not attn_layers.has_pos_emb)
- else always(0)
- )
- self.emb_dropout = nn.Dropout(emb_dropout)
-
- self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != dim else nn.Identity()
- self.attn_layers = attn_layers
- self.norm = nn.LayerNorm(dim)
-
- self.init_()
-
- self.to_logits = nn.Linear(dim, num_tokens) if not tie_embedding else lambda t: t @ self.token_emb.weight.t()
-
- # memory tokens (like [cls]) from Memory Transformers paper
- num_memory_tokens = default(num_memory_tokens, 0)
- self.num_memory_tokens = num_memory_tokens
- if num_memory_tokens > 0:
- self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim))
-
- def init_(self):
- nn.init.kaiming_normal_(self.token_emb.weight)
-
- def forward(
- self,
- x,
- return_embeddings=False,
- mask=None,
- return_hiddens=False,
- return_attn=False,
- mems=None,
- use_cache=False,
- **kwargs,
- ):
- b, n, device, num_mem = *x.shape, x.device, self.num_memory_tokens
- x = self.token_emb(x)
- x = x + self.pos_emb(x)
- x = self.emb_dropout(x)
-
- x = self.project_emb(x)
-
- if num_mem > 0:
- mem = repeat(self.memory_tokens, "n d -> b n d", b=b)
- x = torch.cat((mem, x), dim=1)
-
- # auto-handle masking after appending memory tokens
- if exists(mask):
- mask = F.pad(mask, (num_mem, 0), value=True)
-
- if self.shift_mem_down and exists(mems):
- mems_l, mems_r = mems[: self.shift_mem_down], mems[self.shift_mem_down :]
- mems = [*mems_r, *mems_l]
-
- x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs)
- x = self.norm(x)
-
- mem, x = x[:, :num_mem], x[:, num_mem:]
-
- out = self.to_logits(x) if not return_embeddings else x
-
- if return_hiddens:
- hiddens = intermediates.hiddens
- return out, hiddens
-
- res = [out]
- if return_attn:
- attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates))
- res.append(attn_maps)
- if use_cache:
- res.append(intermediates.past_key_values)
-
- if len(res) > 1:
- return tuple(res)
- return res[0]
-
-
-class ContinuousTransformerWrapper(nn.Module):
- def __init__(
- self, *, max_seq_len, attn_layers, dim_in=None, dim_out=None, emb_dim=None, emb_dropout=0.0, use_pos_emb=True
- ):
- super().__init__()
- assert isinstance(attn_layers, AttentionLayers), "attention layers must be one of Encoder or Decoder"
-
- dim = attn_layers.dim
-
- self.max_seq_len = max_seq_len
-
- self.pos_emb = (
- AbsolutePositionalEmbedding(dim, max_seq_len)
- if (use_pos_emb and not attn_layers.has_pos_emb)
- else always(0)
- )
- self.emb_dropout = nn.Dropout(emb_dropout)
-
- self.project_in = nn.Linear(dim_in, dim) if exists(dim_in) else nn.Identity()
-
- self.attn_layers = attn_layers
- self.norm = nn.LayerNorm(dim)
-
- self.project_out = nn.Linear(dim, dim_out) if exists(dim_out) else nn.Identity()
-
- def forward(self, x, return_embeddings=False, mask=None, return_attn=False, mems=None, use_cache=False, **kwargs):
- b, n, _, device = *x.shape, x.device
-
- x = self.project_in(x)
- x = x + self.pos_emb(x)
- x = self.emb_dropout(x)
-
- x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs)
- x = self.norm(x)
-
- out = self.project_out(x) if not return_embeddings else x
-
- res = [out]
- if return_attn:
- attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates))
- res.append(attn_maps)
- if use_cache:
- res.append(intermediates.past_key_values)
-
- if len(res) > 1:
- return tuple(res)
- return res[0]
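A hypothetical usage sketch of the wrappers defined above (not part of the deleted file; the import path `xtransformer` is assumed purely for illustration): a causal Decoder stacked inside a TransformerWrapper maps token ids to logits over the vocabulary.

import torch
from xtransformer import TransformerWrapper, Decoder  # assumed import path, illustrative only

model = TransformerWrapper(
    num_tokens=256,
    max_seq_len=1024,
    attn_layers=Decoder(dim=512, depth=6, heads=8),
)
tokens = torch.randint(0, 256, (1, 128))   # (batch, seq) of token ids
logits = model(tokens)                     # (1, 128, 256) logits over the vocabulary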
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py
deleted file mode 100644
index 29bd34ede2eda2b89e2683e4472f3a270c09e5a2..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2015, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-"""Self-test suite for Crypto.Hash.SHAKE128 and SHAKE256"""
-
-import unittest
-from binascii import hexlify, unhexlify
-
-from Crypto.SelfTest.loader import load_test_vectors
-from Crypto.SelfTest.st_common import list_test_cases
-
-from Crypto.Hash import SHAKE128, SHAKE256
-from Crypto.Util.py3compat import b, bchr, bord, tobytes
-
-class SHAKETest(unittest.TestCase):
-
- def test_new_positive(self):
-
- xof1 = self.shake.new()
- xof2 = self.shake.new(data=b("90"))
- xof3 = self.shake.new().update(b("90"))
-
- self.assertNotEqual(xof1.read(10), xof2.read(10))
- xof3.read(10)
- self.assertEqual(xof2.read(10), xof3.read(10))
-
- def test_update(self):
- pieces = [bchr(10) * 200, bchr(20) * 300]
- h = self.shake.new()
- h.update(pieces[0]).update(pieces[1])
- digest = h.read(10)
- h = self.shake.new()
- h.update(pieces[0] + pieces[1])
- self.assertEqual(h.read(10), digest)
-
- def test_update_negative(self):
- h = self.shake.new()
- self.assertRaises(TypeError, h.update, u"string")
-
- def test_digest(self):
- h = self.shake.new()
- digest = h.read(90)
-
- # read returns a byte string of the right length
- self.assertTrue(isinstance(digest, type(b("digest"))))
- self.assertEqual(len(digest), 90)
-
- def test_update_after_read(self):
- mac = self.shake.new()
- mac.update(b("rrrr"))
- mac.read(90)
- self.assertRaises(TypeError, mac.update, b("ttt"))
-
-
-class SHAKE128Test(SHAKETest):
- shake = SHAKE128
-
-
-class SHAKE256Test(SHAKETest):
- shake = SHAKE256
-
-
-class SHAKEVectors(unittest.TestCase):
- pass
-
-
-test_vectors_128 = load_test_vectors(("Hash", "SHA3"),
- "ShortMsgKAT_SHAKE128.txt",
- "Short Messages KAT SHAKE128",
- { "len" : lambda x: int(x) } ) or []
-
-for idx, tv in enumerate(test_vectors_128):
- if tv.len == 0:
- data = b("")
- else:
- data = tobytes(tv.msg)
-
- def new_test(self, data=data, result=tv.md):
- hobj = SHAKE128.new(data=data)
- digest = hobj.read(len(result))
- self.assertEqual(digest, result)
-
- setattr(SHAKEVectors, "test_128_%d" % idx, new_test)
-
-
-test_vectors_256 = load_test_vectors(("Hash", "SHA3"),
- "ShortMsgKAT_SHAKE256.txt",
- "Short Messages KAT SHAKE256",
- { "len" : lambda x: int(x) } ) or []
-
-for idx, tv in enumerate(test_vectors_256):
- if tv.len == 0:
- data = b("")
- else:
- data = tobytes(tv.msg)
-
- def new_test(self, data=data, result=tv.md):
- hobj = SHAKE256.new(data=data)
- digest = hobj.read(len(result))
- self.assertEqual(digest, result)
-
- setattr(SHAKEVectors, "test_256_%d" % idx, new_test)
-
-
-def get_tests(config={}):
- tests = []
- tests += list_test_cases(SHAKE128Test)
- tests += list_test_cases(SHAKE256Test)
- tests += list_test_cases(SHAKEVectors)
- return tests
-
-
-if __name__ == '__main__':
- import unittest
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
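For reference, a condensed standalone sketch of the pycryptodome API these tests exercise (not part of the deleted test file): SHAKE is an extendable-output function, so the caller decides how many bytes to read, and successive reads continue the same output stream.

from Crypto.Hash import SHAKE128

xof = SHAKE128.new(data=b"hello")
first = xof.read(16)    # first 16 bytes of output
more = xof.read(16)     # next 16 bytes of the same stream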
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Random/test_random.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Random/test_random.py
deleted file mode 100644
index 8fadc535adaf0e7b5bbebb44531901ad4d94bcc3..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Random/test_random.py
+++ /dev/null
@@ -1,167 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# SelfTest/Util/test_generic.py: Self-test for the Crypto.Random.new() function
-#
-# Written in 2008 by Dwayne C. Litzenberger
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Self-test suite for Crypto.Random.new()"""
-
-import sys
-import unittest
-from Crypto.Util.py3compat import b
-
-class SimpleTest(unittest.TestCase):
- def runTest(self):
- """Crypto.Random.new()"""
- # Import the Random module and try to use it
- from Crypto import Random
- randobj = Random.new()
- x = randobj.read(16)
- y = randobj.read(16)
- self.assertNotEqual(x, y)
- z = Random.get_random_bytes(16)
- self.assertNotEqual(x, z)
- self.assertNotEqual(y, z)
- # Test the Random.random module, which
- # implements a subset of Python's random API
- # Not implemented:
- # seed(), getstate(), setstate(), jumpahead()
- # random(), uniform(), triangular(), betavariate()
- # expovariate(), gammavariate(), gauss(),
- # lognormvariate(), normalvariate(),
- # vonmisesvariate(), paretovariate()
- # weibullvariate()
- # WichmannHill(), whseed(), SystemRandom()
- from Crypto.Random import random
- x = random.getrandbits(16*8)
- y = random.getrandbits(16*8)
- self.assertNotEqual(x, y)
- # Test randrange
- if x>y:
- start = y
- stop = x
- else:
- start = x
- stop = y
- for step in range(1,10):
- x = random.randrange(start,stop,step)
- y = random.randrange(start,stop,step)
- self.assertNotEqual(x, y)
- self.assertEqual(start <= x < stop, True)
- self.assertEqual(start <= y < stop, True)
- self.assertEqual((x - start) % step, 0)
- self.assertEqual((y - start) % step, 0)
- for i in range(10):
- self.assertEqual(random.randrange(1,2), 1)
- self.assertRaises(ValueError, random.randrange, start, start)
- self.assertRaises(ValueError, random.randrange, stop, start, step)
- self.assertRaises(TypeError, random.randrange, start, stop, step, step)
- self.assertRaises(TypeError, random.randrange, start, stop, "1")
- self.assertRaises(TypeError, random.randrange, "1", stop, step)
- self.assertRaises(TypeError, random.randrange, 1, "2", step)
- self.assertRaises(ValueError, random.randrange, start, stop, 0)
- # Test randint
- x = random.randint(start,stop)
- y = random.randint(start,stop)
- self.assertNotEqual(x, y)
- self.assertEqual(start <= x <= stop, True)
- self.assertEqual(start <= y <= stop, True)
- for i in range(10):
- self.assertEqual(random.randint(1,1), 1)
- self.assertRaises(ValueError, random.randint, stop, start)
- self.assertRaises(TypeError, random.randint, start, stop, step)
- self.assertRaises(TypeError, random.randint, "1", stop)
- self.assertRaises(TypeError, random.randint, 1, "2")
- # Test choice
- seq = range(10000)
- x = random.choice(seq)
- y = random.choice(seq)
- self.assertNotEqual(x, y)
- self.assertEqual(x in seq, True)
- self.assertEqual(y in seq, True)
- for i in range(10):
- self.assertEqual(random.choice((1,2,3)) in (1,2,3), True)
- self.assertEqual(random.choice([1,2,3]) in [1,2,3], True)
- if sys.version_info[0] == 3:
- self.assertEqual(random.choice(bytearray(b('123'))) in bytearray(b('123')), True)
- self.assertEqual(1, random.choice([1]))
- self.assertRaises(IndexError, random.choice, [])
- self.assertRaises(TypeError, random.choice, 1)
- # Test shuffle. Lacks random parameter to specify function.
- # Make copies of seq
- seq = range(500)
- x = list(seq)
- y = list(seq)
- random.shuffle(x)
- random.shuffle(y)
- self.assertNotEqual(x, y)
- self.assertEqual(len(seq), len(x))
- self.assertEqual(len(seq), len(y))
- for i in range(len(seq)):
- self.assertEqual(x[i] in seq, True)
- self.assertEqual(y[i] in seq, True)
- self.assertEqual(seq[i] in x, True)
- self.assertEqual(seq[i] in y, True)
- z = [1]
- random.shuffle(z)
- self.assertEqual(z, [1])
- if sys.version_info[0] == 3:
- z = bytearray(b('12'))
- random.shuffle(z)
- self.assertEqual(b('1') in z, True)
- self.assertRaises(TypeError, random.shuffle, b('12'))
- self.assertRaises(TypeError, random.shuffle, 1)
- self.assertRaises(TypeError, random.shuffle, "11")
- self.assertRaises(TypeError, random.shuffle, (1,2))
- # 2to3 wraps a list() around it, alas - but I want to shoot
- # myself in the foot here! :D
- # if sys.version_info[0] == 3:
- # self.assertRaises(TypeError, random.shuffle, range(3))
- # Test sample
- x = random.sample(seq, 20)
- y = random.sample(seq, 20)
- self.assertNotEqual(x, y)
- for i in range(20):
- self.assertEqual(x[i] in seq, True)
- self.assertEqual(y[i] in seq, True)
- z = random.sample([1], 1)
- self.assertEqual(z, [1])
- z = random.sample((1,2,3), 1)
- self.assertEqual(z[0] in (1,2,3), True)
- z = random.sample("123", 1)
- self.assertEqual(z[0] in "123", True)
- z = random.sample(range(3), 1)
- self.assertEqual(z[0] in range(3), True)
- if sys.version_info[0] == 3:
- z = random.sample(b("123"), 1)
- self.assertEqual(z[0] in b("123"), True)
- z = random.sample(bytearray(b("123")), 1)
- self.assertEqual(z[0] in bytearray(b("123")), True)
- self.assertRaises(TypeError, random.sample, 1)
-
-def get_tests(config={}):
- return [SimpleTest()]
-
-if __name__ == '__main__':
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
-
-# vim:set ts=4 sw=4 sts=4 expandtab:
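A condensed standalone sketch of the two entry points this self-test exercises (not part of the deleted file): the stream-style reader returned by Crypto.Random.new() and the one-shot get_random_bytes helper.

from Crypto import Random
from Crypto.Random import get_random_bytes

stream = Random.new()
chunk = stream.read(16)          # 16 random bytes from the stream object
single = get_random_bytes(16)    # 16 random bytes from the one-shot helper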
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/multilingual/multilingual_data_manager.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/multilingual/multilingual_data_manager.py
deleted file mode 100644
index 876dfcec36e4cf9236c21e440e9657a68036a278..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/multilingual/multilingual_data_manager.py
+++ /dev/null
@@ -1,1156 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import itertools
-import json
-import logging
-import math
-import os
-from collections import OrderedDict, defaultdict
-from argparse import ArgumentError
-
-from fairseq import utils
-from fairseq.data import (
- AppendTokenDataset,
- ConcatDataset,
- Dictionary,
- LanguagePairDataset,
- PrependTokenDataset,
- SampledMultiDataset,
- SampledMultiEpochDataset,
- StripTokenDataset,
- TransformEosLangPairDataset,
- TruncateDataset,
- data_utils,
- indexed_dataset,
-)
-from fairseq.data.multilingual.multilingual_utils import (
- EncoderLangtok,
- LangTokSpec,
- LangTokStyle,
- augment_dictionary,
- get_lang_tok,
-)
-from fairseq.data.multilingual.sampled_multi_dataset import CollateFormat
-from fairseq.file_io import PathManager
-from fairseq.utils import FileContentsAction, csv_str_list, eval_str_dict
-
-
-logger = logging.getLogger(__name__)
-
-SRC_DICT_NAME = "src"
-TGT_DICT_NAME = "tgt"
-
-
-def _lang_id(dic: Dictionary, lang: str):
- """Return language ID index."""
- idx = dic.index(lang)
- assert idx != dic.unk_index, "cannot find language ID for lang {}".format(lang)
- return idx
-
-
-def load_sampling_weights(from_file):
- with open(from_file) as f:
- weights = json.load(f)
- return weights
-
-
-class MultilingualDatasetManager(object):
- def __init__(self, args, lang_pairs, langs, dicts, sampling_method):
- super().__init__()
- self.args = args
- self.seed = args.seed
- self.lang_pairs = lang_pairs
- self.extra_lang_pairs = (
- list({p for _, v in args.extra_lang_pairs.items() for p in v.split(",")})
- if args.extra_lang_pairs
- else []
- )
- self.src_langs = {
- p.split("-")[0] for p in args.lang_pairs + self.extra_lang_pairs
- }
- self.tgt_langs = {
- p.split("-")[1] for p in args.lang_pairs + self.extra_lang_pairs
- }
- self.langs = langs
- self.dicts = dicts
- self.lang_dict = self.create_lang_dictionary(self.langs)
- self.sampling_method = sampling_method
- self.sampling_scheduler = None
- self._has_sharded_data = False
- self._num_shards_dict = {}
- self._training_data_sizes = defaultdict(lambda: {})
-
- @classmethod
- def setup_data_manager(cls, args, lang_pairs, langs, dicts, sampling_method):
- return MultilingualDatasetManager(
- args, lang_pairs, langs, dicts, sampling_method
- )
-
- @staticmethod
- def add_args(parser):
- parser.add_argument(
- "data",
- help="colon separated path to data directories list, \
- will be iterated upon during epochs in round-robin manner",
- action=FileContentsAction,
- )
- parser.add_argument(
- "--langs",
- default=None,
- type=csv_str_list,
- help="a list of languages comma sperated languages which can appear in lang-pairs; "
- "note that the ordering determines language token IDs",
- )
- parser.add_argument(
- "--lang-dict",
- default=None,
- type=str,
- help="an external file which contains a list of "
- "languages which can appear in lang-pairs; "
- "note that the ordering determines language token IDs; "
- "--langs and --lang-dict are two exclusive options",
- )
- parser.add_argument(
- "--source-dict",
- default=None,
- type=str,
- help="path to source dictionary; if specified it will override per language dictionary loading",
- )
- parser.add_argument(
- "--target-dict",
- default=None,
- type=str,
- help="path to target dictionary; if specified it will override per language dictionary loading",
- )
- parser.add_argument(
- "--lang-tok-style",
- default=LangTokStyle.multilingual.value,
- type=str,
- choices=[LangTokStyle.multilingual.value, LangTokStyle.mbart.value],
- help="language token styles",
- )
-
- parser.add_argument(
- "--load-alignments",
- action="store_true",
- help="load the binarized alignments",
- )
- parser.add_argument(
- "--left-pad-source",
- default="True",
- type=str,
- metavar="BOOL",
- help="pad the source on the left",
- )
- parser.add_argument(
- "--left-pad-target",
- default="False",
- type=str,
- metavar="BOOL",
- help="pad the target on the left",
- )
- try:
- parser.add_argument(
- "--max-source-positions",
- default=1024,
- type=int,
- metavar="N",
- help="max number of tokens in the source sequence",
- )
- parser.add_argument(
- "--max-target-positions",
- default=1024,
- type=int,
- metavar="N",
- help="max number of tokens in the target sequence",
- )
- except ArgumentError:
- # this might have already been defined. Once we transition this to hydra it should be fine to add it here.
- pass
- parser.add_argument(
- "--upsample-primary",
- default=1,
- type=int,
- help="amount to upsample primary dataset",
- )
- parser.add_argument(
- "--truncate-source",
- action="store_true",
- default=False,
- help="truncate source to max-source-positions",
- )
- parser.add_argument(
- "--encoder-langtok",
- default=None,
- type=str,
- choices=[EncoderLangtok.src.value, EncoderLangtok.tgt.value],
- metavar="SRCTGT",
- help="prepend to the beginning of source sentence the source or target "
- "language token. (src/tgt)",
- )
- parser.add_argument(
- "--decoder-langtok",
- action="store_true",
- help="prepend to the beginning of target sentence the target language token",
- )
- parser.add_argument(
- "--lang-tok-replacing-bos-eos", action="store_true", default=False
- )
- parser.add_argument(
- "--enable-lang-ids",
- default=False,
- action="store_true",
- help="whether to include language IDs in samples",
- )
- parser.add_argument(
- "--enable-reservsed-directions-shared-datasets",
- default=False,
- action="store_true",
- help="whether to allow datasets be used in reversed directions",
- )
-
- parser.add_argument(
- "--extra-data",
- help='a dictionary mapping each extra data name to its path, \
- e.g. {"mined": path_to_mined_data, "denoised": path_to_denoised_data}',
- type=lambda uf: eval_str_dict(uf, type=str),
- default=None,
- )
- parser.add_argument(
- "--extra-lang-pairs",
- help='a dictionary of data name to the language pairs they serve, \
- e.g. {"mined": comma-separated-lang-pairs, "denoised": comma-separated-lang-pairs}',
- type=lambda uf: eval_str_dict(uf, type=str),
- default=None,
- )
- parser.add_argument(
- "--fixed-dictionary",
- help="Fixed dictionary to use with model path",
- default=None,
- type=str,
- )
- parser.add_argument(
- "--langtoks-specs",
- help='a comma separated list of data types for which a set of language tokens will be specialized, \
- e.g. "main,dae,mined". There will be a set of language tokens added to the vocab to \
- distinguish languages in different training data types. If not specified, default language \
- tokens per languages will be added',
- default=LangTokSpec.main.value,
- type=csv_str_list,
- )
- parser.add_argument(
- "--langtoks",
- help='a dictionary of how to add language tokens, \
- e.g. {"mined": (None, "tgt"), "mono_dae": ("src.dae", "tgt"), "main": \
- ("src", "tgt")}, or {"mined": ("src.mined", "tgt")}',
- default=None,
- type=lambda uf: eval_str_dict(uf, type=str),
- )
- parser.add_argument(
- "--sampling-weights-from-file",
- help='a file containing a python dictionary of how to sample data sets, \
- e.g. { "main:en_XX-es_XX": 0.2, "mined:en_XX-pt_XX": 0.5, \
- "mono_dae:es_XX-es_XX": 0.3, "main:en_xx-fr_XX": 0.8 }',
- default=None,
- type=str,
- )
- parser.add_argument(
- "--sampling-weights",
- help='a dictionary of how to sample data sets, \
- e.g. { "main:en_XX-es_XX": 0.2, "mined:en_XX-pt_XX": 0.5, \
- "mono_dae:es_XX-es_XX: 0.3, "main:en_xx-fr_XX": 0.8 }',
- default=None,
- type=lambda uf: eval_str_dict(uf, type=str),
- )
- parser.add_argument(
- "--virtual-epoch-size",
- default=None,
- type=int,
- help="virtual epoch size to speed up data loading",
- )
- parser.add_argument(
- "--virtual-data-size",
- default=None,
- type=int,
- help="virtual data size of the whole joint dataset to speed"
- "up data loading and have specific dynamic sampling strategy interval",
- )
-
- @classmethod
- def load_langs(cls, args, **kwargs):
- if args.lang_dict and args.langs:
- raise ValueError("--langs and --lang-dict can not both be specified")
- if args.lang_dict is None and args.langs is None:
- logger.warning(
- "External language dictionary is not provided; "
- "use lang-pairs to infer the set of supported languages. "
- "The language ordering is not stable which might cause "
- "misalignment in pretraining and finetuning."
- )
- # infer from lang_pairs as it is
- langs = list(
- {x for lang_pair in args.lang_pairs for x in lang_pair.split("-")}
- )
- langs = sorted(langs)
- logger.info(f"inferred language list: {langs}")
- elif args.lang_dict:
- with open(
- PathManager.get_local_path(args.lang_dict), "r", encoding="utf-8"
- ) as f:
- langs = [lang.strip() for lang in f.readlines() if lang.strip()]
- logger.info(
- f"loaded language list from {args.lang_dict} as they are ordered in file"
- )
- elif args.langs:
- langs = args.langs
- logger.info(
- f"parsed the language list as they are ordered in the option: {langs}"
- )
- return langs
-
- def has_sharded_data(self, split):
- return self._has_sharded_data and split == getattr(
- self.args, "train_subset", None
- )
-
- def _shared_collater(self):
- return not (self.args.extra_data and "mono_dae" in self.args.extra_data) and (
- not self.args.lang_tok_replacing_bos_eos
- )
-
- def estimate_global_pass_epoch(self, epoch):
- if self.args.virtual_epoch_size is None or self.args.virtual_data_size is None:
- return None
- # one epoch more for remaining data in each shard
- virtual_epochs_per_shard = math.ceil(
- self.args.virtual_data_size / self.args.virtual_epoch_size
- )
- # note that fairseq epoch / shard_epoch starts from 1
- shard_epoch = (epoch - 1) // virtual_epochs_per_shard + 1
- return shard_epoch
-
- @classmethod
- def prepare(cls, load_dictionary, args, **kargs):
- args.left_pad_source = utils.eval_bool(args.left_pad_source)
- args.left_pad_target = utils.eval_bool(args.left_pad_target)
-
- if not hasattr(args, "shuffle_instance"):
- args.shuffle_instance = False
- if args.langtoks is None:
- args.langtoks = {}
- if "main" not in args.langtoks:
- src_langtok_spec = args.encoder_langtok if args.encoder_langtok else None
- tgt_langtok_spec = "tgt" if args.decoder_langtok else None
- args.langtoks["main"] = (src_langtok_spec, tgt_langtok_spec)
-
- def check_langs(langs, pairs):
- messages = []
- for src, tgt in pairs:
- if src not in langs or tgt not in langs:
- messages.append(
- f"language pair {src}-{tgt} contains languages "
- "that are not in the language dictionary"
- )
- if len(messages) > 0:
- raise ValueError(" ".join(messages) + f"; langs: {langs}")
-
- if args.lang_pairs is None:
- raise ValueError(
- "--lang-pairs is required. List all the language pairs in the training objective."
- )
- if isinstance(args.lang_pairs, str):
- args.lang_pairs = args.lang_pairs.split(",")
- if args.source_lang is not None or args.target_lang is not None:
- training = False
- else:
- training = True
- language_list = cls.load_langs(args, **kargs)
- check_langs(
- language_list,
- (
- [p.split("-") for p in args.lang_pairs]
- if training
- else [(args.source_lang, args.target_lang)]
- ),
- )
-
- def load_dictionary_and_postproc(path):
- d = load_dictionary(path)
- augment_dictionary(
- dictionary=d,
- language_list=language_list,
- lang_tok_style=args.lang_tok_style,
- langtoks_specs=args.langtoks_specs,
- extra_data=args.extra_data,
- )
- return d
-
- dicts = cls.load_all_dictionaries(
- args, language_list, load_dictionary_and_postproc, training
- )
- return language_list, dicts, training
-
- @classmethod
- def load_all_dictionaries(cls, args, language_list, load_dictionary, training):
- dicts = OrderedDict()
- if args.source_dict is not None:
- dicts[SRC_DICT_NAME] = load_dictionary(args.source_dict)
- if args.target_dict is not None:
- dicts[TGT_DICT_NAME] = load_dictionary(args.target_dict)
-
- if training:
- extra_lang_pairs = (
- list(
- {p for _, v in args.extra_lang_pairs.items() for p in v.split(",")}
- )
- if args.extra_lang_pairs
- else []
- )
- src_langs_to_load_dicts = sorted(
- {p.split("-")[0] for p in (args.lang_pairs + extra_lang_pairs)}
- )
- tgt_langs_to_load_dicts = sorted(
- {p.split("-")[1] for p in (args.lang_pairs + extra_lang_pairs)}
- )
- else:
- src_langs_to_load_dicts = [args.source_lang]
- tgt_langs_to_load_dicts = [args.target_lang]
-
- paths = utils.split_paths(args.data)
- assert len(paths) > 0
-
- def load_dicts(langs_to_load_dicts):
- for lang in langs_to_load_dicts:
- dicts[lang] = load_dictionary(
- os.path.join(paths[0], "dict.{}.txt".format(lang))
- )
- if len(dicts) > 0:
- dict0 = next(iter(dicts.values()))
- assert dicts[lang].pad() == dict0.pad()
- assert dicts[lang].eos() == dict0.eos()
- assert dicts[lang].unk() == dict0.unk()
- logger.info("[{}] dictionary: {} types".format(lang, len(dicts[lang])))
-
- if args.fixed_dictionary is not None:
- fixed_dict = load_dictionary(args.fixed_dictionary)
- dicts = {
- lang: fixed_dict
- for lang in src_langs_to_load_dicts + tgt_langs_to_load_dicts
- }
- else:
- if args.source_dict is None:
- load_dicts(src_langs_to_load_dicts)
- if args.target_dict is None:
- load_dicts(tgt_langs_to_load_dicts)
- return dicts
-
- def get_source_dictionary(self, lang):
- if self.args.source_dict is not None:
- return self.dicts[SRC_DICT_NAME]
- else:
- return self.dicts[lang]
-
- def get_target_dictionary(self, lang):
- if self.args.target_dict is not None:
- return self.dicts[TGT_DICT_NAME]
- else:
- return self.dicts[lang]
-
- @classmethod
- def create_lang_dictionary(cls, langs):
- unk = ""
- # hack to remove symbols other than unk as they are not needed by lang dict
- lang_dict = Dictionary(pad=unk, eos=unk, unk=unk, bos=unk)
- for lang in langs:
- lang_dict.add_symbol(lang)
- return lang_dict
-
- @classmethod
- def get_langtok_index(cls, lang_tok, dic):
- idx = dic.index(lang_tok)
- assert (
- idx != dic.unk_index
- ), "cannot find language token {} in the dictionary".format(lang_tok)
- return idx
-
- def get_encoder_langtok(self, src_lang, tgt_lang, spec=None):
- if spec is None:
- return None
- if spec and spec.startswith("src"):
- if src_lang is None:
- return None
- langtok = get_lang_tok(
- lang=src_lang, lang_tok_style=self.args.lang_tok_style, spec=spec
- )
- else:
- if tgt_lang is None:
- return None
- langtok = get_lang_tok(
- lang=tgt_lang, lang_tok_style=self.args.lang_tok_style, spec=spec
- )
- return self.get_langtok_index(
- langtok,
- self.get_source_dictionary(src_lang)
- if src_lang
- else self.get_target_dictionary(tgt_lang),
- )
-
- def get_decoder_langtok(self, tgt_lang, spec=None):
- if spec is None:
- return None
- langtok = get_lang_tok(
- lang=tgt_lang, lang_tok_style=self.args.lang_tok_style, spec=spec
- )
- return self.get_langtok_index(langtok, self.get_target_dictionary(tgt_lang))
-
- @classmethod
- def load_data(cls, path, vdict, impl):
- dataset = data_utils.load_indexed_dataset(path, vdict, impl)
- return dataset
-
- @classmethod
- def split_exists(cls, split, src, tgt, lang, data_path, dataset_impl):
- filename = os.path.join(data_path, "{}.{}-{}.{}".format(split, src, tgt, lang))
- return indexed_dataset.dataset_exists(filename, impl=dataset_impl)
-
- def load_lang_dataset(
- self,
- data_path,
- split,
- src,
- src_dict,
- tgt,
- tgt_dict,
- combine,
- dataset_impl,
- upsample_primary,
- max_source_positions,
- prepend_bos=False,
- load_alignments=False,
- truncate_source=False,
- ):
-
- src_datasets = []
- tgt_datasets = []
-
- for k in itertools.count():
- split_k = split + (str(k) if k > 0 else "")
-
- # infer langcode
- if self.split_exists(split_k, src, tgt, src, data_path, dataset_impl):
- prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, src, tgt))
- elif self.split_exists(split_k, tgt, src, src, data_path, dataset_impl):
- prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, tgt, src))
- else:
- if k > 0:
- break
- else:
- logger.error(
- f"Dataset not found: {data_path}, {split_k}, {src}, {tgt}"
- )
- raise FileNotFoundError(
- "Dataset not found: {} ({})".format(split, data_path)
- )
-
- src_dataset = self.load_data(prefix + src, src_dict, dataset_impl)
- if truncate_source:
- src_dataset = AppendTokenDataset(
- TruncateDataset(
- StripTokenDataset(src_dataset, src_dict.eos()),
- max_source_positions - 1,
- ),
- src_dict.eos(),
- )
- src_datasets.append(src_dataset)
- tgt_datasets.append(self.load_data(prefix + tgt, tgt_dict, dataset_impl))
-
- logger.info(
- "{} {} {}-{} {} examples".format(
- data_path, split_k, src, tgt, len(src_datasets[-1])
- )
- )
-
- if not combine:
- break
-
- assert len(src_datasets) == len(tgt_datasets)
-
- if len(src_datasets) == 1:
- src_dataset, tgt_dataset = src_datasets[0], tgt_datasets[0]
- else:
- sample_ratios = [1] * len(src_datasets)
- sample_ratios[0] = upsample_primary
- src_dataset = ConcatDataset(src_datasets, sample_ratios)
- tgt_dataset = ConcatDataset(tgt_datasets, sample_ratios)
-
- if prepend_bos:
- assert hasattr(src_dict, "bos_index") and hasattr(tgt_dict, "bos_index")
- src_dataset = PrependTokenDataset(src_dataset, src_dict.bos())
- tgt_dataset = PrependTokenDataset(tgt_dataset, tgt_dict.bos())
-
- align_dataset = None
- if load_alignments:
- align_path = os.path.join(
- data_path, "{}.align.{}-{}".format(split, src, tgt)
- )
- if indexed_dataset.dataset_exists(align_path, impl=dataset_impl):
- align_dataset = data_utils.load_indexed_dataset(
- align_path, None, dataset_impl
- )
-
- return src_dataset, tgt_dataset, align_dataset
-
- def load_langpair_dataset(
- self,
- data_path,
- split,
- src,
- src_dict,
- tgt,
- tgt_dict,
- combine,
- dataset_impl,
- upsample_primary,
- left_pad_source,
- left_pad_target,
- max_source_positions,
- max_target_positions,
- prepend_bos=False,
- load_alignments=False,
- truncate_source=False,
- src_dataset_transform_func=lambda dataset: dataset,
- tgt_dataset_transform_func=lambda dataset: dataset,
- src_lang_id=None,
- tgt_lang_id=None,
- langpairs_sharing_datasets=None,
- ):
- norm_direction = "-".join(sorted([src, tgt]))
- if langpairs_sharing_datasets is not None:
- src_dataset = langpairs_sharing_datasets.get(
- (data_path, split, norm_direction, src), "NotInCache"
- )
- tgt_dataset = langpairs_sharing_datasets.get(
- (data_path, split, norm_direction, tgt), "NotInCache"
- )
- align_dataset = langpairs_sharing_datasets.get(
- (data_path, split, norm_direction, src, tgt), "NotInCache"
- )
-
-        # a hack: if any one of them is not in cache, we need to reload them all
- if (
- langpairs_sharing_datasets is None
- or src_dataset == "NotInCache"
- or tgt_dataset == "NotInCache"
- or align_dataset == "NotInCache"
- or split != getattr(self.args, "train_subset", None)
- ):
- # source and target datasets can be reused in reversed directions to save memory
- # reversed directions of valid and test data will not share source and target datasets
- src_dataset, tgt_dataset, align_dataset = self.load_lang_dataset(
- data_path,
- split,
- src,
- src_dict,
- tgt,
- tgt_dict,
- combine,
- dataset_impl,
- upsample_primary,
- max_source_positions=max_source_positions,
- prepend_bos=prepend_bos,
- load_alignments=load_alignments,
- truncate_source=truncate_source,
- )
- src_dataset = src_dataset_transform_func(src_dataset)
- tgt_dataset = tgt_dataset_transform_func(tgt_dataset)
- if langpairs_sharing_datasets is not None:
- langpairs_sharing_datasets[
- (data_path, split, norm_direction, src)
- ] = src_dataset
- langpairs_sharing_datasets[
- (data_path, split, norm_direction, tgt)
- ] = tgt_dataset
- langpairs_sharing_datasets[
- (data_path, split, norm_direction, src, tgt)
- ] = align_dataset
- if align_dataset is None:
- # no align data so flag the reverse direction as well in sharing
- langpairs_sharing_datasets[
- (data_path, split, norm_direction, tgt, src)
- ] = align_dataset
- else:
- logger.info(
- f"Reusing source and target datasets of [{split}] {tgt}-{src} for reversed direction: "
- f"[{split}] {src}-{tgt}: src length={len(src_dataset)}; tgt length={len(tgt_dataset)}"
- )
-
- return LanguagePairDataset(
- src_dataset,
- src_dataset.sizes,
- src_dict,
- tgt_dataset,
- tgt_dataset.sizes if tgt_dataset is not None else None,
- tgt_dict,
- left_pad_source=left_pad_source,
- left_pad_target=left_pad_target,
- align_dataset=align_dataset,
- src_lang_id=src_lang_id,
- tgt_lang_id=tgt_lang_id,
- )
-
- def src_dataset_tranform_func(self, src_lang, tgt_lang, dataset, spec=None):
- if self.args.lang_tok_replacing_bos_eos:
- # it is handled by self.alter_dataset_langtok
-            # TODO: Unify with alter_dataset_langtok
- return dataset
- if spec is None:
- return dataset
- tok = self.get_encoder_langtok(src_lang, tgt_lang, spec)
- if tok:
- return PrependTokenDataset(dataset, tok)
- return dataset
-
- def tgt_dataset_tranform_func(self, source_lang, target_lang, dataset, spec=None):
- if dataset is None:
- # note that target dataset can be None during inference time
- return None
- if self.args.lang_tok_replacing_bos_eos:
-            # TODO: Unify with alter_dataset_langtok
- # It is handled by self.alter_dataset_langtok.
- # The complication in self.alter_dataset_langtok
- # makes a unified framework difficult.
- return dataset
- # if not self.args.decoder_langtok:
- if not spec:
- return dataset
- tok = self.get_decoder_langtok(target_lang, spec)
- if tok:
- return PrependTokenDataset(dataset, tok)
- return dataset
-
- def alter_dataset_langtok(
- self,
- lang_pair_dataset,
- src_eos=None,
- src_lang=None,
- tgt_eos=None,
- tgt_lang=None,
- src_langtok_spec=None,
- tgt_langtok_spec=None,
- ):
- if src_langtok_spec is None and tgt_langtok_spec is None:
- return lang_pair_dataset
-
- new_src_eos = None
- if (
- src_langtok_spec is not None
- and src_eos is not None
- and (src_lang is not None or tgt_lang is not None)
- ):
- new_src_eos = self.get_encoder_langtok(src_lang, tgt_lang, src_langtok_spec)
- else:
- src_eos = None
-
- new_tgt_bos = None
- if tgt_langtok_spec and tgt_eos is not None and tgt_lang is not None:
- new_tgt_bos = self.get_decoder_langtok(tgt_lang, tgt_langtok_spec)
- else:
- tgt_eos = None
-
- return TransformEosLangPairDataset(
- lang_pair_dataset,
- src_eos=src_eos,
- new_src_eos=new_src_eos,
- tgt_bos=tgt_eos,
- new_tgt_bos=new_tgt_bos,
- )
-
- def load_a_dataset(
- self,
- split,
- data_path,
- src,
- src_dict,
- tgt,
- tgt_dict,
- combine,
- prepend_bos=False,
- langpairs_sharing_datasets=None,
- data_category=None,
- **extra_kwargs,
- ):
- dataset_impl = self.args.dataset_impl
- upsample_primary = self.args.upsample_primary
- left_pad_source = self.args.left_pad_source
- left_pad_target = self.args.left_pad_target
- max_source_positions = self.args.max_source_positions
- max_target_positions = self.args.max_target_positions
- load_alignments = self.args.load_alignments
- truncate_source = self.args.truncate_source
- src_dataset_transform_func = self.src_dataset_tranform_func
- tgt_dataset_transform_func = self.tgt_dataset_tranform_func
- enable_lang_ids = self.args.enable_lang_ids
- lang_dictionary = self.lang_dict
- src_langtok_spec, tgt_langtok_spec = extra_kwargs["langtok_spec"]
-
- src_langtok = self.get_encoder_langtok(src, tgt, src_langtok_spec)
- tgt_langtok = self.get_decoder_langtok(tgt, tgt_langtok_spec)
- logger.info(
- f"{data_category}:{src}-{tgt} src_langtok: {src_langtok}; tgt_langtok: {tgt_langtok}"
- )
-
- langpair_ds = self.load_langpair_dataset(
- data_path,
- split,
- src,
- src_dict,
- tgt,
- tgt_dict,
- combine,
- dataset_impl,
- upsample_primary,
- left_pad_source,
- left_pad_target,
- max_source_positions,
- max_target_positions,
- prepend_bos,
- load_alignments,
- truncate_source,
- src_dataset_transform_func=lambda dataset: src_dataset_transform_func(
- src, tgt, dataset, src_langtok_spec
- ),
- tgt_dataset_transform_func=lambda dataset: tgt_dataset_transform_func(
- src, tgt, dataset, tgt_langtok_spec
- ),
- src_lang_id=_lang_id(lang_dictionary, src)
- if enable_lang_ids and lang_dictionary is not None
- else None,
- tgt_lang_id=_lang_id(lang_dictionary, tgt)
- if enable_lang_ids and lang_dictionary is not None
- else None,
- langpairs_sharing_datasets=langpairs_sharing_datasets,
- )
- # TODO: handle modified lang toks for mined data and dae data
- if self.args.lang_tok_replacing_bos_eos:
- ds = self.alter_dataset_langtok(
- langpair_ds,
- src_eos=self.get_source_dictionary(src).eos()
- if src
- else self.get_target_dictionary(tgt).eos(),
- src_lang=src,
- tgt_eos=self.get_target_dictionary(tgt).eos(),
- tgt_lang=tgt,
- src_langtok_spec=src_langtok_spec,
- tgt_langtok_spec=tgt_langtok_spec,
- )
- else:
- ds = langpair_ds
- return ds
-
- def load_split_langpair_datasets(self, split, data_param_list):
- datasets = []
- langpairs_sharing_datasets = (
- {} if self.args.enable_reservsed_directions_shared_datasets else None
- )
- for param in data_param_list:
- ds = self.load_a_dataset(
- split=split,
- langpairs_sharing_datasets=langpairs_sharing_datasets,
- **param,
- )
- datasets.append(ds)
- return datasets
-
- def get_data_paths_and_lang_pairs(self, split):
- datapaths = {"main": self.args.data}
- lang_pairs = {"main": self.lang_pairs}
- if split == getattr(self.args, "train_subset", None):
- # only training data can have extra data and extra language pairs
- if self.args.extra_data:
- extra_datapaths = self.args.extra_data
- datapaths.update(extra_datapaths)
- if self.args.extra_lang_pairs:
- extra_lang_pairs = {
- k: v.split(",") for k, v in self.args.extra_lang_pairs.items()
- }
- lang_pairs.update(extra_lang_pairs)
- return datapaths, lang_pairs
-
- @classmethod
- def get_dataset_key(cls, data_category, src, tgt):
- return f"{data_category}:{src}-{tgt}"
-
- @classmethod
- def _get_shard_num_dict(cls, split, paths):
- shards = defaultdict(int)
- for path in paths:
- files = PathManager.ls(path)
- directions = set()
- for f in files:
- if f.startswith(split) and f.endswith(".idx"):
- # idx files of the form "{split}.{src}-{tgt}.{lang}.idx"
- direction = f.split(".")[-3]
- directions.add(direction)
- for direction in directions:
- shards[direction] += 1
- return shards
-
- def get_split_num_data_shards(self, split):
- if split in self._num_shards_dict:
- return self._num_shards_dict[split]
- num_shards_dict = {}
- data_paths, lang_pairs = self.get_data_paths_and_lang_pairs(split)
-
- for data_category, paths in data_paths.items():
- if data_category not in lang_pairs:
- continue
- paths = utils.split_paths(paths)
- shards_dict = self._get_shard_num_dict(split, paths)
- lang_dirs = [
- lang_pair.split("-") for lang_pair in lang_pairs[data_category]
- ]
- lang_dirs = [x if len(x) > 1 else (x[0], x[0]) for x in lang_dirs]
- for src, tgt in lang_dirs:
- key = self.get_dataset_key(data_category, src, tgt)
- if "mono_" in data_category:
- # monolingual data requires tgt only
- assert src is None or src == tgt, (
- f"error: src={src}, "
- f"tgt={tgt} for data_category={data_category}"
- )
- num_shards_dict[key] = shards_dict[tgt]
- else:
- if f"{src}-{tgt}" in shards_dict:
- num_shards_dict[key] = shards_dict[f"{src}-{tgt}"]
- elif f"{tgt}-{src}" in shards_dict:
-                    # follow the fairseq convention of using reversed-direction data when the forward direction is unavailable
- num_shards_dict[key] = shards_dict[f"{tgt}-{src}"]
- self._num_shards_dict[split] = num_shards_dict
- logger.info(f"[{split}] num of shards: {num_shards_dict}")
- return num_shards_dict
-
- @classmethod
- def get_shard_id(cls, num_shards, epoch, shard_epoch=None):
- shard = epoch if shard_epoch is None else shard_epoch
- shard = (shard - 1) % num_shards
- return shard
-
- def get_split_data_path(self, paths, epoch, shard_epoch, num_shards):
- path = paths[self.get_shard_id(num_shards, epoch, shard_epoch)]
- return path
-
- def get_split_data_param_list(self, split, epoch, shard_epoch=None):
- # TODO: to extend with extra datasets and keys and loop over different shard data paths
- param_list = []
- data_paths, lang_pairs = self.get_data_paths_and_lang_pairs(split)
- logger.info(f"langtoks settings: {self.args.langtoks}")
- split_num_shards_dict = self.get_split_num_data_shards(split)
- for data_category, paths in data_paths.items():
- if data_category not in lang_pairs:
- continue
- paths = utils.split_paths(paths)
- assert len(paths) > 0
- if len(paths) > 1:
- self._has_sharded_data = True
- if split != getattr(self.args, "train_subset", None):
-                    # if this is not the training set, use only the first shard for valid and test
- paths = paths[:1]
-
- if data_category in self.args.langtoks:
- lang_tok_spec = self.args.langtoks[data_category]
- else:
- # default to None
- lang_tok_spec = (None, None)
-
- # infer langcode
- lang_dirs = [
- lang_pair.split("-") for lang_pair in lang_pairs[data_category]
- ]
- lang_dirs = [x if len(x) > 1 else (x[0], x[0]) for x in lang_dirs]
- for src, tgt in lang_dirs:
- assert src is not None or data_category == "mono_dae", (
- f"error: src={src}, " f"tgt={tgt} for data_category={data_category}"
- )
- # logger.info(f"preparing param for {data_category}: {src} - {tgt}")
- key = self.get_dataset_key(data_category, src, tgt)
- data_path = self.get_split_data_path(
- paths, epoch, shard_epoch, split_num_shards_dict[key]
- )
- param_list.append(
- {
- "key": key,
- "data_path": data_path,
- "split": split,
- "src": src,
- "src_dict": self.get_source_dictionary(src)
- if src and data_category != "mono_dae"
- else None,
- "tgt": tgt,
- "tgt_dict": self.get_target_dictionary(tgt),
- "data_category": data_category,
- "langtok_spec": lang_tok_spec,
- }
- )
- return param_list
-
- def get_train_dataset_sizes(
- self, data_param_list, datasets, epoch, shard_epoch=None
- ):
- num_shards = [
- self.get_split_num_data_shards(param["split"])[param["key"]]
- for param in data_param_list
- ]
- data_sizes = []
- for (key, d), num_shard in zip(datasets, num_shards):
- my_data_sizes = self._training_data_sizes[key]
- shard_ind = self.get_shard_id(num_shard, epoch, shard_epoch)
- if shard_ind not in my_data_sizes:
- my_data_sizes[shard_ind] = len(d)
- known_size = max(my_data_sizes.values())
- data_sizes.append(
-                # If we don't know the data size of the shard yet,
-                # use the max known data size to approximate.
-                # Note that shards are preprocessed to a designated shard size,
-                # with any remaining data placed into the last shard, so this
-                # approximation is close until the last shard is loaded;
-                # once it is, the exact total data size is known.
- (key, sum(my_data_sizes.get(i, known_size) for i in range(num_shard)))
- )
- logger.info(
- f"estimated total data sizes of all shards used in sampling ratios: {data_sizes}. "
-            "Note that if a shard's data has not been loaded yet, the max known data size is used as an approximation"
- )
- return [s for _, s in data_sizes]
-
- def get_train_sampling_ratios(
- self, data_param_list, datasets, epoch=1, shard_epoch=None
- ):
- data_sizes = self.get_train_dataset_sizes(
- data_param_list, datasets, epoch, shard_epoch
- )
- sampling_func = self.sampling_method.sampling_method_selector()
- sample_ratios = sampling_func(data_sizes) if sampling_func is not None else None
- return sample_ratios
-
- def get_sampling_ratios(self, data_param_list, datasets, epoch, shard_epoch=None):
- if self.args.sampling_weights_from_file:
- weights = load_sampling_weights(self.args.sampling_weights_from_file)
- sample_ratios = [weights[k] for k, _ in datasets]
- logger.info(
-                "| ignoring --sampling-weights when loading sampling weights "
- f"from file {self.args.sampling_weights_from_file}"
- )
- elif self.args.sampling_weights:
- sample_ratios = [self.args.sampling_weights[k] for k, _ in datasets]
- else:
- sample_ratios = self.get_train_sampling_ratios(
- data_param_list, datasets, epoch, shard_epoch
- )
-
- if sample_ratios is not None:
- logger.info(
- "| Upsample ratios: {}".format(
- list(zip(map(lambda x: x["key"], data_param_list), sample_ratios))
- )
- )
- assert len(sample_ratios) == len(datasets)
- return sample_ratios
-
- def load_split_datasets(
- self, split, training, epoch=1, combine=False, shard_epoch=None, **kwargs
- ):
- data_param_list = self.get_split_data_param_list(
- split, epoch, shard_epoch=shard_epoch
- )
- langpairs_sharing_datasets = (
- {} if self.args.enable_reservsed_directions_shared_datasets else None
- )
- datasets = [
- (
- param["key"],
- self.load_a_dataset(
- combine=combine,
- langpairs_sharing_datasets=langpairs_sharing_datasets,
- **param,
- ),
- )
- for param in data_param_list
- ]
- return datasets, data_param_list
-
- def load_into_concat_dataset(self, split, datasets, data_param_list):
- if self.args.lang_tok_replacing_bos_eos:
- # TODO: to investigate why TransformEosLangPairDataset doesn't work with ConcatDataset
- return SampledMultiDataset(
- OrderedDict(datasets),
- sampling_ratios=None,
- eval_key=None,
- collate_format=CollateFormat.single,
- virtual_size=None,
- split=split,
- )
- return ConcatDataset([d for _, d in datasets])
-
- def load_sampled_multi_epoch_dataset(
- self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs
- ):
- datasets, data_param_list = self.load_split_datasets(
- split, training, epoch, combine, shard_epoch=shard_epoch, **kwargs
- )
- if training and split == getattr(self.args, "train_subset", None):
- sample_ratios = self.get_sampling_ratios(data_param_list, datasets, epoch)
- return SampledMultiEpochDataset(
- OrderedDict(datasets),
- epoch=epoch,
- shard_epoch=shard_epoch,
-                # valid and test datasets will degenerate to concatenated datasets:
- sampling_ratios=sample_ratios,
- eval_key=None,
- collate_format=CollateFormat.single,
- virtual_size=self.args.virtual_data_size,
- split=split,
- virtual_epoch_size=self.args.virtual_epoch_size,
- # if not using lang_tok altering, simplified to use the same collater
- shared_collater=self._shared_collater(),
- )
- else:
- return self.load_into_concat_dataset(split, datasets, data_param_list)
-
- def load_sampled_multi_dataset(
- self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs
- ):
- datasets, data_param_list = self.load_split_datasets(
- split, training, epoch, combine, shard_epoch=shard_epoch, **kwargs
- )
- if training and split == getattr(self.args, "train_subset", None):
- sample_ratios = self.get_sampling_ratios(data_param_list, datasets, epoch)
- return SampledMultiDataset(
- OrderedDict(datasets),
- epoch=epoch,
-                # valid and test datasets will degenerate to concatenated datasets:
- sampling_ratios=sample_ratios,
- eval_key=None,
- collate_format=CollateFormat.single,
- virtual_size=self.args.virtual_data_size,
- split=split,
- # if not using lang_tok altering, simplified to use the same collater
- shared_collater=self._shared_collater(),
- )
- else:
- return self.load_into_concat_dataset(split, datasets, data_param_list)
-
- def load_dataset(
- self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs
- ):
- if self.args.virtual_epoch_size is None:
- return self.load_sampled_multi_dataset(
- split, training, epoch, combine, shard_epoch, **kwargs
- )
- else:
- return self.load_sampled_multi_epoch_dataset(
- split, training, epoch, combine, shard_epoch, **kwargs
- )
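
The shard handling above rotates through data shards by epoch via `get_shard_id`; a minimal standalone sketch of that rotation (a self-contained illustration, not code from the deleted module):

```python
from typing import Optional

def get_shard_id(num_shards: int, epoch: int, shard_epoch: Optional[int] = None) -> int:
    # Mirrors the logic above: pick one shard per (shard-)epoch, cycling round-robin.
    shard = epoch if shard_epoch is None else shard_epoch
    return (shard - 1) % num_shards

# With 3 shards, epochs 1..6 visit shards 0, 1, 2, 0, 1, 2,
# so every shard is seen once before any is repeated.
print([get_shard_id(3, epoch) for epoch in range(1, 7)])  # [0, 1, 2, 0, 1, 2]
```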
diff --git a/spaces/ashercn97/AsherTesting/modules/models_settings.py b/spaces/ashercn97/AsherTesting/modules/models_settings.py
deleted file mode 100644
index 3f37e48db61843bbe1fdfca321850b671a3ce7b8..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/modules/models_settings.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import re
-from pathlib import Path
-
-import yaml
-
-from modules import shared, ui
-
-
-def get_model_settings_from_yamls(model):
- settings = shared.model_config
- model_settings = {}
- for pat in settings:
- if re.match(pat.lower(), model.lower()):
- for k in settings[pat]:
- model_settings[k] = settings[pat][k]
-
- return model_settings
-
-
-def infer_loader(model_name):
- path_to_model = Path(f'{shared.args.model_dir}/{model_name}')
- model_settings = get_model_settings_from_yamls(model_name)
- if not path_to_model.exists():
- loader = None
- elif Path(f'{shared.args.model_dir}/{model_name}/quantize_config.json').exists() or ('wbits' in model_settings and type(model_settings['wbits']) is int and model_settings['wbits'] > 0):
- loader = 'AutoGPTQ'
- elif len(list(path_to_model.glob('*ggml*.bin'))) > 0:
- loader = 'llama.cpp'
-    elif re.match(r'.*ggml.*\.bin', model_name.lower()):
-        loader = 'llama.cpp'
-    elif re.match(r'.*rwkv.*\.pth', model_name.lower()):
- loader = 'RWKV'
- elif shared.args.flexgen:
- loader = 'FlexGen'
- else:
- loader = 'Transformers'
-
- return loader
-
-
-# UI: update the command-line arguments based on the interface values
-def update_model_parameters(state, initial=False):
- elements = ui.list_model_elements() # the names of the parameters
- gpu_memories = []
-
- for i, element in enumerate(elements):
- if element not in state:
- continue
-
- value = state[element]
- if element.startswith('gpu_memory'):
- gpu_memories.append(value)
- continue
-
- if initial and vars(shared.args)[element] != vars(shared.args_defaults)[element]:
- continue
-
- # Setting null defaults
- if element in ['wbits', 'groupsize', 'model_type'] and value == 'None':
- value = vars(shared.args_defaults)[element]
- elif element in ['cpu_memory'] and value == 0:
- value = vars(shared.args_defaults)[element]
-
- # Making some simple conversions
- if element in ['wbits', 'groupsize', 'pre_layer']:
- value = int(value)
- elif element == 'cpu_memory' and value is not None:
- value = f"{value}MiB"
-
- if element in ['pre_layer']:
- value = [value] if value > 0 else None
-
- setattr(shared.args, element, value)
-
- found_positive = False
- for i in gpu_memories:
- if i > 0:
- found_positive = True
- break
-
- if not (initial and vars(shared.args)['gpu_memory'] != vars(shared.args_defaults)['gpu_memory']):
- if found_positive:
- shared.args.gpu_memory = [f"{i}MiB" for i in gpu_memories]
- else:
- shared.args.gpu_memory = None
-
-
-# UI: update the state variable with the model settings
-def apply_model_settings_to_state(model, state):
- model_settings = get_model_settings_from_yamls(model)
- if 'loader' not in model_settings:
- loader = infer_loader(model)
- if 'wbits' in model_settings and type(model_settings['wbits']) is int and model_settings['wbits'] > 0:
- loader = 'AutoGPTQ'
-
- # If the user is using an alternative GPTQ loader, let them keep using it
- if not (loader == 'AutoGPTQ' and state['loader'] in ['GPTQ-for-LLaMa', 'ExLlama', 'ExLlama_HF']):
- state['loader'] = loader
-
- for k in model_settings:
- if k in state:
- if k in ['wbits', 'groupsize']:
- state[k] = str(model_settings[k])
- else:
- state[k] = model_settings[k]
-
- return state
-
-
-# Save the settings for this model to models/config-user.yaml
-def save_model_settings(model, state):
- if model == 'None':
- yield ("Not saving the settings because no model is loaded.")
- return
-
-    p = Path(f'{shared.args.model_dir}/config-user.yaml')
-    if p.exists():
-        user_config = yaml.safe_load(p.read_text())
-    else:
-        user_config = {}
-
- model_regex = model + '$' # For exact matches
- for _dict in [user_config, shared.model_config]:
- if model_regex not in _dict:
- _dict[model_regex] = {}
-
- if model_regex not in user_config:
- user_config[model_regex] = {}
-
- for k in ui.list_model_elements():
- user_config[model_regex][k] = state[k]
- shared.model_config[model_regex][k] = state[k]
-
- with open(p, 'w') as f:
- f.write(yaml.dump(user_config, sort_keys=False))
-
- yield (f"Settings for {model} saved to {p}")
diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Triveni Putti.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Triveni Putti.html
deleted file mode 100644
index 004023e3898ea51f90bdda75a5d21be9d4aeea7b..0000000000000000000000000000000000000000
--- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Triveni Putti.html
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
-
- Triveni Putti
-
-
-
-
-
-
Triveni Putti
-
-
-
How did you hear about SM?
Looking at a friend's LI profile and saw they were a mentor
Sounds like a very good opportunity, been doing it informally, why not help people and grow connections
Brief background
Currently DS at Shopify (been there 3 months)
Previously DS at (Brunswick corp) for 3.5 years
They sell boats
ML and predictive maintenance on boats
during Master's had an internship at Brunswick
previously business analyst at company in India (SAS and R)
Mentorship exp
In 2022, started looking for a new job
after she was accepted at Shopify, her husband started looking (also a DS)
help him prep for interviews (particularly around resources for stats/prob/math/product sense)
Share what did not work out for me
Other friends, also interviewing, shared her notes and resources
What do beginners need and how can you help?
Role of a mentor? and how can you help
1. People don't know what to expect. I have done a lot of interviews at FANG companies and can help set expectations
2. People do not know the right resources. So I can share my resources
3. Can help people understand DS and stats concepts
Mock interviews! interview etiquette
finding the right roles, knowing where to apply, and what to expect
Cheat sheets for interviews
-
- Questions about SM:
How does the process work?
How quickly does the match happen?
How big is the platform?
What's the average profile?
How long do the mentorships last?
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/augmentedimaginationhackathon/paperstocode/promptStuff/main.py b/spaces/augmentedimaginationhackathon/paperstocode/promptStuff/main.py
deleted file mode 100644
index 65527346985364a83c1ee73a9fa1eff9d9cae361..0000000000000000000000000000000000000000
--- a/spaces/augmentedimaginationhackathon/paperstocode/promptStuff/main.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from retrieval.main import get_context
-from prompts import layoutPrompt
-
-# use langchain and ingested paper to query for what the jupyter notebook layout should look like
-# create function decorator
-def getLayout(arxiv_link: str):
- """
- getLayout(url: str) -> layout: list[str]
- """
- return get_context(arxiv_link, layoutPrompt)
-
-## for each portion of the layout, generate a simple prompt that can be used to query langchain
-def getSectionPrompt(section: str):
- """
- getSectionPrompt(section: str) -> prompt: str
- """
- return
-
-### for each section of the layout, query langchain to get the portions of the paper that are most relevant to that code section
-def getSectionContext(prompt: str):
- """
- getSectionContext(prompt: str) -> context: str
- """
- return
-
-#### for each code section and provided context, generate the code for that section
-def getSectionCode(section: str, context: str):
- """
- getSectionCode(section: str, context: str) -> code: str
- """
- return
-
-##### for each code section, check formatting, correctness, etc.
-def checkSectionCode(code: str):
- """
- checkSectionCode(code: str) -> code: str
- """
- return
-
-# stitch together and markup all of the code sections to create the final jupyter notebook
-def stitchNotebook(layout: list[str], code: list[str]):
- """
- stitchNotebook(layout: list[str], code: list[str]) -> notebook: str
- """
- return
-
-# check formatting, correctness, etc. of the final jupyter notebook
-def checkNotebook(notebook: str):
- """
- checkNotebook(notebook: str) -> notebook: str
- """
- return
-
-# save the stitched together code as a jupyter notebook
-def saveNotebook(notebook: str):
- """
- saveNotebook(notebook: str) -> none
- """
- return
-
-
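
The stubs above outline a layout-to-notebook pipeline; one possible shape for the final stitching and saving steps, sketched here with nbformat (an assumption about the intended output format, not the hackathon team's implementation):

```python
import nbformat

def stitch_notebook(layout: list[str], code: list[str]) -> nbformat.NotebookNode:
    # Pair each layout section with its generated code in a fresh notebook.
    nb = nbformat.v4.new_notebook()
    for section, source in zip(layout, code):
        nb.cells.append(nbformat.v4.new_markdown_cell(f"## {section}"))
        nb.cells.append(nbformat.v4.new_code_cell(source))
    return nb

def save_notebook(nb: nbformat.NotebookNode, path: str = "paper_to_code.ipynb") -> None:
    # Write the assembled notebook to disk in the standard .ipynb JSON format.
    with open(path, "w") as f:
        nbformat.write(nb, f)
```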
diff --git a/spaces/awacke1/AskMeAnythingSemanticSearch/app.py b/spaces/awacke1/AskMeAnythingSemanticSearch/app.py
deleted file mode 100644
index 15a470c0f296eae4504e42cb8bb6629780d80283..0000000000000000000000000000000000000000
--- a/spaces/awacke1/AskMeAnythingSemanticSearch/app.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import streamlit as st
-import pandas as pd
-import numpy as np
-import pickle
-from huggingface_hub import hf_hub_download
-from sentence_transformers import SentenceTransformer, util
-from langdetect import detect
-import plotly.express as px
-from collections import Counter
-
-# sidebar
-with st.sidebar:
- st.header("Examples:")
-    st.markdown("This search finds content in Medium articles.")
-
-
-# main content
-st.header("Semantic Search Engine on [Medium](https://medium.com/) articles")
-st.markdown("This is a small demo project of a semantic search engine over a dataset of ~190k Medium articles.")
-
-st_placeholder_loading = st.empty()
-st_placeholder_loading.text('Loading medium articles data...')
-
-@st.cache(allow_output_mutation=True)
-def load_data():
- df_articles = pd.read_csv(hf_hub_download("fabiochiu/medium-articles", repo_type="dataset", filename="medium_articles_no_text.csv"))
- corpus_embeddings = pickle.load(open(hf_hub_download("fabiochiu/medium-articles", repo_type="dataset", filename="medium_articles_embeddings.pickle"), "rb"))
- embedder = SentenceTransformer('all-MiniLM-L6-v2')
- return df_articles, corpus_embeddings, embedder
-
-df_articles, corpus_embeddings, embedder = load_data()
-st_placeholder_loading.empty()
-
-n_top_tags = 20
-@st.cache()
-def load_chart_top_tags():
-    # Occurrences of the n_top_tags most frequent tags
- all_tags = [tag for tags_list in df_articles["tags"] for tag in eval(tags_list)]
- d_tags_counter = Counter(all_tags)
- tags, frequencies = list(zip(*d_tags_counter.most_common(n=n_top_tags)))
- fig = px.bar(x=tags, y=frequencies)
- fig.update_xaxes(title="tags")
- fig.update_yaxes(title="frequencies")
- return fig
-
-fig_top_tags = load_chart_top_tags()
-
-st_query = st.text_input("Write your query here", max_chars=100)
-
-def on_click_search():
- if st_query != "":
- query_embedding = embedder.encode(st_query, convert_to_tensor=True)
- top_k = 10
- hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k*2)[0]
- article_dicts = []
- for hit in hits:
- score = hit['score']
- article_row = df_articles.iloc[hit['corpus_id']]
- try:
- detected_lang = detect(article_row["title"])
-            except Exception:
- detected_lang = ""
- if detected_lang == "en" and len(article_row["title"]) >= 10:
- article_dicts.append({
- "title": article_row['title'],
- "url": article_row['url'],
- "score": score
- })
- if len(article_dicts) >= top_k:
- break
- st.session_state.article_dicts = article_dicts
- st.session_state.empty_query = False
- else:
- st.session_state.article_dicts = []
- st.session_state.empty_query = True
-st.button("Search", on_click=on_click_search)
-if st_query != "":
- st.session_state.empty_query = False
- on_click_search()
-else:
- st.session_state.empty_query = True
-
-if not st.session_state.empty_query:
- st.markdown("### Results")
- st.markdown("*Scores between parentheses represent the similarity between the article and the query.*")
- for article_dict in st.session_state.article_dicts:
- st.markdown(f"""- [{article_dict['title'].capitalize()}]({article_dict['url']}) ({article_dict['score']:.2f})""")
-elif st.session_state.empty_query and "article_dicts" in st.session_state:
- st.markdown("Please write a query and then press the search button.")
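
The retrieval above relies on `util.semantic_search` over precomputed embeddings; a self-contained toy example of the same call (the corpus strings below are illustrative only):

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer('all-MiniLM-L6-v2')
corpus = [
    "How to train a neural network in PyTorch",
    "A beginner's guide to sourdough bread",
    "Scaling transformers to long documents",
]
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

query_embedding = embedder.encode("deep learning tutorials", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    # Each hit holds the corpus index and a cosine-similarity score.
    print(f"{corpus[hit['corpus_id']]} ({hit['score']:.2f})")
```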
diff --git a/spaces/awacke1/StreamlitCalendar/README.md b/spaces/awacke1/StreamlitCalendar/README.md
deleted file mode 100644
index 2e4058325b56ff799127b765e01d1f29931df283..0000000000000000000000000000000000000000
--- a/spaces/awacke1/StreamlitCalendar/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: StreamlitCalendar
-emoji: 🐨
-colorFrom: blue
-colorTo: green
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/visual_chatgpt/visual_foundation_models.py b/spaces/awacke1/visual_chatgpt/visual_foundation_models.py
deleted file mode 100644
index 6d26eb163faf9e3b8b72ee091409495167fb64ab..0000000000000000000000000000000000000000
--- a/spaces/awacke1/visual_chatgpt/visual_foundation_models.py
+++ /dev/null
@@ -1,735 +0,0 @@
-from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline, StableDiffusionInstructPix2PixPipeline
-from diffusers import EulerAncestralDiscreteScheduler
-from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
-from controlnet_aux import OpenposeDetector, MLSDdetector, HEDdetector
-
-from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPSegProcessor, CLIPSegForImageSegmentation
-from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering
-from transformers import AutoImageProcessor, UperNetForSemanticSegmentation
-
-import os
-import random
-import torch
-import cv2
-import uuid
-from PIL import Image
-import numpy as np
-from pytorch_lightning import seed_everything
-
-def prompts(name, description):
- def decorator(func):
- func.name = name
- func.description = description
- return func
-
- return decorator
-
-def get_new_image_name(org_img_name, func_name="update"):
- head_tail = os.path.split(org_img_name)
- head = head_tail[0]
- tail = head_tail[1]
- name_split = tail.split('.')[0].split('_')
- this_new_uuid = str(uuid.uuid4())[0:4]
- if len(name_split) == 1:
- most_org_file_name = name_split[0]
- recent_prev_file_name = name_split[0]
- new_file_name = '{}_{}_{}_{}.png'.format(this_new_uuid, func_name, recent_prev_file_name, most_org_file_name)
- else:
- assert len(name_split) == 4
- most_org_file_name = name_split[3]
- recent_prev_file_name = name_split[0]
- new_file_name = '{}_{}_{}_{}.png'.format(this_new_uuid, func_name, recent_prev_file_name, most_org_file_name)
- return os.path.join(head, new_file_name)
-
-
-class MaskFormer:
- def __init__(self, device):
- print(f"Initializing MaskFormer to {device}")
- self.device = device
- self.processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
- self.model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined").to(device)
-
- def inference(self, image_path, text):
- threshold = 0.5
- min_area = 0.02
- padding = 20
- original_image = Image.open(image_path)
- image = original_image.resize((512, 512))
- inputs = self.processor(text=text, images=image, padding="max_length", return_tensors="pt").to(self.device)
- with torch.no_grad():
- outputs = self.model(**inputs)
- mask = torch.sigmoid(outputs[0]).squeeze().cpu().numpy() > threshold
- area_ratio = len(np.argwhere(mask)) / (mask.shape[0] * mask.shape[1])
- if area_ratio < min_area:
- return None
- true_indices = np.argwhere(mask)
- mask_array = np.zeros_like(mask, dtype=bool)
- for idx in true_indices:
- padded_slice = tuple(slice(max(0, i - padding), i + padding + 1) for i in idx)
- mask_array[padded_slice] = True
- visual_mask = (mask_array * 255).astype(np.uint8)
- image_mask = Image.fromarray(visual_mask)
- return image_mask.resize(original_image.size)
-
-
-class ImageEditing:
- def __init__(self, device):
- print(f"Initializing ImageEditing to {device}")
- self.device = device
- self.mask_former = MaskFormer(device=self.device)
- self.revision = 'fp16' if 'cuda' in device else None
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.inpaint = StableDiffusionInpaintPipeline.from_pretrained(
- "runwayml/stable-diffusion-inpainting", revision=self.revision, torch_dtype=self.torch_dtype).to(device)
-
- @prompts(name="Remove Something From The Photo",
-             description="useful when you want to remove an object or something from the photo "
- "from its description or location. "
- "The input to this tool should be a comma separated string of two, "
-                         "representing the image_path and the object that needs to be removed. ")
- def inference_remove(self, inputs):
- image_path, to_be_removed_txt = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- return self.inference_replace(f"{image_path},{to_be_removed_txt},background")
-
- @prompts(name="Replace Something From The Photo",
- description="useful when you want to replace an object from the object description or "
- "location with another object from its description. "
- "The input to this tool should be a comma separated string of three, "
- "representing the image_path, the object to be replaced, the object to be replaced with ")
- def inference_replace(self, inputs):
- image_path, to_be_replaced_txt, replace_with_txt = inputs.split(",")
- original_image = Image.open(image_path)
- original_size = original_image.size
- mask_image = self.mask_former.inference(image_path, to_be_replaced_txt)
- updated_image = self.inpaint(prompt=replace_with_txt, image=original_image.resize((512, 512)),
- mask_image=mask_image.resize((512, 512))).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="replace-something")
- updated_image = updated_image.resize(original_size)
- updated_image.save(updated_image_path)
- print(
- f"\nProcessed ImageEditing, Input Image: {image_path}, Replace {to_be_replaced_txt} to {replace_with_txt}, "
- f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class InstructPix2Pix:
- def __init__(self, device):
- print(f"Initializing InstructPix2Pix to {device}")
- self.device = device
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix",
- safety_checker=None,
- torch_dtype=self.torch_dtype).to(device)
- self.pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(self.pipe.scheduler.config)
-
- @prompts(name="Instruct Image Using Text",
-             description="useful when you want the style of the image to be like the text. "
- "like: make it look like a painting. or make it like a robot. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the text. ")
- def inference(self, inputs):
- """Change style of image."""
- print("===>Starting InstructPix2Pix Inference")
- image_path, text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- original_image = Image.open(image_path)
- image = self.pipe(text, image=original_image, num_inference_steps=40, image_guidance_scale=1.2).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="pix2pix")
- image.save(updated_image_path)
- print(f"\nProcessed InstructPix2Pix, Input Image: {image_path}, Instruct Text: {text}, "
- f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class Text2Image:
- def __init__(self, device):
- print(f"Initializing Text2Image to {device}")
- self.device = device
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
- torch_dtype=self.torch_dtype)
- self.pipe.to(device)
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
- 'fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image From User Input Text",
- description="useful when you want to generate an image from a user input text and save it to a file. "
- "like: generate an image of an object or something, or generate an image that includes some objects. "
- "The input to this tool should be a string, representing the text used to generate image. ")
- def inference(self, text):
- image_filename = os.path.join('image', f"{str(uuid.uuid4())[:8]}.png")
- prompt = text + ', ' + self.a_prompt
- image = self.pipe(prompt, negative_prompt=self.n_prompt).images[0]
- image.save(image_filename)
- print(
- f"\nProcessed Text2Image, Input Text: {text}, Output Image: {image_filename}")
- return image_filename
-
-
-class ImageCaptioning:
- def __init__(self, device):
- print(f"Initializing ImageCaptioning to {device}")
- self.device = device
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
- self.model = BlipForConditionalGeneration.from_pretrained(
- "Salesforce/blip-image-captioning-base", torch_dtype=self.torch_dtype).to(self.device)
-
- @prompts(name="Get Photo Description",
- description="useful when you want to know what is inside the photo. receives image_path as input. "
- "The input to this tool should be a string, representing the image_path. ")
- def inference(self, image_path):
- inputs = self.processor(Image.open(image_path), return_tensors="pt").to(self.device, self.torch_dtype)
- out = self.model.generate(**inputs)
- captions = self.processor.decode(out[0], skip_special_tokens=True)
- print(f"\nProcessed ImageCaptioning, Input Image: {image_path}, Output Text: {captions}")
- return captions
-
-
-class Image2Canny:
- def __init__(self, device):
- print("Initializing Image2Canny")
- self.low_threshold = 100
- self.high_threshold = 200
-
- @prompts(name="Edge Detection On Image",
- description="useful when you want to detect the edge of the image. "
- "like: detect the edges of this image, or canny detection on image, "
- "or perform edge detection on this image, or detect the canny image of this image. "
- "The input to this tool should be a string, representing the image_path")
- def inference(self, inputs):
- image = Image.open(inputs)
- image = np.array(image)
- canny = cv2.Canny(image, self.low_threshold, self.high_threshold)
- canny = canny[:, :, None]
- canny = np.concatenate([canny, canny, canny], axis=2)
- canny = Image.fromarray(canny)
- updated_image_path = get_new_image_name(inputs, func_name="edge")
- canny.save(updated_image_path)
-        print(f"\nProcessed Image2Canny, Input Image: {inputs}, Output Canny: {updated_image_path}")
- return updated_image_path
-
-
-class CannyText2Image:
- def __init__(self, device):
- print(f"Initializing CannyText2Image to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-canny",
- torch_dtype=self.torch_dtype)
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
- torch_dtype=self.torch_dtype)
- self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.to(device)
- self.seed = -1
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
- 'fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image Condition On Canny Image",
- description="useful when you want to generate a new real image from both the user description and a canny image."
-                         " like: generate a real image of an object or something from this canny image,"
-                         " or generate a new real image of an object or something from this edge image. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the user description. ")
- def inference(self, inputs):
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- image = Image.open(image_path)
- self.seed = random.randint(0, 65535)
- seed_everything(self.seed)
- prompt = f'{instruct_text}, {self.a_prompt}'
- image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
- guidance_scale=9.0).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="canny2image")
- image.save(updated_image_path)
- print(f"\nProcessed CannyText2Image, Input Canny: {image_path}, Input Text: {instruct_text}, "
-              f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class Image2Line:
- def __init__(self, device):
- print("Initializing Image2Line")
- self.detector = MLSDdetector.from_pretrained('lllyasviel/ControlNet')
-
- @prompts(name="Line Detection On Image",
- description="useful when you want to detect the straight line of the image. "
- "like: detect the straight lines of this image, or straight line detection on image, "
- "or perform straight line detection on this image, or detect the straight line image of this image. "
- "The input to this tool should be a string, representing the image_path")
- def inference(self, inputs):
- image = Image.open(inputs)
- mlsd = self.detector(image)
- updated_image_path = get_new_image_name(inputs, func_name="line-of")
- mlsd.save(updated_image_path)
- print(f"\nProcessed Image2Line, Input Image: {inputs}, Output Line: {updated_image_path}")
- return updated_image_path
-
-
-class LineText2Image:
- def __init__(self, device):
- print(f"Initializing LineText2Image to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-mlsd",
- torch_dtype=self.torch_dtype)
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
- torch_dtype=self.torch_dtype
- )
- self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.to(device)
- self.seed = -1
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
- 'fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image Condition On Line Image",
- description="useful when you want to generate a new real image from both the user description "
- "and a straight line image. "
-                         "like: generate a real image of an object or something from this straight line image, "
-                         "or generate a new real image of an object or something from these straight lines. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the user description. ")
- def inference(self, inputs):
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- image = Image.open(image_path)
- self.seed = random.randint(0, 65535)
- seed_everything(self.seed)
- prompt = f'{instruct_text}, {self.a_prompt}'
- image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
- guidance_scale=9.0).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="line2image")
- image.save(updated_image_path)
- print(f"\nProcessed LineText2Image, Input Line: {image_path}, Input Text: {instruct_text}, "
-              f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class Image2Hed:
- def __init__(self, device):
- print("Initializing Image2Hed")
- self.detector = HEDdetector.from_pretrained('lllyasviel/ControlNet')
-
- @prompts(name="Hed Detection On Image",
- description="useful when you want to detect the soft hed boundary of the image. "
- "like: detect the soft hed boundary of this image, or hed boundary detection on image, "
- "or perform hed boundary detection on this image, or detect soft hed boundary image of this image. "
- "The input to this tool should be a string, representing the image_path")
- def inference(self, inputs):
- image = Image.open(inputs)
- hed = self.detector(image)
- updated_image_path = get_new_image_name(inputs, func_name="hed-boundary")
- hed.save(updated_image_path)
- print(f"\nProcessed Image2Hed, Input Image: {inputs}, Output Hed: {updated_image_path}")
- return updated_image_path
-
-
-class HedText2Image:
- def __init__(self, device):
- print(f"Initializing HedText2Image to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-hed",
- torch_dtype=self.torch_dtype)
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
- torch_dtype=self.torch_dtype
- )
- self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.to(device)
- self.seed = -1
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
- 'fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image Condition On Soft Hed Boundary Image",
- description="useful when you want to generate a new real image from both the user description "
- "and a soft hed boundary image. "
-                         "like: generate a real image of an object or something from this soft hed boundary image, "
-                         "or generate a new real image of an object or something from this hed boundary. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the user description")
- def inference(self, inputs):
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- image = Image.open(image_path)
- self.seed = random.randint(0, 65535)
- seed_everything(self.seed)
- prompt = f'{instruct_text}, {self.a_prompt}'
- image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
- guidance_scale=9.0).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="hed2image")
- image.save(updated_image_path)
- print(f"\nProcessed HedText2Image, Input Hed: {image_path}, Input Text: {instruct_text}, "
- f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class Image2Scribble:
- def __init__(self, device):
- print("Initializing Image2Scribble")
- self.detector = HEDdetector.from_pretrained('lllyasviel/ControlNet')
-
- @prompts(name="Sketch Detection On Image",
- description="useful when you want to generate a scribble of the image. "
- "like: generate a scribble of this image, or generate a sketch from this image, "
- "detect the sketch from this image. "
- "The input to this tool should be a string, representing the image_path")
- def inference(self, inputs):
- image = Image.open(inputs)
- scribble = self.detector(image, scribble=True)
- updated_image_path = get_new_image_name(inputs, func_name="scribble")
- scribble.save(updated_image_path)
- print(f"\nProcessed Image2Scribble, Input Image: {inputs}, Output Scribble: {updated_image_path}")
- return updated_image_path
-
-
-class ScribbleText2Image:
- def __init__(self, device):
- print(f"Initializing ScribbleText2Image to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-scribble",
- torch_dtype=self.torch_dtype)
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
- torch_dtype=self.torch_dtype
- )
- self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.to(device)
- self.seed = -1
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
- 'fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image Condition On Sketch Image",
- description="useful when you want to generate a new real image from both the user description and "
- "a scribble image or a sketch image. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the user description")
- def inference(self, inputs):
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- image = Image.open(image_path)
- self.seed = random.randint(0, 65535)
- seed_everything(self.seed)
- prompt = f'{instruct_text}, {self.a_prompt}'
- image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
- guidance_scale=9.0).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="scribble2image")
- image.save(updated_image_path)
- print(f"\nProcessed ScribbleText2Image, Input Scribble: {image_path}, Input Text: {instruct_text}, "
- f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class Image2Pose:
- def __init__(self, device):
- print("Initializing Image2Pose")
- self.detector = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')
-
- @prompts(name="Pose Detection On Image",
- description="useful when you want to detect the human pose of the image. "
- "like: generate human poses of this image, or generate a pose image from this image. "
- "The input to this tool should be a string, representing the image_path")
- def inference(self, inputs):
- image = Image.open(inputs)
- pose = self.detector(image)
- updated_image_path = get_new_image_name(inputs, func_name="human-pose")
- pose.save(updated_image_path)
- print(f"\nProcessed Image2Pose, Input Image: {inputs}, Output Pose: {updated_image_path}")
- return updated_image_path
-
-
-class PoseText2Image:
- def __init__(self, device):
- print(f"Initializing PoseText2Image to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-openpose",
- torch_dtype=self.torch_dtype)
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
- torch_dtype=self.torch_dtype)
- self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.to(device)
- self.num_inference_steps = 20
- self.seed = -1
- self.unconditional_guidance_scale = 9.0
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
- ' fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image Condition On Pose Image",
- description="useful when you want to generate a new real image from both the user description "
- "and a human pose image. "
- "like: generate a real image of a human from this human pose image, "
- "or generate a new real image of a human from this pose. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the user description")
- def inference(self, inputs):
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- image = Image.open(image_path)
- self.seed = random.randint(0, 65535)
- seed_everything(self.seed)
- prompt = f'{instruct_text}, {self.a_prompt}'
- image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
- guidance_scale=9.0).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="pose2image")
- image.save(updated_image_path)
- print(f"\nProcessed PoseText2Image, Input Pose: {image_path}, Input Text: {instruct_text}, "
- f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class Image2Seg:
- def __init__(self, device):
- print("Initializing Image2Seg")
- self.image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
- self.image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")
- self.ade_palette = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50],
- [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255],
- [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7],
- [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82],
- [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3],
- [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255],
- [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220],
- [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224],
- [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255],
- [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7],
- [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153],
- [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255],
- [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0],
- [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255],
- [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255],
- [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255],
- [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0],
- [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0],
- [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255],
- [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255],
- [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20],
- [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255],
- [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255],
- [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255],
- [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0],
- [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0],
- [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255],
- [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112],
- [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160],
- [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163],
- [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0],
- [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0],
- [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255],
- [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204],
- [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255],
- [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255],
- [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194],
- [102, 255, 0], [92, 0, 255]]
-
- @prompts(name="Segmentation On Image",
- description="useful when you want to detect segmentations of the image. "
- "like: segment this image, or generate segmentations on this image, "
- "or perform segmentation on this image. "
- "The input to this tool should be a string, representing the image_path")
- def inference(self, inputs):
- image = Image.open(inputs)
- pixel_values = self.image_processor(image, return_tensors="pt").pixel_values
- with torch.no_grad():
- outputs = self.image_segmentor(pixel_values)
- seg = self.image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
- color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) # height, width, 3
- palette = np.array(self.ade_palette)
- for label, color in enumerate(palette):
- color_seg[seg == label, :] = color
- color_seg = color_seg.astype(np.uint8)
- segmentation = Image.fromarray(color_seg)
- updated_image_path = get_new_image_name(inputs, func_name="segmentation")
- segmentation.save(updated_image_path)
-        print(f"\nProcessed Image2Seg, Input Image: {inputs}, Output Seg: {updated_image_path}")
- return updated_image_path
-
-
-class SegText2Image:
- def __init__(self, device):
- print(f"Initializing SegText2Image to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-seg",
- torch_dtype=self.torch_dtype)
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
- torch_dtype=self.torch_dtype)
- self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.to(device)
- self.seed = -1
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
- ' fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image Condition On Segmentations",
- description="useful when you want to generate a new real image from both the user description and segmentations. "
- "like: generate a real image of a object or something from this segmentation image, "
- "or generate a new real image of a object or something from these segmentations. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the user description")
- def inference(self, inputs):
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- image = Image.open(image_path)
- self.seed = random.randint(0, 65535)
- seed_everything(self.seed)
- prompt = f'{instruct_text}, {self.a_prompt}'
- image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
- guidance_scale=9.0).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="segment2image")
- image.save(updated_image_path)
- print(f"\nProcessed SegText2Image, Input Seg: {image_path}, Input Text: {instruct_text}, "
- f"Output Image: {updated_image_path}")
- return updated_image_path
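-
-# Illustrative usage sketch (not part of the original tool code): as the @prompts
-# description above states, inference() takes one comma-separated string holding the
-# image path and the user description. The path below is a hypothetical example.
-#
-#   seg2image = SegText2Image(device="cuda:0")
-#   out_path = seg2image.inference("image/abc123_segmentation.png, a cozy modern living room")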
-
-
-class Image2Depth:
- def __init__(self, device):
- print("Initializing Image2Depth")
- self.depth_estimator = pipeline('depth-estimation')
-
- @prompts(name="Predict Depth On Image",
- description="useful when you want to detect depth of the image. like: generate the depth from this image, "
- "or detect the depth map on this image, or predict the depth for this image. "
- "The input to this tool should be a string, representing the image_path")
- def inference(self, inputs):
- image = Image.open(inputs)
- depth = self.depth_estimator(image)['depth']
- depth = np.array(depth)
- depth = depth[:, :, None]
- depth = np.concatenate([depth, depth, depth], axis=2)
- depth = Image.fromarray(depth)
- updated_image_path = get_new_image_name(inputs, func_name="depth")
- depth.save(updated_image_path)
- print(f"\nProcessed Image2Depth, Input Image: {inputs}, Output Depth: {updated_image_path}")
- return updated_image_path
-
-
-class DepthText2Image:
- def __init__(self, device):
- print(f"Initializing DepthText2Image to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.controlnet = ControlNetModel.from_pretrained(
- "fusing/stable-diffusion-v1-5-controlnet-depth", torch_dtype=self.torch_dtype)
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
- torch_dtype=self.torch_dtype)
- self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.to(device)
- self.seed = -1
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
- ' fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image Condition On Depth",
- description="useful when you want to generate a new real image from both the user description and depth image. "
- "like: generate a real image of a object or something from this depth image, "
- "or generate a new real image of a object or something from the depth map. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the user description")
- def inference(self, inputs):
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- image = Image.open(image_path)
- self.seed = random.randint(0, 65535)
- seed_everything(self.seed)
- prompt = f'{instruct_text}, {self.a_prompt}'
- image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
- guidance_scale=9.0).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="depth2image")
- image.save(updated_image_path)
- print(f"\nProcessed DepthText2Image, Input Depth: {image_path}, Input Text: {instruct_text}, "
- f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class Image2Normal:
- def __init__(self, device):
- print("Initializing Image2Normal")
- self.depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")
- self.bg_threshold = 0.4
-
- @prompts(name="Predict Normal Map On Image",
- description="useful when you want to detect norm map of the image. "
- "like: generate normal map from this image, or predict normal map of this image. "
- "The input to this tool should be a string, representing the image_path")
- def inference(self, inputs):
- image = Image.open(inputs)
- original_size = image.size
- image = self.depth_estimator(image)['predicted_depth'][0]
- image = image.numpy()
- image_depth = image.copy()
- image_depth -= np.min(image_depth)
- image_depth /= np.max(image_depth)
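- # approximate surface normals from depth: Sobel gradients give the x/y components,
- # pixels whose normalized depth is below bg_threshold are treated as background and
- # zeroed, a constant z component is stacked in, and each pixel vector is normalized
- # before being mapped to the [0, 255] range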
- x = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3)
- x[image_depth < self.bg_threshold] = 0
- y = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3)
- y[image_depth < self.bg_threshold] = 0
- z = np.ones_like(x) * np.pi * 2.0
- image = np.stack([x, y, z], axis=2)
- image /= np.sum(image ** 2.0, axis=2, keepdims=True) ** 0.5
- image = (image * 127.5 + 127.5).clip(0, 255).astype(np.uint8)
- image = Image.fromarray(image)
- image = image.resize(original_size)
- updated_image_path = get_new_image_name(inputs, func_name="normal-map")
- image.save(updated_image_path)
- print(f"\nProcessed Image2Normal, Input Image: {inputs}, Output Depth: {updated_image_path}")
- return updated_image_path
-
-
-class NormalText2Image:
- def __init__(self, device):
- print(f"Initializing NormalText2Image to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.controlnet = ControlNetModel.from_pretrained(
- "fusing/stable-diffusion-v1-5-controlnet-normal", torch_dtype=self.torch_dtype)
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
- torch_dtype=self.torch_dtype)
- self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.to(device)
- self.seed = -1
- self.a_prompt = 'best quality, extremely detailed'
- self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
- ' fewer digits, cropped, worst quality, low quality'
-
- @prompts(name="Generate Image Condition On Normal Map",
- description="useful when you want to generate a new real image from both the user description and normal map. "
- "like: generate a real image of a object or something from this normal map, "
- "or generate a new real image of a object or something from the normal map. "
- "The input to this tool should be a comma separated string of two, "
- "representing the image_path and the user description")
- def inference(self, inputs):
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- image = Image.open(image_path)
- self.seed = random.randint(0, 65535)
- seed_everything(self.seed)
- prompt = f'{instruct_text}, {self.a_prompt}'
- image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
- guidance_scale=9.0).images[0]
- updated_image_path = get_new_image_name(image_path, func_name="normal2image")
- image.save(updated_image_path)
- print(f"\nProcessed NormalText2Image, Input Normal: {image_path}, Input Text: {instruct_text}, "
- f"Output Image: {updated_image_path}")
- return updated_image_path
-
-
-class VisualQuestionAnswering:
- def __init__(self, device):
- print(f"Initializing VisualQuestionAnswering to {device}")
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.device = device
- self.processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
- self.model = BlipForQuestionAnswering.from_pretrained(
- "Salesforce/blip-vqa-base", torch_dtype=self.torch_dtype).to(self.device)
-
- @prompts(name="Answer Question About The Image",
- description="useful when you need an answer for a question based on an image. "
- "like: what is the background color of the last image, how many cats in this figure, what is in this figure. "
- "The input to this tool should be a comma separated string of two, representing the image_path and the question")
- def inference(self, inputs):
- image_path, question = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- raw_image = Image.open(image_path).convert('RGB')
- inputs = self.processor(raw_image, question, return_tensors="pt").to(self.device, self.torch_dtype)
- out = self.model.generate(**inputs)
- answer = self.processor.decode(out[0], skip_special_tokens=True)
- print(f"\nProcessed VisualQuestionAnswering, Input Image: {image_path}, Input Question: {question}, "
- f"Output Answer: {answer}")
- return answer
\ No newline at end of file
diff --git a/spaces/awen666/web-ui/_next/static/chunks/webpack-0d7cb8d66c07cf1a.js b/spaces/awen666/web-ui/_next/static/chunks/webpack-0d7cb8d66c07cf1a.js
deleted file mode 100644
index 2569d27cab872c1860fd7b79bbee3a2adf6f734e..0000000000000000000000000000000000000000
--- a/spaces/awen666/web-ui/_next/static/chunks/webpack-0d7cb8d66c07cf1a.js
+++ /dev/null
@@ -1 +0,0 @@
-!function(){"use strict";var e,t,n,r,o,u,i,a,c,f,d,l,s={},p={};function h(e){var t=p[e];if(void 0!==t)return t.exports;var n=p[e]={id:e,loaded:!1,exports:{}},r=!0;try{s[e].call(n.exports,n,n.exports,h),r=!1}finally{r&&delete p[e]}return n.loaded=!0,n.exports}h.m=s,e=[],h.O=function(t,n,r,o){if(n){o=o||0;for(var u=e.length;u>0&&e[u-1][2]>o;u--)e[u]=e[u-1];e[u]=[n,r,o];return}for(var i=1/0,u=0;u=o&&Object.keys(h.O).every(function(e){return h.O[e](n[c])})?n.splice(c--,1):(a=!1,o
-
- LLaVA-1.5 achieves SoTA performance across 11 benchmarks.
-
-
-
-## LLaVA-v1
-
-*Note: We recommend using the most capable LLaVA-v1.5 series above for the best performance.*
-
-| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | LLaVA-Bench-Conv | LLaVA-Bench-Detail | LLaVA-Bench-Complex | LLaVA-Bench-Overall | Download |
-|----------|----------------|---------------|----------------------|-----------------|--------------------|------------------|--------------------|---------------------|---------------------|---------------------|
-| Vicuna-13B-v1.3 | CLIP-L-336px | LCS-558K | 1e | LLaVA-Instruct-80K | proj-1e, lora-1e | 64.3 | 55.9 | 81.7 | 70.1 | [LoRA](https://huggingface.co/liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3) [LoRA-Merged](https://huggingface.co/liuhaotian/llava-v1-0719-336px-lora-merge-vicuna-13b-v1.3) |
-| LLaMA-2-13B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | 56.7 | 58.6 | 80.0 | 67.9 | [ckpt](https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview) |
-| LLaMA-2-7B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | lora-1e | 51.2 | 58.9 | 71.6 | 62.8 | [LoRA](https://huggingface.co/liuhaotian/llava-llama-2-7b-chat-lightning-lora-preview) |
-
-
-## Projector weights
-
-The model weights below are projector weights we have pretrained. You can use them for visual instruction tuning. We'll add more projector weights to the model zoo soon.
-
-**NOTE**: These projector weights are only compatible with `llava>=1.0.0`; please check out the latest codebase if your local code version is below `v1.0.0`.
-
-**NOTE**: When you use our pretrained projector for visual instruction tuning, it is very important to **use the same base LLM and vision encoder** as the ones we used for pretraining the projector. Otherwise, the performance will be very poor.
-
-When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,
-
-```Shell
---mm_use_im_start_end False
---mm_use_im_patch_token False
-```
-
-| Base LLM | Vision Encoder | Projection | Pretrain Data | Pretraining schedule | Download |
-|----------|----------------|---------------|----------------------|----------|----------|
-| Vicuna-13B-v1.5 | CLIP-L-336px | MLP-2x | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-13b-v1.5) |
-| Vicuna-7B-v1.5 | CLIP-L-336px | MLP-2x | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5) |
-| LLaMA-2-13B-Chat | CLIP-L-336px | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-336px-pretrain-llama-2-13b-chat) |
-| LLaMA-2-7B-Chat | CLIP-L-336px | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-336px-pretrain-llama-2-7b-chat) |
-| LLaMA-2-13B-Chat | CLIP-L | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-pretrain-llama-2-13b-chat) |
-| LLaMA-2-7B-Chat | CLIP-L | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-pretrain-llama-2-7b-chat) |
-| Vicuna-13B-v1.3 | CLIP-L-336px | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-336px-pretrain-vicuna-13b-v1.3) |
-| Vicuna-7B-v1.3 | CLIP-L-336px | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-336px-pretrain-vicuna-7b-v1.3) |
-| Vicuna-13B-v1.3 | CLIP-L | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-pretrain-vicuna-13b-v1.3) |
-| Vicuna-7B-v1.3 | CLIP-L | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-pretrain-vicuna-7b-v1.3) |
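-
-If you want to sanity-check a downloaded projector before instruction tuning, a minimal sketch along the following lines may help. The repo id comes from the table above, but the `mm_projector.bin` filename is an assumption about the repository layout; adjust it to whatever the checkpoint repo actually contains.
-
-```python
-from huggingface_hub import hf_hub_download
-import torch
-
-# repo_id is taken from the table above; the filename is an assumption, not guaranteed
-path = hf_hub_download(
-    repo_id="liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5",
-    filename="mm_projector.bin",
-)
-state_dict = torch.load(path, map_location="cpu")
-for name, tensor in state_dict.items():
-    print(name, tuple(tensor.shape))  # e.g. projector weight/bias shapes
-```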
-
-
-## Science QA Checkpoints
-
-| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
-|----------|----------------|---------------|----------------------|-----------------|--------------------|---------------------|
-| Vicuna-13B-v1.3 | CLIP-L | LCS-558K | 1e | ScienceQA | full_ft-12e | [ckpt](https://huggingface.co/liuhaotian/llava-lcs558k-scienceqa-vicuna-13b-v1.3) |
-
-
-## Legacy Models (merged weights)
-
-The model weights below are *merged* weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM's model license.
-
-| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
-|----------|----------------|---------------|----------------------|-----------------|--------------------|------------------|
-| MPT-7B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | [preview](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview) |
-
-
-## Legacy Models (delta weights)
-
-The model weights below are *delta* weights. The usage of LLaVA checkpoints should comply with the base LLM's model license: [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
-
-You can add our delta to the original LLaMA weights to obtain the LLaVA weights.
-
-Instructions:
-
-1. Get the original LLaMA weights in the huggingface format by following the instructions [here](https://huggingface.co/docs/transformers/main/model_doc/llama).
-2. Use the following scripts to get LLaVA weights by applying our delta. It will automatically download delta weights from our Hugging Face account. In the script below, we use the delta weights of [`liuhaotian/LLaVA-7b-delta-v0`](https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0) as an example. It can be adapted for other delta weights by changing the `--delta` argument (and base/target accordingly).
-
-```bash
-python3 -m llava.model.apply_delta \
- --base /path/to/llama-7b \
- --target /output/path/to/LLaVA-7B-v0 \
- --delta liuhaotian/LLaVA-7b-delta-v0
-```
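-
-Conceptually, a *delta* checkpoint stores the difference between the LLaVA weights and the base LLaMA weights, so applying it amounts to an elementwise addition. The sketch below only illustrates that idea and is not the actual `llava.model.apply_delta` implementation (which also handles sharded checkpoints, resized embeddings, and tokenizer files); all paths are placeholders.
-
-```python
-import torch
-
-base = torch.load("llama-7b/pytorch_model.bin", map_location="cpu")             # placeholder path
-delta = torch.load("LLaVA-7b-delta-v0/pytorch_model.bin", map_location="cpu")   # placeholder path
-
-merged = {}
-for name, delta_tensor in delta.items():
-    # parameters present in the base model are reconstructed as base + delta;
-    # parameters that only exist in the delta (e.g. new projector weights) are kept as-is
-    merged[name] = delta_tensor + base[name] if name in base else delta_tensor
-
-torch.save(merged, "LLaVA-7B-v0/pytorch_model.bin")                             # placeholder path
-```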
-
-| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
-|----------|----------------|---------------|----------------------|-----------------|--------------------|------------------|
-| Vicuna-13B-v1.1 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v1-1) |
-| Vicuna-7B-v1.1 | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-Lightning-7B-delta-v1-1) |
-| Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0) |
-| Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | ScienceQA | full_ft-12e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0-science_qa) |
-| Vicuna-7B-v0 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0) |
-
-
-
-## Legacy Projector weights
-
-The following projector weights are deprecated, and support for them may be removed in the future. They do not support zero-shot inference. Please use the projector weights in the [table above](#projector-weights) if possible.
-
-**NOTE**: When you use our pretrained projector for visual instruction tuning, it is very important to **use the same base LLM and vision encoder** as the ones we used for pretraining the projector. Otherwise, the performance will be very poor.
-
-When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,
-
-```Shell
---mm_use_im_start_end True
---mm_use_im_patch_token False
-```
-
-| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download |
-|----------|----------------|---------------|----------------------|----------|
-| Vicuna-7B-v1.1 | CLIP-L | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-7b-pretrain-projector-v1-1-LCS-558K-blip_caption.bin) |
-| Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | [projector](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption.bin) |
-| Vicuna-7B-v0 | CLIP-L | CC-595K | 1e | [projector](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-7b-pretrain-projector-v0-CC3M-595K-original_caption.bin) |
-
-When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,
-
-```Shell
---mm_use_im_start_end False
---mm_use_im_patch_token False
-```
-
-| Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download |
-|----------|----------------|---------------|----------------------|----------|
-| Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | [projector](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption-no_im_token.bin) |
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/utils/MathUtils.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/utils/MathUtils.js
deleted file mode 100644
index 951ef7d394486b4ef26d19ac84331177fb240a76..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/utils/MathUtils.js
+++ /dev/null
@@ -1,63 +0,0 @@
-/**
- * @author WestLangley / http://github.com/WestLangley
- * @author thezwap / http://github.com/thezwap
- */
-
-THREE.MathUtils = {
-
- setQuaternionFromProperEuler: function ( q, a, b, c, order ) {
-
- // Intrinsic Proper Euler Angles - see https://en.wikipedia.org/wiki/Euler_angles
-
- // rotations are applied to the axes in the order specified by 'order'
- // rotation by angle 'a' is applied first, then by angle 'b', then by angle 'c'
- // angles are in radians
-
- var cos = Math.cos;
- var sin = Math.sin;
-
- var c2 = cos( b / 2 );
- var s2 = sin( b / 2 );
-
- var c13 = cos( ( a + c ) / 2 );
- var s13 = sin( ( a + c ) / 2 );
-
- var c1_3 = cos( ( a - c ) / 2 );
- var s1_3 = sin( ( a - c ) / 2 );
-
- var c3_1 = cos( ( c - a ) / 2 );
- var s3_1 = sin( ( c - a ) / 2 );
-
- if ( order === 'XYX' ) {
-
- q.set( c2 * s13, s2 * c1_3, s2 * s1_3, c2 * c13 );
-
- } else if ( order === 'YZY' ) {
-
- q.set( s2 * s1_3, c2 * s13, s2 * c1_3, c2 * c13 );
-
- } else if ( order === 'ZXZ' ) {
-
- q.set( s2 * c1_3, s2 * s1_3, c2 * s13, c2 * c13 );
-
- } else if ( order === 'XZX' ) {
-
- q.set( c2 * s13, s2 * s3_1, s2 * c3_1, c2 * c13 );
-
- } else if ( order === 'YXY' ) {
-
- q.set( s2 * c3_1, c2 * s13, s2 * s3_1, c2 * c13 );
-
- } else if ( order === 'ZYZ' ) {
-
- q.set( s2 * s3_1, s2 * c3_1, c2 * s13, c2 * c13 );
-
- } else {
-
- console.warn( 'THREE.MathUtils: .setQuaternionFromProperEuler() encountered an unknown order.' );
-
- }
-
- }
-
-};
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/loaders/LoaderUtils.js b/spaces/banana-projects/web3d/node_modules/three/src/loaders/LoaderUtils.js
deleted file mode 100644
index ba13d4ba9c82719d8a46c569cbfdcbfb4bba2d6b..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/loaders/LoaderUtils.js
+++ /dev/null
@@ -1,44 +0,0 @@
-/**
- * @author Don McCurdy / https://www.donmccurdy.com
- */
-
-var LoaderUtils = {
-
- decodeText: function ( array ) {
-
- if ( typeof TextDecoder !== 'undefined' ) {
-
- return new TextDecoder().decode( array );
-
- }
-
- // Avoid the String.fromCharCode.apply(null, array) shortcut, which
- // throws a "maximum call stack size exceeded" error for large arrays.
-
- var s = '';
-
- for ( var i = 0, il = array.length; i < il; i ++ ) {
-
- // Implicitly assumes little-endian.
- s += String.fromCharCode( array[ i ] );
-
- }
-
- // Merges multi-byte utf-8 characters.
- return decodeURIComponent( escape( s ) );
-
- },
-
- extractUrlBase: function ( url ) {
-
- var index = url.lastIndexOf( '/' );
-
- if ( index === - 1 ) return './';
-
- return url.substr( 0, index + 1 );
-
- }
-
-};
-
-export { LoaderUtils };
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/reds_dataset.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/reds_dataset.py
deleted file mode 100644
index 0d4a91724689626b7310bb08debda70e7e0186c0..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/reds_dataset.py
+++ /dev/null
@@ -1,360 +0,0 @@
-import numpy as np
-import random
-import torch
-from pathlib import Path
-from torch.utils import data as data
-
-from basicsr.data.transforms import augment, paired_random_crop
-from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
-from basicsr.utils.flow_util import dequantize_flow
-from basicsr.utils.registry import DATASET_REGISTRY
-
-
-@DATASET_REGISTRY.register()
-class REDSDataset(data.Dataset):
- """REDS dataset for training.
-
- The keys are generated from a meta info txt file.
- basicsr/data/meta_info/meta_info_REDS_GT.txt
-
- Each line contains:
- 1. subfolder (clip) name; 2. frame number; 3. image shape, separated by
- a white space.
- Examples:
- 000 100 (720,1280,3)
- 001 100 (720,1280,3)
- ...
-
- Key examples: "000/00000000"
- GT (gt): Ground-Truth;
- LQ (lq): Low-Quality, e.g., low-resolution/blurry/noisy/compressed frames.
-
- Args:
- opt (dict): Config for train dataset. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- dataroot_lq (str): Data root path for lq.
- dataroot_flow (str, optional): Data root path for flow.
- meta_info_file (str): Path for meta information file.
- val_partition (str): Validation partition types. 'REDS4' or
- 'official'.
- io_backend (dict): IO backend type and other kwarg.
-
- num_frame (int): Window size for input frames.
- gt_size (int): Cropped patch size for gt patches.
- interval_list (list): Interval list for temporal augmentation.
- random_reverse (bool): Random reverse input frames.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h
- and w for implementation).
-
- scale (int): Scale factor, which will be added automatically.
- """
-
- def __init__(self, opt):
- super(REDSDataset, self).__init__()
- self.opt = opt
- self.gt_root, self.lq_root = Path(opt['dataroot_gt']), Path(opt['dataroot_lq'])
- self.flow_root = Path(opt['dataroot_flow']) if opt['dataroot_flow'] is not None else None
- assert opt['num_frame'] % 2 == 1, (f'num_frame should be odd number, but got {opt["num_frame"]}')
- self.num_frame = opt['num_frame']
- self.num_half_frames = opt['num_frame'] // 2
-
- self.keys = []
- with open(opt['meta_info_file'], 'r') as fin:
- for line in fin:
- folder, frame_num, _ = line.split(' ')
- self.keys.extend([f'{folder}/{i:08d}' for i in range(int(frame_num))])
-
- # remove the video clips used in validation
- if opt['val_partition'] == 'REDS4':
- val_partition = ['000', '011', '015', '020']
- elif opt['val_partition'] == 'official':
- val_partition = [f'{v:03d}' for v in range(240, 270)]
- else:
- raise ValueError(f'Wrong validation partition {opt["val_partition"]}.'
- f"Supported ones are ['official', 'REDS4'].")
- self.keys = [v for v in self.keys if v.split('/')[0] not in val_partition]
-
- # file client (io backend)
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- self.is_lmdb = False
- if self.io_backend_opt['type'] == 'lmdb':
- self.is_lmdb = True
- if self.flow_root is not None:
- self.io_backend_opt['db_paths'] = [self.lq_root, self.gt_root, self.flow_root]
- self.io_backend_opt['client_keys'] = ['lq', 'gt', 'flow']
- else:
- self.io_backend_opt['db_paths'] = [self.lq_root, self.gt_root]
- self.io_backend_opt['client_keys'] = ['lq', 'gt']
-
- # temporal augmentation configs
- self.interval_list = opt['interval_list']
- self.random_reverse = opt['random_reverse']
- interval_str = ','.join(str(x) for x in opt['interval_list'])
- logger = get_root_logger()
- logger.info(f'Temporal augmentation interval list: [{interval_str}]; '
- f'random reverse is {self.random_reverse}.')
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- scale = self.opt['scale']
- gt_size = self.opt['gt_size']
- key = self.keys[index]
- clip_name, frame_name = key.split('/') # key example: 000/00000000
- center_frame_idx = int(frame_name)
-
- # determine the neighboring frames
- interval = random.choice(self.interval_list)
-
- # ensure not exceeding the borders
- start_frame_idx = center_frame_idx - self.num_half_frames * interval
- end_frame_idx = center_frame_idx + self.num_half_frames * interval
- # each clip has 100 frames starting from 0 to 99
- while (start_frame_idx < 0) or (end_frame_idx > 99):
- center_frame_idx = random.randint(0, 99)
- start_frame_idx = (center_frame_idx - self.num_half_frames * interval)
- end_frame_idx = center_frame_idx + self.num_half_frames * interval
- frame_name = f'{center_frame_idx:08d}'
- neighbor_list = list(range(start_frame_idx, end_frame_idx + 1, interval))
- # random reverse
- if self.random_reverse and random.random() < 0.5:
- neighbor_list.reverse()
-
- assert len(neighbor_list) == self.num_frame, (f'Wrong length of neighbor list: {len(neighbor_list)}')
-
- # get the GT frame (as the center frame)
- if self.is_lmdb:
- img_gt_path = f'{clip_name}/{frame_name}'
- else:
- img_gt_path = self.gt_root / clip_name / f'{frame_name}.png'
- img_bytes = self.file_client.get(img_gt_path, 'gt')
- img_gt = imfrombytes(img_bytes, float32=True)
-
- # get the neighboring LQ frames
- img_lqs = []
- for neighbor in neighbor_list:
- if self.is_lmdb:
- img_lq_path = f'{clip_name}/{neighbor:08d}'
- else:
- img_lq_path = self.lq_root / clip_name / f'{neighbor:08d}.png'
- img_bytes = self.file_client.get(img_lq_path, 'lq')
- img_lq = imfrombytes(img_bytes, float32=True)
- img_lqs.append(img_lq)
-
- # get flows
- if self.flow_root is not None:
- img_flows = []
- # read previous flows
- for i in range(self.num_half_frames, 0, -1):
- if self.is_lmdb:
- flow_path = f'{clip_name}/{frame_name}_p{i}'
- else:
- flow_path = (self.flow_root / clip_name / f'{frame_name}_p{i}.png')
- img_bytes = self.file_client.get(flow_path, 'flow')
- cat_flow = imfrombytes(img_bytes, flag='grayscale', float32=False) # uint8, [0, 255]
- dx, dy = np.split(cat_flow, 2, axis=0)
- flow = dequantize_flow(dx, dy, max_val=20, denorm=False) # we use max_val 20 here.
- img_flows.append(flow)
- # read next flows
- for i in range(1, self.num_half_frames + 1):
- if self.is_lmdb:
- flow_path = f'{clip_name}/{frame_name}_n{i}'
- else:
- flow_path = (self.flow_root / clip_name / f'{frame_name}_n{i}.png')
- img_bytes = self.file_client.get(flow_path, 'flow')
- cat_flow = imfrombytes(img_bytes, flag='grayscale', float32=False) # uint8, [0, 255]
- dx, dy = np.split(cat_flow, 2, axis=0)
- flow = dequantize_flow(dx, dy, max_val=20, denorm=False) # we use max_val 20 here.
- img_flows.append(flow)
-
- # for random crop, here, img_flows and img_lqs have the same
- # spatial size
- img_lqs.extend(img_flows)
-
- # randomly crop
- img_gt, img_lqs = paired_random_crop(img_gt, img_lqs, gt_size, scale, img_gt_path)
- if self.flow_root is not None:
- img_lqs, img_flows = img_lqs[:self.num_frame], img_lqs[self.num_frame:]
-
- # augmentation - flip, rotate
- img_lqs.append(img_gt)
- if self.flow_root is not None:
- img_results, img_flows = augment(img_lqs, self.opt['use_hflip'], self.opt['use_rot'], img_flows)
- else:
- img_results = augment(img_lqs, self.opt['use_hflip'], self.opt['use_rot'])
-
- img_results = img2tensor(img_results)
- img_lqs = torch.stack(img_results[0:-1], dim=0)
- img_gt = img_results[-1]
-
- if self.flow_root is not None:
- img_flows = img2tensor(img_flows)
- # add the zero center flow
- img_flows.insert(self.num_half_frames, torch.zeros_like(img_flows[0]))
- img_flows = torch.stack(img_flows, dim=0)
-
- # img_lqs: (t, c, h, w)
- # img_flows: (t, 2, h, w)
- # img_gt: (c, h, w)
- # key: str
- if self.flow_root is not None:
- return {'lq': img_lqs, 'flow': img_flows, 'gt': img_gt, 'key': key}
- else:
- return {'lq': img_lqs, 'gt': img_gt, 'key': key}
-
- def __len__(self):
- return len(self.keys)
-
-
-@DATASET_REGISTRY.register()
-class REDSRecurrentDataset(data.Dataset):
- """REDS dataset for training recurrent networks.
-
- The keys are generated from a meta info txt file.
- basicsr/data/meta_info/meta_info_REDS_GT.txt
-
- Each line contains:
- 1. subfolder (clip) name; 2. frame number; 3. image shape, separated by
- a white space.
- Examples:
- 000 100 (720,1280,3)
- 001 100 (720,1280,3)
- ...
-
- Key examples: "000/00000000"
- GT (gt): Ground-Truth;
- LQ (lq): Low-Quality, e.g., low-resolution/blurry/noisy/compressed frames.
-
- Args:
- opt (dict): Config for train dataset. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- dataroot_lq (str): Data root path for lq.
- dataroot_flow (str, optional): Data root path for flow.
- meta_info_file (str): Path for meta information file.
- val_partition (str): Validation partition types. 'REDS4' or
- 'official'.
- io_backend (dict): IO backend type and other kwarg.
-
- num_frame (int): Window size for input frames.
- gt_size (int): Cropped patched size for gt patches.
- interval_list (list): Interval list for temporal augmentation.
- random_reverse (bool): Random reverse input frames.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h
- and w for implementation).
-
- scale (int): Scale factor, which will be added automatically.
- """
-
- def __init__(self, opt):
- super(REDSRecurrentDataset, self).__init__()
- self.opt = opt
- self.gt_root, self.lq_root = Path(opt['dataroot_gt']), Path(opt['dataroot_lq'])
- self.num_frame = opt['num_frame']
-
- self.keys = []
- with open(opt['meta_info_file'], 'r') as fin:
- for line in fin:
- folder, frame_num, _ = line.split(' ')
- self.keys.extend([f'{folder}/{i:08d}' for i in range(int(frame_num))])
-
- # remove the video clips used in validation
- if opt['val_partition'] == 'REDS4':
- val_partition = ['000', '011', '015', '020']
- elif opt['val_partition'] == 'official':
- val_partition = [f'{v:03d}' for v in range(240, 270)]
- else:
- raise ValueError(f'Wrong validation partition {opt["val_partition"]}.'
- f"Supported ones are ['official', 'REDS4'].")
- if opt['test_mode']:
- self.keys = [v for v in self.keys if v.split('/')[0] in val_partition]
- else:
- self.keys = [v for v in self.keys if v.split('/')[0] not in val_partition]
-
- # file client (io backend)
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- self.is_lmdb = False
- if self.io_backend_opt['type'] == 'lmdb':
- self.is_lmdb = True
- if hasattr(self, 'flow_root') and self.flow_root is not None:
- self.io_backend_opt['db_paths'] = [self.lq_root, self.gt_root, self.flow_root]
- self.io_backend_opt['client_keys'] = ['lq', 'gt', 'flow']
- else:
- self.io_backend_opt['db_paths'] = [self.lq_root, self.gt_root]
- self.io_backend_opt['client_keys'] = ['lq', 'gt']
-
- # temporal augmentation configs
- self.interval_list = opt.get('interval_list', [1])
- self.random_reverse = opt.get('random_reverse', False)
- interval_str = ','.join(str(x) for x in self.interval_list)
- logger = get_root_logger()
- logger.info(f'Temporal augmentation interval list: [{interval_str}]; '
- f'random reverse is {self.random_reverse}.')
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- scale = self.opt['scale']
- gt_size = self.opt['gt_size']
- key = self.keys[index]
- clip_name, frame_name = key.split('/') # key example: 000/00000000
-
- # determine the neighboring frames
- interval = random.choice(self.interval_list)
-
- # ensure not exceeding the borders
- start_frame_idx = int(frame_name)
- if start_frame_idx > 100 - self.num_frame * interval:
- start_frame_idx = random.randint(0, 100 - self.num_frame * interval)
- end_frame_idx = start_frame_idx + self.num_frame * interval
-
- neighbor_list = list(range(start_frame_idx, end_frame_idx, interval))
-
- # random reverse
- if self.random_reverse and random.random() < 0.5:
- neighbor_list.reverse()
-
- # get the neighboring LQ and GT frames
- img_lqs = []
- img_gts = []
- for neighbor in neighbor_list:
- if self.is_lmdb:
- img_lq_path = f'{clip_name}/{neighbor:08d}'
- img_gt_path = f'{clip_name}/{neighbor:08d}'
- else:
- img_lq_path = self.lq_root / clip_name / f'{neighbor:08d}.png'
- img_gt_path = self.gt_root / clip_name / f'{neighbor:08d}.png'
-
- # get LQ
- img_bytes = self.file_client.get(img_lq_path, 'lq')
- img_lq = imfrombytes(img_bytes, float32=True)
- img_lqs.append(img_lq)
-
- # get GT
- img_bytes = self.file_client.get(img_gt_path, 'gt')
- img_gt = imfrombytes(img_bytes, float32=True)
- img_gts.append(img_gt)
-
- # randomly crop
- img_gts, img_lqs = paired_random_crop(img_gts, img_lqs, gt_size, scale, img_gt_path)
-
- # augmentation - flip, rotate
- img_lqs.extend(img_gts)
- img_results = augment(img_lqs, self.opt['use_hflip'], self.opt['use_rot'])
-
- img_results = img2tensor(img_results)
- img_gts = torch.stack(img_results[len(img_lqs) // 2:], dim=0)
- img_lqs = torch.stack(img_results[:len(img_lqs) // 2], dim=0)
-
- # img_lqs: (t, c, h, w)
- # img_gts: (t, c, h, w)
- # key: str
- return {'lq': img_lqs, 'gt': img_gts, 'key': key}
-
- def __len__(self):
- return len(self.keys)
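-
-
-# Illustrative configuration sketch (not part of the original BasicSR file): an `opt`
-# dict with the keys documented in the class docstrings above. All paths and values
-# are placeholders; real training configs are normally written in YAML.
-#
-#   example_opt = dict(
-#       dataroot_gt='datasets/REDS/train_sharp',              # placeholder path
-#       dataroot_lq='datasets/REDS/train_sharp_bicubic/X4',   # placeholder path
-#       dataroot_flow=None,
-#       meta_info_file='basicsr/data/meta_info/meta_info_REDS_GT.txt',
-#       val_partition='REDS4',
-#       io_backend=dict(type='disk'),
-#       num_frame=5,
-#       gt_size=256,
-#       interval_list=[1],
-#       random_reverse=False,
-#       use_hflip=True,
-#       use_rot=True,
-#       scale=4,
-#       test_mode=False)
-#   dataset = REDSRecurrentDataset(example_opt)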
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/swinir_model.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/swinir_model.py
deleted file mode 100644
index 5ac182f23b4a300aff14b2b45fcdca8c00da90c1..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/swinir_model.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-from basicsr.utils.registry import MODEL_REGISTRY
-from .sr_model import SRModel
-
-
-@MODEL_REGISTRY.register()
-class SwinIRModel(SRModel):
-
- def test(self):
- # pad to multiplication of window_size
- window_size = self.opt['network_g']['window_size']
- scale = self.opt.get('scale', 1)
- mod_pad_h, mod_pad_w = 0, 0
- _, _, h, w = self.lq.size()
- if h % window_size != 0:
- mod_pad_h = window_size - h % window_size
- if w % window_size != 0:
- mod_pad_w = window_size - w % window_size
- img = F.pad(self.lq, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
- if hasattr(self, 'net_g_ema'):
- self.net_g_ema.eval()
- with torch.no_grad():
- self.output = self.net_g_ema(img)
- else:
- self.net_g.eval()
- with torch.no_grad():
- self.output = self.net_g(img)
- self.net_g.train()
-
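- # remove the border that corresponds to the reflection padding (scaled by the SR factor)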
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - mod_pad_h * scale, 0:w - mod_pad_w * scale]
diff --git a/spaces/bioriAsaeru/text-to-voice/Chromatic Harmonica Songbook Christmas Carols Thomas Balinger LINK.md b/spaces/bioriAsaeru/text-to-voice/Chromatic Harmonica Songbook Christmas Carols Thomas Balinger LINK.md
deleted file mode 100644
index af2c42a9172482d4d7fc775cc16ba5a1c8fb63eb..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Chromatic Harmonica Songbook Christmas Carols Thomas Balinger LINK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Chromatic Harmonica Songbook: Christmas Carols Thomas Balinger
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Kunci Jawaban Kumon Level J Rangkuman dan Latihan Matematika SMA.md b/spaces/bioriAsaeru/text-to-voice/Kunci Jawaban Kumon Level J Rangkuman dan Latihan Matematika SMA.md
deleted file mode 100644
index 3950820588753c3e773b2454c73ffd0eb96b3309..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Kunci Jawaban Kumon Level J Rangkuman dan Latihan Matematika SMA.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Informasi yang anda cari adalah Kunci Jawaban Kumon Matematika Level J. Dibawah ini telah kami sajikan Informasi Lowongan berdasarkan keterkaitan.
19 Nov 2017 . We have many downloads related to kunci jawaban kumon level j which are hosted on sites like . The presence of this template of a flower vase.
10 Aug 2018 - 8 min - Uploaded by CherriesIts very legit indeed no link bs just strait answers. Hehe.
Kunci Jawaban Kumon Level J - DOWNLOAD c1731006c4 [FULL] WINDOWS VISTA Starter Original. . kunci jawaban kumon level j download video seks anak.