diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/Wake Up Sid 720p Dvdrip Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/Wake Up Sid 720p Dvdrip Torrent.md
deleted file mode 100644
index 28543d6bd3039a9cfbc967c33a354e0bc3fde8b9..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/Wake Up Sid 720p Dvdrip Torrent.md
+++ /dev/null
@@ -1,74 +0,0 @@
-## Wake Up Sid 720p Dvdrip Torrent
-
-
-
-
-
-
-
-
-
-**CLICK HERE ⚙⚙⚙ [https://jinyurl.com/2tA06v](https://jinyurl.com/2tA06v)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Download Wake Up Sid (2009) in High Quality
-
-
-
-Wake Up Sid is a 2009 Indian comedy-drama film directed by Ayan Mukerji and starring Ranbir Kapoor and Konkona Sen Sharma. The film tells the story of Sid Mehra, a spoiled and aimless young man who finds his true calling after meeting Aisha, an aspiring writer from Calcutta.
-
-
-
-If you want to watch this movie in high quality, you can download it from torrent sites using a VPN service. A VPN service will protect your privacy and security by encrypting your traffic and hiding your IP address from your ISP and government agencies. Here are the steps to download Wake Up Sid (2009) in 720p or 1080p bluray quality:
-
-
-
-1. Download and install a VPN service on your device. We recommend Hide VPN as it is fast, reliable and affordable.
-
-2. Connect to a VPN server in a country where torrenting is legal, such as Switzerland or the Netherlands.
-
-3. Go to a torrent site that has Wake Up Sid (2009) available for download. We recommend YTS.mx or YTS.rs as they have high-quality torrents and subtitles.
-
-4. Search for Wake Up Sid (2009) and choose the desired quality (720p or 1080p). Click on the download button or magnet link to start the download.
-
-5. Open the torrent file with your preferred torrent client and wait for the download to finish.
-
-6. Enjoy watching Wake Up Sid (2009) in high quality!
-
-
-
-Note: Downloading torrents is risky and may expose you to legal issues. We do not condone or encourage piracy and advise you to respect the copyrights of the creators. Please use this guide at your own risk.
-
-
-
-Wake Up Sid (2009) is a refreshing and realistic portrayal of the urban youth in India. The film explores the themes of friendship, love, family, career and self-discovery through the eyes of Sid, who undergoes a transformation from a lazy and irresponsible boy to a mature and responsible man. The film also showcases the vibrant and cosmopolitan city of Mumbai, which serves as a backdrop for Sid's journey.
-
-
-
-The film received positive reviews from critics and audiences alike. It was praised for its direction, screenplay, performances, music and cinematography. It was also a commercial success, grossing over ₹750 million worldwide. It won several awards and nominations, including three Filmfare Awards for Best Debut Director, Best Supporting Actress and Best Story.
-
-
-
-Wake Up Sid (2009) is a must-watch for anyone who loves a good coming-of-age story with a touch of humor and romance. It is a film that will make you laugh, cry and cheer for Sid as he wakes up to his true potential. You can download it from torrent sites in high quality using a VPN service and enjoy it on your device.
-
-
-
-In conclusion, Wake Up Sid (2009) is a brilliant and engaging film that will appeal to anyone who loves a good story with relatable characters and realistic situations. The film is a perfect example of how a simple and honest story can touch the hearts of millions of viewers. If you want to watch this film in high quality, you can download it from torrent sites using a VPN service and enjoy it on your device. Wake Up Sid (2009) is a film that will make you wake up to life and its possibilities.
-
- 145887f19f
-
-
-
-
-
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Office 365 Offline Installer for Free and Install it on Your PC or Mac.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Office 365 Offline Installer for Free and Install it on Your PC or Mac.md
deleted file mode 100644
index 9d6d720f7ddbb33f7855506de5e91ba265c455fd..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Office 365 Offline Installer for Free and Install it on Your PC or Mac.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
How to Download Office 365 Offline Installer for Free
-
Office 365 is a subscription-based service that offers various Microsoft products such as Word, Excel, PowerPoint, Outlook, OneNote and more. You can access these products online or install them on your PC or Mac. However, if you have a slow or unreliable internet connection, you might want to download Office 365 offline installer for free and install it on your device without any interruptions.
-
What is Office 365 Offline Installer?
-
Office 365 offline installer is a file that contains all the necessary components to install Office 365 on your PC or Mac without an internet connection. You can download this file from your Microsoft account portal and save it to a USB drive or a disc. You can then use this file to install Office 365 on any device that meets the system requirements.
How to Download Office 365 Offline Installer for Free?
-
To download Office 365 offline installer for free, you will need to have an active Office 365 subscription and a Microsoft account. You will also need to be connected to the internet to download this file, but once that's done, you can install Office 365 offline on your device at your convenience. Here are the steps to follow:
-
-
Go to www.office.com and sign in with your Microsoft account associated with your Office 365 subscription.
-
Select Install Office from the home page.
-
In the Download and install window, select Other options.
-
Check the box Download an offline installer and select the language you want to install Office 365 in.
-
Select Download and choose a location to save the file.
-
Wait for the download to complete. The file size may vary depending on your subscription plan and language.
-
-
How to Install Office 365 Offline?
-
After you have downloaded the Office 365 offline installer file, you can install it on your PC or Mac by following these steps:
-
-
Locate the file you downloaded and double-click it to open it.
-
A new virtual drive will appear in your directory, for example (D:). This drive contains the Office 365 installation files.
-
Select the Office folder from the virtual drive and then double-click either Setup32.exe to install the 32-bit version of Office 365 or Setup64.exe to install the 64-bit version.
-
Follow the on-screen instructions to complete the installation.
-
Activate Office 365 by signing in with your Microsoft account when prompted.
-
-
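On Windows, the manual mount-and-run steps above can also be scripted. The sketch below is only illustrative and rests on a few assumptions: the installer image path is a hypothetical example, PowerShell's Mount-DiskImage cmdlet can mount the downloaded image, and the script is run with enough privileges for the installer to proceed. It is not an official Microsoft procedure.

```python
import subprocess

# Hypothetical download location of the Office 365 offline installer image; adjust to yours.
image_path = r"C:\Users\you\Downloads\Office365OfflineInstaller.img"

# Mount the image via PowerShell's Mount-DiskImage and read back the drive letter
# assigned to the new virtual drive.
drive_letter = subprocess.run(
    [
        "powershell", "-NoProfile", "-Command",
        f"(Mount-DiskImage -ImagePath '{image_path}' -PassThru | Get-Volume).DriveLetter",
    ],
    capture_output=True, text=True, check=True,
).stdout.strip()

# The mounted virtual drive contains an Office folder with Setup32.exe and Setup64.exe;
# launch the 64-bit installer and wait for it to finish (a UAC prompt may appear).
setup_exe = f"{drive_letter}:\\Office\\Setup64.exe"
subprocess.run([setup_exe], check=True)
```

Once setup completes, the image can be detached with PowerShell's Dismount-DiskImage, and signing in to any Office app with the subscription's Microsoft account handles activation, as in the last step above.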
Conclusion
-
Office 365 offline installer is a convenient way to install Office 365 on your PC or Mac without an internet connection. You can download this file for free from your Microsoft account portal and use it to install Office 365 on any device that meets the system requirements. You can also save this file to a USB drive or a disc for later use. Enjoy using Office 365 offline!
- ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/21 Jump Street 720p Yify 208.md b/spaces/1gistliPinn/ChatGPT4/Examples/21 Jump Street 720p Yify 208.md
deleted file mode 100644
index 66265cc67ecda6b855a30aab7477205b2249ba1e..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/21 Jump Street 720p Yify 208.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
-If a teenager is in a position where they cannot support a child, adoption may be a natural alternative. - Adoption allows a couple to adopt a child, thereby giving them ... As with any form of adoption, it can be difficult to adopt.
-However, if there is something that can really be accomplished if you are in a situation that you cannot afford to raise a child, why not consider adoption?
-There are many pros and cons to consider before making a final decision.
-If you are thinking about adopting, consider the following three points to see if it is worth your effort 8a78ff9644
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 105.md b/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 105.md
deleted file mode 100644
index 5973c3bfe38e1b7472ae0136a131f44f1aa956ea..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 105.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
-Decipher Textmessage License Code Mac Keygen; DECIPHER BACKUP REPAIR; Decipher TextMessage ... We provide RV generator repair and installation. 1fdad05405
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Diptrace Full Version Free Download Crack REPACK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Diptrace Full Version Free Download Crack REPACK.md
deleted file mode 100644
index fd1c9971a7de801398d3d1789ffb02a251edee7a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Diptrace Full Version Free Download Crack REPACK.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-
How to Download and Install Diptrace Full Version for Free with Crack
-
Diptrace is a powerful and easy-to-use software for designing and simulating printed circuit boards (PCBs). It offers a comprehensive set of features, such as schematic capture, PCB layout, 3D modeling, autorouting, verification, and export. However, the full version of Diptrace is not free and requires a license key to activate.
If you want to use Diptrace for free without paying for a license, you might be tempted to look for a crack or a patch that can bypass the activation process. However, this is not a good idea for several reasons. First of all, downloading and installing a crack or a patch from an unknown source can expose your computer to malware and viruses that can harm your system and compromise your data. Second, using a cracked or patched version of Diptrace can cause errors and bugs that can affect the performance and quality of your PCB designs. Third, using a cracked or patched version of Diptrace is illegal and unethical, as it violates the terms and conditions of the software and infringes the intellectual property rights of the developers.
-
Therefore, the best way to use Diptrace for free is to download and install the official trial version from the official website. The trial version allows you to use all the features of Diptrace for 30 days without any limitations. After 30 days, you can either purchase a license key to continue using the full version or switch to the freeware version. The freeware version has some restrictions on the number of pins and signal layers, but it still allows you to design and simulate simple PCBs for personal or educational purposes.
-
To download and install Diptrace full version for free with the trial option, follow these steps:
Go to the official DipTrace website and download the trial version installer.
-
Save the installation file on your computer and run it as an administrator.
-
Follow the instructions on the screen to complete the installation process.
-
Launch Diptrace and enter your name and email address to register for the trial option.
-
Enjoy using Diptrace full version for free for 30 days.
-
-
I hope this helps you with your PCB design project. If you have any questions or feedback, please let me know.
-
-
-
In this article, I will show you some tips and tricks to improve your PCB design skills using Diptrace. Whether you are a beginner or an expert, you can always learn something new and enhance your productivity and creativity with Diptrace.
-
Tip 1: Use the built-in libraries and components
-
Diptrace comes with a large collection of libraries and components that you can use for your PCB design project. You can access them from the "Library" menu in the schematic or PCB editor. You can also search for a specific component by name, type, or category using the "Find Component" tool. You can also add your own custom components or import them from other sources using the "Component Editor". By using the built-in libraries and components, you can save time and avoid errors in your design.
-
Tip 2: Use the autorouter and manual routing tools
-
Diptrace offers both an autorouter and manual routing tools to help you connect the components on your PCB. The autorouter can automatically route all or some of the nets on your PCB according to your settings and preferences. You can access the autorouter from the "Route" menu in the PCB editor. You can also use the manual routing tools to draw traces, vias, arcs, polygons, and other shapes on your PCB. You can access the manual routing tools from the toolbar or the "Route" menu in the PCB editor. By using the autorouter and manual routing tools, you can optimize your PCB layout and reduce noise and interference.
-
Tip 3: Use the verification and export tools
-
Diptrace also provides verification and export tools to help you check and finalize your PCB design. The verification tools can detect and highlight any errors or warnings on your schematic or PCB, such as unconnected pins, overlapping objects, clearance violations, etc. You can access the verification tools from the "Verification" menu in the schematic or PCB editor. The export tools can generate various output files for your PCB design, such as Gerber files, drill files, netlist files, bill of materials (BOM), etc. You can access the export tools from the "File" menu in the schematic or PCB editor. By using the verification and export tools, you can ensure that your PCB design is error-free and ready for fabrication.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FileMenu Tools 7.7.0.0 With Crack (Latest) FREE.md b/spaces/1gistliPinn/ChatGPT4/Examples/FileMenu Tools 7.7.0.0 With Crack (Latest) FREE.md
deleted file mode 100644
index b5742035860e3a659d91847521ceba284fb71a37..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/FileMenu Tools 7.7.0.0 With Crack (Latest) FREE.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-First download FileMenu Tools Crack from the links below. If you are ... Download FileMenu Tools 7.7.0.0 Multilingual [Latest] from our software library. FileMenu ... 1fdad05405
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Flatiron 3ds Max 2012 Torrent.md b/spaces/1gistliPinn/ChatGPT4/Examples/Flatiron 3ds Max 2012 Torrent.md
deleted file mode 100644
index 8a7f9325ebed481a6b420049e911e6591e0202a6..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Flatiron 3ds Max 2012 Torrent.md
+++ /dev/null
@@ -1,72 +0,0 @@
-
-
Flatiron 3ds Max 2012 Torrent: A Guide to 3D Texture Baking
-
-
If you are looking for a plugin that can help you bake full scenes or selections of objects into a single UV map in 3ds Max 2012, you might want to check out Flatiron 3ds Max 2012 Torrent. Flatiron is a four-step Render To Texture plugin that is based on the Unwrella high-quality automated unwrapping technology. It is a fast, simple, and yet completely configurable automatic unwrapping and baking solution that can greatly speed up the process of baking complex scenes.
What are the benefits of using Flatiron 3ds Max 2012 Torrent?
-
-
Flatiron 3ds Max 2012 Torrent can help you create realistic and detailed textures for your 3D models without spending too much time and resources on rendering. Some of the benefits of using Flatiron are:
-
-
-
It can handle thousands of objects at once, making it ideal for real time game levels, architectural scenes, industrial design and more.
-
It can bake any additional shaders, such as diffuse, lightmaps, shadowmaps, global illumination maps, etc. into one texture.
-
It can automatically generate optimal UV layouts for each object or group of objects, minimizing distortion and seams.
-
It can support multiple texture resolutions and formats, such as JPG, PNG, TGA, BMP, etc.
-
It can work with any render engine that supports Render To Texture functionality in 3ds Max 2012.
-
-
-
How to download and install Flatiron 3ds Max 2012 Torrent?
-
-
If you want to try out Flatiron 3ds Max 2012 Torrent, you can follow these steps:
-
-
-
Download the Flatiron 3ds Max 2012 Torrent file from a reliable source. Make sure you have a torrent client installed on your computer.
-
Extract the ZIP file to a folder on your hard drive.
-
Run the setup.exe file and follow the instructions to install Flatiron on your computer.
-
Copy the crack file from the crack folder and paste it into the installation directory of Flatiron.
-
Launch 3ds Max 2012 and activate Flatiron from the plugin manager.
-
-
-
How to use Flatiron 3ds Max 2012 Torrent?
-
-
Using Flatiron 3ds Max 2012 Torrent is very easy and straightforward. You just need to follow these four steps:
-
-
-
Select the objects or groups of objects that you want to bake into a single UV map.
-
Open the Flatiron dialog from the Utilities panel or the Quad menu.
-
Choose the texture resolution, format and output folder for your baked texture.
-
Click on Start Baking and wait for Flatiron to do its magic.
-
-
-
You can also adjust some advanced settings in Flatiron, such as padding, margin, overlap, smoothing groups, etc. to fine tune your results. You can also preview your baked texture in the viewport or open it in an image editor for further editing.
-
-
Where can you find Flatiron 3ds Max 2012 Torrent tutorials?
-
-
If you want to learn more about how to use Flatiron 3ds Max 2012 Torrent effectively, you can find some helpful tutorials online. Here are some of the best sources for Flatiron tutorials:
-
-
-
The official Flatiron website has a comprehensive user manual that covers all the features and settings of the plugin. You can also find some video tutorials that demonstrate how to use Flatiron for different scenarios and projects.
-
The CG Persia website has a torrent download link for Flatiron 3ds Max 2012 Torrent that also includes a video tutorial on how to bake a canalization scene using Flatiron. You can learn some tips and tricks on how to optimize your UV layout and texture quality with Flatiron.
-
The YouTube channel of 3d-io games & video production GmbH has several videos that showcase the capabilities and benefits of Flatiron. You can see how Flatiron can handle complex scenes with thousands of objects, how it can bake multiple shaders into one texture, and how it can work with different render engines.
-
-
-
What are some alternatives to Flatiron 3ds Max 2012 Torrent?
-
-
Flatiron 3ds Max 2012 Torrent is not the only plugin that can help you with 3D texture baking in 3ds Max 2012. There are some other plugins that offer similar or different features and functions for texture baking. Some of the most popular alternatives to Flatiron are:
-
-
-
Unwrella: This is another plugin from 3d-io that is based on the same unwrapping technology as Flatiron. However, Unwrella focuses more on creating optimal UV layouts for each object or group of objects, rather than baking them into a single UV map. Unwrella can also work with any 3D software that supports OBJ export.
-
Render To Texture: This is a built-in feature in 3ds Max that allows you to bake textures from any render engine that supports Render To Texture functionality. You can customize your baking settings, such as resolution, format, padding, etc. and preview your results in the viewport.
-
BakeMyScan: This is a free plugin that can help you bake high-poly models into low-poly models with textures. It can also optimize your mesh topology and reduce your polygon count. BakeMyScan can work with any render engine that supports Render To Texture functionality.
-
-
-
Conclusion
-
-
Flatiron 3ds Max 2012 Torrent is a powerful and versatile plugin that can help you create stunning textures for your 3D models in a matter of minutes. It can handle complex scenes with ease and produce high quality results with minimal effort. If you are looking for a plugin that can simplify and speed up your texture baking workflow in 3ds Max 2012, you should definitely give Flatiron a try.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Azrbaycan thsil sisteminin kurikulum az sndi Niy vacibdir v nec ilyir?.md b/spaces/1phancelerku/anime-remove-background/Azrbaycan thsil sisteminin kurikulum az sndi Niy vacibdir v nec ilyir?.md
deleted file mode 100644
index 38b4dcf215534d38c3a03a314e53b8de2c0d1d80..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Azrbaycan thsil sisteminin kurikulum az sndi Niy vacibdir v nec ilyir?.md
+++ /dev/null
@@ -1,163 +0,0 @@
-
-
Kurikulum az: What is it and why is it important?
-
Kurikulum az is a term that refers to the modern curriculum model that has been implemented in Azerbaijan since 2016. It is based on the principles of student-centered, competency-based, and outcome-oriented education. It aims to provide students with the knowledge, skills, values, and attitudes that they need to succeed in the 21st century. But what exactly is kurikulum az and why is it important for the development of education in Azerbaijan? In this article, we will explore the meaning, structure, content, benefits, and challenges of kurikulum az.
-
Introduction
-
Kurikulum az is derived from the word "curriculum", which means "a course of study". However, kurikulum az is more than just a list of subjects and topics that students have to learn. It is a comprehensive framework that defines the purpose, content, process, assessment, and evaluation of education in Azerbaijan. It covers all levels of education from preschool to higher education. It also reflects the national identity, culture, values, and aspirations of Azerbaijan.
The main goals of kurikulum az are to:
-
Ensure that students acquire the essential knowledge and skills that are relevant to their personal, social, and professional development
-
Develop students' key competencies such as critical thinking, creativity, communication, collaboration, digital literacy, civic literacy, etc.
-
Foster students' lifelong learning habits and attitudes such as curiosity, initiative, responsibility, self-regulation, etc.
-
Prepare students for the challenges and opportunities of the globalized world
-
-
The main principles of kurikulum az are:
-
-
Student-centeredness: Kurikulum az puts the needs, interests, abilities, and preferences of students at the center of education. It allows students to have more choice, voice, and agency in their learning. It also encourages students to learn by doing, discovering, solving problems, and creating products.
-
Competency-basedness: Kurikulum az focuses on developing students' competencies rather than memorizing facts. Competencies are complex combinations of knowledge, skills, values, and attitudes that enable students to perform tasks effectively in various contexts. Kurikulum az defines eight key competencies that students should master by the end of their education.
-
Outcome-orientedness: Kurikulum az defines clear and measurable learning outcomes for each subject and course. Learning outcomes are statements that describe what students should know, be able to do, and value as a result of their learning. Learning outcomes guide the teaching, learning, and assessment processes in kurikulum az.
-
-
Kurikulum az is different from traditional curriculum in several ways. For example:
-
-
Kurikulum az is more flexible and adaptable to the changing needs and demands of society and economy
-
Kurikulum az is more integrated and interdisciplinary across subjects and courses
-
Kurikulum az is more interactive and collaborative among students and teachers
-
Kurikulum az is more diverse and inclusive of different learners' backgrounds, abilities, styles, and preferences
The structure and content of kurikulum az
-
Kurikulum az is organized into four sub-levels of general education: preschool, primary, basic, and secondary. Each sub-level has its own specific objectives, content standards, and learning outcomes. The table below shows the duration, age range, and main subjects of each sub-level.
-
| Sub-level | Duration | Age range | Main subjects |
|---|---|---|---|
| Preschool | 1-2 years | 3-5 years | Language and communication, mathematics, natural sciences, social sciences, arts, physical education |
| Primary | 4 years | 6-9 years | Azerbaijani language and literature, mathematics, natural sciences, social sciences, foreign language, arts, physical education, ethics and religion |
| Basic | 5 years | 10-14 years | Azerbaijani language and literature, mathematics, natural sciences, social sciences, foreign language, arts, physical education, ethics and religion, information and communication technologies, elective courses |
| Secondary | 2 years | 15-16 years | Azerbaijani language and literature, mathematics, natural sciences, social sciences, foreign language, arts, physical education, ethics and religion, information and communication technologies, elective courses |
-
Kurikulum az defines eight key competencies that students should develop throughout their general education. These competencies are:
-
-
Linguistic competence: The ability to communicate effectively in oral and written forms in Azerbaijani and foreign languages.
-
Mathematical competence: The ability to use mathematical concepts, procedures, and reasoning to solve problems in various contexts.
-
Natural-scientific competence: The ability to understand and apply scientific concepts, methods, and processes to explain natural phenomena and human interactions with the environment.
-
Social-scientific competence: The ability to understand and analyze social, historical, cultural, political, economic, and geographic aspects of human societies and their diversity.
-
Digital competence: The ability to use information and communication technologies to access, create, process, store, share, and evaluate information.
-
Civic competence: The ability to participate actively and responsibly in democratic processes and civic life at local, national, and global levels.
-
Cultural competence: The ability to appreciate and respect one's own and others' cultural identities, values, beliefs, traditions, and expressions.
-
Personal competence: The ability to manage one's own learning, emotions, health, well-being, relationships, and career development.
-
-
Kurikulum az also specifies the content standards and learning outcomes for each subject and course. Content standards describe the essential knowledge and skills that students should acquire in each subject area. Learning outcomes describe the expected achievements of students at the end of each sub-level of general education. For example:
-
kurikulum azərbaycan dili
-kurikulum azərbaycan ədəbiyyatı
-kurikulum azərbaycan tarixi
-kurikulum azərbaycan coğrafiyası
-kurikulum azərbaycan mədəniyyəti
-kurikulum az portalı
-kurikulum az şəxsi kabinet
-kurikulum az arti edu
-kurikulum az riyaziyyat
-kurikulum az fizika
-kurikulum az kimya
-kurikulum az biologiya
-kurikulum az ingilis dili
-kurikulum az rus dili
-kurikulum az alman dili
-kurikulum az fransız dili
-kurikulum az türk dili
-kurikulum az fəlsəfə
-kurikulum az psixologiya
-kurikulum az sosial elmlər
-kurikulum az hüquqşünaslıq
-kurikulum az iqtisadiyyat
-kurikulum az informatika
-kurikulum az texnologiya
-kurikulum az musiqi
-kurikulum az rəsm və naxış
-kurikulum az bədən tərbiyəsi
-kurikulum az sivil müdafiə
-kurikulum az tibb və sağlamlıq
-kurikulum az ekologiya və təbii sərvətlər
-kurikulum az mühazirələr və prezentasiyalar
-kurikulum az testlər və suallar
-kurikulum az imtahanlar və qiymətləndirmələr
-kurikulum az metodika və pedaqoji texnologiyalar
-kurikulum az təhsil standartları və proqramları
-kurikulum az tibbi profilaktika və hüquqi mühafizə
-kurikulum az türk dünyası və beynəlxalq ictimaiyyat
-kurikulum az qlobal problemlər və inkişaf perspektivləri
-kurikulum az innovasiya və yaradıcılıq
-kurikulum az liderlik və menecment
-kurikulum az kommunikasiya və ictimai fayda
-kurikulum az etika və dini mühit
-kurikulum az girişimçilik və karyera planlaşdırma
-kurikulum az media və informasiya savadı
-kurikulum az dil öyrənmə strategiyaları
-kurikulum az mükafatlandırma və motivasiya
-kurikulum az öyrənmə üsulları və stililri
-kurikulum az öyrücülük və mentorluq
-kurikulum az öyrütmek üçün dizayn
-
-
The content standard for Azerbaijani language and literature in primary education is: "Students will develop their linguistic competence in Azerbaijani language by listening, speaking, reading, and writing in various situations and contexts. They will also develop their literary competence by exploring and appreciating different genres and forms of Azerbaijani literature."
-
The learning outcome for Azerbaijani language and literature in primary education is: "By the end of primary education, students will be able to communicate effectively in oral and written forms in Azerbaijani language using appropriate vocabulary, grammar, and style. They will also be able to analyze and interpret different texts and works of Azerbaijani literature using basic literary concepts and techniques."
The benefits and challenges of kurikulum az
-
Kurikulum az has many benefits for the improvement of the quality and relevance of education in Azerbaijan. Some of these benefits are:
-
-
Kurikulum az helps students to develop the competencies and skills that are in high demand in the modern world, such as critical thinking, creativity, communication, collaboration, digital literacy, civic literacy, etc.
-
Kurikulum az enables students to learn in a more meaningful and engaging way, by connecting their learning to real-life situations, problems, and contexts.
-
Kurikulum az empowers students to take more responsibility and ownership of their learning, by giving them more choice, voice, and agency in their learning process.
-
Kurikulum az supports teachers to adopt more effective and innovative teaching methods, such as inquiry-based learning, project-based learning, cooperative learning, etc.
-
Kurikulum az involves parents and other stakeholders in the education system, by encouraging their participation and feedback in the curriculum development, implementation, and evaluation.
-
Kurikulum az reflects and promotes the national identity, culture, values, and aspirations of Azerbaijan, by integrating them into the curriculum content and outcomes.
-
-
However, kurikulum az also faces some challenges and difficulties in its implementation and evaluation. Some of these challenges are:
-
-
Kurikulum az requires a lot of resources and support for its successful implementation, such as adequate funding, infrastructure, equipment, materials, training, etc.
-
Kurikulum az demands a lot of changes and adjustments from the teachers, students, parents, and other actors in the education system, such as new roles, responsibilities, expectations, attitudes, behaviors, etc.
-
Kurikulum az poses a lot of questions and uncertainties about its effectiveness and impact on the students' learning outcomes and achievements, such as how to measure, monitor, assess, and evaluate them.
-
-
Conclusion
-
In conclusion, kurikulum az is a modern curriculum model that aims to provide students with the knowledge, skills, values, and attitudes that they need to succeed in the 21st century. It is based on the principles of student-centeredness, competency-basedness, and outcome-orientedness. It covers all levels of general education from preschool to higher education. It defines eight key competencies that students should develop throughout their education. It also specifies the content standards and learning outcomes for each subject and course. Kurikulum az has many benefits for the improvement of the quality and relevance of education in Azerbaijan. However, it also faces some challenges and difficulties in its implementation and evaluation. Therefore, it is important to provide continuous support and feedback to all the stakeholders involved in kurikulum az and to monitor and improve its effectiveness and impact on the students' learning outcomes and achievements.
-
Do you have any questions or comments about kurikulum az? If so, please share them with us in the comment section below. We would love to hear from you!
-
FAQs
-
Here are some frequently asked questions and answers about kurikulum az:
-
-
What is the difference between kurikulum az and derslik?
-
Derslik is a term that refers to the textbooks that are used in schools. Kurikulum az is a term that refers to the curriculum model that guides the teaching, learning, and assessment processes in schools. Derslik is one of the tools that supports kurikulum az, but it is not the only one. Kurikulum az also uses other tools such as teacher guides, student workbooks, digital resources, etc.
-
How can I access kurikulum az online?
-
You can access kurikulum az online through the official website of the Ministry of Education of Azerbaijan: www.edu.gov.az. There you can find all the information, documents, and resources related to kurikulum az.
-
How can I give feedback or suggestions about kurikulum az?
-
You can give feedback or suggestions about kurikulum az through various channels such as email, phone, social media, or online surveys. You can also contact your local education authorities or school administration for any issues or concerns related to kurikulum az.
-
How can I get involved or participate in kurikulum az?
-
You can get involved or participate in kurikulum az by taking an active role in your own or your child's education. For example, you can:
-
-
Read and understand the goals, principles, and content standards and learning outcomes of kurikulum az
-
Support and encourage your child's learning at home and at school
-
Communicate and cooperate with your child's teachers and school administration
-
Participate in school events, activities, and decision-making processes
-
Join or form parent-teacher associations or other community groups that support education
-
Volunteer or donate to educational initiatives or projects
-
What are some examples of good practices or success stories of kurikulum az?
-
There are many examples of good practices or success stories of kurikulum az that showcase the positive impact of kurikulum az on students, teachers, schools, and society. For example:
-
-
Some schools have implemented innovative projects that integrate kurikulum az with local needs and resources, such as environmental education, cultural heritage, social entrepreneurship, etc.
-
Some teachers have adopted new pedagogical methods that enhance student engagement, motivation, and achievement, such as gamification, flipped classroom, blended learning, etc.
-
Some students have demonstrated outstanding performance and achievements in national and international competitions, assessments, and exhibitions, such as Olympiads, PISA, STEM Expo, etc.
-
Some parents and communities have expressed their satisfaction and appreciation for the quality and relevance of education provided by kurikulum az.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Castle Clash Mod Apk 2022 Enjoy the Best Features of the Game with No Ads.md b/spaces/1phancelerku/anime-remove-background/Castle Clash Mod Apk 2022 Enjoy the Best Features of the Game with No Ads.md
deleted file mode 100644
index 5065c76dead90723378df71ac8491f3d1f29705c..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Castle Clash Mod Apk 2022 Enjoy the Best Features of the Game with No Ads.md
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
Castle Clash Mod Apk 2022: A Guide for Beginners
-
Are you looking for a fun and exciting strategy game that will keep you hooked for hours? Do you want to experience the thrill of building your own castle, commanding your own army, and conquering your enemies? If yes, then you should try Castle Clash Mod Apk 2022, the latest version of the popular mobile game that has millions of fans around the world.
In this article, we will tell you everything you need to know about Castle Clash Mod Apk 2022, including what it is, how to download and install it, how to play it, and how to get unlimited money and gems in the game. By the end of this article, you will be ready to join the epic adventure of Castle Clash Mod Apk 2022 and become a world ruler.
-
What is Castle Clash?
-
A brief introduction to the game and its features
-
Castle Clash is a free-to-play mobile strategy game from IGG that was released in 2013. It is one of the most popular games in the genre, with over 100 million downloads on the Google Play Store alone. The game is available for both Android and iOS devices.
-
Castle Clash is a game where you can create your own kingdom, recruit and train your own troops, build and upgrade your own buildings, and fight against other players or computer-controlled enemies. You can choose from ten different medieval lords, each with their own unique troops and buildings. You can also join or create guilds, participate in events, complete quests, and collect rewards.
-
How to download and install Castle Clash Mod Apk 2022
-
If you want to enjoy the game with more features and benefits, you can download and install Castle Clash Mod Apk 2022, which is a modified version of the original game that gives you access to unlimited money and gems, as well as other perks. Here are the steps to download and install Castle Clash Mod Apk 2022:
-
-
Go to or any other trusted website that offers Castle Clash Mod Apk 2022.
-
Click on the download button and wait for the file to be downloaded on your device.
-
Go to your device's settings and enable the installation of apps from unknown sources.
-
Locate the downloaded file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to be completed.
-
Launch the game and enjoy Castle Clash Mod Apk 2022.
-
-
What are the benefits of using Castle Clash Mod Apk 2022
-
There are many benefits of using Castle Clash Mod Apk 2022, such as:
-
-
You can get unlimited money and gems in the game, which you can use to buy anything you want, such as troops, buildings, upgrades, items, etc.
-
You can unlock all the lords, troops, buildings, and modes in the game without having to spend real money or wait for long hours.
-
You can enjoy faster loading times, smoother gameplay, better graphics, and more stability in the game.
-
You can have more fun and excitement in the game without any limitations or restrictions.
How to play Castle Clash Mod Apk 2022
-
The basics of building your castle and army
-
Once you have installed Castle Clash Mod Apk 2022, you can start playing the game by creating your own castle and army. Here are some of the basic steps to follow:
-
castle clash hack apk unlimited gems 2022
-castle clash modded apk free download 2022
-castle clash cheats apk latest version 2022
-castle clash premium apk mod unlocked 2022
-castle clash mod apk offline no root 2022
-castle clash unlimited money apk mod 2022
-castle clash hack tool apk no survey 2022
-castle clash mod apk android 1 2022
-castle clash mod apk revdl 2022
-castle clash mod apk rexdl 2022
-castle clash mod apk happymod 2022
-castle clash mod apk an1 2022
-castle clash mod apk platinmods 2022
-castle clash mod apk blackmod 2022
-castle clash mod apk ihackedit 2022
-castle clash mod apk lenov.ru 2022
-castle clash mod apk andropalace 2022
-castle clash mod apk apkpure 2022
-castle clash mod apk apkmody 2022
-castle clash mod apk apknite 2022
-castle clash mod apk mob.org 2022
-castle clash mod apk mobpark 2022
-castle clash mod apk android republic 2022
-castle clash mod apk androidoyun.club 2022
-castle clash mod apk android zone 2022
-castle clash mod apk latest update 2022
-castle clash mod apk new version 2022
-castle clash mod apk full version 2022
-castle clash mod apk pro version 2022
-castle clash mod apk vip version 2022
-castle clash mod apk mega mod 2022
-castle clash mod apk god mode 2022
-castle clash mod apk one hit kill 2022
-castle clash mod apk unlimited everything 2022
-castle clash mod apk all heroes unlocked 2022
-castle clash mod apk all troops unlocked 2022
-castle clash mod apk all weapons unlocked 2022
-castle clash mod apk all modes unlocked 2022
-castle clash mod apk all features unlocked 2022
-castle clash mod apk all in one 2022
-castle clash hack and slash mod apk 2022
-castle clash strategy and tactics mod apk 2022
-castle clash war and adventure mod apk 2022
-castle clash fantasy and magic mod apk 2022
-castle clash rpg and simulation mod apk 2022
-castle clash online and offline mod apk 2022
-castle clash multiplayer and singleplayer mod apk 2022
-castle clash pvp and pve mod apk 2022
-castle clash fun and addictive mod apk 2022
-castle clash best and popular mod apk 2022
-
-
Choose a lord that suits your playstyle and strategy. Each lord has different strengths and weaknesses, as well as different troops and buildings.
-
Build your castle by placing various buildings, such as barracks, towers, walls, mines, vaults, etc. You can upgrade your buildings to make them stronger and more efficient.
-
Recruit and train your troops by using the barracks. You can choose from different types of troops, such as infantry, archers, cavalry, mages, etc. You can also upgrade your troops to improve their skills and abilities.
-
Defend your castle from enemy attacks by using your towers, walls, traps, heroes, etc. You can also use spells and items to boost your defense.
-
Attack other players' castles or computer-controlled enemies by using your troops, heroes, spells, items, etc. You can also use strategies and tactics to overcome your opponents.
-
-
The different game modes and challenges
-
Castle Clash Mod Apk 2022 offers a variety of game modes and challenges that will test your skills and keep you entertained. Some of the game modes and challenges are:
-
-
Arena: A mode where you can compete with other players in real-time battles and rank up in the leaderboard.
-
Guild Wars: A mode where you can join or create a guild and fight with other guilds for glory and rewards.
-
Dungeon: A mode where you can explore different dungeons and face various enemies and bosses.
-
Raid: A mode where you can raid other players' castles and loot their resources.
-
HBM: A mode where you can defend your castle from waves of enemies and earn rewards.
-
Trial: A mode where you can challenge yourself with different scenarios and difficulties.
-
-
The best tips and tricks for winning battles and raids
-
If you want to win more battles and raids in Castle Clash Mod Apk 2022, you should follow these tips and tricks:
-
-
Know your enemy: Before you attack or defend, you should scout your enemy's castle and troops and plan your strategy accordingly.
-
Use the right troops: Depending on the situation, you should use the right troops for the job. For example, infantry are good for breaking walls, archers are good for sniping towers, cavalry are good for flanking enemies, etc.
-
Use the right heroes: Heroes are powerful units that can turn the tide of battle. You should use the right heroes for the right roles. For example, some heroes are good for offense, some are good for defense, some are good for support, etc.
-
Use the right spells and items: Spells and items are useful tools that can enhance your performance in battle. You should use the right spells and items for the right situations. For example, some spells and items can heal your units, some can damage your enemies, some can buff your allies, etc.
-
Use the right strategies and tactics: Strategies and tactics are important factors that can determine the outcome of battle. You should use the right strategies and tactics for the right scenarios. For example, some strategies and tactics are good for attacking, some are good for defending, some are good for ambushes, etc.
-
How to get unlimited money and gems in Castle Clash Mod Apk 2022
-
The advantages of having unlimited resources in the game
-
One of the main reasons why many players use Castle Clash Mod Apk 2022 is because it gives them unlimited money and gems in the game. Money and gems are the two main currencies in Castle Clash, and they are used for various purposes, such as:
-
-
Buying and upgrading troops, buildings, heroes, spells, items, etc.
-
Speeding up the construction and training time of your units and structures.
-
Unlocking new lords, troops, buildings, and modes in the game.
-
Participating in special events, quests, and rewards.
-
Enhancing your gameplay experience and enjoyment.
-
-
Having unlimited money and gems in the game can give you a huge advantage over other players who have to spend real money or wait for long hours to get them. You can have more fun and freedom in the game without any limitations or restrictions.
-
The methods of getting free money and gems in Castle Clash Mod Apk 2022
-
There are two main methods of getting free money and gems in Castle Clash Mod Apk 2022. They are:
-
-
Using the modded version of the game: This is the easiest and most convenient method of getting unlimited money and gems in the game. All you have to do is download and install Castle Clash Mod Apk 2022 from a trusted website, such as , and launch the game. You will automatically get unlimited money and gems in your account, which you can use as you wish.
-
Using online generators or hacks: This is another method of getting free money and gems in the game, but it is more risky and complicated. You have to use online tools or websites that claim to generate or hack money and gems for you, such as or . You have to enter your username or email, select the amount of money and gems you want, and complete some verification steps. Then, you will supposedly get the money and gems in your account.
-
-
The precautions and risks of using Castle Clash Mod Apk 2022
-
While using Castle Clash Mod Apk 2022 can be tempting and beneficial, it also comes with some precautions and risks that you should be aware of. Some of them are:
-
-
You may get banned from the game: The developers of Castle Clash do not approve of using modded versions or hacks of the game, as they consider it cheating and unfair. They may detect your activity and ban your account from the game permanently.
-
You may get viruses or malware on your device: Some websites or tools that offer Castle Clash Mod Apk 2022 or hacks may be malicious or fraudulent. They may contain viruses or malware that can harm your device or steal your personal information.
-
You may lose your progress or data: Some modded versions or hacks of the game may not be compatible with the original version or updates of the game. They may cause errors or glitches that can corrupt your progress or data in the game.
-
You may lose your interest or challenge in the game: Having unlimited money and gems in the game may make it too easy or boring for you. You may lose your interest or challenge in the game, as you will not have any goals or obstacles to overcome.
-
-
Conclusion
-
A summary of the main points and a call to action
-
In conclusion, Castle Clash Mod Apk 2022 is a modified version of the original Castle Clash game that gives you unlimited money and gems in the game, as well as other features and benefits. It is a fun and exciting strategy game where you can build your own castle, recruit your own army, and fight against other players or enemies. You can download and install Castle Clash Mod Apk 2022 from a trusted website, such as , or use online generators or hacks to get free money and gems in the game. However, you should also be careful of the precautions and risks of using Castle Clash Mod Apk 2022, such as getting banned from the game, getting viruses or malware on your device, losing your progress or data, or losing your interest or challenge in the game.
-
If you are interested in trying out Castle Clash Mod Apk 2022, you can follow the steps we have provided in this article. We hope you have enjoyed this article and learned something new about Castle Clash Mod Apk 2022. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
- FAQs
-
What is the difference between Castle Clash and Castle Clash Mod Apk 2022?
-
Castle Clash is the original version of the game, while Castle Clash Mod Apk 2022 is a modified version of the game that gives you unlimited money and gems, as well as other features and benefits.
-
Is Castle Clash Mod Apk 2022 safe to use?
-
Castle Clash Mod Apk 2022 is safe to use if you download and install it from a trusted website, such as . However, you should also be aware of the precautions and risks of using it, such as getting banned from the game, getting viruses or malware on your device, losing your progress or data, or losing your interest or challenge in the game.
-
How can I update Castle Clash Mod Apk 2022?
-
You can update Castle Clash Mod Apk 2022 by visiting the same website where you downloaded and installed it, and downloading and installing the latest version of the mod. You should also backup your progress and data before updating, in case something goes wrong.
-
Can I play Castle Clash Mod Apk 2022 with my friends?
-
Yes, you can play Castle Clash Mod Apk 2022 with your friends, as long as they also have the same modded version of the game. You can join or create guilds, chat with other players, and cooperate or compete with them in various game modes and challenges.
-
Can I play Castle Clash Mod Apk 2022 offline?
-
No, you cannot play Castle Clash Mod Apk 2022 offline, as it requires an internet connection to run. You need to be online to access the game's servers, features, and content.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Ludo for PC and Challenge Your Friends Online.md b/spaces/1phancelerku/anime-remove-background/Download Ludo for PC and Challenge Your Friends Online.md
deleted file mode 100644
index ae1c824b817800adf4d968440257f66c71cb1177..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Ludo for PC and Challenge Your Friends Online.md
+++ /dev/null
@@ -1,153 +0,0 @@
-
-
How to Download Ludo for PC and Enjoy Its Benefits
-
Ludo is one of the most popular board games in the world, especially in India, where it originated. It is a game that can be played by anyone, regardless of age or skill level. It is also a game that can offer many benefits, such as improving your cognitive abilities, social skills, and confidence. But did you know that you can also play Ludo on your PC? In this article, we will show you how to download Ludo for PC using an Android emulator, and what are the advantages of playing Ludo on PC with BlueStacks.
Ludo is a classic board game that originated in India
-
Ludo is a board game that is played by two to four players. Each player has four tokens of the same color, which they have to move around the board according to the roll of a dice. The objective of the game is to be the first player to move all four tokens into their home triangle in the center of the board. Along the way, players can capture their opponents' tokens by landing on the same square as them, or block their path by forming a chain with their own tokens. The game is based on an ancient Indian game called Pachisi, which was played by kings and queens in medieval times.
-
Ludo is a fun and engaging game that can improve your skills and social connections
-
Ludo is not just a simple game that you play for entertainment. It is also a game that can help you develop various skills and qualities that are useful in life. For example, playing Ludo can help you:
-
-
Develop your brain function by stimulating your logical thinking, problem-solving, analysis, and decision-making abilities.
-
Give pleasure and relieve stress by providing a fun and relaxing activity that can distract you from your worries and challenges.
-
Lower your blood pressure by reducing anxiety and tension that can affect your health.
-
Avoid serious diseases by keeping your brain active and preventing cognitive decline.
-
Strengthen your immune system by boosting your mood and happiness hormones.
-
Improve your mind for strategy and tactics by planning your moves ahead and anticipating your opponents' actions.
-
Have better relationships with friends and family by playing with them online or offline, communicating with them, and bonding with them over a shared interest.
-
Instill a competitive spirit in yourself by challenging yourself and others to win the game.
-
Escape from boredom and loneliness by playing with other players around the world, making new friends, and having fun conversations.
-
-
As you can see, playing Ludo can have many positive effects on your mind, body, and soul. But how can you play Ludo on your PC? Let's find out in the next section.
-
How to Download Ludo for PC Using an Android Emulator
-
An Android emulator is a software that allows you to run Android apps on your PC
-
If you want to play Ludo on your PC, you will need an Android emulator. An Android emulator is a software that mimics the Android operating system on your PC, allowing you to run Android apps and games on your computer. There are many Android emulators available online, but one of the best and most popular ones is BlueStacks.
-
You can use BlueStacks, a popular and reliable Android emulator, to download and play Ludo on your PC
-
BlueStacks is a free and easy-to-use Android emulator that has millions of users worldwide. It is compatible with Windows and Mac computers, and it supports a wide range of Android apps and games, including Ludo. With BlueStacks, you can download and play Ludo on your PC in just a few steps. Here's how:
-
How to download ludo king on pc
-Ludo game for pc free download
-Ludo game for pc windows 10
-Ludo game for pc online multiplayer
-Ludo game for pc offline
-Ludo game for pc with friends
-Ludo game for pc bluestacks
-Ludo game for pc emulator
-Ludo game for pc full version
-Ludo game for pc without internet
-Best ludo game for pc 2023
-Ludo club fun dice game for pc
-Ludo star 2 game for pc
-Ludo master new ludo game 2023 for pc
-Ludo all star online classic board and dice game for pc
-Ludo super classic board and dice game for pc
-Ludo talent board and dice game for pc
-Ludo dream classic board and dice game for pc
-Ludo party board and dice game for pc
-Ludo champ 2023 free new board and dice game for pc
-Download ludo on pc with bluestacks emulator
-Download ludo on pc with nox player emulator
-Download ludo on pc with ld player emulator
-Download ludo on pc with memu play emulator
-Download ludo on pc with gameloop emulator
-Download ludo on crazygames.com in browser
-Download ludo king mod apk for pc
-Download ludo king hack version for pc
-Download ludo king unlimited money for pc
-Download ludo king old version for pc
-Download ludo king latest version for pc
-Download ludo king update version for pc
-Download ludo king offline mode for pc
-Download ludo king voice chat feature for pc
-Download ludo king theme change option for pc
-Download ludo king cheats and tricks for pc
-Download ludo king rules and tips for pc
-Download ludo king tournament mode for pc
-Download ludo king snake and ladder mode for pc
-Download ludo king carrom mode for pc
-
How to install BlueStacks on your PC
-
-
Go to the official website of BlueStacks at [bluestacks.com](https://www.bluestacks.com) and click on the "Download BlueStacks" button.
-
Wait for the download to finish and then run the installer file.
-
Follow the instructions on the screen to complete the installation process.
-
Launch BlueStacks on your PC and sign in with your Google account or create a new one.
-
-
How to access the Google Play Store and search for Ludo, Ludo King, or Ludo Club on BlueStacks
-
-
On the home screen of BlueStacks, click on the "Google Play" icon to open the Google Play Store.
-
In the search bar, type "Ludo" and hit enter. You will see a list of Ludo games available for download.
-
You can choose any Ludo game that you like, such as Ludo King or Ludo Club, which are some of the most popular and highly rated ones.
-
Click on the game that you want to download and then click on the "Install" button.
-
Wait for the installation to finish and then click on the "Open" button.
-
-
How to install and launch the Ludo game of your choice on BlueStacks
-
-
Once you have installed the Ludo game that you want to play, you will see its icon on the home screen of BlueStacks.
-
Click on the icon to launch the game and start playing.
-
You can adjust the settings of the game according to your preferences, such as the sound, language, graphics, etc.
-
You can also customize your profile by choosing your name, avatar, color, etc.
-
You can play Ludo in different modes, such as online multiplayer, local multiplayer, or against the computer.
-
-
Benefits of Playing Ludo on PC with BlueStacks
-
You can enjoy a larger and better display of the game on your PC screen
-
One of the main benefits of playing Ludo on PC with BlueStacks is that you can enjoy a larger and better display of the game on your PC screen. You can see the board more clearly and appreciate the details more. You can also zoom in or out as you wish. Playing Ludo on a bigger screen can enhance your visual experience and make you feel more immersed in the game.
-
You can play with your friends and family online or offline, or against the computer
-
Another benefit of playing Ludo on PC with BlueStacks is that you can play with your friends and family online or offline, or against the computer. You can invite your friends or family members to join you in an online multiplayer mode, where you can chat with them and have fun together. You can also play with them offline by connecting your devices through Bluetooth or Wi-Fi. Alternatively, you can play against the computer in a single-player mode, where you can choose the difficulty level and practice your skills.
-
You can use various features and enhancements of BlueStacks to improve your gaming experience
-
A third benefit of playing Ludo on PC with BlueStacks is that you can use various features and enhancements of BlueStacks to improve your gaming experience. For example, you can use the following features of BlueStacks:
-
-
Multi-instance: You can play multiple Ludo games at the same time on different windows, or play other games or apps while playing Ludo.
-
Macro recorder: You can record and replay your actions in the game, such as rolling the dice, moving the tokens, etc.
-
Keymapping: You can customize the keyboard and mouse controls for the game, such as assigning keys for different actions, changing the sensitivity, etc.
-
Eco mode: You can lower the CPU and RAM usage of BlueStacks, which can improve the performance and speed of the game.
-
Real-time translation: You can translate the text and voice chat in the game to any language that you want, which can help you communicate with other players from different countries.
-
-
These are just some of the features that BlueStacks offers to enhance your gaming experience. You can explore more features and settings of BlueStacks by clicking on the menu icon on the top right corner of the emulator.
-
Conclusion
-
Ludo is a great game that can provide you with many benefits, such as improving your brain function, social skills, and happiness. Playing Ludo on PC with BlueStacks makes the experience even better: you get a larger and clearer display, you can play with friends and family online or offline or against the computer, and you can use the various features and enhancements of BlueStacks to improve your performance and have more fun. So what are you waiting for? Download BlueStacks today and start playing Ludo on your PC!
-
FAQs
-
What are some of the social benefits of playing Ludo?
-
Some of the social benefits of playing Ludo are:
-
-
You can make new friends and connect with old ones by playing online with other players around the world.
-
You can strengthen your bond with your family members by playing offline with them through Bluetooth or Wi-Fi.
-
You can improve your communication and cooperation skills by chatting and working with your teammates in the game.
-
You can learn about different cultures and languages by playing with people from different countries and using the real-time translation feature of BlueStacks.
-
-
What are some of the skills that you can develop by playing Ludo?
-
Some of the skills that you can develop by playing Ludo are:
-
-
You can enhance your logical thinking, problem-solving, analysis, and decision-making abilities by planning your moves ahead and anticipating your opponents' actions.
-
You can boost your memory, concentration, and attention span by keeping track of your tokens and dice rolls.
-
You can increase your creativity and imagination by choosing different themes and avatars for the board and your profile.
-
You can develop your strategy and tactics by using different methods and tricks to win the game.
-
-
How can you play Ludo online with other players around the world?
-
You can play Ludo online with other players around the world by following these steps:
-
-
Launch the Ludo game that you have downloaded on BlueStacks.
-
Select the online multiplayer mode from the main menu.
-
Choose whether you want to play with two, three, or four players.
-
Select whether you want to play with random players or invite your friends by sharing a code.
-
Wait for the game to start and enjoy playing with other players around the world.
-
-
How can you change the theme of the board in Ludo?
-
You can change the theme of the board in Ludo by following these steps:
-
-
Launch the Ludo game that you have downloaded on BlueStacks.
-
Select the settings icon from the main menu.
-
Select the theme option from the settings menu.
-
Choose from various themes available for the board, such as nature, Egypt, disco, etc.
-
Apply the theme that you like and enjoy playing on a different board.
-
-
How can you win the game of Ludo?
-
You can win the game of Ludo by following these tips (a short code sketch of the basic dice and capture rules follows this list):
-
-
Roll the dice carefully and try to get a six as often as possible. A six will allow you to move a token out of your base or move an existing token six squares ahead. It will also give you another chance to roll again.
-
Move your tokens strategically and try to capture your opponents' tokens by landing on the same square as them. This will send their tokens back to their base and delay their progress.
-
Protect your tokens from being captured by forming a chain with two or more of your tokens on the same square. This will make them immune to capture by your opponents.
-
Avoid landing on the star squares, as they are the most vulnerable to capture by your opponents. Instead, try to land on the safe squares, which are marked with a shield icon. These will protect your tokens from being captured.
-
Move your tokens as fast as possible to reach your home triangle in the center of the board. Once you have moved all four of your tokens into your home triangle, you will win the game.
-
-
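To make these mechanics concrete, here is a minimal, purely illustrative Python sketch of the core dice rules described above: a six brings a token out of the base and grants another roll, and landing on an opponent's square sends that token back. The names (roll_die, take_turn, track_length) are hypothetical and do not come from any Ludo app; real Ludo games add safe squares, exact home-column rules, and more.

```python
import random

def roll_die():
    """Roll a standard six-sided die."""
    return random.randint(1, 6)

def take_turn(my_tokens, opponent_tokens, track_length=52):
    """Play one simplified turn. A position of None means the token is still in its base."""
    keep_rolling = True
    while keep_rolling:
        value = roll_die()
        keep_rolling = (value == 6)  # rolling a six grants another roll

        if value == 6 and None in my_tokens:
            # A six lets you bring a token out of the base onto your start square.
            my_tokens[my_tokens.index(None)] = 0
            continue

        # Otherwise, move the first token that can advance by the rolled value.
        for i, pos in enumerate(my_tokens):
            if pos is not None and pos + value < track_length:
                my_tokens[i] = pos + value
                # Landing on an opponent's square captures it: back to its base.
                if my_tokens[i] in opponent_tokens:
                    opponent_tokens[opponent_tokens.index(my_tokens[i])] = None
                break
    return my_tokens, opponent_tokens

# Example: one token already on the track, one still in the base.
print(take_turn([5, None], [12, None]))
```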
I hope you enjoyed reading this article and learned how to download Ludo for PC and enjoy its benefits. Now, go ahead and try playing Ludo on your PC with BlueStacks and have fun!
What is an anime avatar and why you should have one
-
An anime avatar is a digital image or character that represents you online, drawn in the style of anime, a form of animation from Japan. Anime avatars are becoming more and more popular, as they offer many benefits to online users who want to express themselves, connect with others, and have fun. In this article, we will explain what anime avatars are, how they can benefit you, what some examples and trends of anime avatars are, and how you can create your own anime avatar in four easy steps.
To understand what anime avatars are, we first need to understand what anime and avatars are separately.
-
Anime as an animation style from Japan
-
Anime is a word derived from "animation", and it refers to a style of animation that originated in Japan. Anime is characterized by its distinctive art style, which often features large, expressive eyes, colorful hair, exaggerated expressions, and dynamic movement. Anime also covers a wide range of genres, themes, and stories, appealing to diverse audiences and tastes. Anime has a long history and a large fan base, both in Japan and around the world. Some examples of famous anime series are Naruto, One Piece, Dragon Ball, Sailor Moon, Pokemon, Attack on Titan, My Hero Academia, Demon Slayer, etc.
-
Avatar as a digital representation of yourself
-
-
Anime avatar as a combination of anime and avatar
-
"Anime avatar" is a term that combines anime and avatar, meaning a digital image or character that represents you online using the anime style. An anime avatar can be a 2D or 3D image or model that mimics the appearance and movements of an anime character. An anime avatar can also have various features and options that let you customize its appearance, expressions, voice, clothes, accessories, backgrounds, and so on. An anime avatar can be used for many purposes and occasions online, such as chatting, gaming, streaming, and socializing.
-
The benefits of having an anime avatar
-
Having an anime avatar can offer you many benefits online. Here are some of them:
-
-
Express your personality and creativity
-
One of the main benefits of having an anime avatar is that it lets you express your personality and creativity online. You can choose an anime avatar that reflects your traits, preferences, interests, hobbies, and moods, or create one that is completely original and unique. You can also change your anime avatar according to different situations and contexts online. For example, you can have different anime avatars for different platforms, games, genres, or moods. You can also unleash your creativity and imagination by designing your anime avatar with various options and features, making it as realistic or as fantastical as you want, and experimenting with different styles and combinations.
-
Join the growing community of anime fans
-
-
Enhance your online presence and engagement
-
A third benefit of having an anime avatar is that it can enhance your online presence and engagement. An anime avatar can help you stand out from the crowd and attract more attention and followers online. It can also make you more memorable and recognizable, since it creates a strong visual impression and identity for you. It can make you more engaging and interactive, since it can convey your emotions and expressions more vividly and dynamically. Finally, it can make you more entertaining and fun online, since it can add humor, charm, and personality to your content.
-
Examples and trends of anime avatars
-
Anime avatars are not a new phenomenon, but they have become more and more popular in recent years. Here are some examples and trends:
-
Popular anime avatar generators and platforms
-
There are many anime avatar generators and platforms available online that let you create your own anime avatar quickly and easily. Some of the most popular are listed in the table below:
-
| Name | Description | URL |
| --- | --- | --- |
| Picrew | A Japanese website that hosts thousands of user-created anime avatar makers with various styles and options. | https://picrew.me/ |
| Vroid Studio | A free 3D application that lets you create your own high-quality, detailed 3D anime avatar models. | https://vroid.com/en/studio |
| VRChat | A social virtual reality platform that lets you create, upload, and use your own 3D anime avatars in various virtual worlds and settings. | https://www.vrchat.com/ |
| Zepeto | A mobile app that lets you create your own 3D anime avatars with realistic facial expressions and movements. | https://zepeto.me/ |
| FaceRig | Software that lets you use your webcam to animate your own 2D or 3D anime avatars in real time. | https://facerig.com/ |
-
The adoption of anime avatars by online personalities
-
Another trend is that anime avatars have been adopted by many online personalities, such as streamers, influencers, and celebrities. Some of them use anime avatars as their main or alternative online persona, while others use them as a way to experiment with different styles and genres. Some examples of online personalities who use anime avatars are:
-
-
Kizuna AI: A virtual YouTuber who is considered the first and most popular anime avatar streamer. She has more than 4 million subscribers on YouTube and is known for her cute and energetic personality.
-
CodeMiko: A virtual streamer who uses a 3D anime avatar controlled by a motion-capture suit. She has more than 1 million followers on Twitch and is known for her interactive and immersive streams.
-
Lil Nas X: A rapper and singer who used a 3D anime-style avatar to perform his hit song "Montero (Call Me By Your Name)" in a virtual concert on Roblox. The concert drew more than 30 million viewers and received positive feedback from fans.
-
Belle Delphine: A model and influencer who used a 2D anime avatar to prank her fans on April Fools' Day. She pretended to be a virtual streamer and uploaded a video of her anime avatar dancing and singing.
-
Pokimane: A streamer and gamer who used a 2D anime avatar to stream on Twitch as a joke. She surprised her fans with her anime avatar, which was based on her real appearance and voice.
-
-
The future possibilities of anime avatars with AI and VR
-
A third trend is that anime avatars have the potential to evolve and improve as AI and VR technologies advance. Some possible future scenarios for anime avatars are:
-
-
-
Anime avatars that can learn from your behavior, preferences, and feedback, and adapt to your needs and expectations using AI models and systems.
-
Anime avatars that can interact with you and with other users in real time using chatbots and AI agents.
-
Anime avatars that can be experienced with full immersion and presence using VR headsets and devices.
-
Anime avatars that can be customized and personalized using virtual reality tools and interfaces.
-
-
How to create your own anime avatar in four easy steps
-
If you are interested in creating your own anime avatar, here are four easy steps you can follow:
-
Choose an anime avatar generator that suits your needs
-
The first step is to choose an anime avatar generator that suits your needs. There are many anime avatar generators available online, each with its own advantages and disadvantages. You should consider factors such as the style, quality, features, options, ease of use, and cost of the generator. You can also compare different generators by reading reviews, watching tutorials, or trying out demos, and you can check the table above for some popular options.
-
Customize your anime avatar with various options and features
-
The second step is to customize your anime avatar with various options and features. Depending on the generator you choose, you can customize aspects such as the face, hair, eyes, nose, mouth, skin, body, clothes, and accessories. You can also adjust the colors, sizes, shapes, positions, and angles of these aspects, and add effects such as shadows, lighting, and filters to enhance your avatar. You can also preview your anime avatar in different poses and expressions.
Save and download your anime avatar in high quality
-
-
Share and use your anime avatar on different platforms and occasions
-
The fourth and final step is to share and use your anime avatar on different platforms and occasions. You can use your anime avatar for various purposes online, such as chatting, gaming, streaming, and socializing, and share it with your friends, family, fans, and followers. You can also upload it to platforms and websites that support anime avatars, such as VRChat, FaceRig, Zepeto, etc. You can even print your anime avatar on different products and materials, such as stickers, posters, shirts, and mugs.
-
Conclusion and FAQs
-
Anime avatars are digital images or characters that represent you online using the anime style. They are becoming more and more popular, as they offer many benefits to online users who want to express themselves, connect with others, and have fun, and they are evolving and improving as AI and VR technologies advance. You can create your own anime avatar in four easy steps: choose an anime avatar generator that suits your needs, customize your avatar with various options and features, save and download it in high quality, and share and use it on different platforms and occasions.
-
Here are some frequently asked questions about anime avatars:
-
-
Q: How much does it cost to create an anime avatar?
-
A: It depends on the anime avatar generator you choose. Some generators are free to use, while others may charge a fee or require a subscription. You should check the pricing and terms of service of the generator before using it.
-
Q: How long does it take to create an anime avatar?
-
-
Q: How can I make my anime avatar look more like me?
-
A: There are some tips and tricks that can help you make your anime avatar look more like you. For example, you can use a photo of yourself as a reference or as a template for your avatar. You can also adjust the proportions, colors, and shapes of your avatar to match your real appearance, and add details such as glasses, piercings, or tattoos.
-
Q: How can I make my anime avatar more unique and original?
-
A: There are some tips and tricks that can help you make your anime avatar more unique and original. For example, you can mix and match different anime styles and genres, or add elements such as fantasy, science fiction, or horror. You can also experiment with different effects such as filters, shadows, and lighting, and use your own creativity and imagination.
-
Q: How can I protect my anime avatar from being stolen or copied?
-
A: There are some tips and tricks that can help you protect your anime avatar from being stolen or copied. For example, you can add a watermark or a signature to your avatar. You can also use a reverse image search tool to check whether your avatar has been used by someone else without your permission (a small code sketch of one way to run such a check follows this answer). You can also report, or take legal action against, anyone who steals or copies your avatar.
-
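As one purely illustrative way to run such a near-duplicate check yourself, the sketch below compares two image files with a perceptual hash. It assumes the third-party Pillow and ImageHash packages; the file names and the distance threshold are made up for the example, and a small Hamming distance between the two hashes only suggests, rather than proves, that the second image is a copy of your avatar.

```python
from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

def looks_like_a_copy(original_path, candidate_path, max_distance=8):
    """Return True if the candidate image is perceptually close to the original.

    A perceptual hash summarizes the visual structure of an image, so small
    edits (resizing, recompression, light recoloring) still yield similar
    hashes. The threshold of 8 bits is a rough, illustrative default.
    """
    original_hash = imagehash.phash(Image.open(original_path))
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    return (original_hash - candidate_hash) <= max_distance  # Hamming distance in bits

# Hypothetical file names, for illustration only.
print(looks_like_a_copy("my_avatar.png", "suspicious_upload.png"))
```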
-
I hope this article has helped you learn more about anime avatars and how to create your own. If you have any questions or feedback, please feel free to leave a comment below. Thanks for reading, and have fun with your anime avatar!
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/service.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/service.py
deleted file mode 100644
index cb01529ee0dc51b61f4e9f4d4c876bee82ebe9e1..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/service.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import os
-
-from botocore.docs.bcdoc.restdoc import DocumentStructure
-from botocore.docs.service import ServiceDocumenter as BaseServiceDocumenter
-from botocore.exceptions import DataNotFoundError
-
-import boto3
-from boto3.docs.client import Boto3ClientDocumenter
-from boto3.docs.resource import ResourceDocumenter, ServiceResourceDocumenter
-from boto3.utils import ServiceContext
-
-
-class ServiceDocumenter(BaseServiceDocumenter):
- # The path used to find examples
- EXAMPLE_PATH = os.path.join(os.path.dirname(boto3.__file__), 'examples')
-
- def __init__(self, service_name, session, root_docs_path):
- super().__init__(
- service_name=service_name,
- # I know that this is an internal attribute, but the botocore session
- # is needed to load the paginator and waiter models.
- session=session._session,
- root_docs_path=root_docs_path,
- )
- self._boto3_session = session
- self._client = self._boto3_session.client(service_name)
- self._service_resource = None
- if self._service_name in self._boto3_session.get_available_resources():
- self._service_resource = self._boto3_session.resource(service_name)
- self.sections = [
- 'title',
- 'client',
- 'paginators',
- 'waiters',
- 'resources',
- 'examples',
- ]
- self._root_docs_path = root_docs_path
- self._USER_GUIDE_LINK = (
- 'https://boto3.amazonaws.com/'
- 'v1/documentation/api/latest/guide/resources.html'
- )
-
- def document_service(self):
- """Documents an entire service.
-
- :returns: The reStructured text of the documented service.
- """
- doc_structure = DocumentStructure(
- self._service_name, section_names=self.sections, target='html'
- )
- self.title(doc_structure.get_section('title'))
-
- self.client_api(doc_structure.get_section('client'))
- self.paginator_api(doc_structure.get_section('paginators'))
- self.waiter_api(doc_structure.get_section('waiters'))
- if self._service_resource:
- self.resource_section(doc_structure.get_section('resources'))
- self._document_examples(doc_structure.get_section('examples'))
- return doc_structure.flush_structure()
-
- def client_api(self, section):
- examples = None
- try:
- examples = self.get_examples(self._service_name)
- except DataNotFoundError:
- pass
-
- Boto3ClientDocumenter(
- self._client, self._root_docs_path, examples
- ).document_client(section)
-
- def resource_section(self, section):
- section.style.h2('Resources')
- section.style.new_line()
- section.write(
- 'Resources are available in boto3 via the '
- '``resource`` method. For more detailed instructions '
- 'and examples on the usage of resources, see the '
- 'resources '
- )
- section.style.external_link(
- title='user guide',
- link=self._USER_GUIDE_LINK,
- )
- section.write('.')
- section.style.new_line()
- section.style.new_line()
- section.write('The available resources are:')
- section.style.new_line()
- section.style.toctree()
- self._document_service_resource(section)
- self._document_resources(section)
-
- def _document_service_resource(self, section):
- # Create a new DocumentStructure for each Service Resource and add contents.
- service_resource_doc = DocumentStructure(
- 'service-resource', target='html'
- )
- breadcrumb_section = service_resource_doc.add_new_section('breadcrumb')
- breadcrumb_section.style.ref(
- self._client.__class__.__name__, f'../../{self._service_name}'
- )
- breadcrumb_section.write(' / Resource / ServiceResource')
- ServiceResourceDocumenter(
- self._service_resource, self._session, self._root_docs_path
- ).document_resource(service_resource_doc)
- # Write collections in individual/nested files.
- # Path: /reference/services///.rst
- resource_name = self._service_resource.meta.resource_model.name
- if resource_name == self._service_name:
- resource_name = 'service-resource'
- service_resource_dir_path = os.path.join(
- self._root_docs_path,
- f'{self._service_name}',
- f'{resource_name.lower()}',
- )
- service_resource_doc.write_to_file(service_resource_dir_path, 'index')
- section.style.tocitem(f'{self._service_name}/{resource_name}/index')
-
- def _document_resources(self, section):
- temp_identifier_value = 'foo'
- loader = self._session.get_component('data_loader')
- json_resource_model = loader.load_service_model(
- self._service_name, 'resources-1'
- )
- service_model = self._service_resource.meta.client.meta.service_model
- for resource_name in json_resource_model['resources']:
- resource_model = json_resource_model['resources'][resource_name]
- resource_cls = (
- self._boto3_session.resource_factory.load_from_definition(
- resource_name=resource_name,
- single_resource_json_definition=resource_model,
- service_context=ServiceContext(
- service_name=self._service_name,
- resource_json_definitions=json_resource_model[
- 'resources'
- ],
- service_model=service_model,
- service_waiter_model=None,
- ),
- )
- )
- identifiers = resource_cls.meta.resource_model.identifiers
- args = []
- for _ in identifiers:
- args.append(temp_identifier_value)
- resource = resource_cls(*args, client=self._client)
- # Create a new DocumentStructure for each Resource and add contents.
- resource_name = resource.meta.resource_model.name.lower()
- resource_doc = DocumentStructure(resource_name, target='html')
- breadcrumb_section = resource_doc.add_new_section('breadcrumb')
- breadcrumb_section.style.ref(
- self._client.__class__.__name__, f'../../{self._service_name}'
- )
- breadcrumb_section.write(
- f' / Resource / {resource.meta.resource_model.name}'
- )
- ResourceDocumenter(
- resource, self._session, self._root_docs_path
- ).document_resource(
- resource_doc.add_new_section(resource.meta.resource_model.name)
- )
- # Write collections in individual/nested files.
- # Path: /reference/services///.rst
- service_resource_dir_path = os.path.join(
- self._root_docs_path,
- f'{self._service_name}',
- f'{resource_name}',
- )
- resource_doc.write_to_file(service_resource_dir_path, 'index')
- section.style.tocitem(
- f'{self._service_name}/{resource_name}/index'
- )
-
- def _get_example_file(self):
- return os.path.realpath(
- os.path.join(self.EXAMPLE_PATH, self._service_name + '.rst')
- )
-
- def _document_examples(self, section):
- examples_file = self._get_example_file()
- if os.path.isfile(examples_file):
- section.style.h2('Examples')
- section.style.new_line()
- with open(examples_file) as f:
- section.write(f.read())
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/exceptions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/exceptions.py
deleted file mode 100644
index cba6f3f560f71b3b15ab6aaf21dde4f1bba1bd00..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/exceptions.py
+++ /dev/null
@@ -1,323 +0,0 @@
-from __future__ import absolute_import
-
-from .packages.six.moves.http_client import IncompleteRead as httplib_IncompleteRead
-
-# Base Exceptions
-
-
-class HTTPError(Exception):
- """Base exception used by this module."""
-
- pass
-
-
-class HTTPWarning(Warning):
- """Base warning used by this module."""
-
- pass
-
-
-class PoolError(HTTPError):
- """Base exception for errors caused within a pool."""
-
- def __init__(self, pool, message):
- self.pool = pool
- HTTPError.__init__(self, "%s: %s" % (pool, message))
-
- def __reduce__(self):
- # For pickling purposes.
- return self.__class__, (None, None)
-
-
-class RequestError(PoolError):
- """Base exception for PoolErrors that have associated URLs."""
-
- def __init__(self, pool, url, message):
- self.url = url
- PoolError.__init__(self, pool, message)
-
- def __reduce__(self):
- # For pickling purposes.
- return self.__class__, (None, self.url, None)
-
-
-class SSLError(HTTPError):
- """Raised when SSL certificate fails in an HTTPS connection."""
-
- pass
-
-
-class ProxyError(HTTPError):
- """Raised when the connection to a proxy fails."""
-
- def __init__(self, message, error, *args):
- super(ProxyError, self).__init__(message, error, *args)
- self.original_error = error
-
-
-class DecodeError(HTTPError):
- """Raised when automatic decoding based on Content-Type fails."""
-
- pass
-
-
-class ProtocolError(HTTPError):
- """Raised when something unexpected happens mid-request/response."""
-
- pass
-
-
-#: Renamed to ProtocolError but aliased for backwards compatibility.
-ConnectionError = ProtocolError
-
-
-# Leaf Exceptions
-
-
-class MaxRetryError(RequestError):
- """Raised when the maximum number of retries is exceeded.
-
- :param pool: The connection pool
- :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool`
- :param string url: The requested Url
- :param exceptions.Exception reason: The underlying error
-
- """
-
- def __init__(self, pool, url, reason=None):
- self.reason = reason
-
- message = "Max retries exceeded with url: %s (Caused by %r)" % (url, reason)
-
- RequestError.__init__(self, pool, url, message)
-
-
-class HostChangedError(RequestError):
- """Raised when an existing pool gets a request for a foreign host."""
-
- def __init__(self, pool, url, retries=3):
- message = "Tried to open a foreign host with url: %s" % url
- RequestError.__init__(self, pool, url, message)
- self.retries = retries
-
-
-class TimeoutStateError(HTTPError):
- """Raised when passing an invalid state to a timeout"""
-
- pass
-
-
-class TimeoutError(HTTPError):
- """Raised when a socket timeout error occurs.
-
-    Catching this error will catch both :exc:`ReadTimeoutErrors
-    <ReadTimeoutError>` and :exc:`ConnectTimeoutErrors <ConnectTimeoutError>`.
- """
-
- pass
-
-
-class ReadTimeoutError(TimeoutError, RequestError):
- """Raised when a socket timeout occurs while receiving data from a server"""
-
- pass
-
-
-# This timeout error does not have a URL attached and needs to inherit from the
-# base HTTPError
-class ConnectTimeoutError(TimeoutError):
- """Raised when a socket timeout occurs while connecting to a server"""
-
- pass
-
-
-class NewConnectionError(ConnectTimeoutError, PoolError):
- """Raised when we fail to establish a new connection. Usually ECONNREFUSED."""
-
- pass
-
-
-class EmptyPoolError(PoolError):
- """Raised when a pool runs out of connections and no more are allowed."""
-
- pass
-
-
-class ClosedPoolError(PoolError):
- """Raised when a request enters a pool after the pool has been closed."""
-
- pass
-
-
-class LocationValueError(ValueError, HTTPError):
- """Raised when there is something wrong with a given URL input."""
-
- pass
-
-
-class LocationParseError(LocationValueError):
- """Raised when get_host or similar fails to parse the URL input."""
-
- def __init__(self, location):
- message = "Failed to parse: %s" % location
- HTTPError.__init__(self, message)
-
- self.location = location
-
-
-class URLSchemeUnknown(LocationValueError):
- """Raised when a URL input has an unsupported scheme."""
-
- def __init__(self, scheme):
- message = "Not supported URL scheme %s" % scheme
- super(URLSchemeUnknown, self).__init__(message)
-
- self.scheme = scheme
-
-
-class ResponseError(HTTPError):
- """Used as a container for an error reason supplied in a MaxRetryError."""
-
- GENERIC_ERROR = "too many error responses"
- SPECIFIC_ERROR = "too many {status_code} error responses"
-
-
-class SecurityWarning(HTTPWarning):
- """Warned when performing security reducing actions"""
-
- pass
-
-
-class SubjectAltNameWarning(SecurityWarning):
- """Warned when connecting to a host with a certificate missing a SAN."""
-
- pass
-
-
-class InsecureRequestWarning(SecurityWarning):
- """Warned when making an unverified HTTPS request."""
-
- pass
-
-
-class SystemTimeWarning(SecurityWarning):
- """Warned when system time is suspected to be wrong"""
-
- pass
-
-
-class InsecurePlatformWarning(SecurityWarning):
- """Warned when certain TLS/SSL configuration is not available on a platform."""
-
- pass
-
-
-class SNIMissingWarning(HTTPWarning):
- """Warned when making a HTTPS request without SNI available."""
-
- pass
-
-
-class DependencyWarning(HTTPWarning):
- """
- Warned when an attempt is made to import a module with missing optional
- dependencies.
- """
-
- pass
-
-
-class ResponseNotChunked(ProtocolError, ValueError):
- """Response needs to be chunked in order to read it as chunks."""
-
- pass
-
-
-class BodyNotHttplibCompatible(HTTPError):
- """
- Body should be :class:`http.client.HTTPResponse` like
- (have an fp attribute which returns raw chunks) for read_chunked().
- """
-
- pass
-
-
-class IncompleteRead(HTTPError, httplib_IncompleteRead):
- """
- Response length doesn't match expected Content-Length
-
- Subclass of :class:`http.client.IncompleteRead` to allow int value
- for ``partial`` to avoid creating large objects on streamed reads.
- """
-
- def __init__(self, partial, expected):
- super(IncompleteRead, self).__init__(partial, expected)
-
- def __repr__(self):
- return "IncompleteRead(%i bytes read, %i more expected)" % (
- self.partial,
- self.expected,
- )
-
-
-class InvalidChunkLength(HTTPError, httplib_IncompleteRead):
- """Invalid chunk length in a chunked response."""
-
- def __init__(self, response, length):
- super(InvalidChunkLength, self).__init__(
- response.tell(), response.length_remaining
- )
- self.response = response
- self.length = length
-
- def __repr__(self):
- return "InvalidChunkLength(got length %r, %i bytes read)" % (
- self.length,
- self.partial,
- )
-
-
-class InvalidHeader(HTTPError):
- """The header provided was somehow invalid."""
-
- pass
-
-
-class ProxySchemeUnknown(AssertionError, URLSchemeUnknown):
- """ProxyManager does not support the supplied scheme"""
-
- # TODO(t-8ch): Stop inheriting from AssertionError in v2.0.
-
- def __init__(self, scheme):
- # 'localhost' is here because our URL parser parses
- # localhost:8080 -> scheme=localhost, remove if we fix this.
- if scheme == "localhost":
- scheme = None
- if scheme is None:
- message = "Proxy URL had no scheme, should start with http:// or https://"
- else:
- message = (
- "Proxy URL had unsupported scheme %s, should use http:// or https://"
- % scheme
- )
- super(ProxySchemeUnknown, self).__init__(message)
-
-
-class ProxySchemeUnsupported(ValueError):
- """Fetching HTTPS resources through HTTPS proxies is unsupported"""
-
- pass
-
-
-class HeaderParsingError(HTTPError):
- """Raised by assert_header_parsing, but we convert it to a log.warning statement."""
-
- def __init__(self, defects, unparsed_data):
- message = "%s, unparsed data: %r" % (defects or "Unknown", unparsed_data)
- super(HeaderParsingError, self).__init__(message)
-
-
-class UnrewindableBodyError(HTTPError):
- """urllib3 encountered an error when trying to rewind a body"""
-
- pass
diff --git a/spaces/BramVanroy/mateo-demo/README.md b/spaces/BramVanroy/mateo-demo/README.md
deleted file mode 100644
index 108c6ed8f5137dca21807ef48014a640393d2df2..0000000000000000000000000000000000000000
--- a/spaces/BramVanroy/mateo-demo/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MATEO
-emoji: 🎈
-colorFrom: green
-colorTo: green
-sdk: docker
-app_port: 7860
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Burcin/ExtractiveSummarizer/app.py b/spaces/Burcin/ExtractiveSummarizer/app.py
deleted file mode 100644
index 533129d03fe9d5f3e8ca486fa11cf57aee9954ef..0000000000000000000000000000000000000000
--- a/spaces/Burcin/ExtractiveSummarizer/app.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel, Series
-import wikipedia
-import spacy
-from spacy.lang.en.stop_words import STOP_WORDS
-from string import punctuation
-import nltk
-nltk.download('wordnet', quiet=True)
-nltk.download('punkt', quiet=True)
-from nltk.stem import WordNetLemmatizer
-from heapq import nlargest
-import warnings
-from sklearn.feature_extraction.text import TfidfVectorizer
-import numpy as np
-
-warnings.filterwarnings("ignore")
-
-def get_wiki_original_text(inp):
- text = wikipedia.summary(inp)
- return text
-
-
-
-def get_wiki_summary_by_lem(inp):
- text = wikipedia.summary(inp)
-
- print(text)
-
- stopwords = list(STOP_WORDS)
-
- lemmatizer = WordNetLemmatizer()
- tokens = [lemmatizer.lemmatize(str(token).lower()) for token in nltk.word_tokenize(text) if str(token) not in punctuation and str(token).lower() not in stopwords and len(token) >1]
- word_counts = {}
-
- for token in tokens:
- if token in word_counts.keys():
- word_counts[token] += 1
- else:
- word_counts[token] = 1
-
-
-
- sentence_scores = {}
-
- for sentence in nltk.sent_tokenize(text):
- sentence_scores[sentence] = 0
- for wrd in nltk.word_tokenize(sentence):
- if lemmatizer.lemmatize(str(wrd).lower()) in word_counts.keys():
- sentence_scores[sentence] += word_counts[lemmatizer.lemmatize(str(wrd).lower())]
-
- summary_length = 0
-
- if len(sentence_scores) > 5 :
- summary_length = int(len(sentence_scores)*0.20)
- else:
- summary_length = int(len(sentence_scores)*0.50)
-
- summary = str()
-
- for sentence in nltk.sent_tokenize(text):
- for i in range(0,summary_length):
- if str(sentence).find(str(nlargest(summary_length, sentence_scores, key = sentence_scores.get)[i])) == 0:
- summary += str(sentence).replace('\n','')
- summary += ' '
-
-
- print('\033[1m' + "Summarized Text" + '\033[0m')
-
- return summary
-
-
-def get_wiki_summary_by_tfidf(inp):
- text = wikipedia.summary(inp)
-
- tfidf_vectorizer = TfidfVectorizer(ngram_range=(1,3))
-
- all_sentences = [str(sent) for sent in nltk.sent_tokenize(text)]
- sentence_vectors = tfidf_vectorizer.fit_transform(all_sentences)
-
- sentence_scores_vector = np.hstack(np.array(sentence_vectors.sum(axis=1)))
-
- sentence_scores = dict(zip(all_sentences, sentence_scores_vector))
-
- summary_length = 0
-
- if len(sentence_scores) > 5 :
- summary_length = int(len(sentence_scores)*0.20)
- else:
- summary_length = int(len(sentence_scores)*0.50)
-
- summary = str()
-
- for sentence in nltk.sent_tokenize(text):
- for i in range(0,summary_length):
- if str(sentence).find(str(nlargest(summary_length, sentence_scores, key = sentence_scores.get)[i])) == 0:
- summary += str(sentence).replace('\n','')
- summary += ' '
-
-
- return summary
-
-
-
-desc = """This interface allows you to summarize Wikipedia contents. Only requirement is to write the topic and it collects content by fetching from Wikipedia. For summarization this model uses 2 different extractive summarization methods and the number of sentences in the output depends on the length of the original text."""
-
-
-sample = [['Europe'],['Great Depression'],['Crocodile Dundee']]
-
-
-iface = Parallel(gr.Interface(fn=get_wiki_original_text, inputs=gr.inputs.Textbox(label="Text"), outputs="text", description='Original Text'),
- gr.Interface(fn=get_wiki_summary_by_lem, inputs=gr.inputs.Textbox(label="Text"), outputs="text", description='Summary 1'),
- gr.Interface(fn=get_wiki_summary_by_tfidf, inputs=gr.inputs.Textbox(label="Text"), outputs="text", description='Summary 2'),
- title= 'Text Summarizer',
- description = desc,
- examples=sample,
- inputs = gr.inputs.Textbox(label="Text"))
-
-iface.launch(inline = False)
\ No newline at end of file
diff --git a/spaces/CVPR/LIVE/edge_query.h b/spaces/CVPR/LIVE/edge_query.h
deleted file mode 100644
index 57f233a3203c1ea8d6b73f6624036578483442bb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/edge_query.h
+++ /dev/null
@@ -1,7 +0,0 @@
-#pragma once
-
-struct EdgeQuery {
- int shape_group_id;
- int shape_id;
- bool hit; // Do we hit the specified shape_group_id & shape_id?
-};
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/general_copy.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/general_copy.h
deleted file mode 100644
index 9546b72e5ef17b082ceda709e1e4ef71c8b864eb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/general_copy.h
+++ /dev/null
@@ -1,147 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file general_copy.h
- * \brief Sequential copy algorithms for general iterators.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/type_traits.h>
-#include <thrust/iterator/iterator_traits.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace sequential
-{
-namespace general_copy_detail
-{
-
-
-template<typename T1, typename T2>
-struct lazy_is_assignable
-  : thrust::detail::is_assignable<
-      typename T1::type,
-      typename T2::type
-    >
-{};
-
-
-// sometimes OutputIterator's reference type is reported as void
-// in that case, just assume that we're able to assign to it OK
-template<typename InputIterator, typename OutputIterator>
-struct reference_is_assignable
-  : thrust::detail::eval_if<
-      thrust::detail::is_same<
-        typename thrust::iterator_reference<OutputIterator>::type, void
-      >::value,
-      thrust::detail::true_type,
-      lazy_is_assignable<
-        thrust::iterator_reference<InputIterator>,
-        thrust::iterator_reference<OutputIterator>
-      >
-    >::type
-{};
-
-
-// introduce an iterator assign helper to deal with assignments from
-// a wrapped reference
-
-__thrust_exec_check_disable__
-template<typename OutputIterator, typename InputIterator>
-inline __host__ __device__
-typename thrust::detail::enable_if<
-  reference_is_assignable<InputIterator, OutputIterator>::value
->::type
-iter_assign(OutputIterator dst, InputIterator src)
-{
- *dst = *src;
-}
-
-
-__thrust_exec_check_disable__
-template<typename OutputIterator, typename InputIterator>
-inline __host__ __device__
-typename thrust::detail::disable_if<
-  reference_is_assignable<InputIterator, OutputIterator>::value
->::type
-iter_assign(OutputIterator dst, InputIterator src)
-{
-  typedef typename thrust::iterator_value<InputIterator>::type value_type;
-
-  // insert a temporary and hope for the best
-  *dst = static_cast<value_type>(*src);
-}
-
-
-} // end general_copy_detail
-
-
-__thrust_exec_check_disable__
-template<typename InputIterator, typename OutputIterator>
-__host__ __device__
- OutputIterator general_copy(InputIterator first,
- InputIterator last,
- OutputIterator result)
-{
- for(; first != last; ++first, ++result)
- {
- // gcc 4.2 crashes while instantiating iter_assign
-#if (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_GCC) && (THRUST_GCC_VERSION < 40300)
- *result = *first;
-#else
- general_copy_detail::iter_assign(result, first);
-#endif
- }
-
- return result;
-} // end general_copy()
-
-
-__thrust_exec_check_disable__
-template<typename InputIterator, typename Size, typename OutputIterator>
-__host__ __device__
- OutputIterator general_copy_n(InputIterator first,
- Size n,
- OutputIterator result)
-{
- for(; n > Size(0); ++first, ++result, --n)
- {
- // gcc 4.2 crashes while instantiating iter_assign
-#if (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_GCC) && (THRUST_GCC_VERSION < 40300)
- *result = *first;
-#else
- general_copy_detail::iter_assign(result, first);
-#endif
- }
-
- return result;
-} // end general_copy_n()
-
-
-} // end namespace sequential
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
diff --git a/spaces/CVPR/regionclip-demo/setup.py b/spaces/CVPR/regionclip-demo/setup.py
deleted file mode 100644
index 16f1a522526a56831880c3c28be81782958b085d..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/setup.py
+++ /dev/null
@@ -1,247 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import glob
-import os
-import shutil
-from os import path
-from setuptools import find_packages, setup
-from typing import List
-import torch
-from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension
-from torch.utils.hipify import hipify_python
-
-torch_ver = [int(x) for x in torch.__version__.split(".")[:2]]
-assert torch_ver >= [1, 6], "Requires PyTorch >= 1.6"
-
-
-def get_version():
- init_py_path = path.join(path.abspath(path.dirname(__file__)), "detectron2", "__init__.py")
- init_py = open(init_py_path, "r").readlines()
- version_line = [l.strip() for l in init_py if l.startswith("__version__")][0]
- version = version_line.split("=")[-1].strip().strip("'\"")
-
- # The following is used to build release packages.
- # Users should never use it.
- suffix = os.getenv("D2_VERSION_SUFFIX", "")
- version = version + suffix
- if os.getenv("BUILD_NIGHTLY", "0") == "1":
- from datetime import datetime
-
- date_str = datetime.today().strftime("%y%m%d")
- version = version + ".dev" + date_str
-
- new_init_py = [l for l in init_py if not l.startswith("__version__")]
- new_init_py.append('__version__ = "{}"\n'.format(version))
- with open(init_py_path, "w") as f:
- f.write("".join(new_init_py))
- return version
-
-
-def get_extensions():
- this_dir = path.dirname(path.abspath(__file__))
- extensions_dir = path.join(this_dir, "detectron2", "layers", "csrc")
-
- main_source = path.join(extensions_dir, "vision.cpp")
- sources = glob.glob(path.join(extensions_dir, "**", "*.cpp"))
-
- from torch.utils.cpp_extension import ROCM_HOME
-
- is_rocm_pytorch = (
- True if ((torch.version.hip is not None) and (ROCM_HOME is not None)) else False
- )
-
- hipify_ver = (
- [int(x) for x in torch.utils.hipify.__version__.split(".")]
- if hasattr(torch.utils.hipify, "__version__")
- else [0, 0, 0]
- )
-
- if is_rocm_pytorch and hipify_ver < [1, 0, 0]: # TODO not needed since pt1.8
-
- # Earlier versions of hipification and extension modules were not
- # transparent, i.e. would require an explicit call to hipify, and the
- # hipification would introduce "hip" subdirectories, possibly changing
- # the relationship between source and header files.
- # This path is maintained for backwards compatibility.
-
- hipify_python.hipify(
- project_directory=this_dir,
- output_directory=this_dir,
- includes="/detectron2/layers/csrc/*",
- show_detailed=True,
- is_pytorch_extension=True,
- )
-
- source_cuda = glob.glob(path.join(extensions_dir, "**", "hip", "*.hip")) + glob.glob(
- path.join(extensions_dir, "hip", "*.hip")
- )
-
- shutil.copy(
- "detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_utils.h",
- "detectron2/layers/csrc/box_iou_rotated/hip/box_iou_rotated_utils.h",
- )
- shutil.copy(
- "detectron2/layers/csrc/deformable/deform_conv.h",
- "detectron2/layers/csrc/deformable/hip/deform_conv.h",
- )
-
- sources = [main_source] + sources
- sources = [
- s
- for s in sources
- if not is_rocm_pytorch or torch_ver < [1, 7] or not s.endswith("hip/vision.cpp")
- ]
-
- else:
-
- # common code between cuda and rocm platforms,
- # for hipify version [1,0,0] and later.
-
- source_cuda = glob.glob(path.join(extensions_dir, "**", "*.cu")) + glob.glob(
- path.join(extensions_dir, "*.cu")
- )
-
- sources = [main_source] + sources
-
- extension = CppExtension
-
- extra_compile_args = {"cxx": []}
- define_macros = []
-
- if (torch.cuda.is_available() and ((CUDA_HOME is not None) or is_rocm_pytorch)) or os.getenv(
- "FORCE_CUDA", "0"
- ) == "1":
- extension = CUDAExtension
- sources += source_cuda
-
- if not is_rocm_pytorch:
- define_macros += [("WITH_CUDA", None)]
- extra_compile_args["nvcc"] = [
- "-O3",
- "-DCUDA_HAS_FP16=1",
- "-D__CUDA_NO_HALF_OPERATORS__",
- "-D__CUDA_NO_HALF_CONVERSIONS__",
- "-D__CUDA_NO_HALF2_OPERATORS__",
- ]
- else:
- define_macros += [("WITH_HIP", None)]
- extra_compile_args["nvcc"] = []
-
- if torch_ver < [1, 7]:
- # supported by https://github.com/pytorch/pytorch/pull/43931
- CC = os.environ.get("CC", None)
- if CC is not None:
- extra_compile_args["nvcc"].append("-ccbin={}".format(CC))
-
- include_dirs = [extensions_dir]
-
- ext_modules = [
- extension(
- "detectron2._C",
- sources,
- include_dirs=include_dirs,
- define_macros=define_macros,
- extra_compile_args=extra_compile_args,
- )
- ]
-
- return ext_modules
-
-
-def get_model_zoo_configs() -> List[str]:
- """
- Return a list of configs to include in package for model zoo. Copy over these configs inside
- detectron2/model_zoo.
- """
-
- # Use absolute paths while symlinking.
- source_configs_dir = path.join(path.dirname(path.realpath(__file__)), "configs")
- destination = path.join(
- path.dirname(path.realpath(__file__)), "detectron2", "model_zoo", "configs"
- )
- # Symlink the config directory inside package to have a cleaner pip install.
-
- # Remove stale symlink/directory from a previous build.
- if path.exists(source_configs_dir):
- if path.islink(destination):
- os.unlink(destination)
- elif path.isdir(destination):
- shutil.rmtree(destination)
-
- if not path.exists(destination):
- try:
- os.symlink(source_configs_dir, destination)
- except OSError:
- # Fall back to copying if symlink fails: ex. on Windows.
- shutil.copytree(source_configs_dir, destination)
-
- config_paths = glob.glob("configs/**/*.yaml", recursive=True) + glob.glob(
- "configs/**/*.py", recursive=True
- )
- return config_paths
-
-
-# For projects that are relative small and provide features that are very close
-# to detectron2's core functionalities, we install them under detectron2.projects
-PROJECTS = {
- # "detectron2.projects.point_rend": "projects/PointRend/point_rend",
- # "detectron2.projects.deeplab": "projects/DeepLab/deeplab",
- # "detectron2.projects.panoptic_deeplab": "projects/Panoptic-DeepLab/panoptic_deeplab",
-}
-
-setup(
- name="detectron2",
- version=get_version(),
- author="FAIR",
- url="https://github.com/facebookresearch/detectron2",
- description="Detectron2 is FAIR's next-generation research "
- "platform for object detection and segmentation.",
- packages=find_packages(exclude=("configs", "tests*")) + list(PROJECTS.keys()),
- package_dir=PROJECTS,
- package_data={"detectron2.model_zoo": get_model_zoo_configs()},
- python_requires=">=3.6",
- install_requires=[
- # Do not add opencv here. Just like pytorch, user should install
-        # opencv themselves, preferably by OS's package manager, or by
- # choosing the proper pypi package name at https://github.com/skvark/opencv-python
- "termcolor>=1.1",
- "Pillow>=7.1", # or use pillow-simd for better performance
- "yacs>=0.1.6",
- "tabulate",
- "cloudpickle",
- "matplotlib",
- "tqdm>4.29.0",
- "tensorboard",
- # Lock version of fvcore/iopath because they may have breaking changes
- # NOTE: when updating fvcore/iopath version, make sure fvcore depends
- # on compatible version of iopath.
- "fvcore>=0.1.5,<0.1.6", # required like this to make it pip installable
- "iopath>=0.1.7,<0.1.9",
- "pycocotools>=2.0.2", # corresponds to https://github.com/ppwwyyxx/cocoapi
- "future", # used by caffe2
- "pydot", # used to save caffe2 SVGs
- "dataclasses; python_version<'3.7'",
- "omegaconf>=2.1.0rc1",
- "hydra-core>=1.1.0rc1",
- "black==21.4b2",
- # When adding to the list, may need to update docs/requirements.txt
- # or add mock in docs/conf.py
- ],
- extras_require={
- "all": [
- "shapely",
- "pygments>=2.2",
- "psutil",
- "panopticapi @ https://github.com/cocodataset/panopticapi/archive/master.zip",
- ],
- "dev": [
- "flake8==3.8.1",
- "isort==4.3.21",
- "flake8-bugbear",
- "flake8-comprehensions",
- ],
- },
- ext_modules=get_extensions(),
- cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
-)
diff --git a/spaces/Chintan-Donda/KKMS-KSSW-HF/src/kkms_kssw.py b/spaces/Chintan-Donda/KKMS-KSSW-HF/src/kkms_kssw.py
deleted file mode 100644
index 8c714d5fb7399351baa9e019e4766372d4ca59f2..0000000000000000000000000000000000000000
--- a/spaces/Chintan-Donda/KKMS-KSSW-HF/src/kkms_kssw.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import os
-
-import src.constants as constants_utils
-import src.langchain_utils as langchain_utils
-import src.weather as weather_utils
-import src.mandi_price as mandi_utils
-import src.translator as translator_utils
-import src.web_crawler as web_crawler_utils
-
-import logging
-logger = logging.getLogger(__name__)
-logging.basicConfig(
- format="%(asctime)s %(levelname)s [%(name)s] %(message)s", level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S"
-)
-
-import warnings
-warnings.filterwarnings('ignore')
-
-
-
-class KKMS_KSSW:
- def __init__(self):
- self.index_type = constants_utils.INDEX_TYPE
- self.load_from_existing_index_store = constants_utils.LOAD_FROM_EXISTING_INDEX_STORE
-
- # Instantiate langchain_utils class object
- self.langchain_utils_obj = langchain_utils.LANGCHAIN_UTILS(
- index_type=self.index_type,
- load_from_existing_index_store=self.load_from_existing_index_store
- )
- # Instantiate Mandi Price utils class object
- self.mandi_utils_obj = mandi_utils.MANDI_PRICE()
- # Instantiate Weather class object
- self.weather_utils_obj = weather_utils.WEATHER()
- # Instantiate translator_utils class object
- self.translator_utils_obj = translator_utils.TRANSLATOR()
-
-
-
- # Initialize index (vector store)
- def load_create_index(self):
- logger.info(f"Load/Create index")
- self.langchain_utils_obj.load_create_index()
-
-
- # Upload data and update the index
- def upload_data(
- self,
- doc_type,
- files_or_urls,
- index_category
- ):
- logger.info(f"Uploading data")
- self.langchain_utils_obj.upload_data(
- doc_type=doc_type,
- files_or_urls=files_or_urls,
- index_category=index_category
- )
-
-
- # Define query on index to retrieve the most relevant top K documents from the vector store
- def query(
- self,
- question,
- question_category
- ):
- '''
- Args:
- mode: can be any of [default, embedding]
- response_mode: can be any of [default, compact, tree_summarize]
- '''
- logger.info(f"Querying from index/vector store")
-
- return self.langchain_utils_obj.query(
- question=question,
- question_category=question_category
- )
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/diff/diffusion.py b/spaces/ChrisPreston/diff-svc_minato_aqua/modules/diff/diffusion.py
deleted file mode 100644
index 3d632df5fff17b64f2eb9932a891611ec9447738..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/diff/diffusion.py
+++ /dev/null
@@ -1,312 +0,0 @@
-from collections import deque
-from functools import partial
-from inspect import isfunction
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-from tqdm import tqdm
-
-from modules.encoder import SvcEncoder
-from training.train_pipeline import Batch2Loss
-from utils.hparams import hparams
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-# gaussian diffusion trainer class
-
-def extract(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
-
-
-def linear_beta_schedule(timesteps, max_beta=hparams.get('max_beta', 0.01)):
- """
- linear schedule
- """
- betas = np.linspace(1e-4, max_beta, timesteps)
- return betas
-
-
-def cosine_beta_schedule(timesteps, s=0.008):
- """
- cosine schedule
- as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
- """
- steps = timesteps + 1
- x = np.linspace(0, steps, steps)
- alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
- alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
- betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
- return np.clip(betas, a_min=0, a_max=0.999)
-
-
-beta_schedule = {
- "cosine": cosine_beta_schedule,
- "linear": linear_beta_schedule,
-}
-
-
-class GaussianDiffusion(nn.Module):
- def __init__(self, phone_encoder, out_dims, denoise_fn,
- timesteps=1000, K_step=1000, loss_type=hparams.get('diff_loss_type', 'l1'), betas=None, spec_min=None,
- spec_max=None):
- super().__init__()
- self.denoise_fn = denoise_fn
- self.fs2 = SvcEncoder(phone_encoder, out_dims)
- self.mel_bins = out_dims
-
- if exists(betas):
- betas = betas.detach().cpu().numpy() if isinstance(betas, torch.Tensor) else betas
- else:
- if 'schedule_type' in hparams.keys():
- betas = beta_schedule[hparams['schedule_type']](timesteps)
- else:
- betas = cosine_beta_schedule(timesteps)
-
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.K_step = K_step
- self.loss_type = loss_type
-
- self.noise_list = deque(maxlen=4)
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
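- # spec_min/spec_max hold per-bin mel statistics; norm_spec/denorm_spec below use them
- # to map mel-spectrograms into [-1, 1] for the diffusion process and back.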
- self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']])
- self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']])
-
- def q_mean_variance(self, x_start, t):
- mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
- variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
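- # Estimate x_0 from x_t and the predicted noise:
- # x_0 = sqrt(1 / alpha_bar_t) * x_t - sqrt(1 / alpha_bar_t - 1) * noise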
- return (
- extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, cond, clip_denoised: bool):
- noise_pred = self.denoise_fn(x, t, cond=cond)
- x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def p_sample_plms(self, x, t, interval, cond, clip_denoised=True, repeat_noise=False):
- """
- Use the PLMS method from [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778).
- """
-
- def get_x_pred(x, noise_t, t):
- a_t = extract(self.alphas_cumprod, t, x.shape)
- a_prev = extract(self.alphas_cumprod, torch.max(t - interval, torch.zeros_like(t)), x.shape)
- a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt()
-
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x - 1 / (
- a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x + x_delta
-
- return x_pred
-
- noise_list = self.noise_list
- noise_pred = self.denoise_fn(x, t, cond=cond)
-
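- # Pseudo linear multi-step update: combine up to three cached noise predictions with
- # Adams-Bashforth-style coefficients before taking the transfer step via get_x_pred.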
- if len(noise_list) == 0:
- x_pred = get_x_pred(x, noise_pred, t)
- noise_pred_prev = self.denoise_fn(x_pred, max(t - interval, 0), cond=cond)
- noise_pred_prime = (noise_pred + noise_pred_prev) / 2
- elif len(noise_list) == 1:
- noise_pred_prime = (3 * noise_pred - noise_list[-1]) / 2
- elif len(noise_list) == 2:
- noise_pred_prime = (23 * noise_pred - 16 * noise_list[-1] + 5 * noise_list[-2]) / 12
- elif len(noise_list) >= 3:
- noise_pred_prime = (55 * noise_pred - 59 * noise_list[-1] + 37 * noise_list[-2] - 9 * noise_list[-3]) / 24
-
- x_prev = get_x_pred(x, noise_pred_prime, t)
- noise_list.append(noise_pred)
-
- return x_prev
-
- def q_sample(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
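- # Forward diffusion: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise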
- return (
- extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
- )
-
- def p_losses(self, x_start, t, cond, noise=None, nonpadding=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
-
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- x_recon = self.denoise_fn(x_noisy, t, cond)
-
- if self.loss_type == 'l1':
- if nonpadding is not None:
- loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean()
- else:
- # print('are you sure w/o nonpadding?')
- loss = (noise - x_recon).abs().mean()
-
- elif self.loss_type == 'l2':
- loss = F.mse_loss(noise, x_recon)
- else:
- raise NotImplementedError()
-
- return loss
-
- def forward(self, hubert, mel2ph=None, spk_embed=None,
- ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs):
- '''
- conditioning diffusion, use fastspeech2 encoder output as the condition
- '''
- ret = self.fs2(hubert, mel2ph, spk_embed, None, f0, uv, energy,
- skip_decoder=True, infer=infer, **kwargs)
- cond = ret['decoder_inp'].transpose(1, 2)
- b, *_, device = *hubert.shape, hubert.device
-
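- # Training: compute the diffusion loss on normalized ground-truth mels.
- # Inference: start from a noised ground-truth mel (use_gt_mel) or pure Gaussian noise,
- # then denoise for K_step steps, optionally accelerated with PLMS (pndm_speedup).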
- if not infer:
- Batch2Loss.module4(
- self.p_losses,
- self.norm_spec(ref_mels), cond, ret, self.K_step, b, device
- )
- else:
- if 'use_gt_mel' in kwargs.keys() and kwargs['use_gt_mel']:
- t = kwargs['add_noise_step']
- print('===>using ground truth mel as start, please make sure parameter "key==0" !')
- fs2_mels = ref_mels
- fs2_mels = self.norm_spec(fs2_mels)
- fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :]
- x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long())
- else:
- t = self.K_step
- shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
- x = torch.randn(shape, device=device)
- if hparams.get('pndm_speedup') and hparams['pndm_speedup'] > 1:
- self.noise_list = deque(maxlen=4)
- iteration_interval = hparams['pndm_speedup']
- for i in tqdm(reversed(range(0, t, iteration_interval)), desc='sample time step',
- total=t // iteration_interval):
- x = self.p_sample_plms(x, torch.full((b,), i, device=device, dtype=torch.long), iteration_interval,
- cond)
- else:
- for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- x = x[:, 0].transpose(1, 2)
- if mel2ph is not None: # for singing
- ret['mel_out'] = self.denorm_spec(x) * ((mel2ph > 0).float()[:, :, None])
- else:
- ret['mel_out'] = self.denorm_spec(x)
- return ret
-
- def norm_spec(self, x):
- return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1
-
- def denorm_spec(self, x):
- return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
-
- def out2mel(self, x):
- return x
-
-
-class OfflineGaussianDiffusion(GaussianDiffusion):
- def forward(self, txt_tokens, mel2ph=None, spk_embed=None,
- ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs):
- b, *_, device = *txt_tokens.shape, txt_tokens.device
-
- ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy,
- skip_decoder=True, infer=True, **kwargs)
- cond = ret['decoder_inp'].transpose(1, 2)
- fs2_mels = ref_mels[1]
- ref_mels = ref_mels[0]
-
- if not infer:
- t = torch.randint(0, self.K_step, (b,), device=device).long()
- x = ref_mels
- x = self.norm_spec(x)
- x = x.transpose(1, 2)[:, None, :, :] # [B, 1, M, T]
- ret['diff_loss'] = self.p_losses(x, t, cond)
- else:
- t = self.K_step
- fs2_mels = self.norm_spec(fs2_mels)
- fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :]
-
- x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long())
-
- if hparams.get('gaussian_start') is not None and hparams['gaussian_start']:
- print('===> gaussian start.')
- shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
- x = torch.randn(shape, device=device)
- for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- x = x[:, 0].transpose(1, 2)
- ret['mel_out'] = self.denorm_spec(x)
-
- return ret
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/training/svc_task.py b/spaces/ChrisPreston/diff-svc_minato_aqua/training/svc_task.py
deleted file mode 100644
index 5f79f535ebc5cc845287a91626cabb3e287a1136..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/training/svc_task.py
+++ /dev/null
@@ -1,481 +0,0 @@
-import os
-from multiprocessing.pool import Pool
-
-import matplotlib
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-import torch.distributed as dist
-import torch.distributions
-import torch.nn.functional as F
-import torch.optim
-import torch.utils.data
-from tqdm import tqdm
-
-import utils
-from modules.commons.ssim import ssim
-from modules.diff.diffusion import GaussianDiffusion
-from modules.diff.net import DiffNet
-from modules.vocoders.nsf_hifigan import NsfHifiGAN, nsf_hifigan
-from preprocessing.hubertinfer import HubertEncoder
-from preprocessing.process_pipeline import get_pitch_parselmouth
-from training.base_task import BaseTask
-from utils import audio
-from utils.hparams import hparams
-from utils.pitch_utils import denorm_f0
-from utils.pl_utils import data_loader
-from utils.plot import spec_to_figure, f0_to_figure
-from utils.svc_utils import SvcDataset
-
-matplotlib.use('Agg')
-DIFF_DECODERS = {
- 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins'])
-}
-
-
-class SvcTask(BaseTask):
- def __init__(self):
- super(SvcTask, self).__init__()
- self.vocoder = NsfHifiGAN()
- self.phone_encoder = HubertEncoder(hparams['hubert_path'])
- self.saving_result_pool = None
- self.saving_results_futures = None
- self.stats = {}
- self.dataset_cls = SvcDataset
- self.mse_loss_fn = torch.nn.MSELoss()
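- # hparams['mel_loss'] is a '|'-separated list of loss names, each optionally weighted
- # as 'name:lambda' (default weight 1.0), e.g. "ssim:0.5|l1"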
- mel_losses = hparams['mel_loss'].split("|")
- self.loss_and_lambda = {}
- for i, l in enumerate(mel_losses):
- if l == '':
- continue
- if ':' in l:
- l, lbd = l.split(":")
- lbd = float(lbd)
- else:
- lbd = 1.0
- self.loss_and_lambda[l] = lbd
- print("| Mel losses:", self.loss_and_lambda)
-
- def build_dataloader(self, dataset, shuffle, max_tokens=None, max_sentences=None,
- required_batch_size_multiple=-1, endless=False, batch_by_size=True):
- devices_cnt = torch.cuda.device_count()
- if devices_cnt == 0:
- devices_cnt = 1
- if required_batch_size_multiple == -1:
- required_batch_size_multiple = devices_cnt
-
- def shuffle_batches(batches):
- np.random.shuffle(batches)
- return batches
-
- if max_tokens is not None:
- max_tokens *= devices_cnt
- if max_sentences is not None:
- max_sentences *= devices_cnt
- indices = dataset.ordered_indices()
- if batch_by_size:
- batch_sampler = utils.batch_by_size(
- indices, dataset.num_tokens, max_tokens=max_tokens, max_sentences=max_sentences,
- required_batch_size_multiple=required_batch_size_multiple,
- )
- else:
- batch_sampler = []
- for i in range(0, len(indices), max_sentences):
- batch_sampler.append(indices[i:i + max_sentences])
-
- if shuffle:
- batches = shuffle_batches(list(batch_sampler))
- if endless:
- batches = [b for _ in range(1000) for b in shuffle_batches(list(batch_sampler))]
- else:
- batches = batch_sampler
- if endless:
- batches = [b for _ in range(1000) for b in batches]
- num_workers = dataset.num_workers
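- # Under DDP, each rank keeps every num_replicas-th item of a batch; batches whose
- # length is not divisible by the world size are dropped so all ranks stay in sync.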
- if self.trainer.use_ddp:
- num_replicas = dist.get_world_size()
- rank = dist.get_rank()
- batches = [x[rank::num_replicas] for x in batches if len(x) % num_replicas == 0]
- return torch.utils.data.DataLoader(dataset,
- collate_fn=dataset.collater,
- batch_sampler=batches,
- num_workers=num_workers,
- pin_memory=False)
-
- def test_start(self):
- self.saving_result_pool = Pool(8)
- self.saving_results_futures = []
- self.vocoder = nsf_hifigan
-
- def test_end(self, outputs):
- self.saving_result_pool.close()
- [f.get() for f in tqdm(self.saving_results_futures)]
- self.saving_result_pool.join()
- return {}
-
- @data_loader
- def train_dataloader(self):
- train_dataset = self.dataset_cls(hparams['train_set_name'], shuffle=True)
- return self.build_dataloader(train_dataset, True, self.max_tokens, self.max_sentences,
- endless=hparams['endless_ds'])
-
- @data_loader
- def val_dataloader(self):
- valid_dataset = self.dataset_cls(hparams['valid_set_name'], shuffle=False)
- return self.build_dataloader(valid_dataset, False, self.max_eval_tokens, self.max_eval_sentences)
-
- @data_loader
- def test_dataloader(self):
- test_dataset = self.dataset_cls(hparams['test_set_name'], shuffle=False)
- return self.build_dataloader(test_dataset, False, self.max_eval_tokens,
- self.max_eval_sentences, batch_by_size=False)
-
- def build_model(self):
- self.build_tts_model()
- if hparams['load_ckpt'] != '':
- self.load_ckpt(hparams['load_ckpt'], strict=True)
- utils.print_arch(self.model)
- return self.model
-
- def build_tts_model(self):
- mel_bins = hparams['audio_num_mel_bins']
- self.model = GaussianDiffusion(
- phone_encoder=self.phone_encoder,
- out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
- timesteps=hparams['timesteps'],
- K_step=hparams['K_step'],
- loss_type=hparams['diff_loss_type'],
- spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
- )
-
- def build_optimizer(self, model):
- self.optimizer = optimizer = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, model.parameters()),
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- return optimizer
-
- @staticmethod
- def run_model(model, sample, return_output=False, infer=False):
- '''
- steps:
- 1. run the full model, calc the main loss
- 2. only the diffusion (mel) loss is collected here; pitch/energy losses are added by add_pitch_loss/add_energy_loss when those predictors are enabled
- '''
- hubert = sample['hubert'] # [B, T_t,H]
- target = sample['mels'] # [B, T_s, 80]
- mel2ph = sample['mel2ph'] # [B, T_s]
- f0 = sample['f0']
- uv = sample['uv']
- energy = sample.get('energy')
-
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- output = model(hubert, mel2ph=mel2ph, spk_embed=spk_embed, ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer)
-
- losses = {}
- if 'diff_loss' in output:
- losses['mel'] = output['diff_loss']
- if not return_output:
- return losses
- else:
- return losses, output
-
- def build_scheduler(self, optimizer):
- return torch.optim.lr_scheduler.StepLR(optimizer, hparams['decay_steps'], gamma=0.5)
-
- def _training_step(self, sample, batch_idx, _):
- log_outputs = self.run_model(self.model, sample)
- total_loss = sum([v for v in log_outputs.values() if isinstance(v, torch.Tensor) and v.requires_grad])
- log_outputs['batch_size'] = sample['hubert'].size()[0]
- log_outputs['lr'] = self.scheduler.get_lr()[0]
- return total_loss, log_outputs
-
- def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx):
- if optimizer is None:
- return
- optimizer.step()
- optimizer.zero_grad()
- if self.scheduler is not None:
- self.scheduler.step(self.global_step // hparams['accumulate_grad_batches'])
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- hubert = sample['hubert'] # [B, T_t]
- energy = sample.get('energy')
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- mel2ph = sample['mel2ph']
-
- outputs['losses'] = {}
-
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
-
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- model_out = self.model(
- hubert, spk_embed=spk_embed, mel2ph=mel2ph, f0=sample['f0'], uv=sample['uv'], energy=energy,
- ref_mels=None, infer=True
- )
-
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- pred_f0 = model_out.get('f0_denorm')
- self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'], name=f'diffmel_{batch_idx}')
- if hparams['use_pitch_embed']:
- self.plot_pitch(batch_idx, sample, model_out)
- return outputs
-
- def _validation_end(self, outputs):
- all_losses_meter = {
- 'total_loss': utils.AvgrageMeter(),
- }
- for output in outputs:
- n = output['nsamples']
- for k, v in output['losses'].items():
- if k not in all_losses_meter:
- all_losses_meter[k] = utils.AvgrageMeter()
- all_losses_meter[k].update(v, n)
- all_losses_meter['total_loss'].update(output['total_loss'], n)
- return {k: round(v.avg, 4) for k, v in all_losses_meter.items()}
-
- ############
- # losses
- ############
- def add_mel_loss(self, mel_out, target, losses, postfix='', mel_mix_loss=None):
- if mel_mix_loss is None:
- for loss_name, lbd in self.loss_and_lambda.items():
- if 'l1' == loss_name:
- l = self.l1_loss(mel_out, target)
- elif 'mse' == loss_name:
- raise NotImplementedError
- elif 'ssim' == loss_name:
- l = self.ssim_loss(mel_out, target)
- elif 'gdl' == loss_name:
- raise NotImplementedError
- losses[f'{loss_name}{postfix}'] = l * lbd
- else:
- raise NotImplementedError
-
- def l1_loss(self, decoder_output, target):
- # decoder_output : B x T x n_mel
- # target : B x T x n_mel
- l1_loss = F.l1_loss(decoder_output, target, reduction='none')
- weights = self.weights_nonzero_speech(target)
- l1_loss = (l1_loss * weights).sum() / weights.sum()
- return l1_loss
-
- def ssim_loss(self, decoder_output, target, bias=6.0):
- # decoder_output : B x T x n_mel
- # target : B x T x n_mel
- assert decoder_output.shape == target.shape
- weights = self.weights_nonzero_speech(target)
- decoder_output = decoder_output[:, None] + bias
- target = target[:, None] + bias
- ssim_loss = 1 - ssim(decoder_output, target, size_average=False)
- ssim_loss = (ssim_loss * weights).sum() / weights.sum()
- return ssim_loss
-
- def add_pitch_loss(self, output, sample, losses):
- if hparams['pitch_type'] == 'ph':
- nonpadding = (sample['txt_tokens'] != 0).float()
- pitch_loss_fn = F.l1_loss if hparams['pitch_loss'] == 'l1' else F.mse_loss
- losses['f0'] = (pitch_loss_fn(output['pitch_pred'][:, :, 0], sample['f0'],
- reduction='none') * nonpadding).sum() \
- / nonpadding.sum() * hparams['lambda_f0']
- return
- mel2ph = sample['mel2ph'] # [B, T_s]
- f0 = sample['f0']
- uv = sample['uv']
- nonpadding = (mel2ph != 0).float()
- if hparams['pitch_type'] == 'frame':
- self.add_f0_loss(output['pitch_pred'], f0, uv, losses, nonpadding=nonpadding)
-
- @staticmethod
- def add_f0_loss(p_pred, f0, uv, losses, nonpadding):
- assert p_pred[..., 0].shape == f0.shape
- if hparams['use_uv']:
- assert p_pred[..., 1].shape == uv.shape
- losses['uv'] = (F.binary_cross_entropy_with_logits(
- p_pred[:, :, 1], uv, reduction='none') * nonpadding).sum() \
- / nonpadding.sum() * hparams['lambda_uv']
- nonpadding = nonpadding * (uv == 0).float()
-
- f0_pred = p_pred[:, :, 0]
- if hparams['pitch_loss'] in ['l1', 'l2']:
- pitch_loss_fn = F.l1_loss if hparams['pitch_loss'] == 'l1' else F.mse_loss
- losses['f0'] = (pitch_loss_fn(f0_pred, f0, reduction='none') * nonpadding).sum() \
- / nonpadding.sum() * hparams['lambda_f0']
- elif hparams['pitch_loss'] == 'ssim':
- raise NotImplementedError()
-
- @staticmethod
- def add_energy_loss(energy_pred, energy, losses):
- nonpadding = (energy != 0).float()
- loss = (F.mse_loss(energy_pred, energy, reduction='none') * nonpadding).sum() / nonpadding.sum()
- loss = loss * hparams['lambda_energy']
- losses['e'] = loss
-
- ############
- # validation plots
- ############
- def plot_mel(self, batch_idx, spec, spec_out, name=None):
- spec_cat = torch.cat([spec, spec_out], -1)
- name = f'mel_{batch_idx}' if name is None else name
- vmin = hparams['mel_vmin']
- vmax = hparams['mel_vmax']
- self.logger.experiment.add_figure(name, spec_to_figure(spec_cat[0], vmin, vmax), self.global_step)
-
- def plot_pitch(self, batch_idx, sample, model_out):
- f0 = sample['f0']
- if hparams['pitch_type'] == 'ph':
- mel2ph = sample['mel2ph']
- f0 = self.expand_f0_ph(f0, mel2ph)
- f0_pred = self.expand_f0_ph(model_out['pitch_pred'][:, :, 0], mel2ph)
- self.logger.experiment.add_figure(
- f'f0_{batch_idx}', f0_to_figure(f0[0], None, f0_pred[0]), self.global_step)
- return
- f0 = denorm_f0(f0, sample['uv'], hparams)
- if hparams['pitch_type'] == 'frame':
- pitch_pred = denorm_f0(model_out['pitch_pred'][:, :, 0], sample['uv'], hparams)
- self.logger.experiment.add_figure(
- f'f0_{batch_idx}', f0_to_figure(f0[0], None, pitch_pred[0]), self.global_step)
-
- def plot_wav(self, batch_idx, gt_wav, wav_out, is_mel=False, gt_f0=None, f0=None, name=None):
- gt_wav = gt_wav[0].cpu().numpy()
- wav_out = wav_out[0].cpu().numpy()
- gt_f0 = gt_f0[0].cpu().numpy()
- f0 = f0[0].cpu().numpy()
- if is_mel:
- gt_wav = self.vocoder.spec2wav(gt_wav, f0=gt_f0)
- wav_out = self.vocoder.spec2wav(wav_out, f0=f0)
- self.logger.experiment.add_audio(f'gt_{batch_idx}', gt_wav, sample_rate=hparams['audio_sample_rate'],
- global_step=self.global_step)
- self.logger.experiment.add_audio(f'wav_{batch_idx}', wav_out, sample_rate=hparams['audio_sample_rate'],
- global_step=self.global_step)
-
- ############
- # infer
- ############
- def test_step(self, sample, batch_idx):
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- hubert = sample['hubert']
- ref_mels = None
- mel2ph = sample['mel2ph']
- f0 = sample['f0']
- uv = sample['uv']
- outputs = self.model(hubert, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, ref_mels=ref_mels,
- infer=True)
- sample['outputs'] = self.model.out2mel(outputs['mel_out'])
- sample['mel2ph_pred'] = outputs['mel2ph']
- sample['f0'] = denorm_f0(sample['f0'], sample['uv'], hparams)
- sample['f0_pred'] = outputs.get('f0_denorm')
- return self.after_infer(sample)
-
- def after_infer(self, predictions):
- if self.saving_result_pool is None and not hparams['profile_infer']:
- self.saving_result_pool = Pool(min(int(os.getenv('N_PROC', os.cpu_count())), 16))
- self.saving_results_futures = []
- predictions = utils.unpack_dict_to_list(predictions)
- t = tqdm(predictions)
- for num_predictions, prediction in enumerate(t):
- for k, v in prediction.items():
- if type(v) is torch.Tensor:
- prediction[k] = v.cpu().numpy()
-
- item_name = prediction.get('item_name')
-
- # remove paddings
- mel_gt = prediction["mels"]
- mel_gt_mask = np.abs(mel_gt).sum(-1) > 0
- mel_gt = mel_gt[mel_gt_mask]
- mel_pred = prediction["outputs"]
- mel_pred_mask = np.abs(mel_pred).sum(-1) > 0
- mel_pred = mel_pred[mel_pred_mask]
- mel_gt = np.clip(mel_gt, hparams['mel_vmin'], hparams['mel_vmax'])
- mel_pred = np.clip(mel_pred, hparams['mel_vmin'], hparams['mel_vmax'])
-
- f0_gt = prediction.get("f0")
- f0_pred = f0_gt
- if f0_pred is not None:
- f0_gt = f0_gt[mel_gt_mask]
- if len(f0_pred) > len(mel_pred_mask):
- f0_pred = f0_pred[:len(mel_pred_mask)]
- f0_pred = f0_pred[mel_pred_mask]
- gen_dir = os.path.join(hparams['work_dir'],
- f'generated_{self.trainer.global_step}_{hparams["gen_dir_name"]}')
- wav_pred = self.vocoder.spec2wav(mel_pred, f0=f0_pred)
- if not hparams['profile_infer']:
- os.makedirs(gen_dir, exist_ok=True)
- os.makedirs(f'{gen_dir}/wavs', exist_ok=True)
- os.makedirs(f'{gen_dir}/plot', exist_ok=True)
- os.makedirs(os.path.join(hparams['work_dir'], 'P_mels_npy'), exist_ok=True)
- os.makedirs(os.path.join(hparams['work_dir'], 'G_mels_npy'), exist_ok=True)
- self.saving_results_futures.append(
- self.saving_result_pool.apply_async(self.save_result, args=[
- wav_pred, mel_pred, 'P', item_name, gen_dir]))
-
- if mel_gt is not None and hparams['save_gt']:
- wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt)
- self.saving_results_futures.append(
- self.saving_result_pool.apply_async(self.save_result, args=[
- wav_gt, mel_gt, 'G', item_name, gen_dir]))
- if hparams['save_f0']:
- import matplotlib.pyplot as plt
- f0_pred_ = f0_pred
- f0_gt_, _ = get_pitch_parselmouth(wav_gt, mel_gt, hparams)
- fig = plt.figure()
- plt.plot(f0_pred_, label=r'$f0_P$')
- plt.plot(f0_gt_, label=r'$f0_G$')
- plt.legend()
- plt.tight_layout()
- plt.savefig(f'{gen_dir}/plot/[F0][{item_name}].png', format='png')
- plt.close(fig)
-
- t.set_description(
- f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}")
- else:
- if 'gen_wav_time' not in self.stats:
- self.stats['gen_wav_time'] = 0
- self.stats['gen_wav_time'] += len(wav_pred) / hparams['audio_sample_rate']
- print('gen_wav_time: ', self.stats['gen_wav_time'])
-
- return {}
-
- @staticmethod
- def save_result(wav_out, mel, prefix, item_name, gen_dir):
- item_name = item_name.replace('/', '-')
- base_fn = f'[{item_name}][{prefix}]'
- base_fn += ('-' + hparams['exp_name'])
- np.save(os.path.join(hparams['work_dir'], f'{prefix}_mels_npy', item_name), mel)
- audio.save_wav(wav_out, f'{gen_dir}/wavs/{base_fn}.wav', 24000, # hparams['audio_sample_rate'],
- norm=hparams['out_wav_norm'])
- fig = plt.figure(figsize=(14, 10))
- spec_vmin = hparams['mel_vmin']
- spec_vmax = hparams['mel_vmax']
- heatmap = plt.pcolor(mel.T, vmin=spec_vmin, vmax=spec_vmax)
- fig.colorbar(heatmap)
- f0, _ = get_pitch_parselmouth(wav_out, mel, hparams)
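- # Map f0 from roughly [100, 800] Hz onto the 80 mel bins so it can be overlaid on the
- # spectrogram plot; unvoiced frames (f0 == 0) stay at zero.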
- f0 = (f0 - 100) / (800 - 100) * 80 * (f0 > 0)
- plt.plot(f0, c='white', linewidth=1, alpha=0.6)
- plt.tight_layout()
- plt.savefig(f'{gen_dir}/plot/{base_fn}.png', format='png', dpi=1000)
- plt.close(fig)
-
- ##############
- # utils
- ##############
- @staticmethod
- def expand_f0_ph(f0, mel2ph):
- f0 = denorm_f0(f0, None, hparams)
- f0 = F.pad(f0, [1, 0])
- f0 = torch.gather(f0, 1, mel2ph) # [B, T_mel]
- return f0
-
- @staticmethod
- def weights_nonzero_speech(target):
- # target : B x T x mel
- # Assign weight 1.0 to all labels except for padding (id=0).
- dim = target.size(-1)
- return target.abs().sum(-1, keepdim=True).ne(0).float().repeat(1, 1, dim)
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/recallMsg.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/recallMsg.js
deleted file mode 100644
index 889d0fd741eb642df15fc0e9982ad58636277ba3..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/recallMsg.js
+++ /dev/null
@@ -1,28 +0,0 @@
-export class recallMsg extends plugin {
- constructor () {
- super({
- name: "回复撤回",
- dsc: "撤回回复消息",
- event: "message",
- rule: [
- {
- reg: `^#?撤回$`,
- fnc: "recall"
- }
- ]
- })
- }
-
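- // Recall both the replied-to message and the triggering command message;
- // only the bot master may use this, in group chats or private chats.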
- async recall(e) {
- if (e.isMaster && e.reply_id) {
- if (e.group?.recallMsg) {
- e.group.recallMsg(e.reply_id)
- e.group.recallMsg(e.message_id)
- } else if (e.friend?.recallMsg) {
- e.friend.recallMsg(e.reply_id)
- e.friend.recallMsg(e.message_id)
- }
- }
- return false
- }
-}
\ No newline at end of file
diff --git a/spaces/CikeyQI/meme-api/meme_generator/app.py b/spaces/CikeyQI/meme-api/meme_generator/app.py
deleted file mode 100644
index 20a27306f38d2e7ea575874a0ff7b061b00526a8..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/app.py
+++ /dev/null
@@ -1,226 +0,0 @@
-from typing import Any, Dict, List, Literal, Optional, Tuple
-
-import filetype
-from fastapi import Depends, FastAPI, Form, HTTPException, Response, UploadFile
-from pil_utils.types import ColorType, FontStyle, FontWeight
-from pydantic import BaseModel, ValidationError
-
-from meme_generator.config import meme_config
-from meme_generator.exception import MemeGeneratorException, NoSuchMeme
-from meme_generator.log import LOGGING_CONFIG, setup_logger
-from meme_generator.manager import get_meme, get_meme_keys, get_memes
-from meme_generator.meme import Meme, MemeArgsModel
-from meme_generator.utils import TextProperties, render_meme_list
-
-app = FastAPI()
-
-
-class MemeArgsResponse(BaseModel):
- name: str
- type: str
- description: Optional[str] = None
- default: Optional[Any] = None
- enum: Optional[List[Any]] = None
-
-
-class MemeParamsResponse(BaseModel):
- min_images: int
- max_images: int
- min_texts: int
- max_texts: int
- default_texts: List[str]
- args: List[MemeArgsResponse]
-
-
-class MemeInfoResponse(BaseModel):
- key: str
- keywords: List[str]
- patterns: List[str]
- params: MemeParamsResponse
-
-
-def register_router(meme: Meme):
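- # Register one POST endpoint per meme: uploaded images are read into bytes, empty
- # texts are dropped, and optional JSON args are validated against the meme's args model.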
- if args_type := meme.params_type.args_type:
- args_model = args_type.model
- else:
- args_model = MemeArgsModel
-
- def args_checker(args: Optional[str] = Form(default=str(args_model().json()))):
- if not args:
- return MemeArgsModel()
- try:
- model = args_model.parse_raw(args)
- except ValidationError as e:
- raise HTTPException(status_code=552, detail=str(e))
- return model
-
- @app.post(f"/memes/{meme.key}/")
- async def _(
- images: List[UploadFile] = [],
- texts: List[str] = meme.params_type.default_texts,
- args: args_model = Depends(args_checker), # type: ignore
- ):
- imgs: List[bytes] = []
- for image in images:
- imgs.append(await image.read())
-
- texts = [text for text in texts if text]
-
- assert isinstance(args, args_model)
-
- try:
- result = await meme(images=imgs, texts=texts, args=args.dict())
- except MemeGeneratorException as e:
- raise HTTPException(status_code=e.status_code, detail=str(e))
-
- content = result.getvalue()
- media_type = filetype.guess_mime(content) or "text/plain"
- return Response(content=content, media_type=media_type)
-
-
-class MemeKeyWithProperties(BaseModel):
- meme_key: str
- fill: ColorType = "black"
- style: FontStyle = "normal"
- weight: FontWeight = "normal"
- stroke_width: int = 0
- stroke_fill: Optional[ColorType] = None
-
-
-default_meme_list = [
- MemeKeyWithProperties(meme_key=meme.key)
- for meme in sorted(get_memes(), key=lambda meme: meme.key)
-]
-
-
-class RenderMemeListRequest(BaseModel):
- meme_list: List[MemeKeyWithProperties] = default_meme_list
- order_direction: Literal["row", "column"] = "column"
- columns: int = 4
- column_align: Literal["left", "center", "right"] = "left"
- item_padding: Tuple[int, int] = (15, 2)
- image_padding: Tuple[int, int] = (50, 50)
- bg_color: ColorType = "white"
- fontsize: int = 30
- fontname: str = ""
- fallback_fonts: List[str] = []
-
-
-def register_routers():
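- # Static routes: render an overview image of the meme list, list all keys, expose
- # per-meme info/preview/parse_args, then attach the per-meme generation endpoints.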
- @app.post("/memes/render_list")
- def _(params: RenderMemeListRequest = RenderMemeListRequest()):
- try:
- meme_list = [
- (
- get_meme(p.meme_key),
- TextProperties(
- fill=p.fill,
- style=p.style,
- weight=p.weight,
- stroke_width=p.stroke_width,
- stroke_fill=p.stroke_fill,
- ),
- )
- for p in params.meme_list
- ]
- except NoSuchMeme as e:
- raise HTTPException(status_code=e.status_code, detail=str(e))
-
- result = render_meme_list(
- meme_list,
- order_direction=params.order_direction,
- columns=params.columns,
- column_align=params.column_align,
- item_padding=params.item_padding,
- image_padding=params.image_padding,
- bg_color=params.bg_color,
- fontsize=params.fontsize,
- fontname=params.fontname,
- fallback_fonts=params.fallback_fonts,
- )
- content = result.getvalue()
- media_type = filetype.guess_mime(content) or "text/plain"
- return Response(content=content, media_type=media_type)
-
- @app.get("/memes/keys")
- def _():
- return get_meme_keys()
-
- @app.get("/memes/{key}/info")
- def _(key: str):
- try:
- meme = get_meme(key)
- except NoSuchMeme as e:
- raise HTTPException(status_code=e.status_code, detail=str(e))
-
- args_model = (
- meme.params_type.args_type.model
- if meme.params_type.args_type
- else MemeArgsModel
- )
- properties: Dict[str, Dict[str, Any]] = (
- args_model.schema().get("properties", {}).copy()
- )
- properties.pop("user_infos")
- return MemeInfoResponse(
- key=meme.key,
- keywords=meme.keywords,
- patterns=meme.patterns,
- params=MemeParamsResponse(
- min_images=meme.params_type.min_images,
- max_images=meme.params_type.max_images,
- min_texts=meme.params_type.min_texts,
- max_texts=meme.params_type.max_texts,
- default_texts=meme.params_type.default_texts,
- args=[
- MemeArgsResponse(
- name=name,
- type=info.get("type", ""),
- description=info.get("description"),
- default=info.get("default"),
- enum=info.get("enum"),
- )
- for name, info in properties.items()
- ],
- ),
- )
-
- @app.get("/memes/{key}/preview")
- async def _(key: str):
- try:
- meme = get_meme(key)
- result = await meme.generate_preview()
- except MemeGeneratorException as e:
- raise HTTPException(status_code=e.status_code, detail=str(e))
-
- content = result.getvalue()
- media_type = filetype.guess_mime(content) or "text/plain"
- return Response(content=content, media_type=media_type)
-
- @app.post("/memes/{key}/parse_args")
- async def _(key: str, args: List[str] = []):
- try:
- meme = get_meme(key)
- return meme.parse_args(args)
- except MemeGeneratorException as e:
- raise HTTPException(status_code=e.status_code, detail=str(e))
-
- for meme in sorted(get_memes(), key=lambda meme: meme.key):
- register_router(meme)
-
-
-def run_server():
- import uvicorn
-
- register_routers()
- uvicorn.run(
- app,
- host=meme_config.server.host,
- port=meme_config.server.port,
- log_config=LOGGING_CONFIG,
- )
-
-
-if __name__ == "__main__":
- setup_logger()
- run_server()
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/imprison/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/imprison/__init__.py
deleted file mode 100644
index 4033971bf33c636b961fbddce334f3b36ee9486f..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/imprison/__init__.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.exception import TextOverLength
-
-img_dir = Path(__file__).parent / "images"
-
-
-def imprison(images, texts: List[str], args):
- text = texts[0]
- frame = BuildImage.open(img_dir / "0.jpg")
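- # Draw the caption into a fixed region of the template, shrinking the font from
- # 35 down to 15 before giving up and reporting the text as too long.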
- try:
- frame.draw_text(
- (10, 157, 230, 197),
- text,
- allow_wrap=True,
- max_fontsize=35,
- min_fontsize=15,
- )
- except ValueError:
- raise TextOverLength(text)
- return frame.save_jpg()
-
-
-add_meme(
- "imprison",
- imprison,
- min_texts=1,
- max_texts=1,
- default_texts=["我发涩图被抓起来了"],
- keywords=["坐牢"],
-)
diff --git a/spaces/ClearLove443/Robby-chatbot/modules/chatbot.py b/spaces/ClearLove443/Robby-chatbot/modules/chatbot.py
deleted file mode 100644
index 66b92a4d69ef0dedd27ca96a4ea6358b3d3f676c..0000000000000000000000000000000000000000
--- a/spaces/ClearLove443/Robby-chatbot/modules/chatbot.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# fix Error: module 'langchain' has no attribute 'verbose'
-import langchain
-import streamlit as st
-from langchain.callbacks import get_openai_callback
-from langchain.chains import ConversationalRetrievalChain
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts.prompt import PromptTemplate
-
-langchain.verbose = False
-
-
-class Chatbot:
- def __init__(self, model_name, temperature, vectors):
- self.model_name = model_name
- self.temperature = temperature
- self.vectors = vectors
-
- qa_template = """
- You are a helpful AI assistant named Robby. The user gives you a file whose content is represented by the following pieces of context; use them to answer the question at the end.
- If you don't know the answer, just say you don't know. Do NOT try to make up an answer.
- If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
- Use as much detail as possible when responding.
-
- context: {context}
- =========
- question: {question}
- ======
- """
-
- QA_PROMPT = PromptTemplate(
- template=qa_template, input_variables=["context", "question"]
- )
-
- def conversational_chat(self, query):
- """
- Start a conversational chat with a model via Langchain
- """
- # llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature)
-
- from modules.llm import ChatGLM
-
- llm = ChatGLM()
-
- retriever = self.vectors.as_retriever()
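- # Retrieval-augmented chat: relevant chunks from the vector store are combined with
- # the running chat history and the custom QA prompt before calling the LLM.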
-
- chain = ConversationalRetrievalChain.from_llm(
- llm=llm,
- retriever=retriever,
- verbose=True,
- return_source_documents=True,
- max_tokens_limit=4097,
- combine_docs_chain_kwargs={"prompt": self.QA_PROMPT},
- )
-
- chain_input = {"question": query, "chat_history": st.session_state["history"]}
- with get_openai_callback() as cb:
- result = chain(chain_input)
-
- st.session_state["history"].append((query, result["answer"]))
- # count_tokens_chain(chain, chain_input)
- st.write(
- f"###### Tokens used in this conversation : {cb.total_tokens} tokens"
- )
-
- return result["answer"]
-
-
-def count_tokens_chain(chain, query):
- with get_openai_callback() as cb:
- result = chain(query)
- st.write(f"###### Tokens used in this conversation : {cb.total_tokens} tokens")
- return result
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/utils.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/utils.py
deleted file mode 100644
index d29a5a7d97c56bc2ce60af3f562d40e5ed98125b..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/utils.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-"""
- Utility functions manipulating the prediction layers
-"""
-
-from ..utils import cat
-
-import torch
-
-def permute_and_flatten(layer, N, A, C, H, W):
- layer = layer.view(N, -1, C, H, W)
- layer = layer.permute(0, 3, 4, 1, 2) #N H W A C
- layer = layer.reshape(N, -1, C) # N H*W*A C
- return layer
-
-
-def concat_box_prediction_layers(box_cls, box_regression):
- box_cls_flattened = []
- box_regression_flattened = []
- # for each feature level, permute the outputs to make them be in the
- # same format as the labels. Note that the labels are computed for
- # all feature levels concatenated, so we keep the same representation
- # for the objectness and the box_regression
- for box_cls_per_level, box_regression_per_level in zip(
- box_cls, box_regression
- ):
- N, AxC, H, W = box_cls_per_level.shape
- Ax4 = box_regression_per_level.shape[1]  # A * 4 regression channels (unused here)
- A = 5  # anchors per spatial location (hard-coded)
- C = AxC // A  # classification channels per anchor (1 = objectness)
-
- box_cls_per_level = permute_and_flatten( box_cls_per_level, N, A, C, H, W)
- box_cls_flattened.append(box_cls_per_level)
- box_regression_flattened.append(box_regression_per_level)
- # concatenate on the first dimension (representing the feature levels), to
- # take into account the way the labels were generated (with all feature maps
- # being concatenated as well)
- box_cls = cat(box_cls_flattened, dim=1).reshape(-1, C)
- box_regression = cat(box_regression_flattened, dim=1).reshape(-1, 4)
-
- return box_cls, box_regression
diff --git a/spaces/DEBO-PROJECT/DEBO-V1/bots/one_to_one_debate.py b/spaces/DEBO-PROJECT/DEBO-V1/bots/one_to_one_debate.py
deleted file mode 100644
index b9d81e1790b6d88017bf537aec56df664eae71e2..0000000000000000000000000000000000000000
--- a/spaces/DEBO-PROJECT/DEBO-V1/bots/one_to_one_debate.py
+++ /dev/null
@@ -1,395 +0,0 @@
-import re
-import random
-from langchain.prompts import PromptTemplate
-from modules.gpt_modules import gpt_call
-
-def erase_start_word_and_after(text, start_word):
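- # Drop start_word and whatever follows it on the same line; callers use this to trim
- # lines where the model starts speaking for another debater.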
- pattern = re.compile(re.escape(start_word) + '.*')
- return re.sub(pattern, '', text)
-
-def one_to_one_debator(prompt, history, debate_subject, bot_role, history_num):
- # Explain the debate rules
- if history_num == 0:
- print("history_num", history_num)
-
- user_role = ""
- bot_response = ""
-
- debate_role = [
- "first debater for the pro side",
- "first debater for the con side",
- "second debater for the pro side",
- "second debater for the con side"
- ]
-
- # Pick the user's debate role at random
- user_debate_role = random.choice(debate_role)
- # The remaining roles belong to the bot
- bot_debate_role_list = [role for role in debate_role if role != user_debate_role]
-
- print("user_debate_role", user_debate_role)
- print("bot_debate_role_list", bot_debate_role_list)
-
- debate_preset = "\n".join([
- "Debate Rules: ",
- "1) This debate will be divided into two teams, pro and con, with two debates on each team.",
- "2) The order of speaking is: first debater for the pro side, first debater for the con side, second debater for the pro side, second debater for the con side.",
- "3) Answer logically with an introduction, body, and conclusion.\n", #add this one.
- "User debate role: " + user_debate_role,
- "Bot debate roles: " + ", ".join(bot_debate_role_list) + "\n",
- "Debate subject: " + debate_subject
- ])
-
- # If the user speaks first, wait for the user's prompt before generating anything
- if user_debate_role == debate_role[0]:
- #print("user_debate_role", user_debate_role)
- bot_preset = "\n".join([
- debate_preset + "\n",
- "It's your turn! Write your opinion!"
- ])
- bot_response = bot_preset
- print("bot_response", bot_response)
- #return bot_response
-
- # If the user speaks second, the bot generates the first turn and then waits for the user's reply
- elif user_debate_role == debate_role[1]:
-
- bot_preset = "\n".join([
- debate_preset,
- ])
-
- first_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- bot_preset, #persona
- "{prompt}",
- "Only say " + debate_role[0] + "\'s opinion after \':\'. Do not write " + debate_role[1] + "\'s " + "opinions, " + debate_role[2] + "\'s " + "opinions and " + debate_role[3] + "\'s " + "opinions.",
- debate_role[0] + ": "
- ])
- )
- first_bot_prompt = first_prompt_template.format(
- prompt=""
- )
- first_response = gpt_call(first_bot_prompt)
-
- # preprocess
- # if first_response contain the first debater for the con side's opinion, remove it.
- first_response = erase_start_word_and_after(first_response, debate_role[1])
- first_response = erase_start_word_and_after(first_response, debate_role[2])
- first_response = erase_start_word_and_after(first_response, debate_role[3])
-
- #first_response = re.sub(debate_role[1] + ":.*", "", first_response)
-
- bot_response = "\n".join([
- bot_preset + "\n",
- "-----------------------------------------------------------------",
- "[First debater for the pro side]: " + "\n" + first_response + "\n",
- "-----------------------------------------------------------------",
- "It's your turn! Write your opinion!"
- ])
-
- # If the user speaks third, the bot generates the first and second turns and then waits for the user's reply
- elif user_debate_role == debate_role[2]:
-
- bot_preset = "\n".join([
- debate_preset,
- ])
- # first
- first_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- bot_preset, #persona
- "{prompt}",
- debate_role[0] + ": ",
- ])
- )
- first_bot_prompt = first_prompt_template.format(
- prompt=""
- )
- first_response = gpt_call(first_bot_prompt)
-
- # second
- second_prompt_template = PromptTemplate(
- input_variables=["first_prompt"],
- template="\n".join([
- bot_preset, #persona
- "Only say " + debate_role[1] + "\'s opinion after \':\'. Do not write " + debate_role[0] + "\'s " + "opinions, " + debate_role[2] + "\'s " + "opinions and " + debate_role[3] + "\'s " + "opinions.",
- debate_role[0] + ": " + "{first_prompt}",
- debate_role[1] + ": "
- ])
- )
- second_bot_prompt = second_prompt_template.format(
- first_prompt=first_response
- )
- second_response = gpt_call(second_bot_prompt)
-
- # preprocess
- # if first_response leaks the other debaters' opinions, remove them
- first_response = erase_start_word_and_after(first_response, debate_role[1])
- first_response = erase_start_word_and_after(first_response, debate_role[2])
- first_response = erase_start_word_and_after(first_response, debate_role[3])
- # if second_response leaks the later debaters' opinions, remove them
- #second_response = re.sub(debate_role[2] + ":.*", "", second_response)
- second_response = erase_start_word_and_after(second_response, debate_role[2])
- second_response = erase_start_word_and_after(second_response, debate_role[3])
-
- bot_response = "\n".join([
- bot_preset + "\n",
- "-----------------------------------------------------------------",
- "[First debater for the pro side]: " + "\n" + first_response + "\n",
- "-----------------------------------------------------------------",
- "[First debater for the con side]: " + "\n" + second_response + "\n",
- "-----------------------------------------------------------------",
- "It's your turn! Write your opinion!"
- ])
-
-
- elif user_debate_role == debate_role[3]:
-
- bot_preset = "\n".join([
- debate_preset,
- ])
-
- # first
- first_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- bot_preset, #persona
- "{prompt}",
- debate_role[0] + ": ",
- ])
- )
- first_bot_prompt = first_prompt_template.format(
- prompt=""
- )
- first_response = gpt_call(first_bot_prompt)
-
- # second
- second_prompt_template = PromptTemplate(
- input_variables=["first_prompt"],
- template="\n".join([
- bot_preset, #persona
- "Only say " + debate_role[1] + "'s opinion after \':\'. Do not write " + debate_role[0] + "\'s " + "opinions, " + debate_role[2] + "\'s " + "opinions and " + debate_role[3] + "\'s " + "opinions.",
- debate_role[0] + ": " + "{first_prompt}",
- debate_role[1] + ": "
- ])
- )
- second_bot_prompt = second_prompt_template.format(
- first_prompt=first_response
- )
- second_response = gpt_call(second_bot_prompt)
-
- # third
- third_prompt_template = PromptTemplate(
- input_variables=["first_prompt", "second_prompt"],
- template="\n".join([
- bot_preset, #persona
- "Only say " + debate_role[2] + "\'s opinion after \':\'. Do not write " + debate_role[0] + "\'s " + "opinions, " + debate_role[1] + "\'s " + "opinions and " + debate_role[3] + "\'s " + "opinions.",
- debate_role[0] + ": " + "{first_prompt}",
- debate_role[1] + ": " + "{second_prompt}",
- debate_role[2] + ": "
- ])
- )
- third_bot_prompt = third_prompt_template.format(
- first_prompt=first_response,
- second_prompt=second_response
- )
- third_response = gpt_call(third_bot_prompt)
-
- # preprocess
- # if first_response leaks the other debaters' opinions, remove them
- first_response = erase_start_word_and_after(first_response, debate_role[1])
- first_response = erase_start_word_and_after(first_response, debate_role[2])
- first_response = erase_start_word_and_after(first_response, debate_role[3])
- # if second_response leaks the later debaters' opinions, remove them
- #second_response = re.sub(debate_role[2] + ":.*", "", second_response)
- second_response = erase_start_word_and_after(second_response, debate_role[2])
- second_response = erase_start_word_and_after(second_response, debate_role[3])
- # if third_response leaks the last debater's opinion, remove it
- third_response = erase_start_word_and_after(third_response, debate_role[3])
- #third_response = re.sub(debate_role[3] + ":.*", "", third_response)
-
- bot_response = "\n".join([
- bot_preset + "\n",
- "-----------------------------------------------------------------",
- "[First debater for the pro side]: " + "\n" + first_response + "\n",
- "-----------------------------------------------------------------",
- "[First debater for the con side]: " + "\n" + second_response + "\n",
- "-----------------------------------------------------------------",
- "[Second debater for the pro side]: " + "\n" + third_response + "\n",
- "-----------------------------------------------------------------",
- "It's your turn! Write your opinion!"
- ])
- else:
- pass
-
- # Answer and Ask Judgement.
- if history_num == 1:
-
- debate_role = [
- "first debater for the pro side",
- "first debater for the con side",
- "second debater for the pro side",
- "second debater for the con side"
- ]
-
- print("history1: ", history)
-
- # If the user answered first, the bot produces turns 2, 3 and 4, then asks whether to judge the debate.
- if "User debate role: first debater for the pro side" in history:
-
- # second
- second_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- history,
- "User: {prompt}",
- debate_role[2] + ": "
- ])
- )
- second_bot_prompt = second_prompt_template.format(
- prompt=prompt
- )
- second_response = gpt_call(second_bot_prompt)
-
-
- # third
- third_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- history,
- "User: {prompt}",
- debate_role[2] + ": "
- ])
- )
- third_bot_prompt = third_prompt_template.format(
- prompt=prompt
- )
- third_response = gpt_call(third_bot_prompt)
-
- # fourth
- fourth_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- history,
- "User: {prompt}",
- debate_role[3] + ": "
- ])
- )
- fourth_bot_prompt = fourth_prompt_template.format(
- prompt=prompt
- )
- fourth_response = gpt_call(fourth_bot_prompt)
-
- ask_judgement = "Do you want to be the judge of this debate? (If you want, enter any words.)"
- bot_response = "\n".join([
- "[first debater for the con side]: " + "\n" + second_response + "\n",
- "-----------------------------------------------------------------",
- "[second debater for the pro sid]: " + "\n" + third_response + "\n",
- "-----------------------------------------------------------------",
- "[second debater for the con side]: " + "\n" + fourth_response + "\n",
- "-----------------------------------------------------------------",
- ask_judgement
- ])
-
- # If the user answered second, the bot produces turns 3 and 4, then asks whether to judge the debate.
- elif "User debate role: first debater for the con side" in history:
-
- # third
- third_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- history,
- "User: {prompt}",
- debate_role[2] + ": "
- ])
- )
- third_bot_prompt = third_prompt_template.format(
- prompt=prompt
- )
- third_response = gpt_call(third_bot_prompt)
-
- # fourth
- fourth_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- history,
- "User: {prompt}",
- debate_role[2] + ": " + third_response,
- debate_role[3] + ": "
- ])
- )
- fourth_bot_prompt = fourth_prompt_template.format(
- prompt=prompt
- )
- fourth_response = gpt_call(fourth_bot_prompt)
-
- # ask_judgement
- ask_judgement = "Do you want to be the judge of this debate? (If you want, enter any words.)"
- bot_response = "\n".join([
- "[second debater for the pro sid]: " + "\n" + third_response + "\n",
- "-----------------------------------------------------------------",
- "[second debater for the con side]: " + "\n" + fourth_response + "\n",
- "-----------------------------------------------------------------",
- ask_judgement
- ])
-
- # If the user answered third, the bot produces turn 4, then asks whether to judge the debate.
- elif "User debate role: second debater for the pro side" in history:
-
- fourth_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- history,
- "User: {prompt}",
- debate_role[3] + ": "
- ])
- )
- fourth_bot_prompt = fourth_prompt_template.format(
- prompt=prompt
- )
- fourth_response = gpt_call(fourth_bot_prompt)
-
-
-
- ask_judgement = "Do you want to be the judge of this debate? (If you want, enter any words.)"
- bot_response = "\n".join([
- "[second debater for the con side]: " + "\n" + fourth_response + "\n",
- "-----------------------------------------------------------------",
- ask_judgement
- ])
-
- # If the user answered fourth, immediately ask whether to judge the debate.
- elif "User debate role: second debater for the con side" in history:
- ask_judgement = "Do you want to be the judge of this debate? (If you want, enter any words.)"
- bot_response = ask_judgement
- else:
- pass
-
- # Judgement.
- if history_num == 2:
- judgement_word_list = "\n".join([
- "!!Instruction!",
- "You are now the judge of this debate. Evaluate the debate according to the rules below.",
- "Rule 1. Decide between the pro and con teams.",
- "Rule 2. Summarize the debate as a whole and what each debater said.",
- "Rule 3. For each debater, explain what was persuasive and what made the differnce between winning and losing.",
- ])
-
- judgement_prompt_template = PromptTemplate(
- input_variables=["prompt"],
- template="\n".join([
- history,
- "{prompt}",
- judgement_word_list,
- "Judgement: "
- ])
- )
- judgement_bot_prompt = judgement_prompt_template.format(
- prompt=""
- )
- judgement_response = gpt_call(judgement_bot_prompt)
-
- bot_response = "\n".join([
- "[Judgement]: " + "\n" + judgement_response + "\n",
- ])
-
- return bot_response
\ No newline at end of file
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_s_b_i_x.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_s_b_i_x.py
deleted file mode 100644
index 29b82c3e43e8bd199a841c577774885d92499aba..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_s_b_i_x.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import safeEval, num2binary, binary2num
-from . import DefaultTable
-from .sbixStrike import Strike
-
-
-sbixHeaderFormat = """
- >
- version: H # Version number (set to 1)
- flags: H # The only two bits used in the flags field are bits 0
- # and 1. For historical reasons, bit 0 must always be 1.
- # Bit 1 is a sbixDrawOutlines flag and is interpreted as
- # follows:
- # 0: Draw only 'sbix' bitmaps
- # 1: Draw both 'sbix' bitmaps and outlines, in that
- # order
- numStrikes: L # Number of bitmap strikes to follow
-"""
-sbixHeaderFormatSize = sstruct.calcsize(sbixHeaderFormat)
-
-
-sbixStrikeOffsetFormat = """
- >
- strikeOffset: L # Offset from beginning of table to data for the
- # individual strike
-"""
-sbixStrikeOffsetFormatSize = sstruct.calcsize(sbixStrikeOffsetFormat)
-
-
-class table__s_b_i_x(DefaultTable.DefaultTable):
- def __init__(self, tag=None):
- DefaultTable.DefaultTable.__init__(self, tag)
- self.version = 1
- self.flags = 1
- self.numStrikes = 0
- self.strikes = {}
- self.strikeOffsets = []
-
- def decompile(self, data, ttFont):
- # read table header
- sstruct.unpack(sbixHeaderFormat, data[:sbixHeaderFormatSize], self)
- # collect offsets to individual strikes in self.strikeOffsets
- for i in range(self.numStrikes):
- current_offset = sbixHeaderFormatSize + i * sbixStrikeOffsetFormatSize
- offset_entry = sbixStrikeOffset()
- sstruct.unpack(
- sbixStrikeOffsetFormat,
- data[current_offset : current_offset + sbixStrikeOffsetFormatSize],
- offset_entry,
- )
- self.strikeOffsets.append(offset_entry.strikeOffset)
-
- # decompile Strikes
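- # Strikes are decoded back to front so that trimming data at each offset leaves
- # exactly the bytes belonging to the next (earlier) strike.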
- for i in range(self.numStrikes - 1, -1, -1):
- current_strike = Strike(rawdata=data[self.strikeOffsets[i] :])
- data = data[: self.strikeOffsets[i]]
- current_strike.decompile(ttFont)
- # print " Strike length: %xh" % len(bitmapSetData)
- # print "Number of Glyph entries:", len(current_strike.glyphs)
- if current_strike.ppem in self.strikes:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("Pixel 'ppem' must be unique for each Strike")
- self.strikes[current_strike.ppem] = current_strike
-
- # after the glyph data records have been extracted, we don't need the offsets anymore
- del self.strikeOffsets
- del self.numStrikes
-
- def compile(self, ttFont):
- sbixData = b""
- self.numStrikes = len(self.strikes)
- sbixHeader = sstruct.pack(sbixHeaderFormat, self)
-
- # calculate offset to start of first strike
- setOffset = sbixHeaderFormatSize + sbixStrikeOffsetFormatSize * self.numStrikes
-
- for si in sorted(self.strikes.keys()):
- current_strike = self.strikes[si]
- current_strike.compile(ttFont)
- # append offset to this strike to table header
- current_strike.strikeOffset = setOffset
- sbixHeader += sstruct.pack(sbixStrikeOffsetFormat, current_strike)
- setOffset += len(current_strike.data)
- sbixData += current_strike.data
-
- return sbixHeader + sbixData
-
- def toXML(self, xmlWriter, ttFont):
- xmlWriter.simpletag("version", value=self.version)
- xmlWriter.newline()
- xmlWriter.simpletag("flags", value=num2binary(self.flags, 16))
- xmlWriter.newline()
- for i in sorted(self.strikes.keys()):
- self.strikes[i].toXML(xmlWriter, ttFont)
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "version":
- setattr(self, name, safeEval(attrs["value"]))
- elif name == "flags":
- setattr(self, name, binary2num(attrs["value"]))
- elif name == "strike":
- current_strike = Strike()
- for element in content:
- if isinstance(element, tuple):
- name, attrs, content = element
- current_strike.fromXML(name, attrs, content, ttFont)
- self.strikes[current_strike.ppem] = current_strike
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("can't handle '%s' element" % name)
-
-
-# Helper classes
-
-
-class sbixStrikeOffset(object):
- pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/tests/abstract/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/tests/abstract/__init__.py
deleted file mode 100644
index d2bc1627d4535d8e8fea50c65c4ff3e4a75827b5..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/tests/abstract/__init__.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import os
-
-import pytest
-
-from fsspec.implementations.local import LocalFileSystem
-from fsspec.tests.abstract.copy import AbstractCopyTests # noqa
-from fsspec.tests.abstract.get import AbstractGetTests # noqa
-from fsspec.tests.abstract.put import AbstractPutTests # noqa
-
-
-class BaseAbstractFixtures:
- """
- Abstract base class containing fixtures that are used by but never need to
- be overridden in derived filesystem-specific classes to run the abstract
- tests on such filesystems.
- """
-
- @pytest.fixture
- def fs_bulk_operations_scenario_0(self, fs, fs_join, fs_path):
- """
- Scenario on remote filesystem that is used for many cp/get/put tests.
-
-        Cleans up at the end of each test in which it is used.
- """
- source = self._bulk_operations_scenario_0(fs, fs_join, fs_path)
- yield source
- fs.rm(source, recursive=True)
-
- @pytest.fixture
- def fs_target(self, fs, fs_join, fs_path):
- """
- Return name of remote directory that does not yet exist to copy into.
-
-        Cleans up at the end of each test in which it is used.
- """
- target = fs_join(fs_path, "target")
- yield target
- if fs.exists(target):
- fs.rm(target, recursive=True)
-
- @pytest.fixture
- def local_bulk_operations_scenario_0(self, local_fs, local_join, local_path):
- """
- Scenario on local filesystem that is used for many cp/get/put tests.
-
-        Cleans up at the end of each test in which it is used.
- """
- source = self._bulk_operations_scenario_0(local_fs, local_join, local_path)
- yield source
- local_fs.rm(source, recursive=True)
-
- @pytest.fixture
- def local_target(self, local_fs, local_join, local_path):
- """
- Return name of local directory that does not yet exist to copy into.
-
-        Cleans up at the end of each test in which it is used.
- """
- target = local_join(local_path, "target")
- yield target
- if local_fs.exists(target):
- local_fs.rm(target, recursive=True)
-
- def _bulk_operations_scenario_0(self, some_fs, some_join, some_path):
- """
- Scenario that is used for many cp/get/put tests. Creates the following
- directory and file structure:
-
- 📁 source
- ├── 📄 file1
- ├── 📄 file2
- └── 📁 subdir
- ├── 📄 subfile1
- ├── 📄 subfile2
- └── 📁 nesteddir
- └── 📄 nestedfile
- """
- source = some_join(some_path, "source")
- subdir = some_join(source, "subdir")
- nesteddir = some_join(subdir, "nesteddir")
- some_fs.makedirs(nesteddir)
- some_fs.touch(some_join(source, "file1"))
- some_fs.touch(some_join(source, "file2"))
- some_fs.touch(some_join(subdir, "subfile1"))
- some_fs.touch(some_join(subdir, "subfile2"))
- some_fs.touch(some_join(nesteddir, "nestedfile"))
- return source
-
-
-class AbstractFixtures(BaseAbstractFixtures):
- """
- Abstract base class containing fixtures that may be overridden in derived
- filesystem-specific classes to run the abstract tests on such filesystems.
-
- For any particular filesystem some of these fixtures must be overridden,
- such as ``fs`` and ``fs_path``, and others may be overridden if the
- default functions here are not appropriate, such as ``fs_join``.
- """
-
- @pytest.fixture
- def fs(self):
- raise NotImplementedError("This function must be overridden in derived classes")
-
- @pytest.fixture
- def fs_join(self):
- """
- Return a function that joins its arguments together into a path.
-
- Most fsspec implementations join paths in a platform-dependent way,
- but some will override this to always use a forward slash.
- """
- return os.path.join
-
- @pytest.fixture
- def fs_path(self):
- raise NotImplementedError("This function must be overridden in derived classes")
-
- @pytest.fixture(scope="class")
- def local_fs(self):
- # Maybe need an option for auto_mkdir=False? This is only relevant
- # for certain implementations.
- return LocalFileSystem(auto_mkdir=True)
-
- @pytest.fixture
- def local_join(self):
- """
- Return a function that joins its arguments together into a path, on
- the local filesystem.
- """
- return os.path.join
-
- @pytest.fixture
- def local_path(self, tmpdir):
- return tmpdir
-
- def supports_empty_directories(self):
- """
- Return whether this implementation supports empty directories.
- """
- return True
diff --git a/spaces/Dachus/Realfee/README.md b/spaces/Dachus/Realfee/README.md
deleted file mode 100644
index 958d30f95209db007c3f3917732ba96f5c0ed536..0000000000000000000000000000000000000000
--- a/spaces/Dachus/Realfee/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Livebook
-emoji: 📓
-colorFrom: pink
-colorTo: purple
-sdk: docker
-fullWidth: true
-duplicated_from: livebook-dev/livebook
-license: bigscience-openrail-m
----
-
-You can install and run [Livebook](https://livebook.dev/) inside a Hugging Face Space. Here's [a tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-livebook) on how to do that.
\ No newline at end of file
diff --git a/spaces/Dantra1/CeliaSensei/monotonic_align/core.py b/spaces/Dantra1/CeliaSensei/monotonic_align/core.py
deleted file mode 100644
index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000
--- a/spaces/Dantra1/CeliaSensei/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
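-  # For each item in the batch, run a forward DP over the (t_y, t_x) lattice,
-  # accumulating the best monotonic-alignment score in `value`, then backtrack
-  # from the last column to mark the chosen path with 1s in `path`.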
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
\ No newline at end of file
diff --git a/spaces/Deci/DeciLM-6b-instruct/README.md b/spaces/Deci/DeciLM-6b-instruct/README.md
deleted file mode 100644
index f57c76df6b9d447db6043b3bd05ddbcc169a39d0..0000000000000000000000000000000000000000
--- a/spaces/Deci/DeciLM-6b-instruct/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DeciLM 6b Instruct
-emoji: 🔥
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.44.0
-app_file: app.py
-pinned: false
-license: llama2
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DeeKayG/COCO-Google/app.py b/spaces/DeeKayG/COCO-Google/app.py
deleted file mode 100644
index e35efbd8cc6f1b85e0dc84a52d79b4cd78933fda..0000000000000000000000000000000000000000
--- a/spaces/DeeKayG/COCO-Google/app.py
+++ /dev/null
@@ -1,136 +0,0 @@
-"""
-The image retrieval section of the project takes a text prompt as input and fetches images from the dataset that are semantically close to the text prompt.
-1. Gradio version deployed to provide the interface in a Hugging Face Space.
-"""
-# Authors: DeeKay Goswami & Naresh Kumar Devulapally
-
-import os
-import json
-import uuid
-import time
-import zipfile
-import threading
-import subprocess
-import numpy as np
-import gradio as gr
-import pandas as pd
-from PIL import Image
-from transformers import CLIPProcessor, CLIPModel
-from sklearn.metrics.pairwise import cosine_similarity
-
-# This will load the model and processor...
-model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
-processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
-
-# This will load features from Parquet file...
-parquet_feature_path = "./COCO_Features/COCO_Features.parquet"
-df = pd.read_parquet(parquet_feature_path)
-
-# This will store filenames and features from the dataframe...
-all_filenames = df["filename"].tolist()
-features_for_all_images = df["feature"].tolist()
-
-image_directory = "./Dataset/"
-
-def build_image_index(base_directory):
- image_index = {}
- for root, dirs, files in os.walk(base_directory):
- for filename in files:
- if filename.endswith('.jpg'):
- full_path = os.path.join(root, filename)
- image_index[filename] = full_path
- return image_index
-
-image_index = build_image_index(image_directory)
-
-with open("image_index.json", "w") as f:
- json.dump(image_index, f)
-
-with open('image_index.json', 'r') as f:
- loaded_index = json.load(f)
-
-def transfer_sh_upload(file_name):
- try:
- output = subprocess.check_output(["curl", "--upload-file", file_name, f"https://transfer.sh/{os.path.basename(file_name)}"])
- url = output.decode("utf-8").strip()
- url = url.replace("https://transfer.sh/", "https://transfer.sh/get/")
- return url
-
- except Exception as e:
-        print(f"Error uploading file: {e}")
- return None
-
-def delayed_delete(filename, delay = 500):
- time.sleep(delay)
- try:
- os.remove(filename)
- except Exception as e:
- print(f"Error deleting {filename}: {e}")
-
-def get_image_path(filename):
- folders = os.listdir(image_directory)
- for folder in folders:
- potential_path = os.path.join(image_directory, folder, filename)
- if os.path.exists(potential_path):
- return potential_path
- raise FileNotFoundError(f"Could not locate {filename} in any subdirectory.")
-
-def concatenate_images(images):
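-    # Promote any grayscale images to 3 channels, pad shorter images to the
-    # tallest height, then concatenate everything side by side.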
- images = [img if len(img.shape) == 3 and img.shape[2] == 3 else np.stack([img, img, img], axis=-1) for img in images]
- max_height = max(img.shape[0] for img in images)
- padded_images = [np.pad(img, ((0, max_height - img.shape[0]), (0, 0), (0, 0))) for img in images]
- concatenated_images = np.concatenate(padded_images, axis=1)
-
- return concatenated_images
-
-def fetch_images(query, number):
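-    # Encode the query with CLIP, rank every precomputed image feature by cosine
-    # similarity, zip the top matches, upload the archive to transfer.sh, and
-    # return a side-by-side preview plus an HTML download link.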
- text = [query]
- number = int(number)
- text_inputs = processor(text=text, return_tensors="pt", padding=True, truncation=True)
- text_outputs = model.get_text_features(**text_inputs)
- text_features = text_outputs.detach()
-
- sim_scores = [cosine_similarity(text_features, np.array(img_feature).reshape(1, -1))[0][0] for img_feature in features_for_all_images]
- top_indices = sorted(range(len(sim_scores)), key=lambda i: sim_scores[i])[-number:]
-
- top_image_paths = [loaded_index[all_filenames[i]] for i in top_indices]
-
- zip_filename = f"images_{uuid.uuid4()}.zip"
- with zipfile.ZipFile(zip_filename, "w") as zipf:
- for img_path in top_image_paths:
- zipf.write(img_path, os.path.basename(img_path))
- url = transfer_sh_upload(zip_filename)
-    download_link = f"<a href='{url}'>Click here to download requested images</a>"
- threading.Thread(target=delayed_delete, args=(zip_filename,)).start()
-
- top_images_display = [np.array(Image.open(img_path)) for img_path in top_image_paths[:2]]
- while len(top_images_display) < 1:
- top_images_display.append(None)
-
- return (concatenate_images(top_images_display), download_link)
-
-examples = [
- ["Surfing", "2"],
- ["Children on Picnic", "2"],
- ["Kid Playing BaseBall", "2"],
- ["Girl with the Umbrella", "2"],
- ["Girl with the Fancy Tattoo", "2"],
-]
-
-iface = gr.Interface(
- fn=fetch_images,
- inputs=[
- gr.inputs.Textbox(label="Search Query"),
- gr.inputs.Textbox(label="Number of Images", default="2"),
- ],
- outputs=[
- gr.outputs.Image(type="numpy", label="Most Similar Images"),
- gr.outputs.HTML(label="Download Link")
- ],
- examples=examples,
- title="Zero-Shot COCO-Google",
-    description="Enter a query just like you would on Google to search for images. Powered by OpenAI's CLIP (Vision Transformer) model & Microsoft's COCO dataset of 1.5 million objects."
-)
-
-iface.launch()
-
diff --git a/spaces/Detomo/ai-comic-generation/src/lib/sleep.ts b/spaces/Detomo/ai-comic-generation/src/lib/sleep.ts
deleted file mode 100644
index 2885c6e75c0dc415c9eaf71beabac7461eee5588..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/lib/sleep.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-export const sleep = async (durationInMs: number) =>
- new Promise((resolve) => {
- setTimeout(() => {
- resolve(true)
- }, durationInMs)
- })
\ No newline at end of file
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/imagenet.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/imagenet.py
deleted file mode 100644
index 9a02ec44ba4af9e993f58c91fa43482a4ecbe54c..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/imagenet.py
+++ /dev/null
@@ -1,558 +0,0 @@
-import os, tarfile, glob, shutil
-import yaml
-import numpy as np
-from tqdm import tqdm
-from PIL import Image
-import albumentations
-from omegaconf import OmegaConf
-from torch.utils.data import Dataset
-
-from taming.data.base import ImagePaths
-from taming.util import download, retrieve
-import taming.data.utils as bdu
-
-
-def give_synsets_from_indices(indices, path_to_yaml="data/imagenet_idx_to_synset.yaml"):
- synsets = []
- with open(path_to_yaml) as f:
- di2s = yaml.load(f)
- for idx in indices:
- synsets.append(str(di2s[idx]))
-    print("Using {} different synsets for construction of Restricted ImageNet.".format(len(synsets)))
- return synsets
-
-
-def str_to_indices(string):
- """Expects a string in the format '32-123, 256, 280-321'"""
-    assert not string.endswith(","), "provided string '{}' ends with a comma, please remove it".format(string)
- subs = string.split(",")
- indices = []
- for sub in subs:
- subsubs = sub.split("-")
- assert len(subsubs) > 0
- if len(subsubs) == 1:
- indices.append(int(subsubs[0]))
- else:
- rang = [j for j in range(int(subsubs[0]), int(subsubs[1]))]
- indices.extend(rang)
- return sorted(indices)
-
-
-class ImageNetBase(Dataset):
- def __init__(self, config=None):
- self.config = config or OmegaConf.create()
- if not type(self.config)==dict:
- self.config = OmegaConf.to_container(self.config)
- self._prepare()
- self._prepare_synset_to_human()
- self._prepare_idx_to_synset()
- self._load()
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- return self.data[i]
-
- def _prepare(self):
- raise NotImplementedError()
-
- def _filter_relpaths(self, relpaths):
- ignore = set([
- "n06596364_9591.JPEG",
- ])
- relpaths = [rpath for rpath in relpaths if not rpath.split("/")[-1] in ignore]
- if "sub_indices" in self.config:
- indices = str_to_indices(self.config["sub_indices"])
- synsets = give_synsets_from_indices(indices, path_to_yaml=self.idx2syn) # returns a list of strings
- files = []
- for rpath in relpaths:
- syn = rpath.split("/")[0]
- if syn in synsets:
- files.append(rpath)
- return files
- else:
- return relpaths
-
- def _prepare_synset_to_human(self):
- SIZE = 2655750
- URL = "https://heibox.uni-heidelberg.de/f/9f28e956cd304264bb82/?dl=1"
- self.human_dict = os.path.join(self.root, "synset_human.txt")
- if (not os.path.exists(self.human_dict) or
- not os.path.getsize(self.human_dict)==SIZE):
- download(URL, self.human_dict)
-
- def _prepare_idx_to_synset(self):
- URL = "https://heibox.uni-heidelberg.de/f/d835d5b6ceda4d3aa910/?dl=1"
- self.idx2syn = os.path.join(self.root, "index_synset.yaml")
- if (not os.path.exists(self.idx2syn)):
- download(URL, self.idx2syn)
-
- def _load(self):
- with open(self.txt_filelist, "r") as f:
- self.relpaths = f.read().splitlines()
- l1 = len(self.relpaths)
- self.relpaths = self._filter_relpaths(self.relpaths)
- print("Removed {} files from filelist during filtering.".format(l1 - len(self.relpaths)))
-
- self.synsets = [p.split("/")[0] for p in self.relpaths]
- self.abspaths = [os.path.join(self.datadir, p) for p in self.relpaths]
-
- unique_synsets = np.unique(self.synsets)
- class_dict = dict((synset, i) for i, synset in enumerate(unique_synsets))
- self.class_labels = [class_dict[s] for s in self.synsets]
-
- with open(self.human_dict, "r") as f:
- human_dict = f.read().splitlines()
- human_dict = dict(line.split(maxsplit=1) for line in human_dict)
-
- self.human_labels = [human_dict[s] for s in self.synsets]
-
- labels = {
- "relpath": np.array(self.relpaths),
- "synsets": np.array(self.synsets),
- "class_label": np.array(self.class_labels),
- "human_label": np.array(self.human_labels),
- }
- self.data = ImagePaths(self.abspaths,
- labels=labels,
- size=retrieve(self.config, "size", default=0),
- random_crop=self.random_crop)
-
-
-class ImageNetTrain(ImageNetBase):
- NAME = "ILSVRC2012_train"
- URL = "http://www.image-net.org/challenges/LSVRC/2012/"
- AT_HASH = "a306397ccf9c2ead27155983c254227c0fd938e2"
- FILES = [
- "ILSVRC2012_img_train.tar",
- ]
- SIZES = [
- 147897477120,
- ]
-
- def _prepare(self):
- self.random_crop = retrieve(self.config, "ImageNetTrain/random_crop",
- default=True)
- cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
- self.root = os.path.join(cachedir, "autoencoders/data", self.NAME)
- self.datadir = os.path.join(self.root, "data")
- self.txt_filelist = os.path.join(self.root, "filelist.txt")
- self.expected_length = 1281167
- if not bdu.is_prepared(self.root):
- # prep
- print("Preparing dataset {} in {}".format(self.NAME, self.root))
-
- datadir = self.datadir
- if not os.path.exists(datadir):
- path = os.path.join(self.root, self.FILES[0])
- if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]:
- import academictorrents as at
- atpath = at.get(self.AT_HASH, datastore=self.root)
- assert atpath == path
-
- print("Extracting {} to {}".format(path, datadir))
- os.makedirs(datadir, exist_ok=True)
- with tarfile.open(path, "r:") as tar:
- tar.extractall(path=datadir)
-
- print("Extracting sub-tars.")
- subpaths = sorted(glob.glob(os.path.join(datadir, "*.tar")))
- for subpath in tqdm(subpaths):
- subdir = subpath[:-len(".tar")]
- os.makedirs(subdir, exist_ok=True)
- with tarfile.open(subpath, "r:") as tar:
- tar.extractall(path=subdir)
-
-
- filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG"))
- filelist = [os.path.relpath(p, start=datadir) for p in filelist]
- filelist = sorted(filelist)
- filelist = "\n".join(filelist)+"\n"
- with open(self.txt_filelist, "w") as f:
- f.write(filelist)
-
- bdu.mark_prepared(self.root)
-
-
-class ImageNetValidation(ImageNetBase):
- NAME = "ILSVRC2012_validation"
- URL = "http://www.image-net.org/challenges/LSVRC/2012/"
- AT_HASH = "5d6d0df7ed81efd49ca99ea4737e0ae5e3a5f2e5"
- VS_URL = "https://heibox.uni-heidelberg.de/f/3e0f6e9c624e45f2bd73/?dl=1"
- FILES = [
- "ILSVRC2012_img_val.tar",
- "validation_synset.txt",
- ]
- SIZES = [
- 6744924160,
- 1950000,
- ]
-
- def _prepare(self):
- self.random_crop = retrieve(self.config, "ImageNetValidation/random_crop",
- default=False)
- cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
- self.root = os.path.join(cachedir, "autoencoders/data", self.NAME)
- self.datadir = os.path.join(self.root, "data")
- self.txt_filelist = os.path.join(self.root, "filelist.txt")
- self.expected_length = 50000
- if not bdu.is_prepared(self.root):
- # prep
- print("Preparing dataset {} in {}".format(self.NAME, self.root))
-
- datadir = self.datadir
- if not os.path.exists(datadir):
- path = os.path.join(self.root, self.FILES[0])
- if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]:
- import academictorrents as at
- atpath = at.get(self.AT_HASH, datastore=self.root)
- assert atpath == path
-
- print("Extracting {} to {}".format(path, datadir))
- os.makedirs(datadir, exist_ok=True)
- with tarfile.open(path, "r:") as tar:
- tar.extractall(path=datadir)
-
- vspath = os.path.join(self.root, self.FILES[1])
- if not os.path.exists(vspath) or not os.path.getsize(vspath)==self.SIZES[1]:
- download(self.VS_URL, vspath)
-
- with open(vspath, "r") as f:
- synset_dict = f.read().splitlines()
- synset_dict = dict(line.split() for line in synset_dict)
-
- print("Reorganizing into synset folders")
- synsets = np.unique(list(synset_dict.values()))
- for s in synsets:
- os.makedirs(os.path.join(datadir, s), exist_ok=True)
- for k, v in synset_dict.items():
- src = os.path.join(datadir, k)
- dst = os.path.join(datadir, v)
- shutil.move(src, dst)
-
- filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG"))
- filelist = [os.path.relpath(p, start=datadir) for p in filelist]
- filelist = sorted(filelist)
- filelist = "\n".join(filelist)+"\n"
- with open(self.txt_filelist, "w") as f:
- f.write(filelist)
-
- bdu.mark_prepared(self.root)
-
-
-def get_preprocessor(size=None, random_crop=False, additional_targets=None,
- crop_size=None):
- if size is not None and size > 0:
- transforms = list()
- rescaler = albumentations.SmallestMaxSize(max_size = size)
- transforms.append(rescaler)
- if not random_crop:
- cropper = albumentations.CenterCrop(height=size,width=size)
- transforms.append(cropper)
- else:
- cropper = albumentations.RandomCrop(height=size,width=size)
- transforms.append(cropper)
- flipper = albumentations.HorizontalFlip()
- transforms.append(flipper)
- preprocessor = albumentations.Compose(transforms,
- additional_targets=additional_targets)
- elif crop_size is not None and crop_size > 0:
- if not random_crop:
- cropper = albumentations.CenterCrop(height=crop_size,width=crop_size)
- else:
- cropper = albumentations.RandomCrop(height=crop_size,width=crop_size)
- transforms = [cropper]
- preprocessor = albumentations.Compose(transforms,
- additional_targets=additional_targets)
- else:
- preprocessor = lambda **kwargs: kwargs
- return preprocessor
-
-
-def rgba_to_depth(x):
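-    # Reinterpret the raw RGBA bytes (4 x uint8 per pixel) as a single float32
-    # depth value per pixel by viewing the same buffer with a different dtype.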
- assert x.dtype == np.uint8
- assert len(x.shape) == 3 and x.shape[2] == 4
- y = x.copy()
- y.dtype = np.float32
- y = y.reshape(x.shape[:2])
- return np.ascontiguousarray(y)
-
-
-class BaseWithDepth(Dataset):
- DEFAULT_DEPTH_ROOT="data/imagenet_depth"
-
- def __init__(self, config=None, size=None, random_crop=False,
- crop_size=None, root=None):
- self.config = config
- self.base_dset = self.get_base_dset()
- self.preprocessor = get_preprocessor(
- size=size,
- crop_size=crop_size,
- random_crop=random_crop,
- additional_targets={"depth": "image"})
- self.crop_size = crop_size
- if self.crop_size is not None:
- self.rescaler = albumentations.Compose(
- [albumentations.SmallestMaxSize(max_size = self.crop_size)],
- additional_targets={"depth": "image"})
- if root is not None:
- self.DEFAULT_DEPTH_ROOT = root
-
- def __len__(self):
- return len(self.base_dset)
-
- def preprocess_depth(self, path):
- rgba = np.array(Image.open(path))
- depth = rgba_to_depth(rgba)
- depth = (depth - depth.min())/max(1e-8, depth.max()-depth.min())
- depth = 2.0*depth-1.0
- return depth
-
- def __getitem__(self, i):
- e = self.base_dset[i]
- e["depth"] = self.preprocess_depth(self.get_depth_path(e))
- # up if necessary
- h,w,c = e["image"].shape
- if self.crop_size and min(h,w) < self.crop_size:
- # have to upscale to be able to crop - this just uses bilinear
- out = self.rescaler(image=e["image"], depth=e["depth"])
- e["image"] = out["image"]
- e["depth"] = out["depth"]
- transformed = self.preprocessor(image=e["image"], depth=e["depth"])
- e["image"] = transformed["image"]
- e["depth"] = transformed["depth"]
- return e
-
-
-class ImageNetTrainWithDepth(BaseWithDepth):
- # default to random_crop=True
- def __init__(self, random_crop=True, sub_indices=None, **kwargs):
- self.sub_indices = sub_indices
- super().__init__(random_crop=random_crop, **kwargs)
-
- def get_base_dset(self):
- if self.sub_indices is None:
- return ImageNetTrain()
- else:
- return ImageNetTrain({"sub_indices": self.sub_indices})
-
- def get_depth_path(self, e):
- fid = os.path.splitext(e["relpath"])[0]+".png"
- fid = os.path.join(self.DEFAULT_DEPTH_ROOT, "train", fid)
- return fid
-
-
-class ImageNetValidationWithDepth(BaseWithDepth):
- def __init__(self, sub_indices=None, **kwargs):
- self.sub_indices = sub_indices
- super().__init__(**kwargs)
-
- def get_base_dset(self):
- if self.sub_indices is None:
- return ImageNetValidation()
- else:
- return ImageNetValidation({"sub_indices": self.sub_indices})
-
- def get_depth_path(self, e):
- fid = os.path.splitext(e["relpath"])[0]+".png"
- fid = os.path.join(self.DEFAULT_DEPTH_ROOT, "val", fid)
- return fid
-
-
-class RINTrainWithDepth(ImageNetTrainWithDepth):
- def __init__(self, config=None, size=None, random_crop=True, crop_size=None):
- sub_indices = "30-32, 33-37, 151-268, 281-285, 80-100, 365-382, 389-397, 118-121, 300-319"
- super().__init__(config=config, size=size, random_crop=random_crop,
- sub_indices=sub_indices, crop_size=crop_size)
-
-
-class RINValidationWithDepth(ImageNetValidationWithDepth):
- def __init__(self, config=None, size=None, random_crop=False, crop_size=None):
- sub_indices = "30-32, 33-37, 151-268, 281-285, 80-100, 365-382, 389-397, 118-121, 300-319"
- super().__init__(config=config, size=size, random_crop=random_crop,
- sub_indices=sub_indices, crop_size=crop_size)
-
-
-class DRINExamples(Dataset):
- def __init__(self):
- self.preprocessor = get_preprocessor(size=256, additional_targets={"depth": "image"})
- with open("data/drin_examples.txt", "r") as f:
- relpaths = f.read().splitlines()
- self.image_paths = [os.path.join("data/drin_images",
- relpath) for relpath in relpaths]
- self.depth_paths = [os.path.join("data/drin_depth",
- relpath.replace(".JPEG", ".png")) for relpath in relpaths]
-
- def __len__(self):
- return len(self.image_paths)
-
- def preprocess_image(self, image_path):
- image = Image.open(image_path)
- if not image.mode == "RGB":
- image = image.convert("RGB")
- image = np.array(image).astype(np.uint8)
- image = self.preprocessor(image=image)["image"]
- image = (image/127.5 - 1.0).astype(np.float32)
- return image
-
- def preprocess_depth(self, path):
- rgba = np.array(Image.open(path))
- depth = rgba_to_depth(rgba)
- depth = (depth - depth.min())/max(1e-8, depth.max()-depth.min())
- depth = 2.0*depth-1.0
- return depth
-
- def __getitem__(self, i):
- e = dict()
- e["image"] = self.preprocess_image(self.image_paths[i])
- e["depth"] = self.preprocess_depth(self.depth_paths[i])
- transformed = self.preprocessor(image=e["image"], depth=e["depth"])
- e["image"] = transformed["image"]
- e["depth"] = transformed["depth"]
- return e
-
-
-def imscale(x, factor, keepshapes=False, keepmode="bicubic"):
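-    # Downscale a [-1, 1] float image by `factor` using PIL bicubic resizing; if
-    # `keepshapes`, resize back to the original resolution with `keepmode`
-    # interpolation so only high-frequency detail is lost.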
- if factor is None or factor==1:
- return x
-
- dtype = x.dtype
- assert dtype in [np.float32, np.float64]
- assert x.min() >= -1
- assert x.max() <= 1
-
- keepmode = {"nearest": Image.NEAREST, "bilinear": Image.BILINEAR,
- "bicubic": Image.BICUBIC}[keepmode]
-
- lr = (x+1.0)*127.5
- lr = lr.clip(0,255).astype(np.uint8)
- lr = Image.fromarray(lr)
-
- h, w, _ = x.shape
- nh = h//factor
- nw = w//factor
- assert nh > 0 and nw > 0, (nh, nw)
-
- lr = lr.resize((nw,nh), Image.BICUBIC)
- if keepshapes:
- lr = lr.resize((w,h), keepmode)
- lr = np.array(lr)/127.5-1.0
- lr = lr.astype(dtype)
-
- return lr
-
-
-class ImageNetScale(Dataset):
- def __init__(self, size=None, crop_size=None, random_crop=False,
- up_factor=None, hr_factor=None, keep_mode="bicubic"):
- self.base = self.get_base()
-
- self.size = size
- self.crop_size = crop_size if crop_size is not None else self.size
- self.random_crop = random_crop
- self.up_factor = up_factor
- self.hr_factor = hr_factor
- self.keep_mode = keep_mode
-
- transforms = list()
-
- if self.size is not None and self.size > 0:
- rescaler = albumentations.SmallestMaxSize(max_size = self.size)
- self.rescaler = rescaler
- transforms.append(rescaler)
-
- if self.crop_size is not None and self.crop_size > 0:
- if len(transforms) == 0:
- self.rescaler = albumentations.SmallestMaxSize(max_size = self.crop_size)
-
- if not self.random_crop:
- cropper = albumentations.CenterCrop(height=self.crop_size,width=self.crop_size)
- else:
- cropper = albumentations.RandomCrop(height=self.crop_size,width=self.crop_size)
- transforms.append(cropper)
-
- if len(transforms) > 0:
- if self.up_factor is not None:
- additional_targets = {"lr": "image"}
- else:
- additional_targets = None
- self.preprocessor = albumentations.Compose(transforms,
- additional_targets=additional_targets)
- else:
- self.preprocessor = lambda **kwargs: kwargs
-
- def __len__(self):
- return len(self.base)
-
- def __getitem__(self, i):
- example = self.base[i]
- image = example["image"]
- # adjust resolution
- image = imscale(image, self.hr_factor, keepshapes=False)
- h,w,c = image.shape
- if self.crop_size and min(h,w) < self.crop_size:
- # have to upscale to be able to crop - this just uses bilinear
- image = self.rescaler(image=image)["image"]
- if self.up_factor is None:
- image = self.preprocessor(image=image)["image"]
- example["image"] = image
- else:
- lr = imscale(image, self.up_factor, keepshapes=True,
- keepmode=self.keep_mode)
-
- out = self.preprocessor(image=image, lr=lr)
- example["image"] = out["image"]
- example["lr"] = out["lr"]
-
- return example
-
-class ImageNetScaleTrain(ImageNetScale):
- def __init__(self, random_crop=True, **kwargs):
- super().__init__(random_crop=random_crop, **kwargs)
-
- def get_base(self):
- return ImageNetTrain()
-
-class ImageNetScaleValidation(ImageNetScale):
- def get_base(self):
- return ImageNetValidation()
-
-
-from skimage.feature import canny
-from skimage.color import rgb2gray
-
-
-class ImageNetEdges(ImageNetScale):
- def __init__(self, up_factor=1, **kwargs):
- super().__init__(up_factor=1, **kwargs)
-
- def __getitem__(self, i):
- example = self.base[i]
- image = example["image"]
- h,w,c = image.shape
- if self.crop_size and min(h,w) < self.crop_size:
- # have to upscale to be able to crop - this just uses bilinear
- image = self.rescaler(image=image)["image"]
-
- lr = canny(rgb2gray(image), sigma=2)
- lr = lr.astype(np.float32)
- lr = lr[:,:,None][:,:,[0,0,0]]
-
- out = self.preprocessor(image=image, lr=lr)
- example["image"] = out["image"]
- example["lr"] = out["lr"]
-
- return example
-
-
-class ImageNetEdgesTrain(ImageNetEdges):
- def __init__(self, random_crop=True, **kwargs):
- super().__init__(random_crop=random_crop, **kwargs)
-
- def get_base(self):
- return ImageNetTrain()
-
-class ImageNetEdgesValidation(ImageNetEdges):
- def get_base(self):
- return ImageNetValidation()
diff --git a/spaces/Ellight/Steady-state-heat-conduction-GANs-Vision-Transformer/app.py b/spaces/Ellight/Steady-state-heat-conduction-GANs-Vision-Transformer/app.py
deleted file mode 100644
index ba175756811b8e89d935d84b9b059ccf710c2e17..0000000000000000000000000000000000000000
--- a/spaces/Ellight/Steady-state-heat-conduction-GANs-Vision-Transformer/app.py
+++ /dev/null
@@ -1,373 +0,0 @@
-from torch import nn
-import gradio as gr
-import torch.nn as nn
-import torch
-import numpy as np
-import matplotlib.pyplot as plt
-from torch.autograd import Variable
-from torch.utils.data import DataLoader
-import torch.nn.functional as F
-import tensorflow as tf
-from tensorflow import Tensor
-from tensorflow.keras.layers import Input, Conv2D, ReLU, BatchNormalization,\
- Add, AveragePooling2D, Flatten, Dense
-from tensorflow.keras.models import Model
-from tensorflow.keras import layers
-torch.manual_seed(0)
-physical_devices = tf.config.list_physical_devices('GPU')
-for device in physical_devices:
- tf.config.experimental.set_memory_growth(device, True)
-
-def weights_init_normal(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- torch.nn.init.normal_(m.weight.data, 0.0, 0.02)
- elif classname.find("BatchNorm2d") != -1:
- torch.nn.init.normal_(m.weight.data, 1.0, 0.02)
- torch.nn.init.constant_(m.bias.data, 0.0)
-
-class UNetDown(nn.Module):
- def __init__(self, in_size, out_size, normalize=True, dropout=0.0):
- super(UNetDown, self).__init__()
- layers = [nn.Conv2d(in_size, out_size, 4, 2, 1, bias=False)]
- if normalize:
- layers.append(nn.InstanceNorm2d(out_size))
- layers.append(nn.LeakyReLU(0.2))
- if dropout:
- layers.append(nn.Dropout(dropout))
- self.model = nn.Sequential(*layers)
-
- def forward(self, x):
- return self.model(x)
-
-
-class UNetUp(nn.Module):
- def __init__(self, in_size, out_size, dropout=0.0):
- super(UNetUp, self).__init__()
- layers = [
- nn.ConvTranspose2d(in_size, out_size, 4, 2, 1, bias=False),
- nn.InstanceNorm2d(out_size),
- nn.ReLU(inplace=True),
- ]
- if dropout:
- layers.append(nn.Dropout(dropout))
-
- self.model = nn.Sequential(*layers)
-
- def forward(self, x, skip_input):
- x = self.model(x)
- x = torch.cat((x, skip_input), 1)
-
- return x
-
-
-class GeneratorUNet(nn.Module):
- def __init__(self, in_channels=1, out_channels=1):
- super(GeneratorUNet, self).__init__()
-
- self.down1 = UNetDown(in_channels, 64, normalize=False)
- self.down2 = UNetDown(64, 128)
- self.down3 = UNetDown(128, 256,dropout=0.5)
- self.down4 = UNetDown(256, 512, normalize=False,dropout=0.5)
- self.down5 = UNetDown(512, 512, normalize=False,dropout=0.5)
- self.up1 = UNetUp(512, 512, dropout=0.5)
- self.up2 = UNetUp(1024, 256, dropout=0.5)
- self.up3 = UNetUp(512, 128)
- self.up4 = UNetUp(256, 64)
- self.final = nn.Sequential(
- nn.Upsample(scale_factor=2),
- nn.ZeroPad2d((1, 0, 1, 0)),
- nn.Conv2d(128, out_channels, 4, padding=1),
- nn.Tanh(),
- )
-
- def forward(self, x):
- # U-Net generator with skip connections from encoder to decoder
- d1 = self.down1(x)
- d2 = self.down2(d1)
- d3 = self.down3(d2)
- d4 = self.down4(d3)
- d5 = self.down5(d4)
- u1 = self.up1(d5, d4)
- u2 = self.up2(u1, d3)
- u3 = self.up3(u2, d2)
- u4 = self.up4(u3, d1)
- u5 = self.final(u4)
- return u5
-
-
-
-tf.random.set_seed(123)
-class Patches(tf.keras.layers.Layer):
- def __init__(self, patch_size):
- super(Patches, self).__init__()
- self.patch_size = patch_size
-
- def call(self, images):
- batch_size = tf.shape(images)[0]
- patches = tf.image.extract_patches(
- images=images,
- sizes=[1, self.patch_size, self.patch_size, 1],
- strides=[1, self.patch_size, self.patch_size, 1],
- rates=[1, 1, 1, 1],
- padding="SAME",
- )
- patch_dims = patches.shape[-1]
- patches = tf.reshape(patches, [batch_size, -1, patch_dims])
- return patches
-
- def get_config(self):
- config = super(Patches, self).get_config()
- config.update({
- 'patch_size': self.patch_size
- })
- return config
- #def get_config(self):
- # return {"patch_size": self.patch_size}
-
-
-class PatchEncoder(tf.keras.layers.Layer):
- def __init__(self, num_patches, projection_dim):
- super(PatchEncoder, self).__init__()
- self.num_patches = num_patches
- self.projection = layers.Dense(units=projection_dim)
- self.position_embedding = layers.Embedding(
- input_dim=num_patches, output_dim=projection_dim
- )
-
- def call(self, patch):
- positions = tf.range(start=0, limit=self.num_patches, delta=1)
- encoded = self.projection(patch) + self.position_embedding(positions)
- return encoded
-
- def get_config(self):
- config = super(PatchEncoder, self).get_config()
- config.update({
- 'num_patches': self.num_patches,
- 'projection': self.projection,
- 'position_embedding': self.position_embedding,
-
- })
- return config
-
-
-
-class TransformerBlock(tf.keras.layers.Layer):
- def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):
- super(TransformerBlock, self).__init__()
- self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
- self.ffn = tf.keras.Sequential(
- [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),]
- )
- self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
- self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
- self.dropout1 = layers.Dropout(rate)
- self.dropout2 = layers.Dropout(rate)
-
- def call(self, inputs, training):
- attn_output = self.att(inputs, inputs)
- attn_output = self.dropout1(attn_output, training=training)
- out1 = self.layernorm1(inputs + attn_output)
- ffn_output = self.ffn(out1)
- ffn_output = self.dropout2(ffn_output, training=training)
- return self.layernorm2(out1 + ffn_output)
-
- def get_config(self):
- config = super(TransformerBlock, self).get_config()
- config.update({
- 'att': self.att,
- 'ffn': self.ffn,
- 'layernorm1': self.layernorm1,
- 'layernorm2':self.layernorm2,
- 'dropout1':self.dropout1,
- 'dropout2':self.dropout2,
- })
- return config
-
-def relu_bn(inputs: Tensor) -> Tensor:
- relu = ReLU()(inputs)
- bn = BatchNormalization()(relu)
- return bn
-
-
-def residual_block(x: Tensor, downsample: bool, filters: int, kernel_size: int = 3) -> Tensor:
- y = Conv2D(kernel_size=kernel_size,
- strides= (1 if not downsample else 2),
- filters=filters,
- padding="same")(x)
- y = relu_bn(y)
- y = Conv2D(kernel_size=kernel_size,
- strides=1,
- filters=filters,
- padding="same")(y)
-
- if downsample:
- x = Conv2D(kernel_size=1,
- strides=2,
- filters=filters,
- padding="same")(x)
- out = Add()([x, y])
- out = relu_bn(out)
- return out
-
-
-
-def Generator(input_shape,
- patch_size,
- num_patches,
- projection_dim,
- num_heads,
- ff_dim):
-
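-    # ViT-style encoder (patch extraction + positional patch embedding + two
-    # transformer blocks) followed by a Conv2DTranspose/residual-block decoder
-    # that upsamples back to a 32x32 single-channel field.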
- inputs = layers.Input(shape=(32, 32, 1))
-
- patches = Patches(patch_size)(inputs)
- print("patches:",patches.shape)
- encoded_patches = PatchEncoder(num_patches, projection_dim)(patches)
- print("encoded patches:",encoded_patches.shape)
- x = TransformerBlock(16, num_heads, ff_dim)(encoded_patches)
- print("first transformer block:",x.shape)
- x = TransformerBlock(16, num_heads, ff_dim)(x)
- #x = TransformerBlock(16, num_heads, ff_dim)(x)
- #x = TransformerBlock(16, num_heads, ff_dim)(x)
- print("Before reshape:",x.shape)
- x = layers.Reshape((4, 4, x.shape[1]))(x)
- print("After reshape: ",x.shape)
- x = layers.Conv2DTranspose(32, (5, 5), strides=(2, 2), padding='same', use_bias=False)(x)
- x = layers.BatchNormalization()(x)
- x = layers.LeakyReLU()(x)
- print("First conv2dtrans:",x.shape)
- x = residual_block(x, downsample=False, filters=32)
-
- x = layers.Conv2DTranspose(16, (5, 5), strides=(2, 2), padding='same', use_bias=False)(x)
- x = layers.BatchNormalization()(x)
- x = layers.LeakyReLU()(x)
-    print("Second conv2dtrans:",x.shape)
- x = residual_block(x, downsample=False, filters=16)
- x = layers.Conv2DTranspose(8, (5, 5), strides=(2, 2), padding='same', use_bias=False)(x)
- x = layers.BatchNormalization()(x)
- x = layers.LeakyReLU()(x)
-    print("Third conv2dtrans:",x.shape)
- x = residual_block(x, downsample=False, filters=8)
-
- #x = layers.Conv2DTranspose(16, (5, 5), strides=(2, 2), padding='same', use_bias=False)(x)
- #x = layers.BatchNormalization()(x)
- #x = layers.LeakyReLU()(x)
- #print("First conv2dtrans:",x.shape)
- #x = residual_block(x, downsample=False, filters=16)
-
- #x = layers.Conv2DTranspose(32, (5, 5), strides=(4, 4), padding='same', use_bias=False)(x)
- #x = layers.BatchNormalization()(x)
- #x = layers.LeakyReLU()(x)
- #print("First conv2dtrans:",x.shape)
- #x = residual_block(x, downsample=False, filters=32)
-
- x = layers.Conv2D(1, (3, 3), strides=(1, 1), padding='same', use_bias=False, activation='tanh')(x)
- print("Final shape:- ",x.shape)
- return tf.keras.Model(inputs=inputs, outputs=x)
-
-
-def transform_img(t,b,l,r,mode):
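-    # Build a 32x32 field whose four edges carry the given boundary temperatures,
-    # duplicate it into a batch of two, normalize to [-1, 1], and return it as a
-    # (B, 1, 32, 32) torch tensor (`mode` is currently unused).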
- T = np.empty((32,32))
- T.fill(0)
- T[31:, :] = t
- T[:1, :] = b
- T[:, 31:] = r
- T[:, :1] = l
- new_t = []
- new_t.append(T)
- new_t.append(T)
- new_t = np.array(new_t)
- valid_cond_norm = np.zeros_like(new_t)
- for i in range(len(new_t)):
- valid_cond_norm[i] = ((new_t[i]/(new_t[i].max()/2))-1.)
- valid_cond_norm1 = valid_cond_norm.reshape((valid_cond_norm.shape[0],valid_cond_norm.shape[1],valid_cond_norm.shape[2],1))
- valid_cond_norm2 = np.transpose(valid_cond_norm1,(0,3,2,1))
- valid_cond_norm = torch.Tensor(valid_cond_norm2)
- return valid_cond_norm
-
-def transform_img_tf(t,b,l,r):
- T = np.empty((32,32))
- T.fill(0)
- T[31:, :] = t
- T[:1, :] = b
- T[:, 31:] = r
- T[:, :1] = l
- new_t = []
- new_t.append(T)
- new_t.append(T)
- new_t = np.array(new_t)
- valid_cond_norm = np.zeros_like(new_t)
- for i in range(len(new_t)):
- valid_cond_norm[i] = ((new_t[i]/(new_t[i].max()/2))-1.)
- valid_cond_norm1 = valid_cond_norm.reshape((valid_cond_norm.shape[0],valid_cond_norm.shape[1],valid_cond_norm.shape[2],1))
- valid_cond_norm1 = tf.constant(valid_cond_norm1,dtype=tf.float32)
- print(valid_cond_norm1.shape)
- dataset_valid = tf.data.Dataset.from_tensor_slices((valid_cond_norm1))
- dataset_valid = dataset_valid.batch(1)
- return dataset_valid
-
-#cuda = True if torch.cuda.is_available() else False
-#Tensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
-def predict(text,top,bottom,right,left):
-
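-    # Dispatch on the selected model: the PyTorch U-Net GAN generator or the
-    # TensorFlow ViT-based generator; either way, return a filled contour plot
-    # of the predicted 32x32 temperature field.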
- if text == 'Generator':
- x = transform_img(top,bottom,right,left,'gen')
- model = GeneratorUNet()
- model.load_state_dict(torch.load("./generator_499.pth", map_location=torch.device('cpu'))) # Use 'cuda' if you have a GPU available
- #if cuda:
- # model = model.cuda()
- real_A1 = Variable(x.type(torch.FloatTensor))
- fake_B1 = model(real_A1)
- fake1 = fake_B1.detach().cpu().permute(0,3, 2, 1)
- colourMap = plt.cm.jet
- X, Y = np.meshgrid(np.arange(0, 32), np.arange(0, 32))
- fig, ax = plt.subplots(1, 1)
- ax.contourf(X, Y, fake1[0,:,:,0], cmap=colourMap)
- return fig
- if text =='Transformer':
- x = transform_img_tf(top,bottom,right,left)
- #print(x.shape)
- LAMBDA = 100
- IMG_WIDTH = 32
- IMG_HEIGHT = 32
- patch_size = 4
- num_patches = (IMG_HEIGHT // patch_size) ** 2
- projection_dim = 16
- embed_dim = 16
- num_heads = 16
- ff_dim = 8
- input_shape = (32, 32, 1)
- generator1 = Generator(input_shape, patch_size, num_patches, projection_dim, num_heads, ff_dim)
- generator1.load_weights('./epoch_149.h5')
- for img in x.take(2):
- print(img)
- #print("Image:-",img.shape)
- pred = generator1(img)
- print(pred.shape)
- pred = pred.numpy()
- pred = pred[0,:,:,0]
- colourMap = plt.cm.jet
- X, Y = np.meshgrid(np.arange(0, 32), np.arange(0, 32))
- fig, ax = plt.subplots(1, 1)
- ax.contourf(X, Y, pred, cmap=colourMap)
- return fig
-gr.Interface(
- predict,
- inputs=[
- gr.Dropdown(["Generator", "Transformer"]),
- gr.Slider(0, 40, label='top', step=10),
- gr.Slider(40, 80, label='bottom', step=10),
- gr.Slider(80, 120, label='right', step=10),
- gr.Slider(120, 150, label='left', step=10),
- ],
- outputs=gr.Plot(),
-
-).launch()
diff --git a/spaces/EuroPython2022/BayesCap/src/losses.py b/spaces/EuroPython2022/BayesCap/src/losses.py
deleted file mode 100644
index 990af85be1163124a385b06ac5ffc63a47b0cfdd..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/BayesCap/src/losses.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision.models as models
-from torch import Tensor
-
-class ContentLoss(nn.Module):
- """Constructs a content loss function based on the VGG19 network.
-    Using high-level feature maps from the later layers of the network focuses the loss more on the texture content of the image.
-
- Paper reference list:
-    - "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" paper.
-    - "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks" paper.
-    - "Perceptual Extreme Super Resolution Network with Receptive Field Block" paper.
-
- """
-
- def __init__(self) -> None:
- super(ContentLoss, self).__init__()
- # Load the VGG19 model trained on the ImageNet dataset.
- vgg19 = models.vgg19(pretrained=True).eval()
- # Extract the thirty-sixth layer output in the VGG19 model as the content loss.
- self.feature_extractor = nn.Sequential(*list(vgg19.features.children())[:36])
- # Freeze model parameters.
- for parameters in self.feature_extractor.parameters():
- parameters.requires_grad = False
-
- # The preprocessing method of the input data. This is the VGG model preprocessing method of the ImageNet dataset.
- self.register_buffer("mean", torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
- self.register_buffer("std", torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))
-
- def forward(self, sr: Tensor, hr: Tensor) -> Tensor:
- # Standardized operations
- sr = sr.sub(self.mean).div(self.std)
- hr = hr.sub(self.mean).div(self.std)
-
- # Find the feature map difference between the two images
- loss = F.l1_loss(self.feature_extractor(sr), self.feature_extractor(hr))
-
- return loss
-
-
-class GenGaussLoss(nn.Module):
- def __init__(
- self, reduction='mean',
- alpha_eps = 1e-4, beta_eps=1e-4,
- resi_min = 1e-4, resi_max=1e3
- ) -> None:
- super(GenGaussLoss, self).__init__()
- self.reduction = reduction
- self.alpha_eps = alpha_eps
- self.beta_eps = beta_eps
- self.resi_min = resi_min
- self.resi_max = resi_max
-
- def forward(
- self,
- mean: Tensor, one_over_alpha: Tensor, beta: Tensor, target: Tensor
- ):
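-        # Generalized-Gaussian negative-log-likelihood style loss:
-        #   residual term - log(1/alpha) + log(Gamma(1/beta)) - log(beta),
-        # with small eps offsets and clamping for numerical stability.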
- one_over_alpha1 = one_over_alpha + self.alpha_eps
- beta1 = beta + self.beta_eps
-
- resi = torch.abs(mean - target)
- # resi = torch.pow(resi*one_over_alpha1, beta1).clamp(min=self.resi_min, max=self.resi_max)
- resi = (resi*one_over_alpha1*beta1).clamp(min=self.resi_min, max=self.resi_max)
- ## check if resi has nans
- if torch.sum(resi != resi) > 0:
- print('resi has nans!!')
- return None
-
- log_one_over_alpha = torch.log(one_over_alpha1)
- log_beta = torch.log(beta1)
- lgamma_beta = torch.lgamma(torch.pow(beta1, -1))
-
- if torch.sum(log_one_over_alpha != log_one_over_alpha) > 0:
- print('log_one_over_alpha has nan')
- if torch.sum(lgamma_beta != lgamma_beta) > 0:
- print('lgamma_beta has nan')
- if torch.sum(log_beta != log_beta) > 0:
- print('log_beta has nan')
-
- l = resi - log_one_over_alpha + lgamma_beta - log_beta
-
- if self.reduction == 'mean':
- return l.mean()
- elif self.reduction == 'sum':
- return l.sum()
- else:
- print('Reduction not supported')
- return None
-
-class TempCombLoss(nn.Module):
- def __init__(
- self, reduction='mean',
- alpha_eps = 1e-4, beta_eps=1e-4,
- resi_min = 1e-4, resi_max=1e3
- ) -> None:
- super(TempCombLoss, self).__init__()
- self.reduction = reduction
- self.alpha_eps = alpha_eps
- self.beta_eps = beta_eps
- self.resi_min = resi_min
- self.resi_max = resi_max
-
- self.L_GenGauss = GenGaussLoss(
- reduction=self.reduction,
- alpha_eps=self.alpha_eps, beta_eps=self.beta_eps,
- resi_min=self.resi_min, resi_max=self.resi_max
- )
- self.L_l1 = nn.L1Loss(reduction=self.reduction)
-
- def forward(
- self,
- mean: Tensor, one_over_alpha: Tensor, beta: Tensor, target: Tensor,
- T1: float, T2: float
- ):
- l1 = self.L_l1(mean, target)
- l2 = self.L_GenGauss(mean, one_over_alpha, beta, target)
- l = T1*l1 + T2*l2
-
- return l
-
-
-# x1 = torch.randn(4,3,32,32)
-# x2 = torch.rand(4,3,32,32)
-# x3 = torch.rand(4,3,32,32)
-# x4 = torch.randn(4,3,32,32)
-
-# L = GenGaussLoss(alpha_eps=1e-4, beta_eps=1e-4, resi_min=1e-4, resi_max=1e3)
-# L2 = TempCombLoss(alpha_eps=1e-4, beta_eps=1e-4, resi_min=1e-4, resi_max=1e3)
-# print(L(x1, x2, x3, x4), L2(x1, x2, x3, x4, 1e0, 1e-2))
\ No newline at end of file
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/onnx_export.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/onnx_export.py
deleted file mode 100644
index 5deda785cf22b341f7d2e6399ef5fcdad6fe129e..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/onnx_export.py
+++ /dev/null
@@ -1,226 +0,0 @@
-from diffusion_onnx import GaussianDiffusion
-import os
-import yaml
-import torch
-import torch.nn as nn
-import numpy as np
-from wavenet import WaveNet
-import torch.nn.functional as F
-import diffusion
-
-class DotDict(dict):
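-    # dict subclass that exposes keys as attributes and wraps nested dicts,
-    # so config entries can be read as e.g. args.model.n_layers.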
- def __getattr__(*args):
- val = dict.get(*args)
- return DotDict(val) if type(val) is dict else val
-
- __setattr__ = dict.__setitem__
- __delattr__ = dict.__delitem__
-
-
-def load_model_vocoder(
- model_path,
- device='cpu'):
- config_file = os.path.join(os.path.split(model_path)[0], 'config.yaml')
- with open(config_file, "r") as config:
- args = yaml.safe_load(config)
- args = DotDict(args)
-
- # load model
- model = Unit2Mel(
- args.data.encoder_out_channels,
- args.model.n_spk,
- args.model.use_pitch_aug,
- 128,
- args.model.n_layers,
- args.model.n_chans,
- args.model.n_hidden)
-
- print(' [Loading] ' + model_path)
- ckpt = torch.load(model_path, map_location=torch.device(device))
- model.to(device)
- model.load_state_dict(ckpt['model'])
- model.eval()
- return model, args
-
-
-class Unit2Mel(nn.Module):
- def __init__(
- self,
- input_channel,
- n_spk,
- use_pitch_aug=False,
- out_dims=128,
- n_layers=20,
- n_chans=384,
- n_hidden=256):
- super().__init__()
- self.unit_embed = nn.Linear(input_channel, n_hidden)
- self.f0_embed = nn.Linear(1, n_hidden)
- self.volume_embed = nn.Linear(1, n_hidden)
- if use_pitch_aug:
- self.aug_shift_embed = nn.Linear(1, n_hidden, bias=False)
- else:
- self.aug_shift_embed = None
- self.n_spk = n_spk
- if n_spk is not None and n_spk > 1:
- self.spk_embed = nn.Embedding(n_spk, n_hidden)
-
- # diffusion
- self.decoder = GaussianDiffusion(out_dims, n_layers, n_chans, n_hidden)
- self.hidden_size = n_hidden
- self.speaker_map = torch.zeros((self.n_spk,1,1,n_hidden))
-
-
-
- def forward(self, units, mel2ph, f0, volume, g = None):
-
- '''
- input:
- B x n_frames x n_unit
- return:
-            B x n_hidden x n_frames
- '''
-
- decoder_inp = F.pad(units, [0, 0, 1, 0])
- mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, units.shape[-1]])
- units = torch.gather(decoder_inp, 1, mel2ph_) # [B, T, H]
-
- x = self.unit_embed(units) + self.f0_embed((1 + f0.unsqueeze(-1) / 700).log()) + self.volume_embed(volume.unsqueeze(-1))
-
- if self.n_spk is not None and self.n_spk > 1: # [N, S] * [S, B, 1, H]
- g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1]
- g = g * self.speaker_map # [N, S, B, 1, H]
- g = torch.sum(g, dim=1) # [N, 1, B, 1, H]
- g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N]
- x = x.transpose(1, 2) + g
- return x
- else:
- return x.transpose(1, 2)
-
-
- def init_spkembed(self, units, f0, volume, spk_id = None, spk_mix_dict = None, aug_shift = None,
- gt_spec=None, infer=True, infer_speedup=10, method='dpm-solver', k_step=300, use_tqdm=True):
-
- '''
- input:
- B x n_frames x n_unit
- return:
-            B x n_hidden x n_frames
- '''
- x = self.unit_embed(units) + self.f0_embed((1+ f0 / 700).log()) + self.volume_embed(volume)
- if self.n_spk is not None and self.n_spk > 1:
- if spk_mix_dict is not None:
- spk_embed_mix = torch.zeros((1,1,self.hidden_size))
- for k, v in spk_mix_dict.items():
- spk_id_torch = torch.LongTensor(np.array([[k]])).to(units.device)
- spk_embeddd = self.spk_embed(spk_id_torch)
- self.speaker_map[k] = spk_embeddd
- spk_embed_mix = spk_embed_mix + v * spk_embeddd
- x = x + spk_embed_mix
- else:
- x = x + self.spk_embed(spk_id - 1)
- self.speaker_map = self.speaker_map.unsqueeze(0)
- self.speaker_map = self.speaker_map.detach()
- return x.transpose(1, 2)
-
- def OnnxExport(self, project_name=None, init_noise=None, export_encoder=True, export_denoise=True, export_pred=True, export_after=True):
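-        # Build dummy inputs (hubert features, mel2ph, f0, volume, speaker mix),
-        # optionally export the encoder to ONNX with dynamic time axes, then
-        # delegate the diffusion decoder export to decoder.OnnxExport.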
- hubert_hidden_size = 768
- n_frames = 100
- hubert = torch.randn((1, n_frames, hubert_hidden_size))
- mel2ph = torch.arange(end=n_frames).unsqueeze(0).long()
- f0 = torch.randn((1, n_frames))
- volume = torch.randn((1, n_frames))
- spk_mix = []
- spks = {}
- if self.n_spk is not None and self.n_spk > 1:
- for i in range(self.n_spk):
- spk_mix.append(1.0/float(self.n_spk))
- spks.update({i:1.0/float(self.n_spk)})
- spk_mix = torch.tensor(spk_mix)
- spk_mix = spk_mix.repeat(n_frames, 1)
- orgouttt = self.init_spkembed(hubert, f0.unsqueeze(-1), volume.unsqueeze(-1), spk_mix_dict=spks)
- outtt = self.forward(hubert, mel2ph, f0, volume, spk_mix)
- if export_encoder:
- torch.onnx.export(
- self,
- (hubert, mel2ph, f0, volume, spk_mix),
- f"{project_name}_encoder.onnx",
- input_names=["hubert", "mel2ph", "f0", "volume", "spk_mix"],
- output_names=["mel_pred"],
- dynamic_axes={
- "hubert": [1],
- "f0": [1],
- "volume": [1],
- "mel2ph": [1],
- "spk_mix": [0],
- },
- opset_version=16
- )
-
- self.decoder.OnnxExport(project_name, init_noise=init_noise, export_denoise=export_denoise, export_pred=export_pred, export_after=export_after)
-
- def ExportOnnx(self, project_name=None):
- hubert_hidden_size = 768
- n_frames = 100
- hubert = torch.randn((1, n_frames, hubert_hidden_size))
- mel2ph = torch.arange(end=n_frames).unsqueeze(0).long()
- f0 = torch.randn((1, n_frames))
- volume = torch.randn((1, n_frames))
- spk_mix = []
- spks = {}
- if self.n_spk is not None and self.n_spk > 1:
- for i in range(self.n_spk):
- spk_mix.append(1.0/float(self.n_spk))
- spks.update({i:1.0/float(self.n_spk)})
- spk_mix = torch.tensor(spk_mix)
- orgouttt = self.orgforward(hubert, f0.unsqueeze(-1), volume.unsqueeze(-1), spk_mix_dict=spks)
- outtt = self.forward(hubert, mel2ph, f0, volume, spk_mix)
-
- torch.onnx.export(
- self,
- (hubert, mel2ph, f0, volume, spk_mix),
- f"{project_name}_encoder.onnx",
- input_names=["hubert", "mel2ph", "f0", "volume", "spk_mix"],
- output_names=["mel_pred"],
- dynamic_axes={
- "hubert": [1],
- "f0": [1],
- "volume": [1],
- "mel2ph": [1]
- },
- opset_version=16
- )
-
- condition = torch.randn(1,self.decoder.n_hidden,n_frames)
- noise = torch.randn((1, 1, self.decoder.mel_bins, condition.shape[2]), dtype=torch.float32)
- pndm_speedup = torch.LongTensor([100])
- K_steps = torch.LongTensor([1000])
- self.decoder = torch.jit.script(self.decoder)
- self.decoder(condition, noise, pndm_speedup, K_steps)
-
- torch.onnx.export(
- self.decoder,
- (condition, noise, pndm_speedup, K_steps),
- f"{project_name}_diffusion.onnx",
- input_names=["condition", "noise", "pndm_speedup", "K_steps"],
- output_names=["mel"],
- dynamic_axes={
- "condition": [2],
- "noise": [3],
- },
- opset_version=16
- )
-
-
-if __name__ == "__main__":
- project_name = "dddsp"
- model_path = f'{project_name}/model_500000.pt'
-
- model, _ = load_model_vocoder(model_path)
-
-    # Export the Diffusion parts separately (requires MoeSS/MoeVoiceStudio, or write your own PNDM/DPM sampling)
- model.OnnxExport(project_name, export_encoder=True, export_denoise=True, export_pred=True, export_after=True)
-
-    # Combined Diffusion export (Encoder and Diffusion stay separate; simply feed the Encoder output and the initial noise into Diffusion)
- # model.ExportOnnx(project_name)
-
diff --git a/spaces/Froleptan/stablediffusion-infinity/js/setup.js b/spaces/Froleptan/stablediffusion-infinity/js/setup.js
deleted file mode 100644
index 2b9c2913a0437d9b009b933d5e6b545877d3f3a3..0000000000000000000000000000000000000000
--- a/spaces/Froleptan/stablediffusion-infinity/js/setup.js
+++ /dev/null
@@ -1,28 +0,0 @@
-function(token_val, width, height, size, model_choice, model_path){
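-    // Resize the inference iframe, hide the model-path input, then poll the
-    // iframe's hidden #setup flag until PyScript reports ready before clicking #draw.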
- let app=document.querySelector("gradio-app");
- app=app.shadowRoot??app;
- app.querySelector("#sdinfframe").style.height=80+Number(height)+"px";
- // app.querySelector("#setup_row").style.display="none";
- app.querySelector("#model_path_input").style.display="none";
- let frame=app.querySelector("#sdinfframe").contentWindow.document;
-
- if(frame.querySelector("#setup").value=="0")
- {
- window.my_setup=setInterval(function(){
- let app=document.querySelector("gradio-app");
- app=app.shadowRoot??app;
- let frame=app.querySelector("#sdinfframe").contentWindow.document;
- console.log("Check PyScript...")
- if(frame.querySelector("#setup").value=="1")
- {
- frame.querySelector("#draw").click();
- clearInterval(window.my_setup);
- }
- }, 100)
- }
- else
- {
- frame.querySelector("#draw").click();
- }
- return [token_val, width, height, size, model_choice, model_path];
-}
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/construct_colorful_arch.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/construct_colorful_arch.py
deleted file mode 100644
index 38f8e7d8d17833ec8e3c65e8cdfa352acd0eccac..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/construct_colorful_arch.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class ConstructColorfulArch(Task):
- """Construct an arch using six blocks: three red, and three blue."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 20
- self.lang_template = "Construct an arch using six blocks: three red, and three blue."
- self.task_completed_desc = "done constructing colorful arch."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add blocks.
- # x, y, z dimensions for the asset size
- block_size = (0.04, 0.04, 0.04)
- block_urdf = 'stacking/block.urdf'
- colors = [utils.COLORS['red'], utils.COLORS['blue']]
- blocks = []
- for i in range(6):
- block_pose = self.get_random_pose(env, block_size)
- color = colors[i // 3] # First three blocks are red, last three are blue
- block_id = env.add_object(block_urdf, block_pose, color=color)
- blocks.append(block_id)
-
- # Associate placement locations for goals.
- place_pos = [(0, -0.05, 0.02), (0, 0.05, 0.02), # Base layer
- (0, 0, 0.06), # Second layer
- (0, -0.05, 0.10), (0, 0.05, 0.10), # Third layer
- (0, 0, 0.14)] # Top layer
- targs = [(utils.apply(block_pose, i), block_pose[1]) for i in place_pos]
-
- # Goal: blocks are stacked in an arch (bottom layer: red, red).
- self.add_goal(objs=blocks[:2], matches=np.ones((2, 2)), targ_poses=targs[:2], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2]*2,
- language_goal="Place two red blocks on the tabletop parallel to each other")
-
- # Goal: blocks are stacked in an arch (second layer: blue).
- self.add_goal(objs=blocks[2:3], matches=np.ones((1, 1)), targ_poses=targs[2:3], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2],
- language_goal="Place a blue block on top of the red blocks to form a basic arch")
-
- # Goal: blocks are stacked in an arch (third layer: red, red).
- self.add_goal(objs=blocks[3:5], matches=np.ones((2, 2)), targ_poses=targs[3:5], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2]*2,
- language_goal="Place a red block on each side of the base arch")
-
- # Goal: blocks are stacked in an arch (top layer: blue).
- self.add_goal(objs=blocks[5:], matches=np.ones((1, 1)), targ_poses=targs[5:], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2],
- language_goal="Bridge them with the last blue block")
\ No newline at end of file
diff --git a/spaces/Godrose0728/sound-link/app.py b/spaces/Godrose0728/sound-link/app.py
deleted file mode 100644
index 1629cceafb494840b211bae2e0517b9b0e8f4aeb..0000000000000000000000000000000000000000
--- a/spaces/Godrose0728/sound-link/app.py
+++ /dev/null
@@ -1,320 +0,0 @@
-import argparse
-import json
-import os
-import re
-import tempfile
-from pathlib import Path
-
-import librosa
-import numpy as np
-import torch
-from torch import no_grad, LongTensor
-import commons
-import utils
-import gradio as gr
-import gradio.utils as gr_utils
-import gradio.processing_utils as gr_processing_utils
-from models import SynthesizerTrn
-from text import text_to_sequence, _clean_text
-from mel_processing import spectrogram_torch
-
-limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces
-
-audio_postprocess_ori = gr.Audio.postprocess
-
-
-def audio_postprocess(self, y):
- data = audio_postprocess_ori(self, y)
- if data is None:
- return None
- return gr_processing_utils.encode_url_or_file_to_base64(data["name"])
-
-
-gr.Audio.postprocess = audio_postprocess
-
-
-def get_text(text, hps, is_symbol):
- text_norm = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm
-
-
-def create_tts_fn(model, hps, speaker_ids):
- def tts_fn(text, speaker, speed, is_symbol):
- if limitation:
-            text_len = len(re.sub(r"\[([A-Z]{2})\]", "", text))
- max_len = 150
- if is_symbol:
- max_len *= 3
- if text_len > max_len:
- return "Error: Text is too long", None
-
- speaker_id = speaker_ids[speaker]
- stn_tst = get_text(text, hps, is_symbol)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0).to(device)
- x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device)
- sid = LongTensor([speaker_id]).to(device)
- audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8,
- length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy()
- del stn_tst, x_tst, x_tst_lengths, sid
- return "Success", (hps.data.sampling_rate, audio)
-
- return tts_fn
-
-
-def create_vc_fn(model, hps, speaker_ids):
- def vc_fn(original_speaker, target_speaker, input_audio):
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- duration = audio.shape[0] / sampling_rate
- if limitation and duration > 30:
- return "Error: Audio is too long", None
- original_speaker_id = speaker_ids[original_speaker]
- target_speaker_id = speaker_ids[target_speaker]
-
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != hps.data.sampling_rate:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=hps.data.sampling_rate)
- with no_grad():
- y = torch.FloatTensor(audio)
- y = y.unsqueeze(0)
- spec = spectrogram_torch(y, hps.data.filter_length,
- hps.data.sampling_rate, hps.data.hop_length, hps.data.win_length,
- center=False).to(device)
- spec_lengths = LongTensor([spec.size(-1)]).to(device)
- sid_src = LongTensor([original_speaker_id]).to(device)
- sid_tgt = LongTensor([target_speaker_id]).to(device)
- audio = model.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][
- 0, 0].data.cpu().float().numpy()
- del y, spec, spec_lengths, sid_src, sid_tgt
- return "Success", (hps.data.sampling_rate, audio)
-
- return vc_fn
-
-
-def create_soft_vc_fn(model, hps, speaker_ids):
- def soft_vc_fn(target_speaker, input_audio1, input_audio2):
- input_audio = input_audio1
- if input_audio is None:
- input_audio = input_audio2
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- duration = audio.shape[0] / sampling_rate
- if limitation and duration > 30:
- return "Error: Audio is too long", None
- target_speaker_id = speaker_ids[target_speaker]
-
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- with torch.inference_mode():
- units = hubert.units(torch.FloatTensor(audio).unsqueeze(0).unsqueeze(0).to(device))
- with no_grad():
- unit_lengths = LongTensor([units.size(1)]).to(device)
- sid = LongTensor([target_speaker_id]).to(device)
- audio = model.infer(units, unit_lengths, sid=sid, noise_scale=.667,
- noise_scale_w=0.8)[0][0, 0].data.cpu().float().numpy()
- del units, unit_lengths, sid
- return "Success", (hps.data.sampling_rate, audio)
-
- return soft_vc_fn
-
-
-def create_to_symbol_fn(hps):
- def to_symbol_fn(is_symbol_input, input_text, temp_text):
- return (_clean_text(input_text, hps.data.text_cleaners), input_text) if is_symbol_input \
- else (temp_text, temp_text)
-
- return to_symbol_fn
-
-
-download_audio_js = """
-() =>{{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let audio = root.querySelector("#{audio_id}").querySelector("audio");
- if (audio == undefined)
- return;
- audio = audio.src;
- let oA = document.createElement("a");
- oA.download = Math.floor(Math.random()*100000000)+'.wav';
- oA.href = audio;
- document.body.appendChild(oA);
- oA.click();
- oA.remove();
-}}
-"""
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--device', type=str, default='cpu')
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- args = parser.parse_args()
-
- device = torch.device(args.device)
- models_tts = []
- models_vc = []
- models_soft_vc = []
- with open("saved_model/info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for i, info in models_info.items():
- name = info["title"]
- author = info["author"]
- lang = info["lang"]
- example = info["example"]
- config_path = f"saved_model/{i}/config.json"
- model_path = f"saved_model/{i}/model.pth"
- cover = info["cover"]
- cover_path = f"saved_model/{i}/{cover}" if cover else None
- hps = utils.get_hparams_from_file(config_path)
- model = SynthesizerTrn(
- len(hps.symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model)
- utils.load_checkpoint(model_path, model, None)
- model.eval().to(device)
- speaker_ids = [sid for sid, name in enumerate(hps.speakers) if name != "None"]
- speakers = [name for sid, name in enumerate(hps.speakers) if name != "None"]
-
- t = info["type"]
- if t == "vits":
- models_tts.append((name, author, cover_path, speakers, lang, example,
- hps.symbols, create_tts_fn(model, hps, speaker_ids),
- create_to_symbol_fn(hps)))
- models_vc.append((name, author, cover_path, speakers, create_vc_fn(model, hps, speaker_ids)))
- elif t == "soft-vits-vc":
- models_soft_vc.append((name, author, cover_path, speakers, create_soft_vc_fn(model, hps, speaker_ids)))
-
- hubert = torch.hub.load("bshall/hubert:main", "hubert_soft", trust_repo=True).to(device)
-
- app = gr.Blocks()
-
- with app:
- gr.Markdown("# Moe TTS And Voice Conversion Using VITS Model\n\n"
- "\n\n"
- "[Open In Colab]"
- "(https://colab.research.google.com/drive/14Pb8lpmwZL-JI5Ub6jpG4sz2-8KS0kbS?usp=sharing)"
- " without queue and length limitation.\n\n"
- "Feel free to [open discussion](https://huggingface.co/spaces/skytnt/moe-tts/discussions/new) "
- "if you want to add your model to this app.")
- with gr.Tabs():
- with gr.TabItem("TTS"):
- with gr.Tabs():
- for i, (name, author, cover_path, speakers, lang, example, symbols, tts_fn,
- to_symbol_fn) in enumerate(models_tts):
- with gr.TabItem(f"model{i}"):
- with gr.Column():
- cover_markdown = f"\n\n" if cover_path else ""
- gr.Markdown(f"## {name}\n\n"
- f"{cover_markdown}"
- f"model author: {author}\n\n"
- f"language: {lang}")
- tts_input1 = gr.TextArea(label="Text (150 words limitation)", value=example,
- elem_id=f"tts-input{i}")
- tts_input2 = gr.Dropdown(label="Speaker", choices=speakers,
- type="index", value=speakers[0])
- tts_input3 = gr.Slider(label="Speed", value=1, minimum=0.5, maximum=2, step=0.1)
- with gr.Accordion(label="Advanced Options", open=False):
- temp_text_var = gr.Variable()
- symbol_input = gr.Checkbox(value=False, label="Symbol input")
- symbol_list = gr.Dataset(label="Symbol list", components=[tts_input1],
- samples=[[x] for x in symbols],
- elem_id=f"symbol-list{i}")
- symbol_list_json = gr.Json(value=symbols, visible=False)
- tts_submit = gr.Button("Generate", variant="primary")
- tts_output1 = gr.Textbox(label="Output Message")
- tts_output2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio{i}")
- download = gr.Button("Download Audio")
- download.click(None, [], [], _js=download_audio_js.format(audio_id=f"tts-audio{i}"))
-
- tts_submit.click(tts_fn, [tts_input1, tts_input2, tts_input3, symbol_input],
- [tts_output1, tts_output2])
- symbol_input.change(to_symbol_fn,
- [symbol_input, tts_input1, temp_text_var],
- [tts_input1, temp_text_var])
- symbol_list.click(None, [symbol_list, symbol_list_json], [],
- _js=f"""
- (i,symbols) => {{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let text_input = root.querySelector("#tts-input{i}").querySelector("textarea");
- let startPos = text_input.selectionStart;
- let endPos = text_input.selectionEnd;
- let oldTxt = text_input.value;
- let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos);
- text_input.value = result;
- let x = window.scrollX, y = window.scrollY;
- text_input.focus();
- text_input.selectionStart = startPos + symbols[i].length;
- text_input.selectionEnd = startPos + symbols[i].length;
- text_input.blur();
- window.scrollTo(x, y);
- return [];
- }}""")
-
- with gr.TabItem("Voice Conversion"):
- with gr.Tabs():
- for i, (name, author, cover_path, speakers, vc_fn) in enumerate(models_vc):
- with gr.TabItem(f"model{i}"):
- cover_markdown = f"\n\n" if cover_path else ""
- gr.Markdown(f"## {name}\n\n"
- f"{cover_markdown}"
- f"model author: {author}")
- vc_input1 = gr.Dropdown(label="Original Speaker", choices=speakers, type="index",
- value=speakers[0])
- vc_input2 = gr.Dropdown(label="Target Speaker", choices=speakers, type="index",
- value=speakers[min(len(speakers) - 1, 1)])
- vc_input3 = gr.Audio(label="Input Audio (30s limitation)")
- vc_submit = gr.Button("Convert", variant="primary")
- vc_output1 = gr.Textbox(label="Output Message")
- vc_output2 = gr.Audio(label="Output Audio", elem_id=f"vc-audio{i}")
- download = gr.Button("Download Audio")
- download.click(None, [], [], _js=download_audio_js.format(audio_id=f"vc-audio{i}"))
- vc_submit.click(vc_fn, [vc_input1, vc_input2, vc_input3], [vc_output1, vc_output2])
- with gr.TabItem("Soft Voice Conversion"):
- with gr.Tabs():
- for i, (name, author, cover_path, speakers, soft_vc_fn) in enumerate(models_soft_vc):
- with gr.TabItem(f"model{i}"):
- cover_markdown = f"\n\n" if cover_path else ""
- gr.Markdown(f"## {name}\n\n"
- f"{cover_markdown}"
- f"model author: {author}")
- vc_input1 = gr.Dropdown(label="Target Speaker", choices=speakers, type="index",
- value=speakers[0])
- source_tabs = gr.Tabs()
- with source_tabs:
- with gr.TabItem("microphone"):
- vc_input2 = gr.Audio(label="Input Audio (30s limitation)", source="microphone")
- with gr.TabItem("upload"):
- vc_input3 = gr.Audio(label="Input Audio (30s limitation)", source="upload")
- vc_submit = gr.Button("Convert", variant="primary")
- vc_output1 = gr.Textbox(label="Output Message")
- vc_output2 = gr.Audio(label="Output Audio", elem_id=f"svc-audio{i}")
- download = gr.Button("Download Audio")
- download.click(None, [], [], _js=download_audio_js.format(audio_id=f"svc-audio{i}"))
- # clear inputs
- source_tabs.set_event_trigger("change", None, [], [vc_input2, vc_input3],
- js="()=>[null,null]")
- vc_submit.click(soft_vc_fn, [vc_input1, vc_input2, vc_input3],
- [vc_output1, vc_output2])
- gr.Markdown(
- "unofficial demo for \n\n"
- "- [https://github.com/CjangCjengh/MoeGoe](https://github.com/CjangCjengh/MoeGoe)\n"
- "- [https://github.com/Francis-Komizu/VITS](https://github.com/Francis-Komizu/VITS)\n"
- "- [https://github.com/luoyily/MoeTTS](https://github.com/luoyily/MoeTTS)\n"
- "- [https://github.com/Francis-Komizu/Sovits](https://github.com/Francis-Komizu/Sovits)"
- )
- app.queue(concurrency_count=3).launch(show_api=False, share=args.share)
diff --git a/spaces/Gradio-Blocks/DualStyleGAN/images/README.md b/spaces/Gradio-Blocks/DualStyleGAN/images/README.md
deleted file mode 100644
index cfd45bb9d2799fa93f74a1ca1ab1252d81bdaf0b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/DualStyleGAN/images/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-These images are freely-usable ones from [Unsplash](https://unsplash.com/).
-
-- https://unsplash.com/photos/rDEOVtE7vOs
-- https://unsplash.com/photos/et_78QkMMQs
-- https://unsplash.com/photos/ILip77SbmOE
-- https://unsplash.com/photos/95UF6LXe-Lo
-
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/config.py b/spaces/Gradio-Blocks/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/config.py
deleted file mode 100644
index 1935f1914df202018438a21021ea1e7acf69e983..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/config.py
+++ /dev/null
@@ -1,142 +0,0 @@
-_base_ = [
- '../../configs/_base_/models/cascade_mask_rcnn_uniformer_fpn.py',
- '../../configs/_base_/datasets/coco_instance.py',
- '../../configs/_base_/schedules/schedule_1x.py',
- '../../configs/_base_/default_runtime.py'
-]
-
-model = dict(
- backbone=dict(
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- drop_path_rate=0.2,
- use_checkpoint=True,
- checkpoint_num=[0, 0, 8, 0],
- windows=False,
- hybrid=True,
- window_size=14
- ),
- neck=dict(in_channels=[64, 128, 320, 512]),
- roi_head=dict(
- bbox_head=[
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.05, 0.05, 0.1, 0.1]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.033, 0.033, 0.067, 0.067]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0))
- ]))
-
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-# augmentation strategy originates from DETR / Sparse RCNN
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='AutoAugment',
- policies=[
- [
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
- (736, 1333), (768, 1333), (800, 1333)],
- multiscale_mode='value',
- keep_ratio=True)
- ],
- [
- dict(type='Resize',
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomCrop',
- crop_type='absolute_range',
- crop_size=(384, 600),
- allow_negative_crop=True),
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
- (576, 1333), (608, 1333), (640, 1333),
- (672, 1333), (704, 1333), (736, 1333),
- (768, 1333), (800, 1333)],
- multiscale_mode='value',
- override=True,
- keep_ratio=True)
- ]
- ]),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-
-optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-lr_config = dict(step=[27, 33])
-runner = dict(type='EpochBasedRunnerAmp', max_epochs=36)
-
-# do not use mmdet version fp16
-fp16 = None
-optimizer_config = dict(
- type="DistOptimizerHook",
- update_interval=1,
- grad_clip=None,
- coalesce=True,
- bucket_size_mb=-1,
- use_fp16=True,
-)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index 55648c08b2c4eb78d7d5ae65482e5e5b291c058a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './encnet_r50-d8_769x769_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/autocast.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/autocast.py
deleted file mode 100644
index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/autocast.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class TorchAutocast:
- """TorchAutocast utility class.
-    Allows you to enable and disable autocast. This is especially useful
- when dealing with different architectures and clusters with different
- levels of support.
-
- Args:
- enabled (bool): Whether to enable torch.autocast or not.
- args: Additional args for torch.autocast.
- kwargs: Additional kwargs for torch.autocast
- """
- def __init__(self, enabled: bool, *args, **kwargs):
- self.autocast = torch.autocast(*args, **kwargs) if enabled else None
-
- def __enter__(self):
- if self.autocast is None:
- return
- try:
- self.autocast.__enter__()
- except RuntimeError:
- device = self.autocast.device
- dtype = self.autocast.fast_dtype
- raise RuntimeError(
- f"There was an error autocasting with dtype={dtype} device={device}\n"
- "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16"
- )
-
- def __exit__(self, *args, **kwargs):
- if self.autocast is None:
- return
- self.autocast.__exit__(*args, **kwargs)
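A minimal usage sketch of the wrapper, assuming `TorchAutocast` has been imported (e.g. from `audiocraft.utils.autocast`) and a CUDA device is available; with `enabled=False` the context manager is a no-op:

```python
import torch

x = torch.randn(8, 16, device="cuda")
w = torch.randn(16, 16, device="cuda")

# The matmul runs under float16 autocast; nothing changes when enabled=False.
with TorchAutocast(enabled=True, device_type="cuda", dtype=torch.float16):
    y = x @ w
print(y.dtype)  # torch.float16 while autocast is active
```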
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/losses/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/tests/losses/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/losses/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/HUBioDataLab/DrugGEN/loss.py b/spaces/HUBioDataLab/DrugGEN/loss.py
deleted file mode 100644
index e30700d7742902b71d865e2e5628dd5b322dd92a..0000000000000000000000000000000000000000
--- a/spaces/HUBioDataLab/DrugGEN/loss.py
+++ /dev/null
@@ -1,158 +0,0 @@
-
-import torch
-
-def discriminator_loss(generator, discriminator, mol_graph, adj, annot, batch_size, device, grad_pen, lambda_gp,z_edge,z_node):
-
- # Compute loss with real molecules.
-
- logits_real_disc = discriminator(mol_graph)
-
- prediction_real = - torch.mean(logits_real_disc)
-
- # Compute loss with fake molecules.
-
- node, edge, node_sample, edge_sample = generator(z_edge, z_node)
-
- graph = torch.cat((node_sample.view(batch_size, -1), edge_sample.view(batch_size, -1)), dim=-1)
-
- logits_fake_disc = discriminator(graph.detach())
-
- prediction_fake = torch.mean(logits_fake_disc)
-
- # Compute gradient loss.
-
- eps = torch.rand(mol_graph.size(0),1).to(device)
- x_int0 = (eps * mol_graph + (1. - eps) * graph).requires_grad_(True)
-
- grad0 = discriminator(x_int0)
- d_loss_gp = grad_pen(grad0, x_int0)
-
- # Calculate total loss
-
- d_loss = prediction_fake + prediction_real + d_loss_gp * lambda_gp
-
- return node, edge,d_loss
-
-
-def generator_loss(generator, discriminator, v, adj, annot, batch_size, penalty, matrices2mol, fps_r,submodel, dataset_name):
-
- # Compute loss with fake molecules.
-
- node, edge, node_sample, edge_sample = generator(adj, annot)
-
-
- graph = torch.cat((node_sample.view(batch_size, -1), edge_sample.view(batch_size, -1)), dim=-1)
-
-
- logits_fake_disc = discriminator(graph)
-
- prediction_fake = - torch.mean(logits_fake_disc)
-
- # Produce molecules.
-
- g_edges_hat_sample = torch.max(edge_sample, -1)[1]
- g_nodes_hat_sample = torch.max(node_sample , -1)[1]
-
- fake_mol = [matrices2mol(n_.data.cpu().numpy(), e_.data.cpu().numpy(), strict=True, file_name=dataset_name)
- for e_, n_ in zip(g_edges_hat_sample, g_nodes_hat_sample)]
- g_loss = prediction_fake
- # Compute penalty loss.
- if submodel == "RL":
- reward = penalty(fake_mol, fps_r)
-
- # Reinforcement Loss
-
- rew_fake = v(graph)
-
- reward_loss = torch.mean(rew_fake) ** 2 + reward
-
- # Calculate total loss
-
- g_loss = prediction_fake + reward_loss * 1
-
-
- return g_loss, fake_mol, g_edges_hat_sample, g_nodes_hat_sample, node, edge
-
-def discriminator2_loss(generator, discriminator, mol_graph, adj, annot, batch_size, device, grad_pen, lambda_gp,akt1_adj,akt1_annot):
-
- # Generate molecules.
-
- dr_edges, dr_nodes = generator(adj,
- annot,
- akt1_adj,
- akt1_annot)
-
-
- dr_edges_hat = dr_edges.view(batch_size, -1)
-
- dr_nodes_hat = dr_nodes.view(batch_size, -1)
-
- dr_graph = torch.cat((dr_nodes_hat, dr_edges_hat), dim=-1)
-
- # Compute loss with fake molecules.
-
- dr_logits_fake = discriminator(dr_graph.detach())
-
- d2_loss_fake = torch.mean(dr_logits_fake)
-
- # Compute loss with real molecules.
-
- dr_logits_real2 = discriminator(mol_graph)
-
- d2_loss_real = - torch.mean(dr_logits_real2)
-
- # Compute gradient loss.
-
- eps_dr = torch.rand(mol_graph.size(0),1).to(device)
- x_int0_dr = (eps_dr * mol_graph + (1. - eps_dr) * dr_graph).requires_grad_(True)
-
-
- grad0_dr = discriminator(x_int0_dr)
- d2_loss_gp = grad_pen(grad0_dr, x_int0_dr)
-
- # Compute total loss.
-
- d2_loss = d2_loss_fake + d2_loss_real + d2_loss_gp * lambda_gp
-
- return d2_loss
-
-def generator2_loss(generator, discriminator, v, adj, annot, batch_size, penalty, matrices2mol, fps_r,ak1_adj,akt1_annot, submodel, drugs_name):
-
- # Generate molecules.
-
- dr_edges_g, dr_nodes_g = generator(adj,
- annot,
- ak1_adj,
- akt1_annot)
-
- dr_edges_hat_g = dr_edges_g.view(batch_size, -1)
-
- dr_nodes_hat_g = dr_nodes_g.view(batch_size, -1)
-
- dr_graph_g = torch.cat((dr_nodes_hat_g, dr_edges_hat_g), dim=-1)
-
- # Compute loss with fake molecules.
-
- dr_g_edges_hat_sample, dr_g_nodes_hat_sample = torch.max(dr_edges_g, -1)[1], torch.max(dr_nodes_g, -1)[1]
-
- g_tra_logits_fake2 = discriminator(dr_graph_g)
-
- g2_loss_fake = - torch.mean(g_tra_logits_fake2)
-
- # Reward
- fake_mol_g = [matrices2mol(n_.data.cpu().numpy(), e_.data.cpu().numpy(), strict=True, file_name=drugs_name)
- for e_, n_ in zip(dr_g_edges_hat_sample, dr_g_nodes_hat_sample)]
- g2_loss = g2_loss_fake
- if submodel == "RL":
- reward2 = penalty(fake_mol_g, fps_r)
-
- # Reinforcement Loss
-
- rew_fake2 = v(dr_graph_g)
- reward_loss2 = torch.mean(rew_fake2) ** 2 + reward2
-
- # Calculate total loss
-
- g2_loss = g2_loss_fake + reward_loss2 * 10
-
- return g2_loss, fake_mol_g, dr_g_edges_hat_sample, dr_g_nodes_hat_sample#, reward2
\ No newline at end of file
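The `grad_pen` callback used by both discriminator losses above is supplied by the training loop and is not defined in this file. As a point of reference, a typical WGAN-GP style penalty matching the call pattern `grad_pen(critic_out, interpolated)` could look like this sketch:

```python
import torch

def gradient_penalty(critic_out, interpolated):
    """Penalize deviation of the critic's gradient norm from 1 (WGAN-GP)."""
    grad = torch.autograd.grad(
        outputs=critic_out,
        inputs=interpolated,
        grad_outputs=torch.ones_like(critic_out),
        create_graph=True,
        retain_graph=True,
        only_inputs=True,
    )[0]
    grad = grad.view(grad.size(0), -1)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()
```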
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/utils/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/utils/__init__.py
deleted file mode 100644
index b392268121643a5ce6fdc6d6a0f712ad8dd867a9..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/utils/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .universal_checkpoint import UniversalCheckpoint
-from .utils import chinese_char_tokenize
-__all__ = ['UniversalCheckpoint', 'chinese_char_tokenize']
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py
deleted file mode 100644
index 106f50247622deca688b223f1ad63275d5b65e58..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import soundfile as sf
-import torch
-import torchaudio.compliance.kaldi as kaldi
-
-
-class LogMelFeatureReader:
- """
-    Wrapper class for computing log-Mel filterbank features.
- Helps extract features for a given audio file.
- """
-
- def __init__(self, *args, **kwargs):
- self.num_mel_bins = kwargs.get("num_mel_bins", 80)
- self.frame_length = kwargs.get("frame_length", 25.0)
-
- def get_feats(self, file_path):
- wav, sr = sf.read(file_path)
- feats = torch.from_numpy(wav).float()
- feats = kaldi.fbank(
- feats.unsqueeze(0),
- num_mel_bins=self.num_mel_bins,
- frame_length=self.frame_length,
- sample_frequency=sr,
- )
- return feats
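A short usage sketch; the wav path is a placeholder and the class is assumed to be importable from this module:

```python
# Extract 80-dim log-Mel filterbank features from an audio file.
reader = LogMelFeatureReader(num_mel_bins=80, frame_length=25.0)
feats = reader.get_feats("example.wav")   # torch.Tensor of shape (num_frames, 80)
print(feats.shape)
```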
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/logging/metrics.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/logging/metrics.py
deleted file mode 100644
index 58c2fb64e186ed9d5e9a06c73194d98a21bb7560..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/logging/metrics.py
+++ /dev/null
@@ -1,314 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-A standalone module for aggregating metrics.
-
-Metrics can be logged from anywhere using the `log_*` functions defined
-in this module. The logged values will be aggregated dynamically based
-on the aggregation context in which the logging occurs. See the
-:func:`aggregate` context manager for more details.
-"""
-
-import contextlib
-import uuid
-from collections import defaultdict
-from typing import Callable, List, Optional
-
-from .meters import *
-
-
-# Aggregation contexts are considered "active" when inside the scope
-# created by the :func:`aggregate` context manager.
-_aggregators = OrderedDict()
-_active_aggregators = OrderedDict()
-_active_aggregators_cnt = defaultdict(lambda: 0)
-
-
-def reset() -> None:
- """Reset all metrics aggregators."""
- _aggregators.clear()
- _active_aggregators.clear()
- _active_aggregators_cnt.clear()
-
- # The "default" aggregator observes all logged values.
- _aggregators["default"] = MetersDict()
- _active_aggregators["default"] = _aggregators["default"]
- _active_aggregators_cnt["default"] = 1
-
-
-reset()
-
-
-@contextlib.contextmanager
-def aggregate(name: Optional[str] = None, new_root: bool = False):
- """Context manager to aggregate metrics under a given name.
-
- Aggregations can be nested. If *new_root* is ``False``, then logged
- metrics will be recorded along the entire stack of nested
- aggregators, including a global "default" aggregator. If *new_root*
- is ``True``, then this aggregator will be the root of a new
- aggregation stack, thus bypassing any parent aggregators.
-
- Note that aggregation contexts are uniquely identified by their
- *name* (e.g., train, valid). Creating a context with an existing
- name will reuse the corresponding :class:`MetersDict` instance.
- If no name is given, then a temporary aggregator will be created.
-
- Usage::
-
- with metrics.aggregate("train"):
- for step, batch in enumerate(epoch):
- with metrics.aggregate("train_inner") as agg:
- metrics.log_scalar("loss", get_loss(batch))
- if step % log_interval == 0:
- print(agg.get_smoothed_value("loss"))
- agg.reset()
- print(metrics.get_smoothed_values("train")["loss"])
-
- Args:
- name (str): name of the aggregation. Defaults to a
- random/temporary name if not given explicitly.
- new_root (bool): make this aggregation the root of a new
- aggregation stack.
- """
- if name is None:
- # generate a temporary name
- name = str(uuid.uuid4())
- assert name not in _aggregators
- agg = MetersDict()
- else:
- assert name != "default"
- agg = _aggregators.setdefault(name, MetersDict())
-
- if new_root:
- backup_aggregators = _active_aggregators.copy()
- _active_aggregators.clear()
- backup_aggregators_cnt = _active_aggregators_cnt.copy()
- _active_aggregators_cnt.clear()
-
- _active_aggregators[name] = agg
- _active_aggregators_cnt[name] += 1
-
- yield agg
-
- _active_aggregators_cnt[name] -= 1
- if _active_aggregators_cnt[name] == 0 and name in _active_aggregators:
- del _active_aggregators[name]
-
- if new_root:
- _active_aggregators.clear()
- _active_aggregators.update(backup_aggregators)
- _active_aggregators_cnt.clear()
- _active_aggregators_cnt.update(backup_aggregators_cnt)
-
-
-def get_active_aggregators() -> List[MetersDict]:
- return list(_active_aggregators.values())
-
-
-def log_scalar(
- key: str,
- value: float,
- weight: float = 1,
- priority: int = 10,
- round: Optional[int] = None,
-):
- """Log a scalar value.
-
- Args:
- key (str): name of the field to log
- value (float): value to log
- weight (float): weight that this value contributes to the average.
- A weight of 0 will always log the latest value.
- priority (int): smaller values are logged earlier in the output
- round (Optional[int]): number of digits to round to when displaying
- """
- for agg in get_active_aggregators():
- if key not in agg:
- agg.add_meter(key, AverageMeter(round=round), priority)
- agg[key].update(value, weight)
-
-def log_scalar_sum(
- key: str,
- value: float,
- priority: int = 10,
- round: Optional[int] = None,
-):
- """Log a scalar value that is summed for reporting.
-
- Args:
- key (str): name of the field to log
- value (float): value to log
- priority (int): smaller values are logged earlier in the output
- round (Optional[int]): number of digits to round to when displaying
- """
- for agg in get_active_aggregators():
- if key not in agg:
- agg.add_meter(key, SumMeter(round=round), priority)
- agg[key].update(value)
-
-
-def log_derived(key: str, fn: Callable[[MetersDict], float], priority: int = 20):
- """Log a scalar value derived from other meters.
-
- Args:
- key (str): name of the field to log
- fn (Callable[[MetersDict], float]): function that takes a single
- argument *meters* and returns the derived value
- priority (int): smaller values are logged earlier in the output
- """
- for agg in get_active_aggregators():
- if key not in agg:
- agg.add_meter(key, MetersDict._DerivedMeter(fn), priority)
-
-
-def log_speed(
- key: str,
- value: float,
- priority: int = 30,
- round: Optional[int] = None,
-):
- """Log the rate of some quantity per second.
-
- Args:
- key (str): name of the field to log
- value (float): value to log
- priority (int): smaller values are logged earlier in the output
- round (Optional[int]): number of digits to round to when displaying
- """
- for agg in get_active_aggregators():
- if key not in agg:
- agg.add_meter(key, TimeMeter(round=round), priority)
- agg[key].reset() # reset meter on the first call
- else:
- agg[key].update(value)
-
-
-def log_start_time(key: str, priority: int = 40, round: Optional[int] = None):
- """Log the duration of some event in seconds.
-
- The duration will be computed once :func:`log_stop_time` is called.
-
- Args:
- key (str): name of the field to log
- priority (int): smaller values are logged earlier in the output
- round (Optional[int]): number of digits to round to when displaying
- """
- for agg in get_active_aggregators():
- if key not in agg:
- agg.add_meter(key, StopwatchMeter(round=round), priority)
- agg[key].start()
-
-
-def log_stop_time(key: str, weight: float = 0.0, prehook=None):
- """Log the duration of some event in seconds.
-
- The duration will be computed since :func:`log_start_time` was called.
- Set weight > 0 to report the average time instead of the sum.
-
- Args:
- key (str): name of the field to log
- weight (float): weight that this time contributes to the average
- prehook (function, no arguments): will be called before the timer
- is stopped. For example, use prehook=torch.cuda.synchronize to
- make sure all gpu operations are done before timer is stopped.
- """
- for agg in get_active_aggregators():
- if key in agg:
- agg[key].stop(weight, prehook)
-
-
-def log_custom(
- new_meter_fn: Callable[[], Meter],
- key: str,
- *args,
- priority: int = 50,
- **kwargs,
-):
- """Log using a custom Meter.
-
- Any extra *args* or *kwargs* will be passed through to the Meter's
- *update* method.
-
- Args:
- new_meter_fn (Callable[[], Meter]): function that returns a new
- Meter instance
- key (str): name of the field to log
- priority (int): smaller values are logged earlier in the output
- """
- for agg in get_active_aggregators():
- if key not in agg:
- agg.add_meter(key, new_meter_fn(), priority)
- agg[key].update(*args, **kwargs)
-
-
-def reset_meter(name: str, key: str) -> None:
- """Reset Meter instance aggregated under a given *name* and *key*."""
- meter = get_meter(name, key)
- if meter is not None:
- meter.reset()
-
-
-def reset_meters(name: str) -> None:
- """Reset Meter instances aggregated under a given *name*."""
- meters = get_meters(name)
- if meters is not None:
- meters.reset()
-
-
-def get_meter(name: str, key: str) -> Meter:
- """Get a single Meter instance aggregated under *name* and *key*.
-
- Returns:
- Meter or None if no metrics have been logged under *name* and *key*.
- """
- if name not in _aggregators:
- return None
- return _aggregators[name].get(key, None)
-
-
-def get_meters(name: str) -> MetersDict:
- """Get Meter instances aggregated under a given *name*.
-
- Returns:
- MetersDict or None if no metrics have been logged under *name*.
- """
- return _aggregators.get(name, None)
-
-
-def get_smoothed_value(name: str, key: str) -> float:
- """Get a single smoothed value.
-
- Raises:
- KeyError: if no metrics have been logged under *name* and *key*.
- """
- return _aggregators[name].get_smoothed_value(key)
-
-
-def get_smoothed_values(name: str) -> Dict[str, float]:
- """Get smoothed values aggregated under a given *name*.
-
- Raises:
- KeyError: if no metrics have been logged under *name*.
- """
- return _aggregators[name].get_smoothed_values()
-
-
-def state_dict():
- return OrderedDict([(name, agg.state_dict()) for name, agg in _aggregators.items()])
-
-
-def load_state_dict(state_dict):
- for name, agg_state in state_dict.items():
- _aggregators[name] = MetersDict()
- _aggregators[name].load_state_dict(agg_state)
-
-
-def xla_metrics_report():
- try:
- import torch_xla.debug.metrics as met
- print(met.metrics_report())
- except ImportError:
- return
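The `aggregate` docstring already shows the scalar-logging pattern; `log_derived` is not exemplified anywhere, so a small sketch follows (the exp-of-loss "perplexity" here is only an illustration, not fairseq's canonical definition):

```python
import math

from fairseq.logging import metrics

with metrics.aggregate("valid") as agg:
    metrics.log_scalar("nll_loss", 2.0, weight=1, round=3)
    metrics.log_derived("ppl", lambda meters: math.exp(meters["nll_loss"].avg))
    print(agg.get_smoothed_values())  # e.g. {'nll_loss': 2.0, 'ppl': 7.39...}
```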
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/hf_bert_bpe.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/hf_bert_bpe.py
deleted file mode 100644
index a41c059343ec7e2914b2c9d2f53f526c33f9659d..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/hf_bert_bpe.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-from typing import Optional
-
-from fairseq.data.encoders import register_bpe
-from fairseq.dataclass import FairseqDataclass
-
-
-@dataclass
-class BertBPEConfig(FairseqDataclass):
- bpe_cased: bool = field(default=False, metadata={"help": "set for cased BPE"})
- bpe_vocab_file: Optional[str] = field(
- default=None, metadata={"help": "bpe vocab file"}
- )
-
-
-@register_bpe("bert", dataclass=BertBPEConfig)
-class BertBPE(object):
- def __init__(self, cfg):
- try:
- from transformers import BertTokenizer
- except ImportError:
- raise ImportError(
- "Please install transformers with: pip install transformers"
- )
-
- if cfg.bpe_vocab_file:
- self.bert_tokenizer = BertTokenizer(
- cfg.bpe_vocab_file, do_lower_case=not cfg.bpe_cased
- )
- else:
- vocab_file_name = (
- "bert-base-cased" if cfg.bpe_cased else "bert-base-uncased"
- )
- self.bert_tokenizer = BertTokenizer.from_pretrained(vocab_file_name)
-
- def encode(self, x: str) -> str:
- return " ".join(self.bert_tokenizer.tokenize(x))
-
- def decode(self, x: str) -> str:
- return self.bert_tokenizer.clean_up_tokenization(
- self.bert_tokenizer.convert_tokens_to_string(x.split(" "))
- )
-
- def is_beginning_of_word(self, x: str) -> bool:
- return not x.startswith("##")
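A hypothetical round trip with the registered "bert" BPE; a plain `Namespace` stands in for the dataclass config, and the first call downloads the `bert-base-uncased` vocabulary via `transformers`:

```python
from argparse import Namespace

bpe = BertBPE(Namespace(bpe_cased=False, bpe_vocab_file=None))

pieces = bpe.encode("Byte pair encoding works.")
print(pieces)                              # space-joined WordPiece tokens
print(bpe.decode(pieces))                  # "byte pair encoding works."
print(bpe.is_beginning_of_word("##ing"))   # False: a continuation piece
```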
diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/cleaners.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/cleaners.py
deleted file mode 100644
index 263df9c0f7c185290600454abfff464e7f774576..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/text/cleaners.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import re
-from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3
-from text.korean import latin_to_hangul, number_to_hangul, divide_hangul, korean_to_lazy_ipa, korean_to_ipa
-from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo, chinese_to_romaji, chinese_to_lazy_ipa, chinese_to_ipa, chinese_to_ipa2
-from text.sanskrit import devanagari_to_ipa
-from text.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2
-from text.thai import num_to_thai, latin_to_thai
-# from text.shanghainese import shanghainese_to_ipa
-# from text.cantonese import cantonese_to_ipa
-# from text.ngu_dialect import ngu_dialect_to_ipa
-
-
-def japanese_cleaners(text):
- text = japanese_to_romaji_with_accent(text)
- text = re.sub(r'([A-Za-z])$', r'\1.', text)
- return text
-
-
-def japanese_cleaners2(text):
- return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…')
-
-
-def korean_cleaners(text):
- '''Pipeline for Korean text'''
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text = divide_hangul(text)
- text = re.sub(r'([\u3131-\u3163])$', r'\1.', text)
- return text
-
-
-# def chinese_cleaners(text):
-# '''Pipeline for Chinese text'''
-# text = number_to_chinese(text)
-# text = chinese_to_bopomofo(text)
-# text = latin_to_bopomofo(text)
-# text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text)
-# return text
-
-def chinese_cleaners(text):
- from pypinyin import Style, pinyin
- text = text.replace("[ZH]", "")
- phones = [phone[0] for phone in pinyin(text, style=Style.TONE3)]
- return ' '.join(phones)
-
-
-def zh_ja_mixture_cleaners(text):
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_romaji(x.group(1))+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent(
- x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def sanskrit_cleaners(text):
- text = text.replace('॥', '।').replace('ॐ', 'ओम्')
- text = re.sub(r'([^।])$', r'\1।', text)
- return text
-
-
-def cjks_cleaners(text):
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_lazy_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[SA\](.*?)\[SA\]',
- lambda x: devanagari_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_lazy_ipa(x.group(1))+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def cjke_cleaners(text):
- text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace(
- 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace(
- 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace(
- 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def cjke_cleaners2(text):
- text = re.sub(r'\[ZH\](.*?)\[ZH\]',
- lambda x: chinese_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa2(x.group(1))+' ', text)
- text = re.sub(r'\[KO\](.*?)\[KO\]',
- lambda x: korean_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_ipa2(x.group(1))+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
-
-
-def thai_cleaners(text):
- text = num_to_thai(text)
- text = latin_to_thai(text)
- return text
-
-
-# def shanghainese_cleaners(text):
-# text = shanghainese_to_ipa(text)
-# text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-# return text
-
-
-# def chinese_dialect_cleaners(text):
-# text = re.sub(r'\[ZH\](.*?)\[ZH\]',
-# lambda x: chinese_to_ipa2(x.group(1))+' ', text)
-# text = re.sub(r'\[JA\](.*?)\[JA\]',
-# lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text)
-# text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5',
-# '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text)
-# text = re.sub(r'\[GD\](.*?)\[GD\]',
-# lambda x: cantonese_to_ipa(x.group(1))+' ', text)
-# text = re.sub(r'\[EN\](.*?)\[EN\]',
-# lambda x: english_to_lazy_ipa2(x.group(1))+' ', text)
-# text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group(
-# 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text)
-# text = re.sub(r'\s+$', '', text)
-# text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-# return text
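For orientation, the mixed-language cleaners above expect every segment to be wrapped in its language tag, e.g. `[ZH]...[ZH]`, `[JA]...[JA]`, `[EN]...[EN]`. A sketch (requires the `text.*` dependencies imported at the top of this module):

```python
mixed = "[ZH]你好[ZH][JA]こんにちは[JA][EN]Hello[EN]"
print(cjke_cleaners2(mixed))   # IPA-style phoneme string, one segment per language
```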
diff --git a/spaces/JUNGU/VToonify/vtoonify/style_transfer.py b/spaces/JUNGU/VToonify/vtoonify/style_transfer.py
deleted file mode 100644
index 3e6ba13ca84dc595dfa9eb9ef85a638889d8cdd3..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/VToonify/vtoonify/style_transfer.py
+++ /dev/null
@@ -1,232 +0,0 @@
-import os
-#os.environ['CUDA_VISIBLE_DEVICES'] = "0"
-import argparse
-import numpy as np
-import cv2
-import dlib
-import torch
-from torchvision import transforms
-import torch.nn.functional as F
-from tqdm import tqdm
-from model.vtoonify import VToonify
-from model.bisenet.model import BiSeNet
-from model.encoder.align_all_parallel import align_face
-from util import save_image, load_image, visualize, load_psp_standalone, get_video_crop_parameter, tensor2cv2
-
-
-class TestOptions():
- def __init__(self):
-
- self.parser = argparse.ArgumentParser(description="Style Transfer")
- self.parser.add_argument("--content", type=str, default='./data/077436.jpg', help="path of the content image/video")
- self.parser.add_argument("--style_id", type=int, default=26, help="the id of the style image")
- self.parser.add_argument("--style_degree", type=float, default=0.5, help="style degree for VToonify-D")
- self.parser.add_argument("--color_transfer", action="store_true", help="transfer the color of the style")
- self.parser.add_argument("--ckpt", type=str, default='./checkpoint/vtoonify_d_cartoon/vtoonify_s_d.pt', help="path of the saved model")
- self.parser.add_argument("--output_path", type=str, default='./output/', help="path of the output images")
- self.parser.add_argument("--scale_image", action="store_true", help="resize and crop the image to best fit the model")
- self.parser.add_argument("--style_encoder_path", type=str, default='./checkpoint/encoder.pt', help="path of the style encoder")
- self.parser.add_argument("--exstyle_path", type=str, default=None, help="path of the extrinsic style code")
- self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model")
- self.parser.add_argument("--video", action="store_true", help="if true, video stylization; if false, image stylization")
- self.parser.add_argument("--cpu", action="store_true", help="if true, only use cpu")
- self.parser.add_argument("--backbone", type=str, default='dualstylegan', help="dualstylegan | toonify")
- self.parser.add_argument("--padding", type=int, nargs=4, default=[200,200,200,200], help="left, right, top, bottom paddings to the face center")
- self.parser.add_argument("--batch_size", type=int, default=4, help="batch size of frames when processing video")
- self.parser.add_argument("--parsing_map_path", type=str, default=None, help="path of the refined parsing map of the target video")
-
- def parse(self):
- self.opt = self.parser.parse_args()
- if self.opt.exstyle_path is None:
- self.opt.exstyle_path = os.path.join(os.path.dirname(self.opt.ckpt), 'exstyle_code.npy')
- args = vars(self.opt)
- print('Load options')
- for name, value in sorted(args.items()):
- print('%s: %s' % (str(name), str(value)))
- return self.opt
-
-if __name__ == "__main__":
-
- parser = TestOptions()
- args = parser.parse()
- print('*'*98)
-
-
- device = "cpu" if args.cpu else "cuda"
-
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- vtoonify = VToonify(backbone = args.backbone)
- vtoonify.load_state_dict(torch.load(args.ckpt, map_location=lambda storage, loc: storage)['g_ema'])
- vtoonify.to(device)
-
- parsingpredictor = BiSeNet(n_classes=19)
- parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage))
- parsingpredictor.to(device).eval()
-
- modelname = './checkpoint/shape_predictor_68_face_landmarks.dat'
- if not os.path.exists(modelname):
- import wget, bz2
- wget.download('http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2', modelname+'.bz2')
- zipfile = bz2.BZ2File(modelname+'.bz2')
- data = zipfile.read()
-        with open(modelname, 'wb') as f:
-            f.write(data)
- landmarkpredictor = dlib.shape_predictor(modelname)
-
- pspencoder = load_psp_standalone(args.style_encoder_path, device)
-
- if args.backbone == 'dualstylegan':
- exstyles = np.load(args.exstyle_path, allow_pickle='TRUE').item()
- stylename = list(exstyles.keys())[args.style_id]
- exstyle = torch.tensor(exstyles[stylename]).to(device)
- with torch.no_grad():
- exstyle = vtoonify.zplus2wplus(exstyle)
-
- if args.video and args.parsing_map_path is not None:
- x_p_hat = torch.tensor(np.load(args.parsing_map_path))
-
- print('Load models successfully!')
-
-
- filename = args.content
- basename = os.path.basename(filename).split('.')[0]
- scale = 1
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
- print('Processing ' + os.path.basename(filename) + ' with vtoonify_' + args.backbone[0])
- if args.video:
- cropname = os.path.join(args.output_path, basename + '_input.mp4')
- savename = os.path.join(args.output_path, basename + '_vtoonify_' + args.backbone[0] + '.mp4')
-
- video_cap = cv2.VideoCapture(filename)
- num = int(video_cap.get(7))
-
- first_valid_frame = True
- batch_frames = []
- for i in tqdm(range(num)):
- success, frame = video_cap.read()
-            if not success:
-                raise RuntimeError('load video frames error')
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-            # We preprocess the video by detecting the face in the first frame,
- # and resizing the frame so that the eye distance is 64 pixels.
- # Centered on the eyes, we crop the first frame to almost 400x400 (based on args.padding).
- # All other frames use the same resizing and cropping parameters as the first frame.
- if first_valid_frame:
- if args.scale_image:
- paras = get_video_crop_parameter(frame, landmarkpredictor, args.padding)
- if paras is None:
- continue
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR video, we apply gaussian blur to the frames to avoid flickers caused by bilinear downsampling
- # this can also prevent over-sharp stylization results.
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- else:
- H, W = frame.shape[0], frame.shape[1]
-
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- videoWriter = cv2.VideoWriter(cropname, fourcc, video_cap.get(5), (W, H))
- videoWriter2 = cv2.VideoWriter(savename, fourcc, video_cap.get(5), (4*W, 4*H))
-
- # For each video, we detect and align the face in the first frame for pSp to obtain the style code.
- # This style code is used for all other frames.
- with torch.no_grad():
- I = align_face(frame, landmarkpredictor)
- I = transform(I).unsqueeze(dim=0).to(device)
- s_w = pspencoder(I)
- s_w = vtoonify.zplus2wplus(s_w)
- if vtoonify.backbone == 'dualstylegan':
- if args.color_transfer:
- s_w = exstyle
- else:
- s_w[:,:7] = exstyle[:,:7]
- first_valid_frame = False
- elif args.scale_image:
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
-
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
-
- batch_frames += [transform(frame).unsqueeze(dim=0).to(device)]
-
- if len(batch_frames) == args.batch_size or (i+1) == num:
- x = torch.cat(batch_frames, dim=0)
- batch_frames = []
- with torch.no_grad():
-                    # parsing network works best on 512x512 images, so we predict parsing maps on upsampled frames
- # followed by downsampling the parsing maps
- if args.video and args.parsing_map_path is not None:
- x_p = x_p_hat[i+1-x.size(0):i+1].to(device)
- else:
- x_p = F.interpolate(parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- # we give parsing maps lower weight (1/16)
- inputs = torch.cat((x, x_p/16.), dim=1)
- # d_s has no effect when backbone is toonify
- y_tilde = vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = args.style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- for k in range(y_tilde.size(0)):
- videoWriter2.write(tensor2cv2(y_tilde[k].cpu()))
-
- videoWriter.release()
- videoWriter2.release()
- video_cap.release()
-
-
- else:
- cropname = os.path.join(args.output_path, basename + '_input.jpg')
- savename = os.path.join(args.output_path, basename + '_vtoonify_' + args.backbone[0] + '.jpg')
-
- frame = cv2.imread(filename)
-        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-
- # We detect the face in the image, and resize the image so that the eye distance is 64 pixels.
- # Centered on the eyes, we crop the image to almost 400x400 (based on args.padding).
- if args.scale_image:
- paras = get_video_crop_parameter(frame, landmarkpredictor, args.padding)
- if paras is not None:
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR image, we apply gaussian blur to it to avoid over-sharp stylization results
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
-
- with torch.no_grad():
- I = align_face(frame, landmarkpredictor)
- I = transform(I).unsqueeze(dim=0).to(device)
- s_w = pspencoder(I)
- s_w = vtoonify.zplus2wplus(s_w)
- if vtoonify.backbone == 'dualstylegan':
- if args.color_transfer:
- s_w = exstyle
- else:
- s_w[:,:7] = exstyle[:,:7]
-
- x = transform(frame).unsqueeze(dim=0).to(device)
-        # parsing network works best on 512x512 images, so we predict parsing maps on upsampled frames
- # followed by downsampling the parsing maps
- x_p = F.interpolate(parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- # we give parsing maps lower weight (1/16)
- inputs = torch.cat((x, x_p/16.), dim=1)
- # d_s has no effect when backbone is toonify
- y_tilde = vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = args.style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
-
- cv2.imwrite(cropname, cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
- save_image(y_tilde[0].cpu(), savename)
-
- print('Transfer style successfully!')
\ No newline at end of file
diff --git a/spaces/JennyS/text_generator/app.py b/spaces/JennyS/text_generator/app.py
deleted file mode 100644
index 39a779c9077966e52955e9f78f78eec465c37a16..0000000000000000000000000000000000000000
--- a/spaces/JennyS/text_generator/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/gpt2").launch()
diff --git a/spaces/JerryYou/ChatGPT-prompt-generator/README.md b/spaces/JerryYou/ChatGPT-prompt-generator/README.md
deleted file mode 100644
index 9765db2c80dd4c4b938060743922163b1718e003..0000000000000000000000000000000000000000
--- a/spaces/JerryYou/ChatGPT-prompt-generator/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChatGPT Prompt Generator
-emoji: 👨🏻🎤
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: merve/ChatGPT-prompt-generator
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KPCGD/bingo/tests/kblob.ts b/spaces/KPCGD/bingo/tests/kblob.ts
deleted file mode 100644
index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/tests/kblob.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import FormData from 'form-data'
-
-import { fetch } from '@/lib/isomorphic'
-
-const formData = new FormData()
-
-const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}}
-
-formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))
-
-
-fetch('https://bing.vcanbb.top/images/kblob',
- {
- method: 'POST',
- body: formData.getBuffer(),
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referer": "https://bing.vcanbb.top/web/index.html",
- "Referrer-Policy": "origin-when-cross-origin",
- ...formData.getHeaders()
- }
-
- }
-).then(res => res.text())
-.then(res => console.log('res', res))
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/hifigan/inference.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/hifigan/inference.py
deleted file mode 100644
index 8caf3485226d259cb2179780d09fbf71fc2d356f..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/hifigan/inference.py
+++ /dev/null
@@ -1,74 +0,0 @@
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import os
-import json
-import torch
-from utils.util import AttrDict
-from vocoder.hifigan.models import Generator
-
-generator = None # type: Generator
-output_sample_rate = None
-_device = None
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def load_model(weights_fpath, config_fpath=None, verbose=True):
- global generator, _device, output_sample_rate
-
- if verbose:
- print("Building hifigan")
-
-    if config_fpath is None:
- model_config_fpaths = list(weights_fpath.parent.rglob("*.json"))
- if len(model_config_fpaths) > 0:
- config_fpath = model_config_fpaths[0]
- else:
- config_fpath = "./vocoder/hifigan/config_16k_.json"
- with open(config_fpath) as f:
- data = f.read()
- json_config = json.loads(data)
- h = AttrDict(json_config)
- output_sample_rate = h.sampling_rate
- torch.manual_seed(h.seed)
-
- if torch.cuda.is_available():
- # _model = _model.cuda()
- _device = torch.device('cuda')
- else:
- _device = torch.device('cpu')
-
- generator = Generator(h).to(_device)
- state_dict_g = load_checkpoint(
- weights_fpath, _device
- )
- generator.load_state_dict(state_dict_g['generator'])
- generator.eval()
- generator.remove_weight_norm()
-
-
-def is_loaded():
- return generator is not None
-
-
-def infer_waveform(mel, progress_callback=None):
-
- if generator is None:
- raise Exception("Please load hifi-gan in memory before using it")
-
- mel = torch.FloatTensor(mel).to(_device)
- mel = mel.unsqueeze(0)
-
- with torch.no_grad():
- y_g_hat = generator(mel)
- audio = y_g_hat.squeeze()
- audio = audio.cpu().numpy()
-
- return audio, output_sample_rate
-
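-# A minimal usage sketch (the checkpoint path and mel file below are placeholders, not shipped with this module):
-#   from pathlib import Path
-#   import numpy as np
-#   from vocoder.hifigan import inference as hifigan
-#   hifigan.load_model(Path("saved_models/hifigan/g_hifigan.pt"))
-#   mel = np.load("example_mel.npy")            # (n_mels, T) mel spectrogram
-#   wav, sr = hifigan.infer_waveform(mel)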
diff --git a/spaces/Kreaols/ChuanhuChatGPT/modules/models/minimax.py b/spaces/Kreaols/ChuanhuChatGPT/modules/models/minimax.py
deleted file mode 100644
index 2e1b50280fd2fbc43a69caaf660a0d64beaa405b..0000000000000000000000000000000000000000
--- a/spaces/Kreaols/ChuanhuChatGPT/modules/models/minimax.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import json
-import os
-
-import colorama
-import requests
-import logging
-
-from modules.models.base_model import BaseLLMModel
-from modules.presets import STANDARD_ERROR_MSG, GENERAL_ERROR_MSG, TIMEOUT_STREAMING, TIMEOUT_ALL, i18n
-
-group_id = os.environ.get("MINIMAX_GROUP_ID", "")
-
-
-class MiniMax_Client(BaseLLMModel):
- """
- MiniMax Client
-    API reference: https://api.minimax.chat/document/guides/chat
- """
-
- def __init__(self, model_name, api_key, user_name="", system_prompt=None):
- super().__init__(model_name=model_name, user=user_name)
- self.url = f'https://api.minimax.chat/v1/text/chatcompletion?GroupId={group_id}'
- self.history = []
- self.api_key = api_key
- self.system_prompt = system_prompt
- self.headers = {
- "Authorization": f"Bearer {api_key}",
- "Content-Type": "application/json"
- }
-
- def get_answer_at_once(self):
-        # MiniMax temperature lies in (0, 1] while the base model uses [0, 2]; MiniMax 0.9 corresponds to base 1, so the value needs to be converted
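-        # e.g. base temperature 0.5 -> 0.45, 1.0 -> 0.9, 2.0 -> 1.0 on the MiniMax scale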
- temperature = self.temperature * 0.9 if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10
-
- request_body = {
- "model": self.model_name.replace('minimax-', ''),
- "temperature": temperature,
- "skip_info_mask": True,
- 'messages': [{"sender_type": "USER", "text": self.history[-1]['content']}]
- }
- if self.n_choices:
- request_body['beam_width'] = self.n_choices
- if self.system_prompt:
- request_body['prompt'] = self.system_prompt
- if self.max_generation_token:
- request_body['tokens_to_generate'] = self.max_generation_token
- if self.top_p:
- request_body['top_p'] = self.top_p
-
- response = requests.post(self.url, headers=self.headers, json=request_body)
-
- res = response.json()
- answer = res['reply']
- total_token_count = res["usage"]["total_tokens"]
- return answer, total_token_count
-
- def get_answer_stream_iter(self):
- response = self._get_response(stream=True)
- if response is not None:
- iter = self._decode_chat_response(response)
- partial_text = ""
- for i in iter:
- partial_text += i
- yield partial_text
- else:
- yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
-
- def _get_response(self, stream=False):
- minimax_api_key = self.api_key
- history = self.history
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {minimax_api_key}",
- }
-
- temperature = self.temperature * 0.9 if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10
-
- messages = []
- for msg in self.history:
- if msg['role'] == 'user':
- messages.append({"sender_type": "USER", "text": msg['content']})
- else:
- messages.append({"sender_type": "BOT", "text": msg['content']})
-
- request_body = {
- "model": self.model_name.replace('minimax-', ''),
- "temperature": temperature,
- "skip_info_mask": True,
- 'messages': messages
- }
- if self.n_choices:
- request_body['beam_width'] = self.n_choices
- if self.system_prompt:
- lines = self.system_prompt.splitlines()
- if lines[0].find(":") != -1 and len(lines[0]) < 20:
- request_body["role_meta"] = {
- "user_name": lines[0].split(":")[0],
- "bot_name": lines[0].split(":")[1]
- }
- lines.pop()
- request_body["prompt"] = "\n".join(lines)
- if self.max_generation_token:
- request_body['tokens_to_generate'] = self.max_generation_token
- else:
- request_body['tokens_to_generate'] = 512
- if self.top_p:
- request_body['top_p'] = self.top_p
-
- if stream:
- timeout = TIMEOUT_STREAMING
- request_body['stream'] = True
- request_body['use_standard_sse'] = True
- else:
- timeout = TIMEOUT_ALL
- try:
- response = requests.post(
- self.url,
- headers=headers,
- json=request_body,
- stream=stream,
- timeout=timeout,
- )
- except:
- return None
-
- return response
-
- def _decode_chat_response(self, response):
- error_msg = ""
- for chunk in response.iter_lines():
- if chunk:
- chunk = chunk.decode()
- chunk_length = len(chunk)
- print(chunk)
- try:
- chunk = json.loads(chunk[6:])
- except json.JSONDecodeError:
- print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
- error_msg += chunk
- continue
- if chunk_length > 6 and "delta" in chunk["choices"][0]:
- if "finish_reason" in chunk["choices"][0] and chunk["choices"][0]["finish_reason"] == "stop":
- self.all_token_counts.append(chunk["usage"]["total_tokens"] - sum(self.all_token_counts))
- break
- try:
- yield chunk["choices"][0]["delta"]
- except Exception as e:
- logging.error(f"Error: {e}")
- continue
- if error_msg:
- try:
- error_msg = json.loads(error_msg)
- if 'base_resp' in error_msg:
- status_code = error_msg['base_resp']['status_code']
- status_msg = error_msg['base_resp']['status_msg']
- raise Exception(f"{status_code} - {status_msg}")
- except json.JSONDecodeError:
- pass
- raise Exception(error_msg)
diff --git a/spaces/Kvikontent/QrGen/app.py b/spaces/Kvikontent/QrGen/app.py
deleted file mode 100644
index 8e652e0ae4a76f2ee8d1b82e2883326af0279f93..0000000000000000000000000000000000000000
--- a/spaces/Kvikontent/QrGen/app.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import gradio as gr
-import qrcode
-from PIL import Image
-
-def generate_qr_code(url):
- # Create a QR code instance
- qr = qrcode.QRCode(
- version=1,
- error_correction=qrcode.constants.ERROR_CORRECT_L,
- box_size=10,
- border=4,
- )
- # Add data to the QR code
- qr.add_data(url)
- qr.make(fit=True)
- # Generate the QR code image
- qr_image = qr.make_image(fill_color="black", back_color="white")
- return qr_image
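-
-# The helper also works outside Gradio; a quick sketch (the output filename is arbitrary):
-#   img = generate_qr_code("https://www.example.com")
-#   img.save("example_qr.png")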
-
-iface = gr.Interface(
- fn=generate_qr_code,
- inputs="text",
- outputs="image",
- title="QR Code Generator",
- description="Generate a QR code from a URL",
-    examples=["https://www.example.com"],
-)
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Latryna/roop/README.md b/spaces/Latryna/roop/README.md
deleted file mode 100644
index 486c40cd0c954a0c5a4832648534d96ead565b09..0000000000000000000000000000000000000000
--- a/spaces/Latryna/roop/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Roop
-emoji: 📈
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: agpl-3.0
-duplicated_from: johnhelf/roop
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/utils/general.py b/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/utils/general.py
deleted file mode 100644
index 1c8e14f56a107ec3a4269c382cfc5168ad780ffc..0000000000000000000000000000000000000000
--- a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/utils/general.py
+++ /dev/null
@@ -1,271 +0,0 @@
-import math
-import time
-
-import numpy as np
-import torch
-import torchvision
-
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
- # if new_size != img_size:
- # print(f"WARNING: --img-size {img_size:g} must be multiple of max stride {s:g}, updating to {new_size:g}")
- return new_size
-
-
-def make_divisible(x, divisor):
- # Returns x evenly divisible by divisor
- return math.ceil(x / divisor) * divisor
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
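-
-# Quick check of the conversion above: xywh2xyxy(np.array([[50., 50., 20., 10.]]))
-# gives [[40., 45., 60., 55.]] (center 50,50 with w=20, h=10 -> corners 40,45 and 60,55).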
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
-
-
-def clip_coords(boxes, img_shape):
-    # Clip xyxy bounding boxes to image shape (height, width)
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter)
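-
-# e.g. box_iou(torch.tensor([[0., 0., 10., 10.]]), torch.tensor([[5., 5., 15., 15.]])) -> [[25/175]] ~= [[0.143]]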
-
-
-def non_max_suppression_face(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, labels=()):
- """Performs Non-Maximum Suppression (NMS) on inference results
- Returns:
- detections with shape: nx6 (x1, y1, x2, y2, conf, cls)
- """
-
- nc = prediction.shape[2] - 15 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- # (pixels) maximum box width and height
- max_wh = 4096
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 16), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- label = labels[xi]
- v = torch.zeros((len(label), nc + 15), device=x.device)
- v[:, :4] = label[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(label)), label[:, 0].long() + 15] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 15:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, landmarks, cls)
- if multi_label:
- i, j = (x[:, 15:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 15, None], x[:, 5:15], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 15:].max(1, keepdim=True)
- x = torch.cat((box, conf, x[:, 5:15], j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # If none remain process next image
- n = x.shape[0] # number of boxes
- if not n:
- continue
-
- # Batched NMS
- c = x[:, 15:16] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
-
- if merge and (1 < n < 3e3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- break # time limit exceeded
-
- return output
-
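-# A rough usage sketch (the detector call and tensor shapes are assumptions, not part of this file):
-#   preds = model(img)[0]                      # raw head output, roughly (batch, num_boxes, 16) for one face class
-#   dets = non_max_suppression_face(preds, conf_thres=0.5, iou_thres=0.5)
-#   for det in dets:                           # one (n, 16) tensor per image
-#       boxes, confs, landmarks = det[:, :4], det[:, 4], det[:, 5:15]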
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, labels=()):
- """Performs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- detections with shape: nx6 (x1, y1, x2, y2, conf, cls)
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- # (pixels) maximum box width and height
- max_wh = 4096
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- label_id = labels[xi]
- v = torch.zeros((len(label_id), nc + 5), device=x.device)
- v[:, :4] = label_id[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(label_id)), label_id[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
-
- x = x[x[:, 4].argsort(descending=True)] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if merge and (1 < n < 3e3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f"WARNING: NMS time limit {time_limit}s exceeded")
- break # time limit exceeded
-
- return output
-
-
-def scale_coords_landmarks(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2, 4, 6, 8]] -= pad[0] # x padding
- coords[:, [1, 3, 5, 7, 9]] -= pad[1] # y padding
- coords[:, :10] /= gain
- coords[:, 0].clamp_(0, img0_shape[1]) # x1
- coords[:, 1].clamp_(0, img0_shape[0]) # y1
- coords[:, 2].clamp_(0, img0_shape[1]) # x2
- coords[:, 3].clamp_(0, img0_shape[0]) # y2
- coords[:, 4].clamp_(0, img0_shape[1]) # x3
- coords[:, 5].clamp_(0, img0_shape[0]) # y3
- coords[:, 6].clamp_(0, img0_shape[1]) # x4
- coords[:, 7].clamp_(0, img0_shape[0]) # y4
- coords[:, 8].clamp_(0, img0_shape[1]) # x5
- coords[:, 9].clamp_(0, img0_shape[0]) # y5
- return coords
diff --git a/spaces/LightSY/W2L-TD/facelib/parsing/resnet.py b/spaces/LightSY/W2L-TD/facelib/parsing/resnet.py
deleted file mode 100644
index fec8e82cf64469fb51be21ad5130217052addbda..0000000000000000000000000000000000000000
--- a/spaces/LightSY/W2L-TD/facelib/parsing/resnet.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- """3x3 convolution with padding"""
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
-
- def __init__(self, in_chan, out_chan, stride=1):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(in_chan, out_chan, stride)
- self.bn1 = nn.BatchNorm2d(out_chan)
- self.conv2 = conv3x3(out_chan, out_chan)
- self.bn2 = nn.BatchNorm2d(out_chan)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = None
- if in_chan != out_chan or stride != 1:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_chan, out_chan, kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(out_chan),
- )
-
- def forward(self, x):
- residual = self.conv1(x)
- residual = F.relu(self.bn1(residual))
- residual = self.conv2(residual)
- residual = self.bn2(residual)
-
- shortcut = x
- if self.downsample is not None:
- shortcut = self.downsample(x)
-
- out = shortcut + residual
- out = self.relu(out)
- return out
-
-
-def create_layer_basic(in_chan, out_chan, bnum, stride=1):
- layers = [BasicBlock(in_chan, out_chan, stride=stride)]
- for i in range(bnum - 1):
- layers.append(BasicBlock(out_chan, out_chan, stride=1))
- return nn.Sequential(*layers)
-
-
-class ResNet18(nn.Module):
-
- def __init__(self):
- super(ResNet18, self).__init__()
- self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1)
- self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2)
- self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2)
- self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2)
-
- def forward(self, x):
- x = self.conv1(x)
- x = F.relu(self.bn1(x))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- feat8 = self.layer2(x) # 1/8
- feat16 = self.layer3(feat8) # 1/16
- feat32 = self.layer4(feat16) # 1/32
- return feat8, feat16, feat32
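-
-# Shape sketch (assuming a 1x3x512x512 input): feat8 is (1, 128, 64, 64), feat16 is (1, 256, 32, 32),
-# and feat32 is (1, 512, 16, 16), i.e. strides 8, 16 and 32 relative to the input as noted above.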
diff --git a/spaces/Liu-LAB/GPT-academic/toolbox.py b/spaces/Liu-LAB/GPT-academic/toolbox.py
deleted file mode 100644
index fca5baa1f60557e3e049a97d6313aaa1a0d95424..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/toolbox.py
+++ /dev/null
@@ -1,1076 +0,0 @@
-import markdown
-import importlib
-import time
-import inspect
-import re
-import os
-import gradio
-from latex2mathml.converter import convert as tex2mathml
-from functools import wraps, lru_cache
-pj = os.path.join
-
-"""
-========================================================================
-Part 1
-Input/output plumbing for function plugins
-    - ChatBotWithCookies:   a Chatbot class that carries cookies, the basis for more powerful features
-    - ArgsGeneralWrapper:   decorator that reorganizes the order and structure of the input arguments
-    - update_ui:            refresh the UI via yield from update_ui(chatbot, history)
-    - CatchException:       surface any exception raised inside a plugin on the UI
-    - HotReload:            hot-reloading of plugins
-    - trimmed_format_exc:   print the traceback while hiding absolute paths for safety
-========================================================================
-"""
-
-class ChatBotWithCookies(list):
- def __init__(self, cookie):
- """
- cookies = {
- 'top_p': top_p,
- 'temperature': temperature,
- 'lock_plugin': bool,
- "files_to_promote": ["file1", "file2"],
- "most_recent_uploaded": {
- "path": "uploaded_path",
- "time": time.time(),
- "time_str": "timestr",
- }
- }
- """
- self._cookies = cookie
-
- def write_list(self, list):
- for t in list:
- self.append(t)
-
- def get_list(self):
- return [t for t in self]
-
- def get_cookies(self):
- return self._cookies
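-
-    # A minimal usage sketch (values are placeholders):
-    #   chatbot = ChatBotWithCookies({'api_key': '...', 'llm_model': 'gpt-3.5-turbo'})
-    #   chatbot.write_list([["hi", "hello"]])
-    #   chatbot.get_cookies()['llm_model']   # -> 'gpt-3.5-turbo'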
-
-
-def ArgsGeneralWrapper(f):
- """
-    Decorator that reorganizes the input arguments, changing their order and structure.
- """
- def decorated(request: gradio.Request, cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg, *args):
- txt_passon = txt
- if txt == "" and txt2 != "": txt_passon = txt2
-        # build a chatbot that carries cookies
- cookies.update({
- 'top_p':top_p,
- 'api_key': cookies['api_key'],
- 'llm_model': llm_model,
- 'temperature':temperature,
- })
- llm_kwargs = {
- 'api_key': cookies['api_key'],
- 'llm_model': llm_model,
- 'top_p':top_p,
- 'max_length': max_length,
- 'temperature':temperature,
- 'client_ip': request.client.host,
- }
- plugin_kwargs = {
- "advanced_arg": plugin_advanced_arg,
- }
- chatbot_with_cookie = ChatBotWithCookies(cookies)
- chatbot_with_cookie.write_list(chatbot)
- if cookies.get('lock_plugin', None) is None:
-            # normal state
- yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
- else:
-            # handle the locked state of a few special plugins
- module, fn_name = cookies['lock_plugin'].split('->')
- f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
- yield from f_hot_reload(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, request)
- return decorated
-
-
-def update_ui(chatbot, history, msg='正常', **kwargs):  # refresh the UI
-    """
-    Refresh the user interface.
- """
- assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时, 可用clear将其清空, 然后用for+append循环重新赋值。"
- cookies = chatbot.get_cookies()
-
-    # fix the UI display while a plugin is locked
- if cookies.get('lock_plugin', None):
- label = cookies.get('llm_model', "") + " | " + "正在锁定插件" + cookies.get('lock_plugin', None)
- chatbot_gr = gradio.update(value=chatbot, label=label)
- if cookies.get('label', "") != label: cookies['label'] = label # 记住当前的label
- elif cookies.get('label', None):
- chatbot_gr = gradio.update(value=chatbot, label=cookies.get('llm_model', ""))
-        cookies['label'] = None      # clear the label
- else:
- chatbot_gr = chatbot
-
- yield cookies, chatbot_gr, history, msg
-
-def update_ui_lastest_msg(lastmsg, chatbot, history, delay=1):  # refresh the UI
-    """
-    Refresh the user interface, replacing the latest message.
- """
- if len(chatbot) == 0: chatbot.append(["update_ui_last_msg", lastmsg])
- chatbot[-1] = list(chatbot[-1])
- chatbot[-1][-1] = lastmsg
- yield from update_ui(chatbot=chatbot, history=history)
- time.sleep(delay)
-
-
-def trimmed_format_exc():
- import os, traceback
- str = traceback.format_exc()
- current_path = os.getcwd()
- replace_path = "."
- return str.replace(current_path, replace_path)
-
-def CatchException(f):
- """
-    Decorator that catches exceptions raised inside f, wraps them in a generator, and shows them in the chat.
- """
-
- @wraps(f)
- def decorated(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs):
- try:
- yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)
- except Exception as e:
- from check_proxy import check_proxy
- from toolbox import get_conf
- proxies, = get_conf('proxies')
- tb_str = '```\n' + trimmed_format_exc() + '```'
- if len(chatbot_with_cookie) == 0:
- chatbot_with_cookie.clear()
- chatbot_with_cookie.append(["插件调度异常", "异常原因"])
- chatbot_with_cookie[-1] = (chatbot_with_cookie[-1][0],
- f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
-            yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常 {e}') # refresh the UI
- return decorated
-
-
-def HotReload(f):
- """
-    Decorator that implements hot-reloading of Python function plugins.
-    Hot-reloading means updating the function's code without stopping the running program.
-    Inside the decorator, wraps(f) preserves the function's metadata and an inner function named decorated is defined.
-    The inner function reloads the module via importlib.reload and inspect.getmodule,
-    fetches the function from the freshly reloaded module with getattr,
-    and finally delegates to the reloaded function using yield from.
-    The decorator returns this inner function, so every call runs the latest version of the plugin.
- """
- @wraps(f)
- def decorated(*args, **kwargs):
- fn_name = f.__name__
- f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name)
- yield from f_hot_reload(*args, **kwargs)
- return decorated
-
-
-"""
-========================================================================
-Part 2
-Other utilities:
-    - write_results_to_file:    write results to a markdown file
-    - regular_txt_to_markdown:  convert plain text into Markdown-formatted text
-    - report_execption:         append a simple unexpected-error message to the chatbot
-    - text_divide_paragraph:    split text on paragraph separators and emit HTML with paragraph tags
-    - markdown_convertion:      combine several passes to turn markdown into nice-looking html
-    - format_io:                take over gradio's default markdown handling
-    - on_file_uploaded:         handle file uploads (with automatic extraction)
-    - on_report_generated:      push generated reports to the file-upload area
-    - clip_history:             automatically truncate the history when the context grows too long
-    - get_conf:                 read a configuration value
-    - select_api_key:           pick a usable api-key for the current model type
-========================================================================
-"""
-
-def get_reduce_token_percent(text):
- """
-    * This function will be deprecated in the future.
- """
- try:
- # text = "maximum context length is 4097 tokens. However, your messages resulted in 4870 tokens"
- pattern = r"(\d+)\s+tokens\b"
- match = re.findall(pattern, text)
-        EXCEED_ALLO = 500 # leave some headroom, otherwise the reply will fail because too little room is left
- max_limit = float(match[0]) - EXCEED_ALLO
- current_tokens = float(match[1])
- ratio = max_limit/current_tokens
- assert ratio > 0 and ratio < 1
- return ratio, str(int(current_tokens-max_limit))
- except:
- return 0.5, '不详'
-
-
-def write_results_to_file(history, file_name=None):
- """
-    Write the conversation history to a file in Markdown format. If no file name is given, generate one from the current time.
- """
- import os
- import time
- if file_name is None:
- # file_name = time.strftime("chatGPT分析报告%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md'
- file_name = 'GPT-Report-' + gen_time_str() + '.md'
- os.makedirs('./gpt_log/', exist_ok=True)
- with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
- f.write('# GPT-Academic Report\n')
- for i, content in enumerate(history):
- try:
- if type(content) != str: content = str(content)
- except:
- continue
- if i % 2 == 0:
- f.write('## ')
- try:
- f.write(content)
- except:
- # remove everything that cannot be handled by utf8
- f.write(content.encode('utf-8', 'ignore').decode())
- f.write('\n\n')
- res = '以上材料已经被写入:\t' + os.path.abspath(f'./gpt_log/{file_name}')
- print(res)
- return res
-
-
-def write_history_to_file(history, file_basename=None, file_fullname=None):
- """
-    Write the conversation history to a file in Markdown format. If no file name is given, generate one from the current time.
- """
- import os
- import time
- if file_fullname is None:
- if file_basename is not None:
- file_fullname = os.path.join(get_log_folder(), file_basename)
- else:
- file_fullname = os.path.join(get_log_folder(), f'GPT-Academic-{gen_time_str()}.md')
- os.makedirs(os.path.dirname(file_fullname), exist_ok=True)
- with open(file_fullname, 'w', encoding='utf8') as f:
- f.write('# GPT-Academic Report\n')
- for i, content in enumerate(history):
- try:
- if type(content) != str: content = str(content)
- except:
- continue
- if i % 2 == 0:
- f.write('## ')
- try:
- f.write(content)
- except:
- # remove everything that cannot be handled by utf8
- f.write(content.encode('utf-8', 'ignore').decode())
- f.write('\n\n')
- res = os.path.abspath(file_fullname)
- return res
-
-
-def regular_txt_to_markdown(text):
- """
-    Convert plain text into Markdown-formatted text.
- """
- text = text.replace('\n', '\n\n')
- text = text.replace('\n\n\n', '\n\n')
- text = text.replace('\n\n\n', '\n\n')
- return text
-
-
-
-
-def report_execption(chatbot, history, a, b):
- """
-    Append an error report to the chatbot.
- """
- chatbot.append((a, b))
- history.append(a)
- history.append(b)
-
-
-def text_divide_paragraph(text):
- """
-    Split the text on paragraph separators and generate HTML with paragraph tags.
- """
-    pre = '<div class="markdown-body">'
-    suf = '</div>'
- if text.startswith(pre) and text.endswith(suf):
- return text
-
- if '```' in text:
- # careful input
- return pre + text + suf
- else:
- # wtf input
- lines = text.split("\n")
- for i, line in enumerate(lines):
-            lines[i] = lines[i].replace(" ", "&nbsp;")
-        text = "</br>".join(lines)
- return pre + text + suf
-
-@lru_cache(maxsize=128) # use an lru cache to speed up the conversion
-def markdown_convertion(txt):
-    """
-    Convert Markdown text to HTML. If it contains math formulas, convert the formulas to HTML first.
- """
-    pre = '<div class="markdown-body">'
-    suf = '</div>'
-    if txt.startswith(pre) and txt.endswith(suf):
-        # print('Warning: the input string has already been converted; converting it again may cause problems')
-        return txt # already converted, no need to convert it again
-
- markdown_extension_configs = {
- 'mdx_math': {
- 'enable_dollar_delimiter': True,
- 'use_gitlab_delimiters': False,
- },
- }
-    find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
-
-    def markdown_bug_hunt(content):
-        """
-        Work around an mdx_math quirk that emits a redundant nested <script> tag
-        when a begin-command is wrapped in single $ delimiters.
-        """
-        content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">',
-                                  '<script type="math/tex; mode=display">')
-        content = content.replace('</script>\n</script>', '</script>')
-        return content
-
- def no_code(txt):
- if '```' not in txt:
- return True
- else:
- if '```reference' in txt: return True # newbing
- else: return False
-
-    if ('$' in txt) and no_code(txt):  # contains $-delimited math and no ``` code fences
- # convert everything to html format
- split = markdown.markdown(text='---')
- convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
- convert_stage_1 = markdown_bug_hunt(convert_stage_1)
- # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s).
- # 1. convert to easy-to-copy tex (do not render math)
- convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
- # 2. convert to rendered equation
- convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
- # cat them together
- return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
- else:
- return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf
-
-
-def close_up_code_segment_during_stream(gpt_reply):
- """
-    While gpt is emitting a code block and has produced the opening ``` but not yet the closing ```, append the missing closing ```.
-
-    Args:
-        gpt_reply (str): the reply string returned by the GPT model.
-
-    Returns:
-        str: a new string with the missing closing ``` appended to the code segment.
-
- """
- if '```' not in gpt_reply:
- return gpt_reply
- if gpt_reply.endswith('```'):
- return gpt_reply
-
-    # having ruled out the two cases above, count the ``` markers to see whether a block is left open
- segments = gpt_reply.split('```')
- n_mark = len(segments) - 1
- if n_mark % 2 == 1:
-        # print('emitting a code segment!')
- return gpt_reply+'\n```'
- else:
- return gpt_reply
-
-
-def format_io(self, y):
- """
-    Parse the input and output into HTML. Paragraphize the input part of the last item in y, and convert the Markdown and math in the output part to HTML.
- """
- if y is None or y == []:
- return []
- i_ask, gpt_reply = y[-1]
-    # the input part is free-form, so preprocess it a little
-    if i_ask is not None: i_ask = text_divide_paragraph(i_ask)
-    # when a code block has only been half emitted, try to append the closing ```
- if gpt_reply is not None: gpt_reply = close_up_code_segment_during_stream(gpt_reply)
- # process
- y[-1] = (
- None if i_ask is None else markdown.markdown(i_ask, extensions=['fenced_code', 'tables']),
- None if gpt_reply is None else markdown_convertion(gpt_reply)
- )
- return y
-
-
-def find_free_port():
- """
-    Return an unused port that is currently available on this system.
- """
- import socket
- from contextlib import closing
- with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
- s.bind(('', 0))
- s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
- return s.getsockname()[1]
-
-
-def extract_archive(file_path, dest_dir):
- import zipfile
- import tarfile
- import os
- # Get the file extension of the input file
- file_extension = os.path.splitext(file_path)[1]
-
- # Extract the archive based on its extension
- if file_extension == '.zip':
- with zipfile.ZipFile(file_path, 'r') as zipobj:
- zipobj.extractall(path=dest_dir)
- print("Successfully extracted zip archive to {}".format(dest_dir))
-
- elif file_extension in ['.tar', '.gz', '.bz2']:
- with tarfile.open(file_path, 'r:*') as tarobj:
- tarobj.extractall(path=dest_dir)
- print("Successfully extracted tar archive to {}".format(dest_dir))
-
-    # third-party library, requires pip install rarfile beforehand
-    # on Windows you additionally need WinRAR installed and its directory, e.g. "C:\Program Files\WinRAR", added to the Path environment variable
- elif file_extension == '.rar':
- try:
- import rarfile
- with rarfile.RarFile(file_path) as rf:
- rf.extractall(path=dest_dir)
- print("Successfully extracted rar archive to {}".format(dest_dir))
- except:
- print("Rar format requires additional dependencies to install")
- return '\n\n解压失败! 需要安装pip install rarfile来解压rar文件'
-
-    # third-party library, requires pip install py7zr beforehand
- elif file_extension == '.7z':
- try:
- import py7zr
- with py7zr.SevenZipFile(file_path, mode='r') as f:
- f.extractall(path=dest_dir)
- print("Successfully extracted 7z archive to {}".format(dest_dir))
- except:
- print("7z format requires additional dependencies to install")
- return '\n\n解压失败! 需要安装pip install py7zr来解压7z文件'
- else:
- return ''
- return ''
-
-
-def find_recent_files(directory):
- """
- me: find files that is created with in one minutes under a directory with python, write a function
- gpt: here it is!
- """
- import os
- import time
- current_time = time.time()
- one_minute_ago = current_time - 60
- recent_files = []
- if not os.path.exists(directory):
- os.makedirs(directory, exist_ok=True)
- for filename in os.listdir(directory):
- file_path = os.path.join(directory, filename)
- if file_path.endswith('.log'):
- continue
- created_time = os.path.getmtime(file_path)
- if created_time >= one_minute_ago:
- if os.path.isdir(file_path):
- continue
- recent_files.append(file_path)
-
- return recent_files
-
-def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
-    # copy the file into the download area
-    import shutil
-    if rename_file is None: rename_file = f'{gen_time_str()}-{os.path.basename(file)}'
-    new_path = os.path.join(get_log_folder(), rename_file)
-    # if it already exists, delete it first
-    if os.path.exists(new_path) and not os.path.samefile(new_path, file): os.remove(new_path)
-    # copy the file over
-    if not os.path.exists(new_path): shutil.copyfile(file, new_path)
-    # record the file in the chatbot cookie to avoid interference between users
- if chatbot:
- if 'files_to_promote' in chatbot._cookies: current = chatbot._cookies['files_to_promote']
- else: current = []
- chatbot._cookies.update({'files_to_promote': [new_path] + current})
-
-def disable_auto_promotion(chatbot):
- chatbot._cookies.update({'files_to_promote': []})
- return
-
-def on_file_uploaded(files, chatbot, txt, txt2, checkboxes, cookies):
- """
-    Callback invoked when files are uploaded.
- """
- if len(files) == 0:
- return chatbot, txt
- import shutil
- import os
- import time
- import glob
- from toolbox import extract_archive
- try:
- shutil.rmtree('./private_upload/')
- except:
- pass
- time_tag = gen_time_str()
- os.makedirs(f'private_upload/{time_tag}', exist_ok=True)
- err_msg = ''
- for file in files:
- file_origin_name = os.path.basename(file.orig_name)
- shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
- err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
- dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
- moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
- if "底部输入区" in checkboxes:
- txt = ""
- txt2 = f'private_upload/{time_tag}'
- else:
- txt = f'private_upload/{time_tag}'
- txt2 = ""
- moved_files_str = '\t\n\n'.join(moved_files)
- chatbot.append(['我上传了文件,请查收',
- f'[Local Message] 收到以下文件: \n\n{moved_files_str}' +
- f'\n\n调用路径参数已自动修正到: \n\n{txt}' +
- f'\n\n现在您点击任意“红颜色”标识的函数插件时,以上文件将被作为输入参数'+err_msg])
- cookies.update({
- 'most_recent_uploaded': {
- 'path': f'private_upload/{time_tag}',
- 'time': time.time(),
- 'time_str': time_tag
- }})
- return chatbot, txt, txt2, cookies
-
-
-def on_report_generated(cookies, files, chatbot):
- from toolbox import find_recent_files
- if 'files_to_promote' in cookies:
- report_files = cookies['files_to_promote']
- cookies.pop('files_to_promote')
- else:
- report_files = find_recent_files('gpt_log')
- if len(report_files) == 0:
- return cookies, None, chatbot
- # files.extend(report_files)
- file_links = ''
- for f in report_files: file_links += f' {f}'
- chatbot.append(['报告如何远程获取?', f'报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。{file_links}'])
- return cookies, report_files, chatbot
-
-def load_chat_cookies():
- API_KEY, LLM_MODEL, AZURE_API_KEY = get_conf('API_KEY', 'LLM_MODEL', 'AZURE_API_KEY')
- if is_any_api_key(AZURE_API_KEY):
- if is_any_api_key(API_KEY): API_KEY = API_KEY + ',' + AZURE_API_KEY
- else: API_KEY = AZURE_API_KEY
- return {'api_key': API_KEY, 'llm_model': LLM_MODEL}
-
-def is_openai_api_key(key):
- CUSTOM_API_KEY_PATTERN, = get_conf('CUSTOM_API_KEY_PATTERN')
- if len(CUSTOM_API_KEY_PATTERN) != 0:
- API_MATCH_ORIGINAL = re.match(CUSTOM_API_KEY_PATTERN, key)
- else:
- API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
- return bool(API_MATCH_ORIGINAL)
-
-def is_azure_api_key(key):
- API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{32}$", key)
- return bool(API_MATCH_AZURE)
-
-def is_api2d_key(key):
- API_MATCH_API2D = re.match(r"fk[a-zA-Z0-9]{6}-[a-zA-Z0-9]{32}$", key)
- return bool(API_MATCH_API2D)
-
-def is_freeai_api_key(key):#new add
- API_MATCH_FREEAI0 = re.match(r"pk-[a-zA-Z0-9-_]{43}$", key)
- API_MATCH_FREEAI1 = re.match(r"fk-[a-zA-Z0-9-_]{43}$", key)
- return bool(API_MATCH_FREEAI0) or bool(API_MATCH_FREEAI1)
-
-def is_any_api_key(key):
- if ',' in key:
- keys = key.split(',')
- for k in keys:
- if is_any_api_key(k): return True
- return False
- else:#new add
- return is_openai_api_key(key) or is_api2d_key(key) or is_azure_api_key(key) or is_freeai_api_key(key)
-
-def what_keys(keys):
- # new add
- avail_key_list = {'OpenAI Key':0, "Azure Key":0, "API2D Key":0, "FreeAI Key":0}
-
- key_list = keys.split(',')
-
- for k in key_list:
- if is_openai_api_key(k):
- avail_key_list['OpenAI Key'] += 1
-
- for k in key_list:
- if is_api2d_key(k):
- avail_key_list['API2D Key'] += 1
-
- for k in key_list:
- if is_azure_api_key(k):
- avail_key_list['Azure Key'] += 1
-
- for k in key_list: # new add
- if is_freeai_api_key(k):
- avail_key_list['FreeAI Key'] += 1
-
- # new add
- return f"检测到: OpenAI Key {avail_key_list['OpenAI Key']} 个, Azure Key {avail_key_list['Azure Key']} 个, API2D Key {avail_key_list['API2D Key']} 个, FreeAI Key {avail_key_list['FreeAI Key']} 个"
-
-def select_api_key(keys, llm_model):
- import random
- avail_key_list = []
- key_list = keys.split(',')
-
- if llm_model.startswith('gpt-'):
- for k in key_list:
- if is_openai_api_key(k): avail_key_list.append(k)
- for k in key_list:# new add
- if is_freeai_api_key(k): avail_key_list.append(k)
-
- if llm_model.startswith('api2d-'):
- for k in key_list:
- if is_api2d_key(k): avail_key_list.append(k)
-
- if llm_model.startswith('azure-'):
- for k in key_list:
- if is_azure_api_key(k): avail_key_list.append(k)
-
- if len(avail_key_list) == 0:
- raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源(右下角更换模型菜单中可切换openai,azure,claude,api2d等请求源)。")
-
-    api_key = random.choice(avail_key_list) # random load balancing
- return api_key
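-
-# Usage sketch: with keys like "<openai-format-key>,<api2d-format-key>", a 'gpt-*' model picks a random
-# OpenAI-format key, an 'api2d-*' model picks an API2D-format key, and an 'azure-*' model an Azure-format key.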
-
-def read_env_variable(arg, default_value):
- """
-    The environment variable can be `GPT_ACADEMIC_CONFIG` (preferred) or plain `CONFIG`.
-    For example, in a Windows cmd prompt you can either write:
- set USE_PROXY=True
- set API_KEY=sk-j7caBpkRoxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- set proxies={"http":"http://127.0.0.1:10085", "https":"http://127.0.0.1:10085",}
- set AVAIL_LLM_MODELS=["gpt-3.5-turbo", "chatglm"]
- set AUTHENTICATION=[("username", "password"), ("username2", "password2")]
-    or write:
- set GPT_ACADEMIC_USE_PROXY=True
- set GPT_ACADEMIC_API_KEY=sk-j7caBpkRoxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- set GPT_ACADEMIC_proxies={"http":"http://127.0.0.1:10085", "https":"http://127.0.0.1:10085",}
- set GPT_ACADEMIC_AVAIL_LLM_MODELS=["gpt-3.5-turbo", "chatglm"]
- set GPT_ACADEMIC_AUTHENTICATION=[("username", "password"), ("username2", "password2")]
- """
- from colorful import print亮红, print亮绿
- arg_with_prefix = "GPT_ACADEMIC_" + arg
- if arg_with_prefix in os.environ:
- env_arg = os.environ[arg_with_prefix]
- elif arg in os.environ:
- env_arg = os.environ[arg]
- else:
- raise KeyError
- print(f"[ENV_VAR] 尝试加载{arg},默认值:{default_value} --> 修正值:{env_arg}")
- try:
- if isinstance(default_value, bool):
- env_arg = env_arg.strip()
- if env_arg == 'True': r = True
- elif env_arg == 'False': r = False
-            else: print('expected True or False, but got:', env_arg); r = default_value
- elif isinstance(default_value, int):
- r = int(env_arg)
- elif isinstance(default_value, float):
- r = float(env_arg)
- elif isinstance(default_value, str):
- r = env_arg.strip()
- elif isinstance(default_value, dict):
- r = eval(env_arg)
- elif isinstance(default_value, list):
- r = eval(env_arg)
- elif default_value is None:
- assert arg == "proxies"
- r = eval(env_arg)
- else:
- print亮红(f"[ENV_VAR] 环境变量{arg}不支持通过环境变量设置! ")
- raise KeyError
- except:
- print亮红(f"[ENV_VAR] 环境变量{arg}加载失败! ")
- raise KeyError(f"[ENV_VAR] 环境变量{arg}加载失败! ")
-
- print亮绿(f"[ENV_VAR] 成功读取环境变量{arg}")
- return r
-
-@lru_cache(maxsize=128)
-def read_single_conf_with_lru_cache(arg):
- from colorful import print亮红, print亮绿, print亮蓝
- try:
-        # priority 1: read the configuration from environment variables
-        default_ref = getattr(importlib.import_module('config'), arg)   # read the default value as a reference for type conversion
-        r = read_env_variable(arg, default_ref)
-    except:
-        try:
-            # priority 2: read the configuration from config_private
-            r = getattr(importlib.import_module('config_private'), arg)
-        except:
-            # priority 3: read the configuration from config
-            r = getattr(importlib.import_module('config'), arg)
-
-    # when reading API_KEY, check whether the user forgot to edit config
- if arg == 'API_KEY':
- print亮蓝(f"[API_KEY] 本项目现已支持OpenAI和Azure的api-key。也支持同时填写多个api-key,如API_KEY=\"openai-key1,openai-key2,azure-key3\"")
- print亮蓝(f"[API_KEY] 您既可以在config.py中修改api-key(s),也可以在问题输入区输入临时的api-key(s),然后回车键提交后即可生效。")
- if is_any_api_key(r):
- print亮绿(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功")
- else:
- print亮红( "[API_KEY] 您的 API_KEY 不满足任何一种已知的密钥格式,请在config文件中修改API密钥之后再运行。")
- if arg == 'proxies':
-        if not read_single_conf_with_lru_cache('USE_PROXY'): r = None # check USE_PROXY so that proxies cannot take effect on its own
- if r is None:
- print亮红('[PROXY] 网络代理状态:未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议:检查USE_PROXY选项是否修改。')
- else:
- print亮绿('[PROXY] 网络代理状态:已配置。配置信息如下:', r)
- assert isinstance(r, dict), 'proxies格式错误,请注意proxies选项的格式,不要遗漏括号。'
- return r
-
-
-@lru_cache(maxsize=128)
-def get_conf(*args):
-    # it is recommended to keep your secrets, such as API keys and proxy URLs, in a copy named config_private.py so they are not accidentally pushed to github
- res = []
- for arg in args:
- r = read_single_conf_with_lru_cache(arg)
- res.append(r)
- return res
-
-
-def clear_line_break(txt):
- txt = txt.replace('\n', ' ')
- txt = txt.replace(' ', ' ')
- txt = txt.replace(' ', ' ')
- return txt
-
-
-class DummyWith():
- """
-    This defines an empty context manager named DummyWith.
-    Its job is... well... to do nothing, i.e. to stand in for another context manager without changing the code structure.
-    A context manager is a Python object meant to be used with the with statement,
-    ensuring that resources are correctly initialized and cleaned up while a code block runs.
-    A context manager must implement two methods, __enter__() and __exit__().
-    __enter__() is called before the code block is executed,
-    and __exit__() is called when execution of the block ends.
- """
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, traceback):
- return
-
-def run_gradio_in_subpath(demo, auth, port, custom_path):
- """
-    Serve the gradio app under the specified sub-path.
- """
- def is_path_legal(path: str)->bool:
- '''
- check path for sub url
- path: path to check
- return value: do sub url wrap
- '''
- if path == "/": return True
- if len(path) == 0:
- print("ilegal custom path: {}\npath must not be empty\ndeploy on root url".format(path))
- return False
- if path[0] == '/':
- if path[1] != '/':
- print("deploy on sub-path {}".format(path))
- return True
- return False
- print("ilegal custom path: {}\npath should begin with \'/\'\ndeploy on root url".format(path))
- return False
-
- if not is_path_legal(custom_path): raise RuntimeError('Ilegal custom path')
- import uvicorn
- import gradio as gr
- from fastapi import FastAPI
- app = FastAPI()
- if custom_path != "/":
- @app.get("/")
- def read_main():
- return {"message": f"Gradio is running at: {custom_path}"}
- app = gr.mount_gradio_app(app, demo, path=custom_path)
- uvicorn.run(app, host="0.0.0.0", port=port) # , auth=auth
-
-
-def clip_history(inputs, history, tokenizer, max_token_limit):
- """
-    Reduce the length of the history by clipping.
-    This function searches for the longest entries and clips them, little by little,
-    until the token count of the history drops below the threshold.
- """
- import numpy as np
- from request_llm.bridge_all import model_info
- def get_token_num(txt):
- return len(tokenizer.encode(txt, disallowed_special=()))
- input_token_num = get_token_num(inputs)
- if input_token_num < max_token_limit * 3 / 4:
-        # when the input takes up less than 3/4 of the limit, clip as follows
-        # 1. reserve room for the input
-        max_token_limit = max_token_limit - input_token_num
-        # 2. reserve room for the output
-        max_token_limit = max_token_limit - 128
-        # 3. if too little room is left, just clear the history
-        if max_token_limit < 128:
-            history = []
-            return history
-    else:
-        # when the input takes up more than 3/4 of the limit, just clear the history
- history = []
- return history
-
- everything = ['']
- everything.extend(history)
- n_token = get_token_num('\n'.join(everything))
- everything_token = [get_token_num(e) for e in everything]
-
-    # granularity of each clipping step
- delta = max(everything_token) // 16
-
- while n_token > max_token_limit:
- where = np.argmax(everything_token)
- encoded = tokenizer.encode(everything[where], disallowed_special=())
- clipped_encoded = encoded[:len(encoded)-delta]
- everything[where] = tokenizer.decode(clipped_encoded)[:-1] # -1 to remove the may-be illegal char
- everything_token[where] = get_token_num(everything[where])
- n_token = get_token_num('\n'.join(everything))
-
- history = everything[1:]
- return history
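-
-# Worked example (numbers are illustrative): with max_token_limit=1024 and a 200-token input,
-# 1024-200-128=696 tokens remain for the history; the longest entry is then repeatedly clipped
-# by max(entry_tokens)//16 tokens until the joined history fits within that budget.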
-
-"""
-========================================================================
-Part 3
-Other utilities:
-    - zip_folder:    zip every file under a path and move the archive to another specified path (written by gpt)
-    - gen_time_str:  generate a timestamp string
-    - ProxyNetworkActivate: temporarily enable the proxy network (if configured)
-    - objdump/objload: quick debugging helpers
-========================================================================
-"""
-
-def zip_folder(source_folder, dest_folder, zip_name):
- import zipfile
- import os
- # Make sure the source folder exists
- if not os.path.exists(source_folder):
- print(f"{source_folder} does not exist")
- return
-
- # Make sure the destination folder exists
- if not os.path.exists(dest_folder):
- print(f"{dest_folder} does not exist")
- return
-
- # Create the name for the zip file
- zip_file = os.path.join(dest_folder, zip_name)
-
- # Create a ZipFile object
- with zipfile.ZipFile(zip_file, 'w', zipfile.ZIP_DEFLATED) as zipf:
- # Walk through the source folder and add files to the zip file
- for foldername, subfolders, filenames in os.walk(source_folder):
- for filename in filenames:
- filepath = os.path.join(foldername, filename)
- zipf.write(filepath, arcname=os.path.relpath(filepath, source_folder))
-
- # Move the zip file to the destination folder (if it wasn't already there)
- if os.path.dirname(zip_file) != dest_folder:
- os.rename(zip_file, os.path.join(dest_folder, os.path.basename(zip_file)))
- zip_file = os.path.join(dest_folder, os.path.basename(zip_file))
-
- print(f"Zip file created at {zip_file}")
-
-def zip_result(folder):
- t = gen_time_str()
- zip_folder(folder, './gpt_log/', f'{t}-result.zip')
- return pj('./gpt_log/', f'{t}-result.zip')
-
-def gen_time_str():
- import time
- return time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
-
-def get_log_folder(user='default', plugin_name='shared'):
- _dir = os.path.join(os.path.dirname(__file__), 'gpt_log', user, plugin_name)
- if not os.path.exists(_dir): os.makedirs(_dir)
- return _dir
-
-class ProxyNetworkActivate():
- """
-    A context manager that temporarily routes a small block of code through the configured proxy.
- """
- def __enter__(self):
- from toolbox import get_conf
- proxies, = get_conf('proxies')
- if 'no_proxy' in os.environ: os.environ.pop('no_proxy')
- if proxies is not None:
- if 'http' in proxies: os.environ['HTTP_PROXY'] = proxies['http']
- if 'https' in proxies: os.environ['HTTPS_PROXY'] = proxies['https']
- return self
-
- def __exit__(self, exc_type, exc_value, traceback):
- os.environ['no_proxy'] = '*'
- if 'HTTP_PROXY' in os.environ: os.environ.pop('HTTP_PROXY')
- if 'HTTPS_PROXY' in os.environ: os.environ.pop('HTTPS_PROXY')
- return
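-
-# Usage sketch (assuming 'proxies' is configured in config.py):
-#   with ProxyNetworkActivate():
-#       requests.get('https://example.com')   # runs with HTTP_PROXY/HTTPS_PROXY set to the configured proxy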
-
-def objdump(obj, file='objdump.tmp'):
- import pickle
- with open(file, 'wb+') as f:
- pickle.dump(obj, f)
- return
-
-def objload(file='objdump.tmp'):
- import pickle, os
- if not os.path.exists(file):
- return
- with open(file, 'rb') as f:
- return pickle.load(f)
-
-def Singleton(cls):
- """
-    A singleton decorator.
- """
- _instance = {}
-
- def _singleton(*args, **kargs):
- if cls not in _instance:
- _instance[cls] = cls(*args, **kargs)
- return _instance[cls]
-
- return _singleton
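-
-# Usage sketch: decorating a class makes repeated constructions return the same instance.
-#   @Singleton
-#   class Cache: ...
-#   assert Cache() is Cache()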
-
-"""
-========================================================================
-Part 4
-Hooks for the void terminal:
-    - set_conf:                     modify a configuration value at runtime
-    - set_multi_conf:               modify several configuration values at runtime
-    - get_plugin_handle:            get a handle to a plugin
-    - get_plugin_default_kwargs:    get a plugin's default arguments
-    - get_chat_handle:              get a handle for simple chat
-    - get_chat_default_kwargs:      get the default arguments for simple chat
-========================================================================
-"""
-
-def set_conf(key, value):
- from toolbox import read_single_conf_with_lru_cache, get_conf
- read_single_conf_with_lru_cache.cache_clear()
- get_conf.cache_clear()
- os.environ[key] = str(value)
- altered, = get_conf(key)
- return altered
-
-def set_multi_conf(dic):
- for k, v in dic.items(): set_conf(k, v)
- return
-
-def get_plugin_handle(plugin_name):
- """
- e.g. plugin_name = 'crazy_functions.批量Markdown翻译->Markdown翻译指定语言'
- """
- import importlib
- assert '->' in plugin_name, \
- "Example of plugin_name: crazy_functions.批量Markdown翻译->Markdown翻译指定语言"
- module, fn_name = plugin_name.split('->')
- f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
- return f_hot_reload
-
-def get_chat_handle():
- """
- """
- from request_llm.bridge_all import predict_no_ui_long_connection
- return predict_no_ui_long_connection
-
-def get_plugin_default_kwargs():
- """
- """
- from toolbox import get_conf, ChatBotWithCookies
-
- WEB_PORT, LLM_MODEL, API_KEY = \
- get_conf('WEB_PORT', 'LLM_MODEL', 'API_KEY')
-
- llm_kwargs = {
- 'api_key': API_KEY,
- 'llm_model': LLM_MODEL,
- 'top_p':1.0,
- 'max_length': None,
- 'temperature':1.0,
- }
- chatbot = ChatBotWithCookies(llm_kwargs)
-
- # txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port
- DEFAULT_FN_GROUPS_kwargs = {
- "main_input": "./README.md",
- "llm_kwargs": llm_kwargs,
- "plugin_kwargs": {},
- "chatbot_with_cookie": chatbot,
- "history": [],
- "system_prompt": "You are a good AI.",
- "web_port": WEB_PORT
- }
- return DEFAULT_FN_GROUPS_kwargs
-
-def get_chat_default_kwargs():
- """
- """
- from toolbox import get_conf
-
- LLM_MODEL, API_KEY = get_conf('LLM_MODEL', 'API_KEY')
-
- llm_kwargs = {
- 'api_key': API_KEY,
- 'llm_model': LLM_MODEL,
- 'top_p':1.0,
- 'max_length': None,
- 'temperature':1.0,
- }
-
- default_chat_kwargs = {
- "inputs": "Hello there, are you ready?",
- "llm_kwargs": llm_kwargs,
- "history": [],
- "sys_prompt": "You are AI assistant",
- "observe_window": None,
- "console_slience": False,
- }
-
- return default_chat_kwargs
-
diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/quantization/core_vq.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/quantization/core_vq.py
deleted file mode 100644
index e1896bb1788a945a1f7be6369abb255ecf72c7a0..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/audiocraft/quantization/core_vq.py
+++ /dev/null
@@ -1,400 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-from einops import rearrange, repeat
-import flashy
-import torch
-from torch import nn, einsum
-import torch.nn.functional as F
-
-
-def exists(val: tp.Optional[tp.Any]) -> bool:
- return val is not None
-
-
-def default(val: tp.Any, d: tp.Any) -> tp.Any:
- return val if exists(val) else d
-
-
-def l2norm(t):
- return F.normalize(t, p=2, dim=-1)
-
-
-def ema_inplace(moving_avg, new, decay: float):
- moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay))
-
-
-def laplace_smoothing(x, n_categories: int, epsilon: float = 1e-5):
- return (x + epsilon) / (x.sum() + n_categories * epsilon)
-
-
-def uniform_init(*shape: int):
- t = torch.empty(shape)
- nn.init.kaiming_uniform_(t)
- return t
-
-
-def sample_vectors(samples, num: int):
- num_samples, device = samples.shape[0], samples.device
-
- if num_samples >= num:
- indices = torch.randperm(num_samples, device=device)[:num]
- else:
- indices = torch.randint(0, num_samples, (num,), device=device)
-
- return samples[indices]
-
-
-def kmeans(samples, num_clusters: int, num_iters: int = 10):
- dim, dtype = samples.shape[-1], samples.dtype
-
- means = sample_vectors(samples, num_clusters)
-
- for _ in range(num_iters):
- diffs = rearrange(samples, "n d -> n () d") - rearrange(
- means, "c d -> () c d"
- )
- dists = -(diffs ** 2).sum(dim=-1)
-
- buckets = dists.max(dim=-1).indices
- bins = torch.bincount(buckets, minlength=num_clusters)
- zero_mask = bins == 0
- bins_min_clamped = bins.masked_fill(zero_mask, 1)
-
- new_means = buckets.new_zeros(num_clusters, dim, dtype=dtype)
- new_means.scatter_add_(0, repeat(buckets, "n -> n d", d=dim), samples)
- new_means = new_means / bins_min_clamped[..., None]
-
- means = torch.where(zero_mask[..., None], means, new_means)
-
- return means, bins
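-
-# Shape note: for samples of shape (n, d), kmeans returns means of shape (num_clusters, d) and
-# bins, the per-cluster assignment counts from the final iteration, of shape (num_clusters,).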
-
-
-def orthgonal_loss_fn(t):
- # eq (2) from https://arxiv.org/abs/2112.00384
- n = t.shape[0]
- normed_codes = l2norm(t)
- identity = torch.eye(n, device=t.device)
- cosine_sim = einsum("i d, j d -> i j", normed_codes, normed_codes)
- return ((cosine_sim - identity) ** 2).sum() / (n ** 2)
-
-
-class EuclideanCodebook(nn.Module):
- """Codebook with Euclidean distance.
-
- Args:
- dim (int): Dimension.
- codebook_size (int): Codebook size.
- kmeans_init (bool): Whether to use k-means to initialize the codebooks.
- If set to true, run the k-means algorithm on the first training batch and use
- the learned centroids as initialization.
- kmeans_iters (int): Number of iterations used for k-means algorithm at initialization.
- decay (float): Decay for exponential moving average over the codebooks.
- epsilon (float): Epsilon value for numerical stability.
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any code
- that has an exponential moving average cluster size less than the specified threshold with
- a randomly selected vector from the current batch.
- """
- def __init__(
- self,
- dim: int,
- codebook_size: int,
- kmeans_init: bool = False,
- kmeans_iters: int = 10,
- decay: float = 0.8,
- epsilon: float = 1e-5,
- threshold_ema_dead_code: int = 2,
- ):
- super().__init__()
- self.decay = decay
- init_fn: tp.Union[tp.Callable[..., torch.Tensor], tp.Any] = uniform_init if not kmeans_init else torch.zeros
- embed = init_fn(codebook_size, dim)
-
- self.codebook_size = codebook_size
-
- self.kmeans_iters = kmeans_iters
- self.epsilon = epsilon
- self.threshold_ema_dead_code = threshold_ema_dead_code
-
- self.register_buffer("inited", torch.Tensor([not kmeans_init]))
- self.register_buffer("cluster_size", torch.zeros(codebook_size))
- self.register_buffer("embed", embed)
- self.register_buffer("embed_avg", embed.clone())
-
- @torch.jit.ignore
- def init_embed_(self, data):
- if self.inited:
- return
-
- embed, cluster_size = kmeans(data, self.codebook_size, self.kmeans_iters)
- self.embed.data.copy_(embed)
- self.embed_avg.data.copy_(embed.clone())
- self.cluster_size.data.copy_(cluster_size)
- self.inited.data.copy_(torch.Tensor([True]))
- # Make sure all buffers across workers are in sync after initialization
- flashy.distrib.broadcast_tensors(self.buffers())
-
- def replace_(self, samples, mask):
- modified_codebook = torch.where(
- mask[..., None], sample_vectors(samples, self.codebook_size), self.embed
- )
- self.embed.data.copy_(modified_codebook)
-
- def expire_codes_(self, batch_samples):
- if self.threshold_ema_dead_code == 0:
- return
-
- expired_codes = self.cluster_size < self.threshold_ema_dead_code
- if not torch.any(expired_codes):
- return
-
- batch_samples = rearrange(batch_samples, "... d -> (...) d")
- self.replace_(batch_samples, mask=expired_codes)
- flashy.distrib.broadcast_tensors(self.buffers())
-
- def preprocess(self, x):
- x = rearrange(x, "... d -> (...) d")
- return x
-
- def quantize(self, x):
- embed = self.embed.t()
- dist = -(
- x.pow(2).sum(1, keepdim=True)
- - 2 * x @ embed
- + embed.pow(2).sum(0, keepdim=True)
- )
- embed_ind = dist.max(dim=-1).indices
- return embed_ind
-
- def postprocess_emb(self, embed_ind, shape):
- return embed_ind.view(*shape[:-1])
-
- def dequantize(self, embed_ind):
- quantize = F.embedding(embed_ind, self.embed)
- return quantize
-
- def encode(self, x):
- shape = x.shape
- # pre-process
- x = self.preprocess(x)
- # quantize
- embed_ind = self.quantize(x)
- # post-process
- embed_ind = self.postprocess_emb(embed_ind, shape)
- return embed_ind
-
- def decode(self, embed_ind):
- quantize = self.dequantize(embed_ind)
- return quantize
-
- def forward(self, x):
- shape, dtype = x.shape, x.dtype
- x = self.preprocess(x)
- self.init_embed_(x)
-
- embed_ind = self.quantize(x)
- embed_onehot = F.one_hot(embed_ind, self.codebook_size).type(dtype)
- embed_ind = self.postprocess_emb(embed_ind, shape)
- quantize = self.dequantize(embed_ind)
-
- if self.training:
- # We expire dead codes at this point because the buffers are in sync
- # and all workers will make the same decision.
- self.expire_codes_(x)
- ema_inplace(self.cluster_size, embed_onehot.sum(0), self.decay)
- embed_sum = x.t() @ embed_onehot
- ema_inplace(self.embed_avg, embed_sum.t(), self.decay)
- cluster_size = (
- laplace_smoothing(self.cluster_size, self.codebook_size, self.epsilon)
- * self.cluster_size.sum()
- )
- embed_normalized = self.embed_avg / cluster_size.unsqueeze(1)
- self.embed.data.copy_(embed_normalized)
-
- return quantize, embed_ind
-
-
-class VectorQuantization(nn.Module):
- """Vector quantization implementation.
- Currently supports only euclidean distance.
-
- Args:
- dim (int): Dimension
- codebook_size (int): Codebook size
- codebook_dim (int): Codebook dimension. If not defined, uses the specified dimension in dim.
- decay (float): Decay for exponential moving average over the codebooks.
- epsilon (float): Epsilon value for numerical stability.
- kmeans_init (bool): Whether to use kmeans to initialize the codebooks.
- kmeans_iters (int): Number of iterations used for kmeans initialization.
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any code
- that has an exponential moving average cluster size less than the specified threshold with
- a randomly selected vector from the current batch.
- channels_last (bool): Channels are the last dimension in the input tensors.
- commitment_weight (float): Weight for the commitment loss.
- orthogonal_reg_weight (float): Orthogonal regularization weight.
- orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes.
- orthogonal_reg_max_codes (optional int): Maximum number of codes to consider
- for orthogonal regularization.
- """
- def __init__(
- self,
- dim: int,
- codebook_size: int,
- codebook_dim: tp.Optional[int] = None,
- decay: float = 0.8,
- epsilon: float = 1e-5,
- kmeans_init: bool = False,
- kmeans_iters: int = 10,
- threshold_ema_dead_code: int = 2,
- channels_last: bool = False,
- commitment_weight: float = 1.,
- orthogonal_reg_weight: float = 0.0,
- orthogonal_reg_active_codes_only: bool = False,
- orthogonal_reg_max_codes: tp.Optional[int] = None,
- ):
- super().__init__()
- _codebook_dim: int = default(codebook_dim, dim)
-
- requires_projection = _codebook_dim != dim
- self.project_in = (nn.Linear(dim, _codebook_dim) if requires_projection else nn.Identity())
- self.project_out = (nn.Linear(_codebook_dim, dim) if requires_projection else nn.Identity())
-
- self.epsilon = epsilon
- self.commitment_weight = commitment_weight
-
- self.orthogonal_reg_weight = orthogonal_reg_weight
- self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only
- self.orthogonal_reg_max_codes = orthogonal_reg_max_codes
-
- self._codebook = EuclideanCodebook(dim=_codebook_dim, codebook_size=codebook_size,
- kmeans_init=kmeans_init, kmeans_iters=kmeans_iters,
- decay=decay, epsilon=epsilon,
- threshold_ema_dead_code=threshold_ema_dead_code)
- self.codebook_size = codebook_size
-
- self.channels_last = channels_last
-
- @property
- def codebook(self):
- return self._codebook.embed
-
- @property
- def inited(self):
- return self._codebook.inited
-
- def _preprocess(self, x):
- if not self.channels_last:
- x = rearrange(x, "b d n -> b n d")
- return x
-
- def _postprocess(self, quantize):
- if not self.channels_last:
- quantize = rearrange(quantize, "b n d -> b d n")
- return quantize
-
- def encode(self, x):
- x = self._preprocess(x)
- x = self.project_in(x)
- embed_in = self._codebook.encode(x)
- return embed_in
-
- def decode(self, embed_ind):
- quantize = self._codebook.decode(embed_ind)
- quantize = self.project_out(quantize)
- quantize = self._postprocess(quantize)
- return quantize
-
- def forward(self, x):
- device = x.device
- x = self._preprocess(x)
-
- x = self.project_in(x)
- quantize, embed_ind = self._codebook(x)
-
- if self.training:
- quantize = x + (quantize - x).detach()
-
- loss = torch.tensor([0.0], device=device, requires_grad=self.training)
-
- if self.training:
- if self.commitment_weight > 0:
- commit_loss = F.mse_loss(quantize.detach(), x)
- loss = loss + commit_loss * self.commitment_weight
-
- if self.orthogonal_reg_weight > 0:
- codebook = self.codebook
-
- if self.orthogonal_reg_active_codes_only:
- # only calculate orthogonal loss for the activated codes for this batch
- unique_code_ids = torch.unique(embed_ind)
- codebook = codebook[unique_code_ids]
-
- num_codes = codebook.shape[0]
- if exists(self.orthogonal_reg_max_codes) and num_codes > self.orthogonal_reg_max_codes:
- rand_ids = torch.randperm(num_codes, device=device)[:self.orthogonal_reg_max_codes]
- codebook = codebook[rand_ids]
-
- orthogonal_reg_loss = orthgonal_loss_fn(codebook)
- loss = loss + orthogonal_reg_loss * self.orthogonal_reg_weight
-
- quantize = self.project_out(quantize)
- quantize = self._postprocess(quantize)
-
- return quantize, embed_ind, loss
-
-
-class ResidualVectorQuantization(nn.Module):
- """Residual vector quantization implementation.
-
- Follows Algorithm 1 in https://arxiv.org/pdf/2107.03312.pdf
- """
- def __init__(self, *, num_quantizers, **kwargs):
- super().__init__()
- self.layers = nn.ModuleList(
- [VectorQuantization(**kwargs) for _ in range(num_quantizers)]
- )
-
- def forward(self, x, n_q: tp.Optional[int] = None):
- quantized_out = 0.0
- residual = x
-
- all_losses = []
- all_indices = []
-
- n_q = n_q or len(self.layers)
-
- for i, layer in enumerate(self.layers[:n_q]):
- quantized, indices, loss = layer(residual)
- residual = residual - quantized
- quantized_out = quantized_out + quantized
- all_indices.append(indices)
- all_losses.append(loss)
-
- out_losses, out_indices = map(torch.stack, (all_losses, all_indices))
- return quantized_out, out_indices, out_losses
-
- def encode(self, x: torch.Tensor, n_q: tp.Optional[int] = None) -> torch.Tensor:
- residual = x
- all_indices = []
- n_q = n_q or len(self.layers)
- for layer in self.layers[:n_q]:
- indices = layer.encode(residual)
- quantized = layer.decode(indices)
- residual = residual - quantized
- all_indices.append(indices)
- out_indices = torch.stack(all_indices)
- return out_indices
-
- def decode(self, q_indices: torch.Tensor) -> torch.Tensor:
- quantized_out = torch.tensor(0.0, device=q_indices.device)
- for i, indices in enumerate(q_indices):
- layer = self.layers[i]
- quantized = layer.decode(indices)
- quantized_out = quantized_out + quantized
- return quantized_out
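For orientation, a hedged usage sketch of the residual vector quantizer defined above; the shapes and hyper-parameters are illustrative and the class is assumed to be importable from this module.

```python
# Illustrative sketch only: 8 residual codebooks of 1024 entries over 128-dim features.
import torch

rvq = ResidualVectorQuantization(num_quantizers=8, dim=128, codebook_size=1024)
rvq.eval()                           # skip training-time EMA updates and code expiry

x = torch.randn(2, 128, 50)          # (batch, channels, frames), channels_last=False
quantized, indices, losses = rvq(x)  # quantized has the shape of x; indices is (n_q, B, T)
codes = rvq.encode(x, n_q=4)         # keep only the first 4 quantizers
approx = rvq.decode(codes)           # coarse reconstruction from the retained codes
```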
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/options/__init__.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/options/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/transforms/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/transforms/__init__.py
deleted file mode 100644
index cbd54e38a2f84b3fef481672a7ceab070eb01b82..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/transforms/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .base import SigmoidForPred
-from .flip import AddHorizontalFlip
-from .zoom_in import ZoomIn
-from .limit_longest_side import LimitLongestSide
-from .crops import Crops
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/toaster.tsx b/spaces/Makiing/coolb-in-gtest/src/components/toaster.tsx
deleted file mode 100644
index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/toaster.tsx
+++ /dev/null
@@ -1,3 +0,0 @@
-'use client'
-
-export { Toaster } from 'react-hot-toast'
diff --git a/spaces/Manjushri/MusicGen/CODE_OF_CONDUCT.md b/spaces/Manjushri/MusicGen/CODE_OF_CONDUCT.md
deleted file mode 100644
index 83f431e8feeb7e80d571f39c9f6c1b96857b5f85..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/MusicGen/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Code of Conduct
-
-## Our Pledge
-
-In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to make participation in our project and
-our community a harassment-free experience for everyone, regardless of age, body
-size, disability, ethnicity, sex characteristics, gender identity and expression,
-level of experience, education, socio-economic status, nationality, personal
-appearance, race, religion, or sexual identity and orientation.
-
-## Our Standards
-
-Examples of behavior that contributes to creating a positive environment
-include:
-
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery and unwelcome sexual attention or
-advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
-address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
-professional setting
-
-## Our Responsibilities
-
-Project maintainers are responsible for clarifying the standards of acceptable
-behavior and are expected to take appropriate and fair corrective action in
-response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or
-reject comments, commits, code, wiki edits, issues, and other contributions
-that are not aligned to this Code of Conduct, or to ban temporarily or
-permanently any contributor for other behaviors that they deem inappropriate,
-threatening, offensive, or harmful.
-
-## Scope
-
-This Code of Conduct applies within all project spaces, and it also applies when
-an individual is representing the project or its community in public spaces.
-Examples of representing a project or community include using an official
-project e-mail address, posting via an official social media account, or acting
-as an appointed representative at an online or offline event. Representation of
-a project may be further defined and clarified by project maintainers.
-
-This Code of Conduct also applies outside the project spaces when there is a
-reasonable belief that an individual's behavior may have a negative impact on
-the project or its community.
-
-## Enforcement
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at . All
-complaints will be reviewed and investigated and will result in a response that
-is deemed necessary and appropriate to the circumstances. The project team is
-obligated to maintain confidentiality with regard to the reporter of an incident.
-Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good
-faith may face temporary or permanent repercussions as determined by other
-members of the project's leadership.
-
-## Attribution
-
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
-
-[homepage]: https://www.contributor-covenant.org
-
-For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
diff --git a/spaces/MathysL/AutoGPT4/autogpt/workspace.py b/spaces/MathysL/AutoGPT4/autogpt/workspace.py
deleted file mode 100644
index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000
--- a/spaces/MathysL/AutoGPT4/autogpt/workspace.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from __future__ import annotations
-
-import os
-from pathlib import Path
-
-from autogpt.config import Config
-
-CFG = Config()
-
-# Set a dedicated folder for file I/O
-WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace"
-
-# Create the directory if it doesn't exist
-if not os.path.exists(WORKSPACE_PATH):
- os.makedirs(WORKSPACE_PATH)
-
-
-def path_in_workspace(relative_path: str | Path) -> Path:
- """Get full path for item in workspace
-
- Parameters:
- relative_path (str | Path): Path to translate into the workspace
-
- Returns:
- Path: Absolute path for the given path in the workspace
- """
- return safe_path_join(WORKSPACE_PATH, relative_path)
-
-
-def safe_path_join(base: Path, *paths: str | Path) -> Path:
- """Join one or more path components, asserting the resulting path is within the workspace.
-
- Args:
- base (Path): The base path
- *paths (str): The paths to join to the base path
-
- Returns:
- Path: The joined path
- """
- joined_path = base.joinpath(*paths).resolve()
-
- if CFG.restrict_to_workspace and not joined_path.is_relative_to(base):
- raise ValueError(
- f"Attempted to access path '{joined_path}' outside of workspace '{base}'."
- )
-
- return joined_path
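As a hedged illustration of the confinement logic above (the paths shown are hypothetical and resolve under the current working directory):

```python
# Illustrative only: a path inside the workspace resolves normally.
notes_path = path_in_workspace("notes/todo.txt")

# A traversal attempt is rejected when CFG.restrict_to_workspace is enabled.
try:
    path_in_workspace("../../etc/passwd")
except ValueError as err:
    print(err)
```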
diff --git a/spaces/Mbilal755/Rad_Summarizer/app.py b/spaces/Mbilal755/Rad_Summarizer/app.py
deleted file mode 100644
index b3d399dfcb1da6eb3a7b3903ee0f00f2c1757b6c..0000000000000000000000000000000000000000
--- a/spaces/Mbilal755/Rad_Summarizer/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import gradio as gr
-from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-
-model_checkpoint = "Mbilal755/Radiology_Bart"
-model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
-tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
-
-from transformers import SummarizationPipeline
-summarizer = SummarizationPipeline(model=model, tokenizer=tokenizer)
-
-import gradio as gr
-
-examples = [
- "prevoid bladder volume cc postvoid bladder volume cc bladder grossly normal appearance ",
- "heart mediastinal contours normal left sided subclavian line position tip distal svc lungs remain clear active disease effusions",
- "heart size normal mediastinal hilar contours remain stable small right pneumothorax remains unchanged surgical lung staples overlying left upper lobe seen linear pattern consistent prior upper lobe resection soft tissue osseous structures appear unremarkable nasogastric endotracheal tubes remain satisfactory position atelectatic changes right lower lung field remain unchanged prior study"
-]
-
-description = """
-We fine-tuned the 442M-parameter BioBART model for summarization on radiology reports scraped from MIMIC-III, using 52,000 reports for training and 8,000 for evaluation.
-The model generates impressions that summarize the key findings of longer radiology reports.
-
-Enter a radiology report to see the generated impression summary!
-"""
-
-def summarize(radiology_report):
- summary = summarizer(radiology_report)[0]['summary_text']
- return summary
-
-iface = gr.Interface(fn=summarize,
- inputs=gr.inputs.Textbox(lines=5, label="Radiology Report"),
- outputs=gr.outputs.Textbox(label="Summary"),
- examples=examples,
- title="Radiology Report Summarization",
- description=description,
- theme="huggingface")
-
-if __name__ == "__main__":
- iface.launch(share=False)
\ No newline at end of file
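A small, hedged sketch of calling the pipeline above directly, without the Gradio UI; the report text is illustrative.

```python
# Illustrative only: summarize a (shortened) report string directly.
report = ("heart size normal mediastinal hilar contours remain stable "
          "small right pneumothorax remains unchanged")
print(summarize(report))   # prints the generated impression
```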
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/necks/multilevel_neck.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/necks/multilevel_neck.py
deleted file mode 100644
index 766144d8136326a1fab5906a153a0c0df69b6b60..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/necks/multilevel_neck.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class MultiLevelNeck(nn.Module):
- """MultiLevelNeck.
-
- A neck structure that connects a ViT backbone to the decoder heads.
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale).
- scales (List[float]): Scale factors for each input feature map.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (dict): Config dict for activation layer in ConvModule.
- Default: None.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- scales=[0.5, 1, 2, 4],
- norm_cfg=None,
- act_cfg=None):
- super(MultiLevelNeck, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.scales = scales
- self.num_outs = len(scales)
- self.lateral_convs = nn.ModuleList()
- self.convs = nn.ModuleList()
- for in_channel in in_channels:
- self.lateral_convs.append(
- ConvModule(
- in_channel,
- out_channels,
- kernel_size=1,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
- for _ in range(self.num_outs):
- self.convs.append(
- ConvModule(
- out_channels,
- out_channels,
- kernel_size=3,
- padding=1,
- stride=1,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
-
- def forward(self, inputs):
- assert len(inputs) == len(self.in_channels)
- inputs = [
- lateral_conv(inputs[i])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
- # If a single feature map is given, repeat it so len(inputs) matches self.num_outs
- if len(inputs) == 1:
- inputs = [inputs[0] for _ in range(self.num_outs)]
- outs = []
- for i in range(self.num_outs):
- x_resize = F.interpolate(
- inputs[i], scale_factor=self.scales[i], mode='bilinear')
- outs.append(self.convs[i](x_resize))
- return tuple(outs)
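A hedged shape sketch for the neck above; the channel counts and spatial sizes are illustrative, and the class is assumed to be importable from this module.

```python
# Illustrative only: four ViT feature maps in, four rescaled 256-channel maps out.
import torch

neck = MultiLevelNeck(in_channels=[768, 768, 768, 768], out_channels=256)
feats = [torch.randn(1, 768, 32, 32) for _ in range(4)]
outs = neck(feats)   # spatial sizes 16, 32, 64 and 128 with the default scales [0.5, 1, 2, 4]
```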
diff --git a/spaces/MirageML/sjc/sd1/ldm/modules/distributions/distributions.py b/spaces/MirageML/sjc/sd1/ldm/modules/distributions/distributions.py
deleted file mode 100644
index f2b8ef901130efc171aa69742ca0244d94d3f2e9..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/sd1/ldm/modules/distributions/distributions.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import torch
-import numpy as np
-
-
-class AbstractDistribution:
- def sample(self):
- raise NotImplementedError()
-
- def mode(self):
- raise NotImplementedError()
-
-
-class DiracDistribution(AbstractDistribution):
- def __init__(self, value):
- self.value = value
-
- def sample(self):
- return self.value
-
- def mode(self):
- return self.value
-
-
-class DiagonalGaussianDistribution(object):
- def __init__(self, parameters, deterministic=False):
- self.parameters = parameters
- self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
- self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
- self.deterministic = deterministic
- self.std = torch.exp(0.5 * self.logvar)
- self.var = torch.exp(self.logvar)
- if self.deterministic:
- self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)
-
- def sample(self):
- x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device)
- return x
-
- def kl(self, other=None):
- if self.deterministic:
- return torch.Tensor([0.])
- else:
- if other is None:
- return 0.5 * torch.sum(torch.pow(self.mean, 2)
- + self.var - 1.0 - self.logvar,
- dim=[1, 2, 3])
- else:
- return 0.5 * torch.sum(
- torch.pow(self.mean - other.mean, 2) / other.var
- + self.var / other.var - 1.0 - self.logvar + other.logvar,
- dim=[1, 2, 3])
-
- def nll(self, sample, dims=[1,2,3]):
- if self.deterministic:
- return torch.Tensor([0.])
- logtwopi = np.log(2.0 * np.pi)
- return 0.5 * torch.sum(
- logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,
- dim=dims)
-
- def mode(self):
- return self.mean
-
-
-def normal_kl(mean1, logvar1, mean2, logvar2):
- """
- source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
- Compute the KL divergence between two gaussians.
- Shapes are automatically broadcasted, so batches can be compared to
- scalars, among other use cases.
- """
- tensor = None
- for obj in (mean1, logvar1, mean2, logvar2):
- if isinstance(obj, torch.Tensor):
- tensor = obj
- break
- assert tensor is not None, "at least one argument must be a Tensor"
-
- # Force variances to be Tensors. Broadcasting helps convert scalars to
- # Tensors, but it does not work for torch.exp().
- logvar1, logvar2 = [
- x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)
- for x in (logvar1, logvar2)
- ]
-
- return 0.5 * (
- -1.0
- + logvar2
- - logvar1
- + torch.exp(logvar1 - logvar2)
- + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
- )
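For clarity, a hedged sketch of how the diagonal Gaussian above is typically used for VAE-style latents; the shapes are illustrative.

```python
# Illustrative only: the parameter tensor stacks mean and log-variance along dim=1.
import torch

params = torch.randn(2, 8, 16, 16)                # 2 * latent channels (mean | logvar)
posterior = DiagonalGaussianDistribution(params)
z = posterior.sample()                            # (2, 4, 16, 16)
kl = posterior.kl()                               # KL to a standard normal, shape (2,)
nll = posterior.nll(z)                            # negative log-likelihood of z, shape (2,)
```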
diff --git a/spaces/Miuzarte/SUI-svc-4.0/vdecoder/__init__.py b/spaces/Miuzarte/SUI-svc-4.0/vdecoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Motheatscrows/mmnsfww/README.md b/spaces/Motheatscrows/mmnsfww/README.md
deleted file mode 100644
index dc4d7141558753829896199eeecd135ec65649ff..0000000000000000000000000000000000000000
--- a/spaces/Motheatscrows/mmnsfww/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Mmnsfww
-emoji: 🐢
-colorFrom: green
-colorTo: gray
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/sdmgr_novisual_60e_wildreceipt-openset.py b/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/sdmgr_novisual_60e_wildreceipt-openset.py
deleted file mode 100644
index bc3d52a1ce93d4baf267edc923c71f2b9482e767..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/sdmgr_novisual_60e_wildreceipt-openset.py
+++ /dev/null
@@ -1,71 +0,0 @@
-_base_ = [
- '../_base_/default_runtime.py',
- '../_base_/datasets/wildreceipt-openset.py',
- '../_base_/schedules/schedule_adam_60e.py',
- '_base_sdmgr_novisual.py',
-]
-
-node_num_classes = 4 # 4 classes: bg, key, value and other
-edge_num_classes = 2 # edge connectivity
-key_node_idx = 1
-value_node_idx = 2
-
-model = dict(
- type='SDMGR',
- kie_head=dict(
- num_classes=node_num_classes,
- postprocessor=dict(
- link_type='one-to-many',
- key_node_idx=key_node_idx,
- value_node_idx=value_node_idx)),
-)
-
-test_pipeline = [
- dict(
- type='LoadKIEAnnotations',
- key_node_idx=key_node_idx,
- value_node_idx=value_node_idx), # Keep key->value edges for evaluation
- dict(type='Resize', scale=(1024, 512), keep_ratio=True),
- dict(type='PackKIEInputs'),
-]
-
-wildreceipt_openset_train = _base_.wildreceipt_openset_train
-wildreceipt_openset_train.pipeline = _base_.train_pipeline
-wildreceipt_openset_test = _base_.wildreceipt_openset_test
-wildreceipt_openset_test.pipeline = test_pipeline
-
-train_dataloader = dict(
- batch_size=4,
- num_workers=1,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=True),
- dataset=wildreceipt_openset_train)
-val_dataloader = dict(
- batch_size=1,
- num_workers=1,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=wildreceipt_openset_test)
-test_dataloader = val_dataloader
-
-val_evaluator = [
- dict(
- type='F1Metric',
- prefix='node',
- key='labels',
- mode=['micro', 'macro'],
- num_classes=node_num_classes,
- cared_classes=[key_node_idx, value_node_idx]),
- dict(
- type='F1Metric',
- prefix='edge',
- mode='micro',
- key='edge_labels',
- cared_classes=[1], # Collapse to binary F1 score
- num_classes=edge_num_classes)
-]
-test_evaluator = val_evaluator
-
-visualizer = dict(
- type='KIELocalVisualizer', name='visualizer', is_openset=True)
-auto_scale_lr = dict(base_batch_size=4)
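As a hedged sketch, the merged config can be inspected with mmengine before launching training; the relative path is an assumption about where this file sits in the repo.

```python
# Illustrative only: load and inspect the merged config.
from mmengine.config import Config

cfg = Config.fromfile('configs/kie/sdmgr/sdmgr_novisual_60e_wildreceipt-openset.py')
print(cfg.model.kie_head.num_classes)   # 4 node classes: bg, key, value, other
print(cfg.train_dataloader.batch_size)  # 4
```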
diff --git a/spaces/MuGeminorum/insecta/khandy/image/rotate.py b/spaces/MuGeminorum/insecta/khandy/image/rotate.py
deleted file mode 100644
index 6f905d47e2da05cc33c414a95c230197b8a81ad0..0000000000000000000000000000000000000000
--- a/spaces/MuGeminorum/insecta/khandy/image/rotate.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import cv2
-import khandy
-import numpy as np
-
-
-def get_2d_rotation_matrix(angle, cx=0, cy=0, scale=1,
- degrees=True, dtype=np.float32):
- """
- References:
- `cv2.getRotationMatrix2D` in OpenCV
- """
- if degrees:
- angle = np.deg2rad(angle)
- c = scale * np.cos(angle)
- s = scale * np.sin(angle)
-
- tx = cx - cx * c + cy * s
- ty = cy - cx * s - cy * c
- return np.array([[ c, -s, tx],
- [ s, c, ty],
- [ 0, 0, 1]], dtype=dtype)
-
-
-def rotate_image(image, angle, scale=1.0, center=None,
- degrees=True, border_value=0, auto_bound=False):
- """Rotate an image.
-
- Args:
- image : ndarray
- Image to be rotated.
- angle : float
- Rotation angle in degrees, positive values mean clockwise rotation.
- center : tuple
- Center of the rotation in the source image, by default
- it is the center of the image.
- scale : float
- Isotropic scale factor.
- degrees : bool
- border_value : int
- Border value.
- auto_bound : bool
- Whether to adjust the image size to cover the whole rotated image.
-
- Returns:
- ndarray: The rotated image.
-
- References:
- mmcv.imrotate
- """
- assert khandy.is_numpy_image(image)
- image_height, image_width = image.shape[:2]
- if auto_bound:
- center = None
- if center is None:
- center = ((image_width - 1) * 0.5, (image_height - 1) * 0.5)
- assert isinstance(center, tuple)
-
- rotation_matrix = get_2d_rotation_matrix(angle, center[0], center[1], scale, degrees)
- if auto_bound:
- scale_cos = np.abs(rotation_matrix[0, 0])
- scale_sin = np.abs(rotation_matrix[0, 1])
- new_width = image_width * scale_cos + image_height * scale_sin
- new_height = image_width * scale_sin + image_height * scale_cos
-
- rotation_matrix[0, 2] += (new_width - image_width) * 0.5
- rotation_matrix[1, 2] += (new_height - image_height) * 0.5
-
- image_width = int(np.round(new_width))
- image_height = int(np.round(new_height))
- rotated = cv2.warpAffine(image, rotation_matrix[:2,:], (image_width, image_height),
- borderValue=border_value)
- return rotated
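A hedged usage sketch for the rotation helper above; the input is a dummy array.

```python
# Illustrative only: rotate by 30 degrees about the image center, expanding the canvas.
import numpy as np

image = np.zeros((240, 320, 3), dtype=np.uint8)
rotated = rotate_image(image, angle=30, auto_bound=True)
print(rotated.shape)   # larger than (240, 320, 3) because auto_bound=True
```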
diff --git a/spaces/NATSpeech/PortaSpeech/utils/commons/ddp_utils.py b/spaces/NATSpeech/PortaSpeech/utils/commons/ddp_utils.py
deleted file mode 100644
index 4b529198c13a1ffc622baea6e5178407b24aee8f..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/utils/commons/ddp_utils.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from torch.nn.parallel import DistributedDataParallel
-from torch.nn.parallel.distributed import _find_tensors
-import torch.optim
-import torch.utils.data
-import torch
-from packaging import version
-
-class DDP(DistributedDataParallel):
- """
- Override DistributedDataParallel.forward so that calls are routed to the wrapped
- module's training, validation, or test step as appropriate (Lightning-style).
- """
-
- def forward(self, *inputs, **kwargs): # pragma: no cover
- if version.parse(torch.__version__.split('+')[0]) < version.parse("1.11"):
- self._sync_params()
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- assert len(self.device_ids) == 1
- if self.module.training:
- output = self.module.training_step(*inputs[0], **kwargs[0])
- elif self.module.testing:
- output = self.module.test_step(*inputs[0], **kwargs[0])
- else:
- output = self.module.validation_step(*inputs[0], **kwargs[0])
- if torch.is_grad_enabled():
- # We'll return the output object verbatim since it is a freeform
- # object. We need to find any tensors in this object, though,
- # because we need to figure out which parameters were used during
- # this forward pass, to ensure we short circuit reduction for any
- # unused parameters. Only if `find_unused_parameters` is set.
- if self.find_unused_parameters:
- self.reducer.prepare_for_backward(list(_find_tensors(output)))
- else:
- self.reducer.prepare_for_backward([])
- else:
- from torch.nn.parallel.distributed import \
- logging, Join, _DDPSink, _tree_flatten_with_rref, _tree_unflatten_with_rref
- with torch.autograd.profiler.record_function("DistributedDataParallel.forward"):
- if torch.is_grad_enabled() and self.require_backward_grad_sync:
- self.logger.set_runtime_stats_and_log()
- self.num_iterations += 1
- self.reducer.prepare_for_forward()
-
- # Notify the join context that this process has not joined, if
- # needed
- work = Join.notify_join_context(self)
- if work:
- self.reducer._set_forward_pass_work_handle(
- work, self._divide_by_initial_world_size
- )
-
- # Calling _rebuild_buckets before forward compuation,
- # It may allocate new buckets before deallocating old buckets
- # inside _rebuild_buckets. To save peak memory usage,
- # call _rebuild_buckets before the peak memory usage increases
- # during forward computation.
- # This should be called only once during whole training period.
- if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
- logging.info("Reducer buckets have been rebuilt in this iteration.")
- self._has_rebuilt_buckets = True
-
- # sync params according to location (before/after forward) user
- # specified as part of hook, if hook was specified.
- buffer_hook_registered = hasattr(self, 'buffer_hook')
- if self._check_sync_bufs_pre_fwd():
- self._sync_buffers()
-
- if self._join_config.enable:
- # Notify joined ranks whether they should sync in backwards pass or not.
- self._check_global_requires_backward_grad_sync(is_joined_rank=False)
-
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- if self.module.training:
- output = self.module.training_step(*inputs[0], **kwargs[0])
- elif self.module.testing:
- output = self.module.test_step(*inputs[0], **kwargs[0])
- else:
- output = self.module.validation_step(*inputs[0], **kwargs[0])
-
- # sync params according to location (before/after forward) user
- # specified as part of hook, if hook was specified.
- if self._check_sync_bufs_post_fwd():
- self._sync_buffers()
-
- if torch.is_grad_enabled() and self.require_backward_grad_sync:
- self.require_forward_param_sync = True
- # We'll return the output object verbatim since it is a freeform
- # object. We need to find any tensors in this object, though,
- # because we need to figure out which parameters were used during
- # this forward pass, to ensure we short circuit reduction for any
- # unused parameters. Only if `find_unused_parameters` is set.
- if self.find_unused_parameters and not self.static_graph:
- # Do not need to populate this for static graph.
- self.reducer.prepare_for_backward(list(_find_tensors(output)))
- else:
- self.reducer.prepare_for_backward([])
- else:
- self.require_forward_param_sync = False
-
- # TODO: DDPSink is currently enabled for unused parameter detection and
- # static graph training for first iteration.
- if (self.find_unused_parameters and not self.static_graph) or (
- self.static_graph and self.num_iterations == 1
- ):
- state_dict = {
- 'static_graph': self.static_graph,
- 'num_iterations': self.num_iterations,
- }
-
- output_tensor_list, treespec, output_is_rref = _tree_flatten_with_rref(
- output
- )
- output_placeholders = [None for _ in range(len(output_tensor_list))]
- # Do not touch tensors that have no grad_fn, which can cause issues
- # such as https://github.com/pytorch/pytorch/issues/60733
- for i, output in enumerate(output_tensor_list):
- if torch.is_tensor(output) and output.grad_fn is None:
- output_placeholders[i] = output
-
- # When find_unused_parameters=True, makes tensors which require grad
- # run through the DDPSink backward pass. When not all outputs are
- # used in loss, this makes those corresponding tensors receive
- # undefined gradient which the reducer then handles to ensure
- # param.grad field is not touched and we don't error out.
- passthrough_tensor_list = _DDPSink.apply(
- self.reducer,
- state_dict,
- *output_tensor_list,
- )
- for i in range(len(output_placeholders)):
- if output_placeholders[i] is None:
- output_placeholders[i] = passthrough_tensor_list[i]
-
- # Reconstruct output data structure.
- output = _tree_unflatten_with_rref(
- output_placeholders, treespec, output_is_rref
- )
- return output
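To make the routing above concrete, here is a hypothetical module the wrapper could manage; it only needs `training_step`, `validation_step`, `test_step` and a boolean `testing` attribute, none of which are defined in this file.

```python
# Hypothetical sketch only: a toy task module compatible with the DDP override above.
import torch
from torch import nn


class ToyTask(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 1)
        self.testing = False   # checked by DDP.forward to select the test path

    def training_step(self, batch):
        return self.layer(batch).mean()

    def validation_step(self, batch):
        return self.layer(batch).mean()

    def test_step(self, batch):
        return self.layer(batch).mean()


# In a distributed run it would be wrapped roughly as:
#   model = DDP(ToyTask().cuda(), device_ids=[local_rank])
```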
diff --git a/spaces/NN520/AI/src/components/chat-suggestions.tsx b/spaces/NN520/AI/src/components/chat-suggestions.tsx
deleted file mode 100644
index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/components/chat-suggestions.tsx
+++ /dev/null
@@ -1,45 +0,0 @@
-import React, { useMemo } from 'react'
-import Image from 'next/image'
-import HelpIcon from '@/assets/images/help.svg'
-import { SuggestedResponse } from '@/lib/bots/bing/types'
-import { useBing } from '@/lib/hooks/use-bing'
-import { atom, useAtom } from 'jotai'
-
-type Suggestions = SuggestedResponse[]
-const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text }))
-const suggestionsAtom = atom<Suggestions>([])
-
-type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick<ReturnType<typeof useBing>, 'setInput'> & { suggestions?: Suggestions }
-
-export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) {
- const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom)
- const toggleSuggestions = (() => {
- if (currentSuggestions === helpSuggestions) {
- setSuggestions(suggestions)
- } else {
- setSuggestions(helpSuggestions)
- }
- })
-
- useMemo(() => {
- setSuggestions(suggestions)
- window.scrollBy(0, 2000)
- }, [suggestions.length])
-
- return currentSuggestions?.length ? (
-
-Upload a .PDF, then click "Load PDF to LangChain" and wait for the upload to complete.
-When everything is ready, you can start asking questions about the PDF.
-This version is set to store the chat history.
-
-"""
-
-
-with gr.Blocks(css=css) as demo:
- with gr.Column(elem_id="col-container"):
- gr.HTML(title)
-
- with gr.Column():
- openai_key = os.environ['OPENAI_API_KEY']
- pdf_doc = gr.File(label="Load a pdf", file_types=['.pdf'], type="file")
- with gr.Row():
- langchain_status = gr.Textbox(label="Status", placeholder="", interactive=False)
- load_pdf = gr.Button("Load pdf to langchain")
-
- chatbot = gr.Chatbot([], elem_id="chatbot").style(height=350)
- question = gr.Textbox(label="Question", placeholder="Type your question and hit Enter ")
- submit_btn = gr.Button("Send Message")
- load_pdf.click(loading_pdf, None, langchain_status, queue=False)
- load_pdf.click(pdf_changes, inputs=[pdf_doc], outputs=[langchain_status], queue=False)
- question.submit(add_text, [chatbot, question], [chatbot, question]).then(
- bot, chatbot, chatbot
- )
- submit_btn.click(add_text, [chatbot, question], [chatbot, question]).then(
- bot, chatbot, chatbot)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/andzhk/PNGInfo/README.md b/spaces/andzhk/PNGInfo/README.md
deleted file mode 100644
index 3414fdb74bdda67f5cfd723bf9872382c5755032..0000000000000000000000000000000000000000
--- a/spaces/andzhk/PNGInfo/README.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-title: PNG Info
-emoji: 🚀
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: wtfpl
----
-
-This project is published on [HuggingFace](https://huggingface.co/spaces/andzhk/PNGInfo)
-
-# PNG Info (png-params)
-
-This is a Gradio project for reading and displaying an image and its metadata from a URL.
-
-Currently, only PNG is supported.
-
-## Usage
-
-- Copy image address
-- Paste it into the **url** field
-- or Drag and Drop/Upload image
-- Submit
-
-The **Generation parameters** text can be used directly in the AUTOMATIC1111 UI.
-
-## Running locally
-
-### Prerequisites
-
-Python 3
-
-### Install requirements
-
-```bash
-pip install -r requirements.txt
-```
-
-### Run
-
-```bash
-python images.py
-```
-
-Use [nodemon](https://www.npmjs.com/package/nodemon) for development.
-
-```bash
-nodemon images.py
-```
-
-### Open UI
-
-Usually Gradio UI is running on http://127.0.0.1:7860
diff --git a/spaces/anhnv125/FRN/inference_onnx.py b/spaces/anhnv125/FRN/inference_onnx.py
deleted file mode 100644
index 093ec16eadb7b333ed19aabdccfce4546b150ae1..0000000000000000000000000000000000000000
--- a/spaces/anhnv125/FRN/inference_onnx.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import argparse
-import glob
-import os
-
-import librosa
-import numpy as np
-import onnx
-import onnxruntime
-import soundfile as sf
-import torch
-import tqdm
-
-from config import CONFIG
-
-parser = argparse.ArgumentParser()
-
-parser.add_argument('--onnx_path', default=None,
- help='path to onnx')
-args = parser.parse_args()
-
-if __name__ == '__main__':
- path = args.onnx_path
- window = CONFIG.DATA.window_size
- stride = CONFIG.DATA.stride
- onnx_model = onnx.load(path)
- options = onnxruntime.SessionOptions()
- options.intra_op_num_threads = 8
- options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
- session = onnxruntime.InferenceSession(path, options)
- input_names = [x.name for x in session.get_inputs()]
- output_names = [x.name for x in session.get_outputs()]
- print(input_names)
- print(output_names)
-
- audio_files = glob.glob(os.path.join(CONFIG.TEST.in_dir, '*.wav'))
- hann = torch.sqrt(torch.hann_window(window))
- os.makedirs(CONFIG.TEST.out_dir, exist_ok=True)
- for file in tqdm.tqdm(audio_files, total=len(audio_files)):
- sig, _ = librosa.load(file, sr=48000)
- sig = torch.tensor(sig)
- re_im = torch.stft(sig, window, stride, window=hann, return_complex=False).permute(1, 0, 2).unsqueeze(
- 1).numpy().astype(np.float32)
-
- inputs = {input_names[i]: np.zeros([d.dim_value for d in _input.type.tensor_type.shape.dim],
- dtype=np.float32)
- for i, _input in enumerate(onnx_model.graph.input)
- }
-
- output_audio = []
- for t in range(re_im.shape[0]):
- inputs[input_names[0]] = re_im[t]
- out, prev_mag, predictor_state, mlp_state = session.run(output_names, inputs)
- inputs[input_names[1]] = prev_mag
- inputs[input_names[2]] = predictor_state
- inputs[input_names[3]] = mlp_state
- output_audio.append(out)
-
- output_audio = torch.tensor(np.concatenate(output_audio, 0))
- output_audio = output_audio.permute(1, 0, 2).contiguous()
- output_audio = torch.view_as_complex(output_audio)
- output_audio = torch.istft(output_audio, window, stride, window=hann)
- sf.write(os.path.join(CONFIG.TEST.out_dir, os.path.basename(file)), output_audio, samplerate=48000,
- subtype='PCM_16')
diff --git a/spaces/anhnv125/FRN/models/blocks.py b/spaces/anhnv125/FRN/models/blocks.py
deleted file mode 100644
index fdfd57b3313c0449b365534018bc9876a7a3c4e6..0000000000000000000000000000000000000000
--- a/spaces/anhnv125/FRN/models/blocks.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import librosa
-import pytorch_lightning as pl
-import torch
-from einops.layers.torch import Rearrange
-from torch import nn
-
-
-class Aff(nn.Module):
- def __init__(self, dim):
- super().__init__()
-
- self.alpha = nn.Parameter(torch.ones([1, 1, dim]))
- self.beta = nn.Parameter(torch.zeros([1, 1, dim]))
-
- def forward(self, x):
- x = x * self.alpha + self.beta
- return x
-
-
-class FeedForward(nn.Module):
- def __init__(self, dim, hidden_dim, dropout=0.):
- super().__init__()
- self.net = nn.Sequential(
- nn.Linear(dim, hidden_dim),
- nn.GELU(),
- nn.Dropout(dropout),
- nn.Linear(hidden_dim, dim),
- nn.Dropout(dropout)
- )
-
- def forward(self, x):
- return self.net(x)
-
-
-class MLPBlock(nn.Module):
-
- def __init__(self, dim, mlp_dim, dropout=0., init_values=1e-4):
- super().__init__()
-
- self.pre_affine = Aff(dim)
- self.inter = nn.LSTM(input_size=dim, hidden_size=dim, num_layers=1,
- bidirectional=False, batch_first=True)
- self.ff = nn.Sequential(
- FeedForward(dim, mlp_dim, dropout),
- )
- self.post_affine = Aff(dim)
- self.gamma_1 = nn.Parameter(init_values * torch.ones(dim), requires_grad=True)
- self.gamma_2 = nn.Parameter(init_values * torch.ones(dim), requires_grad=True)
-
- def forward(self, x, state=None):
- x = self.pre_affine(x)
- if state is None:
- inter, _ = self.inter(x)
- else:
- inter, state = self.inter(x, (state[0], state[1]))
- x = x + self.gamma_1 * inter
- x = self.post_affine(x)
- x = x + self.gamma_2 * self.ff(x)
- if state is None:
- return x
- state = torch.stack(state, 0)
- return x, state
-
-
-class Encoder(nn.Module):
-
- def __init__(self, in_dim, dim, depth, mlp_dim):
- super().__init__()
- self.in_dim = in_dim
- self.dim = dim
- self.depth = depth
- self.mlp_dim = mlp_dim
- self.to_patch_embedding = nn.Sequential(
- Rearrange('b c f t -> b t (c f)'),
- nn.Linear(in_dim, dim),
- nn.GELU()
- )
-
- self.mlp_blocks = nn.ModuleList([])
-
- for _ in range(depth):
- self.mlp_blocks.append(MLPBlock(self.dim, mlp_dim, dropout=0.15))
-
- self.affine = nn.Sequential(
- Aff(self.dim),
- nn.Linear(dim, in_dim),
- Rearrange('b t (c f) -> b c f t', c=2),
- )
-
- def forward(self, x_in, states=None):
- x = self.to_patch_embedding(x_in)
- if states is not None:
- out_states = []
- for i, mlp_block in enumerate(self.mlp_blocks):
- if states is None:
- x = mlp_block(x)
- else:
- x, state = mlp_block(x, states[i])
- out_states.append(state)
- x = self.affine(x)
- x = x + x_in
- if states is None:
- return x
- else:
- return x, torch.stack(out_states, 0)
-
-
-class Predictor(pl.LightningModule): # mel
- def __init__(self, window_size=1536, sr=48000, lstm_dim=256, lstm_layers=3, n_mels=64):
- super(Predictor, self).__init__()
- self.window_size = window_size
- self.hop_size = window_size // 2
- self.lstm_dim = lstm_dim
- self.n_mels = n_mels
- self.lstm_layers = lstm_layers
-
- fb = librosa.filters.mel(sr=sr, n_fft=self.window_size, n_mels=self.n_mels)[:, 1:]
- self.fb = torch.from_numpy(fb).unsqueeze(0).unsqueeze(0)
- self.lstm = nn.LSTM(input_size=self.n_mels, hidden_size=self.lstm_dim, bidirectional=False,
- num_layers=self.lstm_layers, batch_first=True)
- self.expand_dim = nn.Linear(self.lstm_dim, self.n_mels)
- self.inv_mel = nn.Linear(self.n_mels, self.hop_size)
-
- def forward(self, x, state=None): # B, 2, F, T
-
- self.fb = self.fb.to(x.device)
- x = torch.log(torch.matmul(self.fb, x) + 1e-8)
- B, C, F, T = x.shape
- x = x.reshape(B, F * C, T)
- x = x.permute(0, 2, 1)
- if state is None:
- x, _ = self.lstm(x)
- else:
- x, state = self.lstm(x, (state[0], state[1]))
- x = self.expand_dim(x)
- x = torch.abs(self.inv_mel(torch.exp(x)))
- x = x.permute(0, 2, 1)
- x = x.reshape(B, C, -1, T)
- if state is None:
- return x
- else:
- return x, torch.stack(state, 0)
diff --git a/spaces/ankitinter9/my-draw-self-journey/model.py b/spaces/ankitinter9/my-draw-self-journey/model.py
deleted file mode 100644
index 62ca774f2f8fa923a823b58dd90af9c624fd8407..0000000000000000000000000000000000000000
--- a/spaces/ankitinter9/my-draw-self-journey/model.py
+++ /dev/null
@@ -1,309 +0,0 @@
-from __future__ import annotations
-
-import gc
-import json
-import tempfile
-from typing import Generator
-
-import numpy as np
-import PIL.Image
-import torch
-from diffusers import DiffusionPipeline, StableDiffusionUpscalePipeline
-from diffusers.pipelines.deepfloyd_if import (fast27_timesteps,
- smart27_timesteps,
- smart50_timesteps,
- smart100_timesteps,
- smart185_timesteps)
-
-from settings import (DISABLE_AUTOMATIC_CPU_OFFLOAD, DISABLE_SD_X4_UPSCALER,
- HF_TOKEN, MAX_NUM_IMAGES, MAX_NUM_STEPS, MAX_SEED,
- RUN_GARBAGE_COLLECTION)
-
-
-class Model:
- def __init__(self):
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self.pipe = None
- self.super_res_1_pipe = None
- self.super_res_2_pipe = None
- self.watermark_image = None
-
- if torch.cuda.is_available():
- self.load_weights()
- self.watermark_image = PIL.Image.fromarray(
- self.pipe.watermarker.watermark_image.to(
- torch.uint8).cpu().numpy(),
- mode='RGBA')
-
- def load_weights(self) -> None:
- self.pipe = DiffusionPipeline.from_pretrained(
- 'DeepFloyd/IF-I-XL-v1.0',
- torch_dtype=torch.float16,
- variant='fp16',
- use_safetensors=True,
- use_auth_token=HF_TOKEN)
- self.super_res_1_pipe = DiffusionPipeline.from_pretrained(
- 'DeepFloyd/IF-II-L-v1.0',
- text_encoder=None,
- torch_dtype=torch.float16,
- variant='fp16',
- use_safetensors=True,
- use_auth_token=HF_TOKEN)
-
- if not DISABLE_SD_X4_UPSCALER:
- self.super_res_2_pipe = StableDiffusionUpscalePipeline.from_pretrained(
- 'stabilityai/stable-diffusion-x4-upscaler',
- torch_dtype=torch.float16)
-
- if DISABLE_AUTOMATIC_CPU_OFFLOAD:
- self.pipe.to(self.device)
- self.super_res_1_pipe.to(self.device)
- if not DISABLE_SD_X4_UPSCALER:
- self.super_res_2_pipe.to(self.device)
- else:
- self.pipe.enable_model_cpu_offload()
- self.super_res_1_pipe.enable_model_cpu_offload()
- if not DISABLE_SD_X4_UPSCALER:
- self.super_res_2_pipe.enable_model_cpu_offload()
-
- def apply_watermark_to_sd_x4_upscaler_results(
- self, images: list[PIL.Image.Image]) -> None:
- w, h = images[0].size
-
- stability_x4_upscaler_sample_size = 128
-
- coef = min(h / stability_x4_upscaler_sample_size,
- w / stability_x4_upscaler_sample_size)
- img_h, img_w = (int(h / coef), int(w / coef)) if coef < 1 else (h, w)
-
- S1, S2 = 1024**2, img_w * img_h
- K = (S2 / S1)**0.5
- watermark_size = int(K * 62)
- watermark_x = img_w - int(14 * K)
- watermark_y = img_h - int(14 * K)
-
- watermark_image = self.watermark_image.copy().resize(
- (watermark_size, watermark_size),
- PIL.Image.Resampling.BICUBIC,
- reducing_gap=None)
-
- for image in images:
- image.paste(watermark_image,
- box=(
- watermark_x - watermark_size,
- watermark_y - watermark_size,
- watermark_x,
- watermark_y,
- ),
- mask=watermark_image.split()[-1])
-
- @staticmethod
- def to_pil_images(images: torch.Tensor) -> list[PIL.Image.Image]:
- images = (images / 2 + 0.5).clamp(0, 1)
- images = images.cpu().permute(0, 2, 3, 1).float().numpy()
- images = np.round(images * 255).astype(np.uint8)
- return [PIL.Image.fromarray(image) for image in images]
-
- @staticmethod
- def check_seed(seed: int) -> None:
- if not 0 <= seed <= MAX_SEED:
- raise ValueError
-
- @staticmethod
- def check_num_images(num_images: int) -> None:
- if not 1 <= num_images <= MAX_NUM_IMAGES:
- raise ValueError
-
- @staticmethod
- def check_num_inference_steps(num_steps: int) -> None:
- if not 1 <= num_steps <= MAX_NUM_STEPS:
- raise ValueError
-
- @staticmethod
- def get_custom_timesteps(name: str) -> list[int] | None:
- if name == 'none':
- timesteps = None
- elif name == 'fast27':
- timesteps = fast27_timesteps
- elif name == 'smart27':
- timesteps = smart27_timesteps
- elif name == 'smart50':
- timesteps = smart50_timesteps
- elif name == 'smart100':
- timesteps = smart100_timesteps
- elif name == 'smart185':
- timesteps = smart185_timesteps
- else:
- raise ValueError
- return timesteps
-
- @staticmethod
- def run_garbage_collection():
- gc.collect()
- torch.cuda.empty_cache()
-
- def run_stage1(
- self,
- prompt: str,
- negative_prompt: str = '',
- seed: int = 0,
- num_images: int = 1,
- guidance_scale_1: float = 7.0,
- custom_timesteps_1: str = 'smart100',
- num_inference_steps_1: int = 100,
- ) -> tuple[list[PIL.Image.Image], str, str]:
- self.check_seed(seed)
- self.check_num_images(num_images)
- self.check_num_inference_steps(num_inference_steps_1)
-
- if RUN_GARBAGE_COLLECTION:
- self.run_garbage_collection()
-
- generator = torch.Generator(device=self.device).manual_seed(seed)
-
- prompt_embeds, negative_embeds = self.pipe.encode_prompt(
- prompt=prompt, negative_prompt=negative_prompt)
-
- timesteps = self.get_custom_timesteps(custom_timesteps_1)
-
- images = self.pipe(prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_embeds,
- num_images_per_prompt=num_images,
- guidance_scale=guidance_scale_1,
- timesteps=timesteps,
- num_inference_steps=num_inference_steps_1,
- generator=generator,
- output_type='pt').images
- pil_images = self.to_pil_images(images)
- self.pipe.watermarker.apply_watermark(
- pil_images, self.pipe.unet.config.sample_size)
-
- stage1_params = {
- 'prompt': prompt,
- 'negative_prompt': negative_prompt,
- 'seed': seed,
- 'num_images': num_images,
- 'guidance_scale_1': guidance_scale_1,
- 'custom_timesteps_1': custom_timesteps_1,
- 'num_inference_steps_1': num_inference_steps_1,
- }
- with tempfile.NamedTemporaryFile(mode='w', delete=False) as param_file:
- param_file.write(json.dumps(stage1_params))
- stage1_result = {
- 'prompt_embeds': prompt_embeds,
- 'negative_embeds': negative_embeds,
- 'images': images,
- 'pil_images': pil_images,
- }
- with tempfile.NamedTemporaryFile(delete=False) as result_file:
- torch.save(stage1_result, result_file.name)
- return pil_images, param_file.name, result_file.name
-
- def run_stage2(
- self,
- stage1_result_path: str,
- stage2_index: int,
- seed_2: int = 0,
- guidance_scale_2: float = 4.0,
- custom_timesteps_2: str = 'smart50',
- num_inference_steps_2: int = 50,
- disable_watermark: bool = False,
- ) -> PIL.Image.Image:
- self.check_seed(seed_2)
- self.check_num_inference_steps(num_inference_steps_2)
-
- if RUN_GARBAGE_COLLECTION:
- self.run_garbage_collection()
-
- generator = torch.Generator(device=self.device).manual_seed(seed_2)
-
- stage1_result = torch.load(stage1_result_path)
- prompt_embeds = stage1_result['prompt_embeds']
- negative_embeds = stage1_result['negative_embeds']
- images = stage1_result['images']
- images = images[[stage2_index]]
-
- timesteps = self.get_custom_timesteps(custom_timesteps_2)
-
- out = self.super_res_1_pipe(image=images,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_embeds,
- num_images_per_prompt=1,
- guidance_scale=guidance_scale_2,
- timesteps=timesteps,
- num_inference_steps=num_inference_steps_2,
- generator=generator,
- output_type='pt',
- noise_level=250).images
- pil_images = self.to_pil_images(out)
-
- if disable_watermark:
- return pil_images[0]
-
- self.super_res_1_pipe.watermarker.apply_watermark(
- pil_images, self.super_res_1_pipe.unet.config.sample_size)
- return pil_images[0]
-
- def run_stage3(
- self,
- image: PIL.Image.Image,
- prompt: str = '',
- negative_prompt: str = '',
- seed_3: int = 0,
- guidance_scale_3: float = 9.0,
- num_inference_steps_3: int = 75,
- ) -> PIL.Image.Image:
- self.check_seed(seed_3)
- self.check_num_inference_steps(num_inference_steps_3)
-
- if RUN_GARBAGE_COLLECTION:
- self.run_garbage_collection()
-
- generator = torch.Generator(device=self.device).manual_seed(seed_3)
- out = self.super_res_2_pipe(image=image,
- prompt=prompt,
- negative_prompt=negative_prompt,
- num_images_per_prompt=1,
- guidance_scale=guidance_scale_3,
- num_inference_steps=num_inference_steps_3,
- generator=generator,
- noise_level=100).images
- self.apply_watermark_to_sd_x4_upscaler_results(out)
- return out[0]
-
- def run_stage2_3(
- self,
- stage1_result_path: str,
- stage2_index: int,
- seed_2: int = 0,
- guidance_scale_2: float = 4.0,
- custom_timesteps_2: str = 'smart50',
- num_inference_steps_2: int = 50,
- prompt: str = '',
- negative_prompt: str = '',
- seed_3: int = 0,
- guidance_scale_3: float = 9.0,
- num_inference_steps_3: int = 75,
- ) -> Generator[PIL.Image.Image, None, None]:
- self.check_seed(seed_3)
- self.check_num_inference_steps(num_inference_steps_3)
-
- out_image = self.run_stage2(
- stage1_result_path=stage1_result_path,
- stage2_index=stage2_index,
- seed_2=seed_2,
- guidance_scale_2=guidance_scale_2,
- custom_timesteps_2=custom_timesteps_2,
- num_inference_steps_2=num_inference_steps_2,
- disable_watermark=True)
- temp_image = out_image.copy()
- self.super_res_1_pipe.watermarker.apply_watermark(
- [temp_image], self.super_res_1_pipe.unet.config.sample_size)
- yield temp_image
- yield self.run_stage3(image=out_image,
- prompt=prompt,
- negative_prompt=negative_prompt,
- seed_3=seed_3,
- guidance_scale_3=guidance_scale_3,
- num_inference_steps_3=num_inference_steps_3)
diff --git a/spaces/anonderpling/repo_uploader/README.md b/spaces/anonderpling/repo_uploader/README.md
deleted file mode 100644
index 084d012543490c6a6ce80786610d012ddc5eae40..0000000000000000000000000000000000000000
--- a/spaces/anonderpling/repo_uploader/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Repo Uploader
-emoji: 😈
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-app_file: app.py
-pinned: true
-license: mit
-duplicated_from: osanseviero/repo_duplicator
-python_version: 3.11.2
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/datasets/pascal_zeroshot.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/datasets/pascal_zeroshot.py
deleted file mode 100644
index 3fa84de9049bf272538f97b408bed07a9e9b5478..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/datasets/pascal_zeroshot.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from os.path import expanduser
-import torch
-import json
-import torchvision
-from general_utils import get_from_repository
-from general_utils import log
-from torchvision import transforms
-
-PASCAL_VOC_CLASSES_ZS = [['cattle.n.01', 'motorcycle.n.01'], ['aeroplane.n.01', 'sofa.n.01'],
- ['cat.n.01', 'television.n.03'], ['train.n.01', 'bottle.n.01'],
- ['chair.n.01', 'pot_plant.n.01']]
-
-
-class PascalZeroShot(object):
-
- def __init__(self, split, n_unseen, image_size=224) -> None:
- super().__init__()
-
- import sys
- sys.path.append('third_party/JoEm')
- from third_party.JoEm.data_loader.dataset import VOCSegmentation
- from third_party.JoEm.data_loader import get_seen_idx, get_unseen_idx, VOC
-
- self.pascal_classes = VOC
- self.image_size = image_size
-
- self.transform = transforms.Compose([
- transforms.Resize((image_size, image_size)),
- ])
-
- if split == 'train':
- self.voc = VOCSegmentation(get_unseen_idx(n_unseen), get_seen_idx(n_unseen),
- split=split, transform=True, transform_args=dict(base_size=312, crop_size=312),
- ignore_bg=False, ignore_unseen=False, remv_unseen_img=True)
- elif split == 'val':
- self.voc = VOCSegmentation(get_unseen_idx(n_unseen), get_seen_idx(n_unseen),
- split=split, transform=False,
- ignore_bg=False, ignore_unseen=False)
-
- self.unseen_idx = get_unseen_idx(n_unseen)
-
- def __len__(self):
- return len(self.voc)
-
- def __getitem__(self, i):
-
- sample = self.voc[i]
- label = sample['label'].long()
- all_labels = [l for l in torch.where(torch.bincount(label.flatten())>0)[0].numpy().tolist() if l != 255]
- class_indices = [l for l in all_labels]
- class_names = [self.pascal_classes[l] for l in all_labels]
-
- image = self.transform(sample['image'])
-
- label = transforms.Resize((self.image_size, self.image_size),
- interpolation=torchvision.transforms.InterpolationMode.NEAREST)(label.unsqueeze(0))[0]
-
- return (image,), (label, )
-
-
diff --git a/spaces/arbitrarygate/ayaka_sign/bin/unidbg-fetch-qsign.bat b/spaces/arbitrarygate/ayaka_sign/bin/unidbg-fetch-qsign.bat
deleted file mode 100644
index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000
--- a/spaces/arbitrarygate/ayaka_sign/bin/unidbg-fetch-qsign.bat
+++ /dev/null
@@ -1,89 +0,0 @@
-@rem
-@rem Copyright 2015 the original author or authors.
-@rem
-@rem Licensed under the Apache License, Version 2.0 (the "License");
-@rem you may not use this file except in compliance with the License.
-@rem You may obtain a copy of the License at
-@rem
-@rem https://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-@rem
-
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem unidbg-fetch-qsign startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%..
-
-@rem Resolve any "." and ".." in APP_HOME to make it shorter.
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
-
-
-@rem Execute unidbg-fetch-qsign
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/audio_utils.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/audio_utils.py
deleted file mode 100644
index 70711ed7a485ecd4a8c8eb8ab6c338aa79871de7..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/audio_utils.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import os
-from glob import glob
-from typing import Dict, List
-
-import librosa
-import numpy as np
-import torch
-import torchaudio
-from scipy.io.wavfile import read
-
-from TTS.utils.audio.torch_transforms import TorchSTFT
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- if data.dtype == np.int32:
- norm_fix = 2**31
- elif data.dtype == np.int16:
- norm_fix = 2**15
- elif data.dtype == np.float16 or data.dtype == np.float32:
- norm_fix = 1.0
- else:
- raise NotImplementedError(f"Provided data dtype not supported: {data.dtype}")
- return (torch.FloatTensor(data.astype(np.float32)) / norm_fix, sampling_rate)
-
-
-def check_audio(audio, audiopath: str):
- # Check some assumptions about audio range. This should be automatically fixed in load_wav_to_torch, but might not be in some edge cases, where we should squawk.
- # '2' is arbitrarily chosen since it seems like audio will often "overdrive" the [-1,1] bounds.
- if torch.any(audio > 2) or not torch.any(audio < 0):
- print(f"Error with {audiopath}. Max={audio.max()} min={audio.min()}")
- audio.clip_(-1, 1)
-
-
-def read_audio_file(audiopath: str):
- if audiopath[-4:] == ".wav":
- audio, lsr = load_wav_to_torch(audiopath)
- elif audiopath[-4:] == ".mp3":
- audio, lsr = librosa.load(audiopath, sr=None)
- audio = torch.FloatTensor(audio)
- else:
- assert False, f"Unsupported audio format provided: {audiopath[-4:]}"
-
- # Remove any channel data.
- if len(audio.shape) > 1:
- if audio.shape[0] < 5:
- audio = audio[0]
- else:
- assert audio.shape[1] < 5
- audio = audio[:, 0]
-
- return audio, lsr
-
-
-def load_required_audio(audiopath: str):
- audio, lsr = read_audio_file(audiopath)
-
- audios = [torchaudio.functional.resample(audio, lsr, sampling_rate) for sampling_rate in (22050, 24000)]
- for audio in audios:
- check_audio(audio, audiopath)
-
- return [audio.unsqueeze(0) for audio in audios]
-
-
-def load_audio(audiopath, sampling_rate):
- audio, lsr = read_audio_file(audiopath)
-
- if lsr != sampling_rate:
- audio = torchaudio.functional.resample(audio, lsr, sampling_rate)
- check_audio(audio, audiopath)
-
- return audio.unsqueeze(0)
-
-
-TACOTRON_MEL_MAX = 2.3143386840820312
-TACOTRON_MEL_MIN = -11.512925148010254
-
-
-def denormalize_tacotron_mel(norm_mel):
- return ((norm_mel + 1) / 2) * (TACOTRON_MEL_MAX - TACOTRON_MEL_MIN) + TACOTRON_MEL_MIN
-
-
-def normalize_tacotron_mel(mel):
- return 2 * ((mel - TACOTRON_MEL_MIN) / (TACOTRON_MEL_MAX - TACOTRON_MEL_MIN)) - 1
-
-
-def dynamic_range_compression(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
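- # Round-trip illustration (not part of the original module): for inputs above
- # ``clip_val`` the two helpers above are exact inverses, since
- # exp(log(clamp(x) * C)) / C == x.
- #
- # >>> x = torch.tensor([1e-3, 0.1, 1.0, 10.0])
- # >>> torch.allclose(dynamic_range_decompression(dynamic_range_compression(x)), x)
- # True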
-
-def get_voices(extra_voice_dirs: List[str] = []):
- dirs = extra_voice_dirs
- voices: Dict[str, List[str]] = {}
- for d in dirs:
- subs = os.listdir(d)
- for sub in subs:
- subj = os.path.join(d, sub)
- if os.path.isdir(subj):
- voices[sub] = list(glob(f"{subj}/*.wav")) + list(glob(f"{subj}/*.mp3")) + list(glob(f"{subj}/*.pth"))
- return voices
-
-
-def load_voice(voice: str, extra_voice_dirs: List[str] = []):
- if voice == "random":
- return None, None
-
- voices = get_voices(extra_voice_dirs)
- paths = voices[voice]
- if len(paths) == 1 and paths[0].endswith(".pth"):
- return None, torch.load(paths[0])
- else:
- conds = []
- for cond_path in paths:
- c = load_required_audio(cond_path)
- conds.append(c)
- return conds, None
-
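- # Usage sketch (directory and voice names are illustrative): each sub-directory
- # of an extra voice dir is treated as one voice and may contain .wav / .mp3
- # clips or a single precomputed .pth latent; ``load_voice`` returns
- # (clips, None) or (None, latent) accordingly.
- #
- # >>> voices = get_voices(extra_voice_dirs=["my_voices"])
- # >>> clips, latent = load_voice("alice", ["my_voices"])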
-
-def load_voices(voices: List[str], extra_voice_dirs: List[str] = []):
- latents = []
- clips = []
- for voice in voices:
- if voice == "random":
- if len(voices) > 1:
- print("Cannot combine a random voice with a non-random voice. Just using a random voice.")
- return None, None
- clip, latent = load_voice(voice, extra_voice_dirs)
- if latent is None:
- assert (
- len(latents) == 0
- ), "Can only combine raw audio voices or latent voices, not both. Do it yourself if you want this."
- clips.extend(clip)
- elif clip is None:
- assert (
- len(clips) == 0
- ), "Can only combine raw audio voices or latent voices, not both. Do it yourself if you want this."
- latents.append(latent)
- if len(latents) == 0:
- return clips, None
- else:
- latents_0 = torch.stack([l[0] for l in latents], dim=0).mean(dim=0)
- latents_1 = torch.stack([l[1] for l in latents], dim=0).mean(dim=0)
- latents = (latents_0, latents_1)
- return None, latents
-
-
-def wav_to_univnet_mel(wav, do_normalization=False, device="cuda"):
- stft = TorchSTFT(
- n_fft=1024,
- hop_length=256,
- win_length=1024,
- use_mel=True,
- n_mels=100,
- sample_rate=24000,
- mel_fmin=0,
- mel_fmax=12000,
- )
- stft = stft.to(device)
- mel = stft(wav)
- mel = dynamic_range_compression(mel)
- if do_normalization:
- mel = normalize_tacotron_mel(mel)
- return mel
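- # Shape sketch (illustrative; the exact frame count depends on the STFT
- # padding): a 24 kHz waveform batch [B, T] is mapped to a 100-bin mel
- # spectrogram of roughly [B, 100, T // 256] by the transform configured above.
- #
- # >>> wav = torch.randn(1, 24000)
- # >>> wav_to_univnet_mel(wav, device="cpu").shape[1]
- # 100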
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/models/align_tts.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/models/align_tts.py
deleted file mode 100644
index b2e51de7d6ab37951e3838e6804ca8e9b71338cf..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/models/align_tts.py
+++ /dev/null
@@ -1,448 +0,0 @@
-from dataclasses import dataclass, field
-from typing import Dict, List, Union
-
-import torch
-from coqpit import Coqpit
-from torch import nn
-
-from TTS.tts.layers.align_tts.mdn import MDNBlock
-from TTS.tts.layers.feed_forward.decoder import Decoder
-from TTS.tts.layers.feed_forward.duration_predictor import DurationPredictor
-from TTS.tts.layers.feed_forward.encoder import Encoder
-from TTS.tts.layers.generic.pos_encoding import PositionalEncoding
-from TTS.tts.models.base_tts import BaseTTS
-from TTS.tts.utils.helpers import generate_path, maximum_path, sequence_mask
-from TTS.tts.utils.speakers import SpeakerManager
-from TTS.tts.utils.text.tokenizer import TTSTokenizer
-from TTS.tts.utils.visual import plot_alignment, plot_spectrogram
-from TTS.utils.io import load_fsspec
-
-
-@dataclass
-class AlignTTSArgs(Coqpit):
- """
- Args:
- num_chars (int):
- number of unique input to characters
- out_channels (int):
- number of output tensor channels. It is equal to the expected spectrogram size.
- hidden_channels (int):
- number of channels in all the model layers.
- hidden_channels_ffn (int):
- number of channels in transformer's conv layers.
- hidden_channels_dp (int):
- number of channels in duration predictor network.
- num_heads (int):
- number of attention heads in transformer networks.
- num_transformer_layers (int):
- number of layers in encoder and decoder transformer blocks.
- dropout_p (int):
- dropout rate in transformer layers.
- length_scale (int, optional):
- coefficient multiplied with the predicted durations to set the speech speed; >1 is slower, <1 is faster. Defaults to 1.
- num_speakers (int, optional):
- number of speakers for multi-speaker training. Defaults to 0.
- external_c (bool, optional):
- enable external speaker embeddings. Defaults to False.
- c_in_channels (int, optional):
- number of channels in speaker embedding vectors. Defaults to 0.
- """
-
- num_chars: int = None
- out_channels: int = 80
- hidden_channels: int = 256
- hidden_channels_dp: int = 256
- encoder_type: str = "fftransformer"
- encoder_params: dict = field(
- default_factory=lambda: {"hidden_channels_ffn": 1024, "num_heads": 2, "num_layers": 6, "dropout_p": 0.1}
- )
- decoder_type: str = "fftransformer"
- decoder_params: dict = field(
- default_factory=lambda: {"hidden_channels_ffn": 1024, "num_heads": 2, "num_layers": 6, "dropout_p": 0.1}
- )
- length_scale: float = 1.0
- num_speakers: int = 0
- use_speaker_embedding: bool = False
- use_d_vector_file: bool = False
- d_vector_dim: int = 0
-
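- # Instantiation sketch (illustrative values; not part of the original file):
- # unspecified fields keep the defaults defined above.
- #
- # >>> args = AlignTTSArgs(num_chars=120, out_channels=80)
- # >>> args.encoder_type
- # 'fftransformer'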
-
-class AlignTTS(BaseTTS):
- """AlignTTS with modified duration predictor.
- https://arxiv.org/pdf/2003.01950.pdf
-
- Encoder -> DurationPredictor -> Decoder
-
- Check :class:`AlignTTSArgs` for the class arguments.
-
- Paper Abstract:
- Targeting at both high efficiency and performance, we propose AlignTTS to predict the
- mel-spectrum in parallel. AlignTTS is based on a Feed-Forward Transformer which generates mel-spectrum from a
- sequence of characters, and the duration of each character is determined by a duration predictor. Instead of
- adopting the attention mechanism in Transformer TTS to align text to mel-spectrum, the alignment loss is presented
- to consider all possible alignments in training by use of dynamic programming. Experiments on the LJSpeech dataset
- show that our model achieves not only state-of-the-art performance which outperforms Transformer TTS by 0.03 in mean
- opinion score (MOS), but also a high efficiency which is more than 50 times faster than real-time.
-
- Note:
- The original model uses a separate character embedding layer for the duration predictor. However, this causes the
- duration predictor to overfit and prevents learning higher-level interactions among characters. Therefore,
- we predict durations based on the encoder outputs, which carry higher-level information about the input characters. This
- enables training without phases as in the original paper.
-
- The original model uses Transformers in the encoder and decoder layers. However, here you can set the architecture
- differently based on your requirements using the ```encoder_type``` and ```decoder_type``` parameters.
-
- Examples:
- >>> from TTS.tts.configs.align_tts_config import AlignTTSConfig
- >>> config = AlignTTSConfig()
- >>> model = AlignTTS(config)
-
- """
-
- # pylint: disable=dangerous-default-value
-
- def __init__(
- self,
- config: "AlignTTSConfig",
- ap: "AudioProcessor" = None,
- tokenizer: "TTSTokenizer" = None,
- speaker_manager: SpeakerManager = None,
- ):
- super().__init__(config, ap, tokenizer, speaker_manager)
- self.speaker_manager = speaker_manager
- self.phase = -1
- self.length_scale = (
- float(config.model_args.length_scale)
- if isinstance(config.model_args.length_scale, int)
- else config.model_args.length_scale
- )
-
- self.emb = nn.Embedding(self.config.model_args.num_chars, self.config.model_args.hidden_channels)
-
- self.embedded_speaker_dim = 0
- self.init_multispeaker(config)
-
- self.pos_encoder = PositionalEncoding(config.model_args.hidden_channels)
- self.encoder = Encoder(
- config.model_args.hidden_channels,
- config.model_args.hidden_channels,
- config.model_args.encoder_type,
- config.model_args.encoder_params,
- self.embedded_speaker_dim,
- )
- self.decoder = Decoder(
- config.model_args.out_channels,
- config.model_args.hidden_channels,
- config.model_args.decoder_type,
- config.model_args.decoder_params,
- )
- self.duration_predictor = DurationPredictor(config.model_args.hidden_channels_dp)
-
- self.mod_layer = nn.Conv1d(config.model_args.hidden_channels, config.model_args.hidden_channels, 1)
-
- self.mdn_block = MDNBlock(config.model_args.hidden_channels, 2 * config.model_args.out_channels)
-
- if self.embedded_speaker_dim > 0 and self.embedded_speaker_dim != config.model_args.hidden_channels:
- self.proj_g = nn.Conv1d(self.embedded_speaker_dim, config.model_args.hidden_channels, 1)
-
- @staticmethod
- def compute_log_probs(mu, log_sigma, y):
- # pylint: disable=protected-access, c-extension-no-member
- y = y.transpose(1, 2).unsqueeze(1) # [B, 1, T1, D]
- mu = mu.transpose(1, 2).unsqueeze(2) # [B, T2, 1, D]
- log_sigma = log_sigma.transpose(1, 2).unsqueeze(2) # [B, T2, 1, D]
- expanded_y, expanded_mu = torch.broadcast_tensors(y, mu)
- exponential = -0.5 * torch.mean(
- torch._C._nn.mse_loss(expanded_y, expanded_mu, 0) / torch.pow(log_sigma.exp(), 2), dim=-1
- ) # B, L, T
- logp = exponential - 0.5 * log_sigma.mean(dim=-1)
- return logp
-
- def compute_align_path(self, mu, log_sigma, y, x_mask, y_mask):
- # find the max alignment path
- attn_mask = torch.unsqueeze(x_mask, -1) * torch.unsqueeze(y_mask, 2)
- log_p = self.compute_log_probs(mu, log_sigma, y)
- # [B, T_en, T_dec]
- attn = maximum_path(log_p, attn_mask.squeeze(1)).unsqueeze(1)
- dr_mas = torch.sum(attn, -1)
- return dr_mas.squeeze(1), log_p
-
- @staticmethod
- def generate_attn(dr, x_mask, y_mask=None):
- # compute decode mask from the durations
- if y_mask is None:
- y_lengths = dr.sum(1).long()
- y_lengths[y_lengths < 1] = 1
- y_mask = torch.unsqueeze(sequence_mask(y_lengths, None), 1).to(dr.dtype)
- attn_mask = torch.unsqueeze(x_mask, -1) * torch.unsqueeze(y_mask, 2)
- attn = generate_path(dr, attn_mask.squeeze(1)).to(dr.dtype)
- return attn
-
- def expand_encoder_outputs(self, en, dr, x_mask, y_mask):
- """Generate attention alignment map from durations and
- expand encoder outputs
-
- Examples::
- - encoder output: [a,b,c,d]
- - durations: [1, 3, 2, 1]
-
- - expanded: [a, b, b, b, c, c, d]
- - attention map: [[0, 0, 0, 0, 0, 0, 1],
- [0, 0, 0, 0, 1, 1, 0],
- [0, 1, 1, 1, 0, 0, 0],
- [1, 0, 0, 0, 0, 0, 0]]
- """
- attn = self.generate_attn(dr, x_mask, y_mask)
- o_en_ex = torch.matmul(attn.squeeze(1).transpose(1, 2), en.transpose(1, 2)).transpose(1, 2)
- return o_en_ex, attn
-
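- # The docstring example above corresponds to repeating each encoder frame by
- # its duration (shown here with plain tensors rather than the masked matmul
- # used in the implementation):
- #
- # >>> en = torch.tensor([[1., 2., 3., 4.]])   # encoder frames "a, b, c, d"
- # >>> dr = torch.tensor([1, 3, 2, 1])         # durations
- # >>> torch.repeat_interleave(en, dr, dim=1)
- # tensor([[1., 2., 2., 2., 3., 3., 4.]])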
- def format_durations(self, o_dr_log, x_mask):
- o_dr = (torch.exp(o_dr_log) - 1) * x_mask * self.length_scale
- o_dr[o_dr < 1] = 1.0
- o_dr = torch.round(o_dr)
- return o_dr
-
- @staticmethod
- def _concat_speaker_embedding(o_en, g):
- g_exp = g.expand(-1, -1, o_en.size(-1)) # [B, C, T_en]
- o_en = torch.cat([o_en, g_exp], 1)
- return o_en
-
- def _sum_speaker_embedding(self, x, g):
- # project g to decoder dim.
- if hasattr(self, "proj_g"):
- g = self.proj_g(g)
-
- return x + g
-
- def _forward_encoder(self, x, x_lengths, g=None):
- if hasattr(self, "emb_g"):
- g = nn.functional.normalize(self.speaker_embedding(g)) # [B, C, 1]
-
- if g is not None:
- g = g.unsqueeze(-1)
-
- # [B, T, C]
- x_emb = self.emb(x)
- # [B, C, T]
- x_emb = torch.transpose(x_emb, 1, -1)
-
- # compute sequence masks
- x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.shape[1]), 1).to(x.dtype)
-
- # encoder pass
- o_en = self.encoder(x_emb, x_mask)
-
- # speaker conditioning for duration predictor
- if g is not None:
- o_en_dp = self._concat_speaker_embedding(o_en, g)
- else:
- o_en_dp = o_en
- return o_en, o_en_dp, x_mask, g
-
- def _forward_decoder(self, o_en, o_en_dp, dr, x_mask, y_lengths, g):
- y_mask = torch.unsqueeze(sequence_mask(y_lengths, None), 1).to(o_en_dp.dtype)
- # expand o_en with durations
- o_en_ex, attn = self.expand_encoder_outputs(o_en, dr, x_mask, y_mask)
- # positional encoding
- if hasattr(self, "pos_encoder"):
- o_en_ex = self.pos_encoder(o_en_ex, y_mask)
- # speaker embedding
- if g is not None:
- o_en_ex = self._sum_speaker_embedding(o_en_ex, g)
- # decoder pass
- o_de = self.decoder(o_en_ex, y_mask, g=g)
- return o_de, attn.transpose(1, 2)
-
- def _forward_mdn(self, o_en, y, y_lengths, x_mask):
- # MAS potentials and alignment
- mu, log_sigma = self.mdn_block(o_en)
- y_mask = torch.unsqueeze(sequence_mask(y_lengths, None), 1).to(o_en.dtype)
- dr_mas, logp = self.compute_align_path(mu, log_sigma, y, x_mask, y_mask)
- return dr_mas, mu, log_sigma, logp
-
- def forward(
- self, x, x_lengths, y, y_lengths, aux_input={"d_vectors": None}, phase=None
- ): # pylint: disable=unused-argument
- """
- Shapes:
- - x: :math:`[B, T_max]`
- - x_lengths: :math:`[B]`
- - y_lengths: :math:`[B]`
- - dr: :math:`[B, T_max]`
- - g: :math:`[B, C]`
- """
- y = y.transpose(1, 2)
- g = aux_input["d_vectors"] if "d_vectors" in aux_input else None
- o_de, o_dr_log, dr_mas_log, attn, mu, log_sigma, logp = None, None, None, None, None, None, None
- if phase == 0:
- # train encoder and MDN
- o_en, o_en_dp, x_mask, g = self._forward_encoder(x, x_lengths, g)
- dr_mas, mu, log_sigma, logp = self._forward_mdn(o_en, y, y_lengths, x_mask)
- y_mask = torch.unsqueeze(sequence_mask(y_lengths, None), 1).to(o_en_dp.dtype)
- attn = self.generate_attn(dr_mas, x_mask, y_mask)
- elif phase == 1:
- # train decoder
- o_en, o_en_dp, x_mask, g = self._forward_encoder(x, x_lengths, g)
- dr_mas, _, _, _ = self._forward_mdn(o_en, y, y_lengths, x_mask)
- o_de, attn = self._forward_decoder(o_en.detach(), o_en_dp.detach(), dr_mas.detach(), x_mask, y_lengths, g=g)
- elif phase == 2:
- # train the whole except duration predictor
- o_en, o_en_dp, x_mask, g = self._forward_encoder(x, x_lengths, g)
- dr_mas, mu, log_sigma, logp = self._forward_mdn(o_en, y, y_lengths, x_mask)
- o_de, attn = self._forward_decoder(o_en, o_en_dp, dr_mas, x_mask, y_lengths, g=g)
- elif phase == 3:
- # train duration predictor
- o_en, o_en_dp, x_mask, g = self._forward_encoder(x, x_lengths, g)
- o_dr_log = self.duration_predictor(x, x_mask)
- dr_mas, mu, log_sigma, logp = self._forward_mdn(o_en, y, y_lengths, x_mask)
- o_de, attn = self._forward_decoder(o_en, o_en_dp, dr_mas, x_mask, y_lengths, g=g)
- o_dr_log = o_dr_log.squeeze(1)
- else:
- o_en, o_en_dp, x_mask, g = self._forward_encoder(x, x_lengths, g)
- o_dr_log = self.duration_predictor(o_en_dp.detach(), x_mask)
- dr_mas, mu, log_sigma, logp = self._forward_mdn(o_en, y, y_lengths, x_mask)
- o_de, attn = self._forward_decoder(o_en, o_en_dp, dr_mas, x_mask, y_lengths, g=g)
- o_dr_log = o_dr_log.squeeze(1)
- dr_mas_log = torch.log(dr_mas + 1).squeeze(1)
- outputs = {
- "model_outputs": o_de.transpose(1, 2),
- "alignments": attn,
- "durations_log": o_dr_log,
- "durations_mas_log": dr_mas_log,
- "mu": mu,
- "log_sigma": log_sigma,
- "logp": logp,
- }
- return outputs
-
- @torch.no_grad()
- def inference(self, x, aux_input={"d_vectors": None}): # pylint: disable=unused-argument
- """
- Shapes:
- - x: :math:`[B, T_max]`
- - x_lengths: :math:`[B]`
- - g: :math:`[B, C]`
- """
- g = aux_input["d_vectors"] if "d_vectors" in aux_input else None
- x_lengths = torch.tensor(x.shape[1:2]).to(x.device)
- # pad input to prevent dropping the last word
- # x = torch.nn.functional.pad(x, pad=(0, 5), mode='constant', value=0)
- o_en, o_en_dp, x_mask, g = self._forward_encoder(x, x_lengths, g)
- # o_dr_log = self.duration_predictor(x, x_mask)
- o_dr_log = self.duration_predictor(o_en_dp, x_mask)
- # duration predictor pass
- o_dr = self.format_durations(o_dr_log, x_mask).squeeze(1)
- y_lengths = o_dr.sum(1)
- o_de, attn = self._forward_decoder(o_en, o_en_dp, o_dr, x_mask, y_lengths, g=g)
- outputs = {"model_outputs": o_de.transpose(1, 2), "alignments": attn}
- return outputs
-
- def train_step(self, batch: dict, criterion: nn.Module):
- text_input = batch["text_input"]
- text_lengths = batch["text_lengths"]
- mel_input = batch["mel_input"]
- mel_lengths = batch["mel_lengths"]
- d_vectors = batch["d_vectors"]
- speaker_ids = batch["speaker_ids"]
-
- aux_input = {"d_vectors": d_vectors, "speaker_ids": speaker_ids}
- outputs = self.forward(text_input, text_lengths, mel_input, mel_lengths, aux_input, self.phase)
- loss_dict = criterion(
- outputs["logp"],
- outputs["model_outputs"],
- mel_input,
- mel_lengths,
- outputs["durations_log"],
- outputs["durations_mas_log"],
- text_lengths,
- phase=self.phase,
- )
-
- return outputs, loss_dict
-
- def _create_logs(self, batch, outputs, ap): # pylint: disable=no-self-use
- model_outputs = outputs["model_outputs"]
- alignments = outputs["alignments"]
- mel_input = batch["mel_input"]
-
- pred_spec = model_outputs[0].data.cpu().numpy()
- gt_spec = mel_input[0].data.cpu().numpy()
- align_img = alignments[0].data.cpu().numpy()
-
- figures = {
- "prediction": plot_spectrogram(pred_spec, ap, output_fig=False),
- "ground_truth": plot_spectrogram(gt_spec, ap, output_fig=False),
- "alignment": plot_alignment(align_img, output_fig=False),
- }
-
- # Sample audio
- train_audio = ap.inv_melspectrogram(pred_spec.T)
- return figures, {"audio": train_audio}
-
- def train_log(
- self, batch: dict, outputs: dict, logger: "Logger", assets: dict, steps: int
- ) -> None: # pylint: disable=no-self-use
- figures, audios = self._create_logs(batch, outputs, self.ap)
- logger.train_figures(steps, figures)
- logger.train_audios(steps, audios, self.ap.sample_rate)
-
- def eval_step(self, batch: dict, criterion: nn.Module):
- return self.train_step(batch, criterion)
-
- def eval_log(self, batch: dict, outputs: dict, logger: "Logger", assets: dict, steps: int) -> None:
- figures, audios = self._create_logs(batch, outputs, self.ap)
- logger.eval_figures(steps, figures)
- logger.eval_audios(steps, audios, self.ap.sample_rate)
-
- def load_checkpoint(
- self, config, checkpoint_path, eval=False, cache=False
- ): # pylint: disable=unused-argument, redefined-builtin
- state = load_fsspec(checkpoint_path, map_location=torch.device("cpu"), cache=cache)
- self.load_state_dict(state["model"])
- if eval:
- self.eval()
- assert not self.training
-
- def get_criterion(self):
- from TTS.tts.layers.losses import AlignTTSLoss # pylint: disable=import-outside-toplevel
-
- return AlignTTSLoss(self.config)
-
- @staticmethod
- def _set_phase(config, global_step):
- """Decide AlignTTS training phase"""
- if isinstance(config.phase_start_steps, list):
- vals = [i < global_step for i in config.phase_start_steps]
- if True not in vals:
- phase = 0
- else:
- phase = (
- len(config.phase_start_steps)
- - [i < global_step for i in config.phase_start_steps][::-1].index(True)
- - 1
- )
- else:
- phase = None
- return phase
-
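- # Example (illustrative thresholds): with ``phase_start_steps = [0, 40000, 80000]``
- # and ``global_step = 50000`` only the first two thresholds have been passed,
- # so the active phase index is 1.
- #
- # >>> from types import SimpleNamespace
- # >>> AlignTTS._set_phase(SimpleNamespace(phase_start_steps=[0, 40000, 80000]), 50000)
- # 1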
- def on_epoch_start(self, trainer):
- """Set AlignTTS training phase on epoch start."""
- self.phase = self._set_phase(trainer.config, trainer.total_steps_done)
-
- @staticmethod
- def init_from_config(config: "AlignTTSConfig", samples: Union[List[List], List[Dict]] = None):
- """Initiate model from config
-
- Args:
- config (AlignTTSConfig): Model config.
- samples (Union[List[List], List[Dict]]): Training samples to parse speaker ids for training.
- Defaults to None.
- """
- from TTS.utils.audio import AudioProcessor
-
- ap = AudioProcessor.init_from_config(config)
- tokenizer, new_config = TTSTokenizer.init_from_config(config)
- speaker_manager = SpeakerManager.init_from_config(config, samples)
- return AlignTTS(new_config, ap, tokenizer, speaker_manager)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugger/Tests/test_libpython_in_gdb.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugger/Tests/test_libpython_in_gdb.py
deleted file mode 100644
index 6f34cee47b34770d43b31cf4762a6ff97ff07b80..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugger/Tests/test_libpython_in_gdb.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# -*- coding: UTF-8 -*-
-
-"""
-Test libpython.py. This is already partly tested by test_libcython_in_gdb and
-Lib/test/test_gdb.py in the Python source. These tests are run in gdb and
-called from test_libcython_in_gdb.main()
-"""
-
-import os
-import sys
-
-import gdb
-
-from Cython.Debugger import libcython
-from Cython.Debugger import libpython
-
-from . import test_libcython_in_gdb
-from .test_libcython_in_gdb import _debug, inferior_python_version
-
-
-class TestPrettyPrinters(test_libcython_in_gdb.DebugTestCase):
- """
- Test whether types of Python objects are correctly inferred and that
- the right libpython.PySomeTypeObjectPtr classes are instantiated.
-
- Also test whether values are appropriately formatted (don't be too
- laborious as Lib/test/test_gdb.py already covers this extensively).
-
- Don't take care of decreffing newly allocated objects as a new
- interpreter is started for every test anyway.
- """
-
- def setUp(self):
- super(TestPrettyPrinters, self).setUp()
- self.break_and_run('b = c = d = 0')
-
- def get_pyobject(self, code):
- value = gdb.parse_and_eval(code)
- assert libpython.pointervalue(value) != 0
- return value
-
- def pyobject_fromcode(self, code, gdbvar=None):
- if gdbvar is not None:
- d = {'varname':gdbvar, 'code':code}
- gdb.execute('set $%(varname)s = %(code)s' % d)
- code = '$' + gdbvar
-
- return libpython.PyObjectPtr.from_pyobject_ptr(self.get_pyobject(code))
-
- def get_repr(self, pyobject):
- return pyobject.get_truncated_repr(libpython.MAX_OUTPUT_LEN)
-
- def alloc_bytestring(self, string, gdbvar=None):
- if inferior_python_version < (3, 0):
- funcname = 'PyString_FromStringAndSize'
- else:
- funcname = 'PyBytes_FromStringAndSize'
-
- assert b'"' not in string
-
- # ensure double quotes
- code = '(PyObject *) %s("%s", %d)' % (funcname, string.decode('iso8859-1'), len(string))
- return self.pyobject_fromcode(code, gdbvar=gdbvar)
-
- def alloc_unicodestring(self, string, gdbvar=None):
- postfix = libpython.get_inferior_unicode_postfix()
- funcname = 'PyUnicode%s_DecodeUnicodeEscape' % (postfix,)
-
- data = string.encode("unicode_escape").decode('iso8859-1')
- return self.pyobject_fromcode(
- '(PyObject *) %s("%s", %d, "strict")' % (
- funcname, data.replace('"', r'\"').replace('\\', r'\\'), len(data)),
- gdbvar=gdbvar)
-
- def test_bytestring(self):
- bytestring = self.alloc_bytestring(b"spam")
-
- if inferior_python_version < (3, 0):
- bytestring_class = libpython.PyStringObjectPtr
- expected = repr(b"spam")
- else:
- bytestring_class = libpython.PyBytesObjectPtr
- expected = "b'spam'"
-
- self.assertEqual(type(bytestring), bytestring_class)
- self.assertEqual(self.get_repr(bytestring), expected)
-
- def test_unicode(self):
- unicode_string = self.alloc_unicodestring(u"spam ἄλφα")
-
- expected = u"'spam ἄλφα'"
- if inferior_python_version < (3, 0):
- expected = 'u' + expected
-
- self.assertEqual(type(unicode_string), libpython.PyUnicodeObjectPtr)
- self.assertEqual(self.get_repr(unicode_string), expected)
-
- def test_int(self):
- if inferior_python_version < (3, 0):
- intval = self.pyobject_fromcode('PyInt_FromLong(100)')
- self.assertEqual(type(intval), libpython.PyIntObjectPtr)
- self.assertEqual(self.get_repr(intval), '100')
-
- def test_long(self):
- longval = self.pyobject_fromcode('PyLong_FromLong(200)',
- gdbvar='longval')
- assert gdb.parse_and_eval('$longval->ob_type == &PyLong_Type')
-
- self.assertEqual(type(longval), libpython.PyLongObjectPtr)
- self.assertEqual(self.get_repr(longval), '200')
-
- def test_frame_type(self):
- frame = self.pyobject_fromcode('PyEval_GetFrame()')
-
- self.assertEqual(type(frame), libpython.PyFrameObjectPtr)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/BdfFontFile.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/BdfFontFile.py
deleted file mode 100644
index 102b72e1d5aef65054d75a958656347193c671dd..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/BdfFontFile.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# bitmap distribution font (bdf) file parser
-#
-# history:
-# 1996-05-16 fl created (as bdf2pil)
-# 1997-08-25 fl converted to FontFile driver
-# 2001-05-25 fl removed bogus __init__ call
-# 2002-11-20 fl robustification (from Kevin Cazabon, Dmitry Vasiliev)
-# 2003-04-22 fl more robustification (from Graham Dumpleton)
-#
-# Copyright (c) 1997-2003 by Secret Labs AB.
-# Copyright (c) 1997-2003 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-"""
-Parse X Bitmap Distribution Format (BDF)
-"""
-
-
-from . import FontFile, Image
-
-bdf_slant = {
- "R": "Roman",
- "I": "Italic",
- "O": "Oblique",
- "RI": "Reverse Italic",
- "RO": "Reverse Oblique",
- "OT": "Other",
-}
-
-bdf_spacing = {"P": "Proportional", "M": "Monospaced", "C": "Cell"}
-
-
-def bdf_char(f):
- # skip to STARTCHAR
- while True:
- s = f.readline()
- if not s:
- return None
- if s[:9] == b"STARTCHAR":
- break
- id = s[9:].strip().decode("ascii")
-
- # load symbol properties
- props = {}
- while True:
- s = f.readline()
- if not s or s[:6] == b"BITMAP":
- break
- i = s.find(b" ")
- props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii")
-
- # load bitmap
- bitmap = []
- while True:
- s = f.readline()
- if not s or s[:7] == b"ENDCHAR":
- break
- bitmap.append(s[:-1])
- bitmap = b"".join(bitmap)
-
- [x, y, l, d] = [int(p) for p in props["BBX"].split()]
- [dx, dy] = [int(p) for p in props["DWIDTH"].split()]
-
- bbox = (dx, dy), (l, -d - y, x + l, -d), (0, 0, x, y)
-
- try:
- im = Image.frombytes("1", (x, y), bitmap, "hex", "1")
- except ValueError:
- # deal with zero-width characters
- im = Image.new("1", (x, y))
-
- return id, int(props["ENCODING"]), bbox, im
-
-
-class BdfFontFile(FontFile.FontFile):
- """Font file plugin for the X11 BDF format."""
-
- def __init__(self, fp):
- super().__init__()
-
- s = fp.readline()
- if s[:13] != b"STARTFONT 2.1":
- raise SyntaxError("not a valid BDF file")
-
- props = {}
- comments = []
-
- while True:
- s = fp.readline()
- if not s or s[:13] == b"ENDPROPERTIES":
- break
- i = s.find(b" ")
- props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii")
- if s[:i] in [b"COMMENT", b"COPYRIGHT"]:
- if s.find(b"LogicalFontDescription") < 0:
- comments.append(s[i + 1 : -1].decode("ascii"))
-
- while True:
- c = bdf_char(fp)
- if not c:
- break
- id, ch, (xy, dst, src), im = c
- if 0 <= ch < len(self.glyph):
- self.glyph[ch] = xy, dst, src, im
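- # Usage sketch (file names are placeholders). The parsed font can be written
- # out with FontFile.save(), which produces the compiled ``.pil`` metrics file
- # and a ``.pbm`` bitmap alongside it:
- #
- # >>> with open("courR12.bdf", "rb") as fp:
- # ...     font = BdfFontFile(fp)
- # >>> font.save("courR12")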
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/flags/_validators.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/flags/_validators.py
deleted file mode 100644
index 2161284a8e284fdfbe42d0f0128e7068cb1ba85f..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/flags/_validators.py
+++ /dev/null
@@ -1,352 +0,0 @@
-# Copyright 2017 The Abseil Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Module to enforce different constraints on flags.
-
-Flags validators can be registered using following functions / decorators::
-
- flags.register_validator
- @flags.validator
- flags.register_multi_flags_validator
- @flags.multi_flags_validator
-
-Three convenience functions are also provided for common flag constraints::
-
- flags.mark_flag_as_required
- flags.mark_flags_as_required
- flags.mark_flags_as_mutual_exclusive
- flags.mark_bool_flags_as_mutual_exclusive
-
-See their docstring in this module for a usage manual.
-
-Do NOT import this module directly. Import the flags package and use the
-aliases defined at the package level instead.
-"""
-
-import warnings
-
-from absl.flags import _exceptions
-from absl.flags import _flagvalues
-from absl.flags import _validators_classes
-
-
-def register_validator(flag_name,
- checker,
- message='Flag validation failed',
- flag_values=_flagvalues.FLAGS):
- """Adds a constraint, which will be enforced during program execution.
-
- The constraint is validated when flags are initially parsed, and after each
- change of the corresponding flag's value.
-
- Args:
- flag_name: str | FlagHolder, name or holder of the flag to be checked.
- Positional-only parameter.
- checker: callable, a function to validate the flag.
-
- * input - A single positional argument: The value of the corresponding
- flag (string, boolean, etc. This value will be passed to checker
- by the library).
- * output - bool, True if validator constraint is satisfied.
- If constraint is not satisfied, it should either ``return False`` or
- ``raise flags.ValidationError(desired_error_message)``.
-
- message: str, error text to be shown to the user if checker returns False.
- If checker raises flags.ValidationError, message from the raised
- error will be shown.
- flag_values: flags.FlagValues, optional FlagValues instance to validate
- against.
-
- Raises:
- AttributeError: Raised when flag_name is not registered as a valid flag
- name.
- ValueError: Raised when flag_values is non-default and does not match the
- FlagValues of the provided FlagHolder instance.
- """
- flag_name, flag_values = _flagvalues.resolve_flag_ref(flag_name, flag_values)
- v = _validators_classes.SingleFlagValidator(flag_name, checker, message)
- _add_validator(flag_values, v)
-
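- # Usage sketch (flag name and bound are illustrative):
- #
- # >>> from absl import flags
- # >>> flags.DEFINE_integer('port', 8080, 'Port to listen on.')
- # >>> flags.register_validator(
- # ...     'port', lambda value: 0 < value < 65536,
- # ...     message='--port must be in the range (0, 65536).')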
-
-def validator(flag_name, message='Flag validation failed',
- flag_values=_flagvalues.FLAGS):
- """A function decorator for defining a flag validator.
-
- Registers the decorated function as a validator for flag_name, e.g.::
-
- @flags.validator('foo')
- def _CheckFoo(foo):
- ...
-
- See :func:`register_validator` for the specification of checker function.
-
- Args:
- flag_name: str | FlagHolder, name or holder of the flag to be checked.
- Positional-only parameter.
- message: str, error text to be shown to the user if checker returns False.
- If checker raises flags.ValidationError, message from the raised
- error will be shown.
- flag_values: flags.FlagValues, optional FlagValues instance to validate
- against.
- Returns:
- A function decorator that registers its function argument as a validator.
- Raises:
- AttributeError: Raised when flag_name is not registered as a valid flag
- name.
- """
-
- def decorate(function):
- register_validator(flag_name, function,
- message=message,
- flag_values=flag_values)
- return function
- return decorate
-
-
-def register_multi_flags_validator(flag_names,
- multi_flags_checker,
- message='Flags validation failed',
- flag_values=_flagvalues.FLAGS):
- """Adds a constraint to multiple flags.
-
- The constraint is validated when flags are initially parsed, and after each
- change of the corresponding flag's value.
-
- Args:
- flag_names: [str | FlagHolder], a list of the flag names or holders to be
- checked. Positional-only parameter.
- multi_flags_checker: callable, a function to validate the flag.
-
- * input - dict, with keys() being flag_names, and value for each key
- being the value of the corresponding flag (string, boolean, etc).
- * output - bool, True if validator constraint is satisfied.
- If constraint is not satisfied, it should either return False or
- raise flags.ValidationError.
-
- message: str, error text to be shown to the user if checker returns False.
- If checker raises flags.ValidationError, message from the raised
- error will be shown.
- flag_values: flags.FlagValues, optional FlagValues instance to validate
- against.
-
- Raises:
- AttributeError: Raised when a flag is not registered as a valid flag name.
- ValueError: Raised when multiple FlagValues are used in the same
- invocation. This can occur when FlagHolders have different `_flagvalues`
- or when str-type flag_names entries are present and the `flag_values`
- argument does not match that of provided FlagHolder(s).
- """
- flag_names, flag_values = _flagvalues.resolve_flag_refs(
- flag_names, flag_values)
- v = _validators_classes.MultiFlagsValidator(
- flag_names, multi_flags_checker, message)
- _add_validator(flag_values, v)
-
-
-def multi_flags_validator(flag_names,
- message='Flag validation failed',
- flag_values=_flagvalues.FLAGS):
- """A function decorator for defining a multi-flag validator.
-
- Registers the decorated function as a validator for flag_names, e.g.::
-
- @flags.multi_flags_validator(['foo', 'bar'])
- def _CheckFooBar(flags_dict):
- ...
-
- See :func:`register_multi_flags_validator` for the specification of checker
- function.
-
- Args:
- flag_names: [str | FlagHolder], a list of the flag names or holders to be
- checked. Positional-only parameter.
- message: str, error text to be shown to the user if checker returns False.
- If checker raises flags.ValidationError, message from the raised
- error will be shown.
- flag_values: flags.FlagValues, optional FlagValues instance to validate
- against.
-
- Returns:
- A function decorator that registers its function argument as a validator.
-
- Raises:
- AttributeError: Raised when a flag is not registered as a valid flag name.
- """
-
- def decorate(function):
- register_multi_flags_validator(flag_names,
- function,
- message=message,
- flag_values=flag_values)
- return function
-
- return decorate
-
-
-def mark_flag_as_required(flag_name, flag_values=_flagvalues.FLAGS):
- """Ensures that flag is not None during program execution.
-
- Registers a flag validator, which will follow usual validator rules.
- Important note: validator will pass for any non-``None`` value, such as
- ``False``, ``0`` (zero), ``''`` (empty string) and so on.
-
- If your module might be imported by others, and you only wish to make the flag
- required when the module is directly executed, call this method like this::
-
- if __name__ == '__main__':
- flags.mark_flag_as_required('your_flag_name')
- app.run()
-
- Args:
- flag_name: str | FlagHolder, name or holder of the flag.
- Positional-only parameter.
- flag_values: flags.FlagValues, optional :class:`~absl.flags.FlagValues`
- instance where the flag is defined.
- Raises:
- AttributeError: Raised when flag_name is not registered as a valid flag
- name.
- ValueError: Raised when flag_values is non-default and does not match the
- FlagValues of the provided FlagHolder instance.
- """
- flag_name, flag_values = _flagvalues.resolve_flag_ref(flag_name, flag_values)
- if flag_values[flag_name].default is not None:
- warnings.warn(
- 'Flag --%s has a non-None default value; therefore, '
- 'mark_flag_as_required will pass even if flag is not specified in the '
- 'command line!' % flag_name,
- stacklevel=2)
- register_validator(
- flag_name,
- lambda value: value is not None,
- message='Flag --{} must have a value other than None.'.format(flag_name),
- flag_values=flag_values)
-
-
-def mark_flags_as_required(flag_names, flag_values=_flagvalues.FLAGS):
- """Ensures that flags are not None during program execution.
-
- If your module might be imported by others, and you only wish to make the flag
- required when the module is directly executed, call this method like this::
-
- if __name__ == '__main__':
- flags.mark_flags_as_required(['flag1', 'flag2', 'flag3'])
- app.run()
-
- Args:
- flag_names: Sequence[str | FlagHolder], names or holders of the flags.
- flag_values: flags.FlagValues, optional FlagValues instance where the flags
- are defined.
- Raises:
- AttributeError: If any of flag name has not already been defined as a flag.
- """
- for flag_name in flag_names:
- mark_flag_as_required(flag_name, flag_values)
-
-
-def mark_flags_as_mutual_exclusive(flag_names, required=False,
- flag_values=_flagvalues.FLAGS):
- """Ensures that only one flag among flag_names is not None.
-
- Important note: This validator checks if flag values are ``None``, and it does
- not distinguish between default and explicit values. Therefore, this validator
- does not make sense when applied to flags with default values other than None,
- including other false values (e.g. ``False``, ``0``, ``''``, ``[]``). That
- includes multi flags with a default value of ``[]`` instead of None.
-
- Args:
- flag_names: [str | FlagHolder], names or holders of flags.
- Positional-only parameter.
- required: bool. If true, exactly one of the flags must have a value other
- than None. Otherwise, at most one of the flags can have a value other
- than None, and it is valid for all of the flags to be None.
- flag_values: flags.FlagValues, optional FlagValues instance where the flags
- are defined.
-
- Raises:
- ValueError: Raised when multiple FlagValues are used in the same
- invocation. This can occur when FlagHolders have different `_flagvalues`
- or when str-type flag_names entries are present and the `flag_values`
- argument does not match that of provided FlagHolder(s).
- """
- flag_names, flag_values = _flagvalues.resolve_flag_refs(
- flag_names, flag_values)
- for flag_name in flag_names:
- if flag_values[flag_name].default is not None:
- warnings.warn(
- 'Flag --{} has a non-None default value. That does not make sense '
- 'with mark_flags_as_mutual_exclusive, which checks whether the '
- 'listed flags have a value other than None.'.format(flag_name),
- stacklevel=2)
-
- def validate_mutual_exclusion(flags_dict):
- flag_count = sum(1 for val in flags_dict.values() if val is not None)
- if flag_count == 1 or (not required and flag_count == 0):
- return True
- raise _exceptions.ValidationError(
- '{} one of ({}) must have a value other than None.'.format(
- 'Exactly' if required else 'At most', ', '.join(flag_names)))
-
- register_multi_flags_validator(
- flag_names, validate_mutual_exclusion, flag_values=flag_values)
-
-
-def mark_bool_flags_as_mutual_exclusive(flag_names, required=False,
- flag_values=_flagvalues.FLAGS):
- """Ensures that only one flag among flag_names is True.
-
- Args:
- flag_names: [str | FlagHolder], names or holders of flags.
- Positional-only parameter.
- required: bool. If true, exactly one flag must be True. Otherwise, at most
- one flag can be True, and it is valid for all flags to be False.
- flag_values: flags.FlagValues, optional FlagValues instance where the flags
- are defined.
-
- Raises:
- ValueError: Raised when multiple FlagValues are used in the same
- invocation. This can occur when FlagHolders have different `_flagvalues`
- or when str-type flag_names entries are present and the `flag_values`
- argument does not match that of provided FlagHolder(s).
- """
- flag_names, flag_values = _flagvalues.resolve_flag_refs(
- flag_names, flag_values)
- for flag_name in flag_names:
- if not flag_values[flag_name].boolean:
- raise _exceptions.ValidationError(
- 'Flag --{} is not Boolean, which is required for flags used in '
- 'mark_bool_flags_as_mutual_exclusive.'.format(flag_name))
-
- def validate_boolean_mutual_exclusion(flags_dict):
- flag_count = sum(bool(val) for val in flags_dict.values())
- if flag_count == 1 or (not required and flag_count == 0):
- return True
- raise _exceptions.ValidationError(
- '{} one of ({}) must be True.'.format(
- 'Exactly' if required else 'At most', ', '.join(flag_names)))
-
- register_multi_flags_validator(
- flag_names, validate_boolean_mutual_exclusion, flag_values=flag_values)
-
-
-def _add_validator(fv, validator_instance):
- """Register new flags validator to be checked.
-
- Args:
- fv: flags.FlagValues, the FlagValues instance to add the validator.
- validator_instance: validators.Validator, the validator to add.
- Raises:
- KeyError: Raised when validators work with a non-existing flag.
- """
- for flag_name in validator_instance.get_flags_names():
- fv[flag_name].validators.append(validator_instance)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_fileio.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_fileio.py
deleted file mode 100644
index 19c1e8344c1dac85312fab04d44c191e6b19cdc7..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_fileio.py
+++ /dev/null
@@ -1,607 +0,0 @@
-import os
-import pathlib
-import sys
-from dataclasses import dataclass
-from functools import partial
-from os import PathLike
-from typing import (
- IO,
- TYPE_CHECKING,
- Any,
- AnyStr,
- AsyncIterator,
- Callable,
- Generic,
- Iterable,
- Iterator,
- List,
- Optional,
- Sequence,
- Tuple,
- Union,
- cast,
- overload,
-)
-
-from .. import to_thread
-from ..abc import AsyncResource
-
-if sys.version_info >= (3, 8):
- from typing import Final
-else:
- from typing_extensions import Final
-
-if TYPE_CHECKING:
- from _typeshed import OpenBinaryMode, OpenTextMode, ReadableBuffer, WriteableBuffer
-else:
- ReadableBuffer = OpenBinaryMode = OpenTextMode = WriteableBuffer = object
-
-
-class AsyncFile(AsyncResource, Generic[AnyStr]):
- """
- An asynchronous file object.
-
- This class wraps a standard file object and provides async friendly versions of the following
- blocking methods (where available on the original file object):
-
- * read
- * read1
- * readline
- * readlines
- * readinto
- * readinto1
- * write
- * writelines
- * truncate
- * seek
- * tell
- * flush
-
- All other methods are directly passed through.
-
- This class supports the asynchronous context manager protocol which closes the underlying file
- at the end of the context block.
-
- This class also supports asynchronous iteration::
-
- async with await open_file(...) as f:
- async for line in f:
- print(line)
- """
-
- def __init__(self, fp: IO[AnyStr]) -> None:
- self._fp: Any = fp
-
- def __getattr__(self, name: str) -> object:
- return getattr(self._fp, name)
-
- @property
- def wrapped(self) -> IO[AnyStr]:
- """The wrapped file object."""
- return self._fp
-
- async def __aiter__(self) -> AsyncIterator[AnyStr]:
- while True:
- line = await self.readline()
- if line:
- yield line
- else:
- break
-
- async def aclose(self) -> None:
- return await to_thread.run_sync(self._fp.close)
-
- async def read(self, size: int = -1) -> AnyStr:
- return await to_thread.run_sync(self._fp.read, size)
-
- async def read1(self: "AsyncFile[bytes]", size: int = -1) -> bytes:
- return await to_thread.run_sync(self._fp.read1, size)
-
- async def readline(self) -> AnyStr:
- return await to_thread.run_sync(self._fp.readline)
-
- async def readlines(self) -> List[AnyStr]:
- return await to_thread.run_sync(self._fp.readlines)
-
- async def readinto(self: "AsyncFile[bytes]", b: WriteableBuffer) -> bytes:
- return await to_thread.run_sync(self._fp.readinto, b)
-
- async def readinto1(self: "AsyncFile[bytes]", b: WriteableBuffer) -> bytes:
- return await to_thread.run_sync(self._fp.readinto1, b)
-
- @overload
- async def write(self: "AsyncFile[bytes]", b: ReadableBuffer) -> int:
- ...
-
- @overload
- async def write(self: "AsyncFile[str]", b: str) -> int:
- ...
-
- async def write(self, b: Union[ReadableBuffer, str]) -> int:
- return await to_thread.run_sync(self._fp.write, b)
-
- @overload
- async def writelines(
- self: "AsyncFile[bytes]", lines: Iterable[ReadableBuffer]
- ) -> None:
- ...
-
- @overload
- async def writelines(self: "AsyncFile[str]", lines: Iterable[str]) -> None:
- ...
-
- async def writelines(
- self, lines: Union[Iterable[ReadableBuffer], Iterable[str]]
- ) -> None:
- return await to_thread.run_sync(self._fp.writelines, lines)
-
- async def truncate(self, size: Optional[int] = None) -> int:
- return await to_thread.run_sync(self._fp.truncate, size)
-
- async def seek(self, offset: int, whence: Optional[int] = os.SEEK_SET) -> int:
- return await to_thread.run_sync(self._fp.seek, offset, whence)
-
- async def tell(self) -> int:
- return await to_thread.run_sync(self._fp.tell)
-
- async def flush(self) -> None:
- return await to_thread.run_sync(self._fp.flush)
-
-
-@overload
-async def open_file(
- file: Union[str, "PathLike[str]", int],
- mode: OpenBinaryMode,
- buffering: int = ...,
- encoding: Optional[str] = ...,
- errors: Optional[str] = ...,
- newline: Optional[str] = ...,
- closefd: bool = ...,
- opener: Optional[Callable[[str, int], int]] = ...,
-) -> AsyncFile[bytes]:
- ...
-
-
-@overload
-async def open_file(
- file: Union[str, "PathLike[str]", int],
- mode: OpenTextMode = ...,
- buffering: int = ...,
- encoding: Optional[str] = ...,
- errors: Optional[str] = ...,
- newline: Optional[str] = ...,
- closefd: bool = ...,
- opener: Optional[Callable[[str, int], int]] = ...,
-) -> AsyncFile[str]:
- ...
-
-
-async def open_file(
- file: Union[str, "PathLike[str]", int],
- mode: str = "r",
- buffering: int = -1,
- encoding: Optional[str] = None,
- errors: Optional[str] = None,
- newline: Optional[str] = None,
- closefd: bool = True,
- opener: Optional[Callable[[str, int], int]] = None,
-) -> AsyncFile[Any]:
- """
- Open a file asynchronously.
-
- The arguments are exactly the same as for the builtin :func:`open`.
-
- :return: an asynchronous file object
-
- """
- fp = await to_thread.run_sync(
- open, file, mode, buffering, encoding, errors, newline, closefd, opener
- )
- return AsyncFile(fp)
-
-
-def wrap_file(file: IO[AnyStr]) -> AsyncFile[AnyStr]:
- """
- Wrap an existing file as an asynchronous file.
-
- :param file: an existing file-like object
- :return: an asynchronous file object
-
- """
- return AsyncFile(file)
-
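- # Usage sketch (illustrative file name; ``anyio.run`` drives the event loop):
- #
- # >>> import anyio
- # >>>
- # >>> async def main() -> None:
- # ...     async with await anyio.open_file("notes.txt", "w") as f:
- # ...         await f.write("hello world\n")
- # >>>
- # >>> anyio.run(main)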
-
-@dataclass(eq=False)
-class _PathIterator(AsyncIterator["Path"]):
- iterator: Iterator["PathLike[str]"]
-
- async def __anext__(self) -> "Path":
- nextval = await to_thread.run_sync(next, self.iterator, None, cancellable=True)
- if nextval is None:
- raise StopAsyncIteration from None
-
- return Path(cast("PathLike[str]", nextval))
-
-
-class Path:
- """
- An asynchronous version of :class:`pathlib.Path`.
-
- This class cannot be substituted for :class:`pathlib.Path` or :class:`pathlib.PurePath`, but
- it is compatible with the :class:`os.PathLike` interface.
-
- It implements the Python 3.10 version of :class:`pathlib.Path` interface, except for the
- deprecated :meth:`~pathlib.Path.link_to` method.
-
- Any methods that do disk I/O need to be awaited on. These methods are:
-
- * :meth:`~pathlib.Path.absolute`
- * :meth:`~pathlib.Path.chmod`
- * :meth:`~pathlib.Path.cwd`
- * :meth:`~pathlib.Path.exists`
- * :meth:`~pathlib.Path.expanduser`
- * :meth:`~pathlib.Path.group`
- * :meth:`~pathlib.Path.hardlink_to`
- * :meth:`~pathlib.Path.home`
- * :meth:`~pathlib.Path.is_block_device`
- * :meth:`~pathlib.Path.is_char_device`
- * :meth:`~pathlib.Path.is_dir`
- * :meth:`~pathlib.Path.is_fifo`
- * :meth:`~pathlib.Path.is_file`
- * :meth:`~pathlib.Path.is_mount`
- * :meth:`~pathlib.Path.lchmod`
- * :meth:`~pathlib.Path.lstat`
- * :meth:`~pathlib.Path.mkdir`
- * :meth:`~pathlib.Path.open`
- * :meth:`~pathlib.Path.owner`
- * :meth:`~pathlib.Path.read_bytes`
- * :meth:`~pathlib.Path.read_text`
- * :meth:`~pathlib.Path.readlink`
- * :meth:`~pathlib.Path.rename`
- * :meth:`~pathlib.Path.replace`
- * :meth:`~pathlib.Path.rmdir`
- * :meth:`~pathlib.Path.samefile`
- * :meth:`~pathlib.Path.stat`
- * :meth:`~pathlib.Path.touch`
- * :meth:`~pathlib.Path.unlink`
- * :meth:`~pathlib.Path.write_bytes`
- * :meth:`~pathlib.Path.write_text`
-
- Additionally, the following methods return an async iterator yielding :class:`~.Path` objects:
-
- * :meth:`~pathlib.Path.glob`
- * :meth:`~pathlib.Path.iterdir`
- * :meth:`~pathlib.Path.rglob`
- """
-
- __slots__ = "_path", "__weakref__"
-
- __weakref__: Any
-
- def __init__(self, *args: Union[str, "PathLike[str]"]) -> None:
- self._path: Final[pathlib.Path] = pathlib.Path(*args)
-
- def __fspath__(self) -> str:
- return self._path.__fspath__()
-
- def __str__(self) -> str:
- return self._path.__str__()
-
- def __repr__(self) -> str:
- return f"{self.__class__.__name__}({self.as_posix()!r})"
-
- def __bytes__(self) -> bytes:
- return self._path.__bytes__()
-
- def __hash__(self) -> int:
- return self._path.__hash__()
-
- def __eq__(self, other: object) -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__eq__(target)
-
- def __lt__(self, other: "Path") -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__lt__(target)
-
- def __le__(self, other: "Path") -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__le__(target)
-
- def __gt__(self, other: "Path") -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__gt__(target)
-
- def __ge__(self, other: "Path") -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__ge__(target)
-
- def __truediv__(self, other: Any) -> "Path":
- return Path(self._path / other)
-
- def __rtruediv__(self, other: Any) -> "Path":
- return Path(other) / self
-
- @property
- def parts(self) -> Tuple[str, ...]:
- return self._path.parts
-
- @property
- def drive(self) -> str:
- return self._path.drive
-
- @property
- def root(self) -> str:
- return self._path.root
-
- @property
- def anchor(self) -> str:
- return self._path.anchor
-
- @property
- def parents(self) -> Sequence["Path"]:
- return tuple(Path(p) for p in self._path.parents)
-
- @property
- def parent(self) -> "Path":
- return Path(self._path.parent)
-
- @property
- def name(self) -> str:
- return self._path.name
-
- @property
- def suffix(self) -> str:
- return self._path.suffix
-
- @property
- def suffixes(self) -> List[str]:
- return self._path.suffixes
-
- @property
- def stem(self) -> str:
- return self._path.stem
-
- async def absolute(self) -> "Path":
- path = await to_thread.run_sync(self._path.absolute)
- return Path(path)
-
- def as_posix(self) -> str:
- return self._path.as_posix()
-
- def as_uri(self) -> str:
- return self._path.as_uri()
-
- def match(self, path_pattern: str) -> bool:
- return self._path.match(path_pattern)
-
- def is_relative_to(self, *other: Union[str, "PathLike[str]"]) -> bool:
- try:
- self.relative_to(*other)
- return True
- except ValueError:
- return False
-
- async def chmod(self, mode: int, *, follow_symlinks: bool = True) -> None:
- func = partial(os.chmod, follow_symlinks=follow_symlinks)
- return await to_thread.run_sync(func, self._path, mode)
-
- @classmethod
- async def cwd(cls) -> "Path":
- path = await to_thread.run_sync(pathlib.Path.cwd)
- return cls(path)
-
- async def exists(self) -> bool:
- return await to_thread.run_sync(self._path.exists, cancellable=True)
-
- async def expanduser(self) -> "Path":
- return Path(await to_thread.run_sync(self._path.expanduser, cancellable=True))
-
- def glob(self, pattern: str) -> AsyncIterator["Path"]:
- gen = self._path.glob(pattern)
- return _PathIterator(gen)
-
- async def group(self) -> str:
- return await to_thread.run_sync(self._path.group, cancellable=True)
-
- async def hardlink_to(self, target: Union[str, pathlib.Path, "Path"]) -> None:
- if isinstance(target, Path):
- target = target._path
-
- await to_thread.run_sync(os.link, target, self)
-
- @classmethod
- async def home(cls) -> "Path":
- home_path = await to_thread.run_sync(pathlib.Path.home)
- return cls(home_path)
-
- def is_absolute(self) -> bool:
- return self._path.is_absolute()
-
- async def is_block_device(self) -> bool:
- return await to_thread.run_sync(self._path.is_block_device, cancellable=True)
-
- async def is_char_device(self) -> bool:
- return await to_thread.run_sync(self._path.is_char_device, cancellable=True)
-
- async def is_dir(self) -> bool:
- return await to_thread.run_sync(self._path.is_dir, cancellable=True)
-
- async def is_fifo(self) -> bool:
- return await to_thread.run_sync(self._path.is_fifo, cancellable=True)
-
- async def is_file(self) -> bool:
- return await to_thread.run_sync(self._path.is_file, cancellable=True)
-
- async def is_mount(self) -> bool:
- return await to_thread.run_sync(os.path.ismount, self._path, cancellable=True)
-
- def is_reserved(self) -> bool:
- return self._path.is_reserved()
-
- async def is_socket(self) -> bool:
- return await to_thread.run_sync(self._path.is_socket, cancellable=True)
-
- async def is_symlink(self) -> bool:
- return await to_thread.run_sync(self._path.is_symlink, cancellable=True)
-
- def iterdir(self) -> AsyncIterator["Path"]:
- gen = self._path.iterdir()
- return _PathIterator(gen)
-
- def joinpath(self, *args: Union[str, "PathLike[str]"]) -> "Path":
- return Path(self._path.joinpath(*args))
-
- async def lchmod(self, mode: int) -> None:
- await to_thread.run_sync(self._path.lchmod, mode)
-
- async def lstat(self) -> os.stat_result:
- return await to_thread.run_sync(self._path.lstat, cancellable=True)
-
- async def mkdir(
- self, mode: int = 0o777, parents: bool = False, exist_ok: bool = False
- ) -> None:
- await to_thread.run_sync(self._path.mkdir, mode, parents, exist_ok)
-
- @overload
- async def open(
- self,
- mode: OpenBinaryMode,
- buffering: int = ...,
- encoding: Optional[str] = ...,
- errors: Optional[str] = ...,
- newline: Optional[str] = ...,
- ) -> AsyncFile[bytes]:
- ...
-
- @overload
- async def open(
- self,
- mode: OpenTextMode = ...,
- buffering: int = ...,
- encoding: Optional[str] = ...,
- errors: Optional[str] = ...,
- newline: Optional[str] = ...,
- ) -> AsyncFile[str]:
- ...
-
- async def open(
- self,
- mode: str = "r",
- buffering: int = -1,
- encoding: Optional[str] = None,
- errors: Optional[str] = None,
- newline: Optional[str] = None,
- ) -> AsyncFile[Any]:
- fp = await to_thread.run_sync(
- self._path.open, mode, buffering, encoding, errors, newline
- )
- return AsyncFile(fp)
-
- async def owner(self) -> str:
- return await to_thread.run_sync(self._path.owner, cancellable=True)
-
- async def read_bytes(self) -> bytes:
- return await to_thread.run_sync(self._path.read_bytes)
-
- async def read_text(
- self, encoding: Optional[str] = None, errors: Optional[str] = None
- ) -> str:
- return await to_thread.run_sync(self._path.read_text, encoding, errors)
-
- def relative_to(self, *other: Union[str, "PathLike[str]"]) -> "Path":
- return Path(self._path.relative_to(*other))
-
- async def readlink(self) -> "Path":
- target = await to_thread.run_sync(os.readlink, self._path)
- return Path(cast(str, target))
-
- async def rename(self, target: Union[str, pathlib.PurePath, "Path"]) -> "Path":
- if isinstance(target, Path):
- target = target._path
-
- await to_thread.run_sync(self._path.rename, target)
- return Path(target)
-
- async def replace(self, target: Union[str, pathlib.PurePath, "Path"]) -> "Path":
- if isinstance(target, Path):
- target = target._path
-
- await to_thread.run_sync(self._path.replace, target)
- return Path(target)
-
- async def resolve(self, strict: bool = False) -> "Path":
- func = partial(self._path.resolve, strict=strict)
- return Path(await to_thread.run_sync(func, cancellable=True))
-
- def rglob(self, pattern: str) -> AsyncIterator["Path"]:
- gen = self._path.rglob(pattern)
- return _PathIterator(gen)
-
- async def rmdir(self) -> None:
- await to_thread.run_sync(self._path.rmdir)
-
- async def samefile(
- self, other_path: Union[str, bytes, int, pathlib.Path, "Path"]
- ) -> bool:
- if isinstance(other_path, Path):
- other_path = other_path._path
-
- return await to_thread.run_sync(
- self._path.samefile, other_path, cancellable=True
- )
-
- async def stat(self, *, follow_symlinks: bool = True) -> os.stat_result:
- func = partial(os.stat, follow_symlinks=follow_symlinks)
- return await to_thread.run_sync(func, self._path, cancellable=True)
-
- async def symlink_to(
- self,
- target: Union[str, pathlib.Path, "Path"],
- target_is_directory: bool = False,
- ) -> None:
- if isinstance(target, Path):
- target = target._path
-
- await to_thread.run_sync(self._path.symlink_to, target, target_is_directory)
-
- async def touch(self, mode: int = 0o666, exist_ok: bool = True) -> None:
- await to_thread.run_sync(self._path.touch, mode, exist_ok)
-
- async def unlink(self, missing_ok: bool = False) -> None:
- try:
- await to_thread.run_sync(self._path.unlink)
- except FileNotFoundError:
- if not missing_ok:
- raise
-
- def with_name(self, name: str) -> "Path":
- return Path(self._path.with_name(name))
-
- def with_stem(self, stem: str) -> "Path":
- return Path(self._path.with_name(stem + self._path.suffix))
-
- def with_suffix(self, suffix: str) -> "Path":
- return Path(self._path.with_suffix(suffix))
-
- async def write_bytes(self, data: bytes) -> int:
- return await to_thread.run_sync(self._path.write_bytes, data)
-
- async def write_text(
- self,
- data: str,
- encoding: Optional[str] = None,
- errors: Optional[str] = None,
- newline: Optional[str] = None,
- ) -> int:
- # Path.write_text() does not support the "newline" parameter before Python 3.10
- def sync_write_text() -> int:
- with self._path.open(
- "w", encoding=encoding, errors=errors, newline=newline
- ) as fp:
- return fp.write(data)
-
- return await to_thread.run_sync(sync_write_text)
-
-
-PathLike.register(Path)
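
For orientation, here is a minimal usage sketch of the async file API removed above. It is an assumption-laden example rather than part of the deleted module: the import name `anyio` and the example file name are assumed, and only methods listed in the `Path` docstring are used.

```python
import anyio  # assumed upstream import path for the module shown above

async def main() -> None:
    path = anyio.Path("example.txt")    # hypothetical file name
    await path.write_text("hello")      # the blocking pathlib call runs in a worker thread
    print(await path.read_text())       # -> hello
    print(await path.exists())          # -> True

anyio.run(main)
```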
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attrs/setters.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attrs/setters.py
deleted file mode 100644
index 9b50770804e4187f0c935ef17bddf2d9a61120ff..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attrs/setters.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# SPDX-License-Identifier: MIT
-
-from attr.setters import * # noqa
diff --git a/spaces/asd998877/TsGpt/run_Linux.sh b/spaces/asd998877/TsGpt/run_Linux.sh
deleted file mode 100644
index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000
--- a/spaces/asd998877/TsGpt/run_Linux.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-
-# Get the directory this script lives in
-script_dir=$(dirname "$(readlink -f "$0")")
-
-# Change the working directory to the script's directory
-cd "$script_dir" || exit
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
- # If there are updates, stop the currently running server
- pkill -f ChuanhuChatbot.py
-
- # Pull the latest changes
- git pull
-
- # Install dependencies
- pip3 install -r requirements.txt
-
- # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
-
-# Check whether ChuanhuChatbot.py is running
-if ! pgrep -f ChuanhuChatbot.py > /dev/null; then
- # If it is not running, start the server
- nohup python3 ChuanhuChatbot.py &
-fi
diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Rishab.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Rishab.html
deleted file mode 100644
index 7f97d1ff786a3bb9d8c1777acf2108b71e0aa31a..0000000000000000000000000000000000000000
--- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Rishab.html
+++ /dev/null
@@ -1,134 +0,0 @@
-Rishab
-
-How did you hear about SM?
-    Early adopter from pre-pivot
-    "I remember how hard it was. And it would have been easier with someone guiding me"
-    Wants to give back to the community
-Brief background
-    Working on an ML team, helping build models and also productizing them
-    Can help people with end-to-end data science (both fundamental and applied), from setting up the problem to production
-    No MLOps
-Mentorship experience
-    1:1 mentorship only with his brother
-What do beginners need and how can you help?
-    Not setting up the problem/data right
-    Not setting up evaluation metrics correctly; using the wrong metrics - garbage in / garbage out
-    Going into a complex solution first
-    Experimental setup issues (seen with interns and PhD students): train/validation/test sets are not similar enough; leakage
-    "Unless you have a model in production already you're going to find a whole ton of errors in the data/pipelines/process"
-    Use complex models only when necessary
-    Make sure the early data processing/pipeline is set up right
-    Make sure the problem is important for the business
-    The modeling part people can pick up pretty easily - lots of libraries and OSS code
-    Get the data, do EDA
-    If necessary, teach SWE skills - git, code review, best practices
-
-Questions about SM:
-    Read my email and it seems straightforward
-    Do we have a job board?
-
\ No newline at end of file
diff --git a/spaces/auto-academic/auto-draft/utils/prompts.py b/spaces/auto-academic/auto-draft/utils/prompts.py
deleted file mode 100644
index 71c0bf5389982c7bf971c56c018a23a302f87098..0000000000000000000000000000000000000000
--- a/spaces/auto-academic/auto-draft/utils/prompts.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import logging
-from langchain import PromptTemplate
-import os, json
-
-
-log = logging.getLogger(__name__)
-
-# todo: load prompts from configurations
-######################################################################################################################
-# System Message
-######################################################################################################################
-
-# two parameters: min_refs_num, max_refs_num
-keywords_system_template = """You are an assistant designed to provide accurate and informative keywords of searching academic papers.
-The user will input the title of a paper. You need to return three to five most related fields. \n
-Instructions:\n
-- Assign numbers to each field to present the importance. The larger, the more important. \n
-- {max_refs_num} is the most important and {min_refs_num} is the least important. \n
-- Your response should follow the following format: {{"field1": 5, "field2": 7, "field3": 8, "field4": 5}}\n
-- Ensure the response can be parsed by Python json.loads"""
-
-keywords_system_prompt_str = """You are an assistant designed to provide accurate and informative keywords of searching academic papers.
-The user will input the title of a paper. You need to return three to five most related fields. \n
-Instructions:\n
-- Assign numbers to each field to present the importance. The larger, the more important. \n
-- 10 is the most important and 1 is the least important. \n
-- Your response should follow the following format: {"field 1": 5, "field 2": 7, "field 3": 8, "field 4": 5}\n
-- Ensure the response can be parsed by Python json.loads"""
-
-# two parameters: min_refs_num, max_refs_num
-exp_methods_system_template = """You are an assistant designed to provide the algorithms or methods most related to a given paper title.
-Instructions
-- Your response should always be a Python list; e.g. ["method_name_1", "method_name_2", "method_name_3"]
-- The length of the list should be between {min_exps_num} and {max_exps_num}
-- Use abbreviations to keep each method's name to 5 characters or less."""
-
-contribution_system_prompt_str = '''You are an assistant designed to propose potential contributions for a given paper title. Ensure you follow the following instructions:
-Instruction:
-- Your response should follow the JSON format.
-- Your response should have the following structure: {"contribution1": {"statement": "briefly describe what the contribution is", "reason": "reason why this contribution has not been made in other literature"}, "contribution2": {"statement": "briefly describe what the contribution is", "reason": "reason why this contribution has not been made in other literature"}, ...}'''
-
-media_system_prompt_str = '''
-You are an assistant designed to propose the necessary components of an academic paper. You need to decide which components should be included to achieve this paper's contributions.
-
-Available components: Figure, Table, Definition, Algorithm.
-
-Instruction:
-- Your response should follow the JSON format.
-- Your response should have the following structure: {"Figure 1": {"description": "briefly describe what the figure is", "reason": "why this figure is necessary to show the contribution of this paper"}, "Figure 2": {"description": "briefly describe what the figure is", "reason": "why this figure is necessary to show the contribution of this paper"}, "Table 1": {"description": "briefly describe what the table is", "reason": "why this table is necessary to show the contribution of this paper"}, ...}
-
-Example:
-Input:
-"Title: Playing Atari game using De-Centralized PPO
-Contributions: The main contributions of this paper are threefold: (1) We propose a novel adaptation of PPO for de-centralized multi-agent Atari gameplay, building upon the existing PPO framework (Wijmans et al.,2020). (2) We provide a comprehensive evaluation of our decentralized PPO approach, comparing its performance to state-of-the-art centralized methods in the Atari domain. (3) We identify key factors influencing the performance of decentralized PPO in Atari games and provide insights into potential avenues for future research in decentralized DRL."
-Response:
-{
- "Figure 1": {
- "description": "Architecture of the proposed decentralized PPO adaptation",
- "reason": "To visually present the novel adaptation of PPO for decentralized multi-agent Atari gameplay and highlight the differences from the existing PPO framework"
- },
- "Figure 2": {
- "description": "Performance comparison of decentralized PPO with state-of-the-art centralized methods",
- "reason": "To depict the effectiveness of our proposed approach by comparing its performance to existing centralized methods in the Atari domain"
- },
- "Figure 3": {
- "description": "Factors and hyperparameters affecting the performance of decentralized PPO",
- "reason": "To illustrate the key factors influencing the performance of decentralized PPO and their impact on various Atari games"
- },
- "Definition 1":{
- "description": "the novel evaluation metric for decentralized PPO approach",
- "reason": "To highlight the difference from other existing literatures"
- },
- "Table 1": {
- "description": "Summary of the experimental results from the evaluation of our decentralized PPO approach",
- "reason": "To show the comprehensive evaluation of our approach and its performance on multiple Atari games compared with state-of-the-art centralized methods"
- },
- "Algorithm 1": {
- "description": "Pseudocode of the proposed decentralized PPO algorithm",
- "reason": "To provide a clear and concise representation of our novel adaptation of PPO for decentralized multi-agent Atari gameplay"
- }
-}'''
-
-preliminaries_system_prompt_str = '''You are an assistant designed to propose preliminary concepts for a paper given its title and contributions. Ensure you follow the following instructions:
-Instruction:
-- Your response should follow the JSON format.
-- Your response should have the following structure: {"name of concept 1": 1, "name of concept 2": 2, ...}
-- A smaller number means the concept is more fundamental and should be introduced earlier. '''
-
-
-# one parameter: research_field
-section_generation_system_template = r"""You are an assistant designed to write academic papers in the field of {research_field} using LaTeX.
-Instructions
-- Your response should be professional and in academic tone.
-- Always give a high-level overview at the beginning of each section or subsection.
-"""
-
-KEYWORDS_SYSTEM = PromptTemplate(input_variables=["min_refs_num", "max_refs_num"],
- template=keywords_system_template)
-EXP_METHODS_SYSTEM = PromptTemplate(input_variables=["min_exps_num", "max_exps_num"],
- template=exp_methods_system_template)
-SECTION_GENERATION_SYSTEM = PromptTemplate(input_variables=["research_field"],
- template=section_generation_system_template)
-CONTRIBUTION = contribution_system_prompt_str
-COMPONENTS = media_system_prompt_str
-PRELIMINARIES = preliminaries_system_prompt_str
-KEYWORDS = keywords_system_prompt_str
-
-SYSTEM = {"keywords": KEYWORDS, "experiment_methods": EXP_METHODS_SYSTEM,
- "contributions": CONTRIBUTION, "components": COMPONENTS,
- "preliminaries": PRELIMINARIES}
-
-
-######################################################################################################################
-# Prompts for Generating Academic Paper
-######################################################################################################################
-
-cur_path = os.path.dirname(__file__)
-prompts_path = os.path.join(cur_path, '../prompts/instructions.json')
-with open(prompts_path, "r") as f:
- INSTRUCTIONS = json.load(f)
-# f = open(file_path)
-# When generating Academic Paper. Load instructions.
-# with open("../prompts/instructions.json", "r") as f:
-# INSTRUCTIONS = json.load(f)
-#
-# INSTRUCTIONS = {"introduction":
-# "- Include five paragraph: Establishing the motivation for the research. Explaining its importance and relevance to the AI community. Clearly state the problem you're addressing, your proposed solution, and the specific research questions or objectives. Briefly mention key related works for context and explain the main differences from this work. List three novel contributions of this paper.",
-# "results":
-# "Write the theoretical results section using LaTeX. Include theorem and corollary to support this paper (with formulas). Explain what assumptions are used and why they are standard and necessary. Do not include \section{...}. ",
-# "conclusion":
-# "- Read the existing parts of paper and write the conclusion section.",
-# "abstract":
-# "- Read the existing parts of paper and write the abstract."}
-#
-#
-# INSTRUCTIONS["backgrounds"] = "- Start from one high-level paragraph to state the central problem in this field with detailed examples in industrial applications and theoretical challenges. \n" \
-# "- Followed by two to three subsections: Explain the foundational concepts and notations that underpin your research using as many as mathematical formulas (written in LaTeX). " \
-# "Introduce more necessary mathematical notations, equations, or algorithms that are connected to this work. Present detailed discussions on how these concepts are applied in this paper."
-#
-#
-# INSTRUCTIONS["related works"] = r"- Discuss three to five main related fields to this paper. " \
-# r"For each field, select five to ten key publications from references. " \
-# r"For each reference, analyze its strengths and weaknesses in one or two sentences. " \
-# r"Present the related works in a logical manner, often chronologically. " \
-# r"Consider using a taxonomy or categorization to structure the discussion. " \
-# r"Do not use \section{...} or \subsection{...}; use \paragraph{...} to list related fields. "
-#
-# INSTRUCTIONS["methodology"] = "- Provide a high-level overview of the proposed method at the beginning of this section. \n " \
-# "- Assume you have some figures ('fig1.png', 'fig2.png', ...); they can be any figures you need (e.g. flow chart, model architecture, sample output, simulation result, or others you need). Insert figures you need with informative caption. \n" \
-# "- Use one subsection to give a detailed formulation of the proposed method and explain how it overcomes the weakness of existing methods mentioned in this paper. " \
-# " If necessary, write pseudo codes wrapped by \\begin{{algorithm}} ... \\end{{algorithm}} to explain the detailed steps instead of simply listing them. \n" \
-# "- Use one follow-up subsection to highlight the key concepts in the proposed method. " \
-# " Elaborate the novelty of these key concepts using formulas and inserting appropriate figures. \n" \
-# "- Ensure the name of each subsection to be specific. \n"
-#
-# INSTRUCTIONS["experiments"] = "- Provide a high-level overview at the beginning of this section.\n " \
-# "- If necessary, include a table to compare with other methods and bold our method.\n" \
-# "- Assume you have some figures ('exp1.png', 'exp2.png', ...); they can be any figures you need (e.g. loss curves, comparison with other methods, visualization, or others you need). Insert figures you need with informative caption. \n" \
-# "- If necessary, use different subsections to distinguish different experimental setup."
-
-
-def generate_paper_prompts(paper_info, section):
- title = paper_info["title"]
- description = paper_info["description"]
- references = paper_info["references"]
- paper = paper_info["body"]
-
- # fundamental_subprompt - describe the basic information of paper
- # instruction_subprompt - tell AI what to do
- # ref_instruction_subprompt - give AI references
- # self_subprompt - give AI existing written parts
- # output_subprompt - tell AI how to output
- fundamental_subprompt = "Your task is to write the {section} section of the paper with the title '{title}'. This paper has the following contributions: {description}\n"
- instruction_subprompt = "\n" \
- "Your response should follow the following instructions:\n" \
- "{instruction}\n" \
- "- Start with \section{{{section}}}\n"
-
- abstract_instruction_subprompt = "\n" \
- "Your response should follow the following instructions:\n" \
- "{instruction}\n"
- ref_instruction_subprompt = "- Read references. " \
- "Every time you use information from the references, you need to appropriately cite it (using \citep or \citet)." \
- "For example of \citep, the sentence where you use information from lei2022adaptive \citep{{lei2022adaptive}}. " \
- "For example of \citet, \citet{{lei2022adaptive}} claims some information.\n" \
- "- Avoid citing the same reference in a same paragraph.\n" \
- "\n" \
- "References:\n" \
- "{references}"
- self_subprompt = "The existing parts of this paper is provided here: {paper}.\n"
- output_subprompt = "Your response should start with \section{{{section}}}. Ensure that it can be directly compiled by LeTaX."
- abstract_output_subprompt = "Your response should start with \\begin{{abstract}} and should end with \\end{{abstract}}. Ensure that it can be directly compiled by LeTaX."
-
- review_prompts = PromptTemplate(
- input_variables=["title", "description", "instruction", "section", "references"],
- template=fundamental_subprompt + instruction_subprompt + ref_instruction_subprompt + output_subprompt)
- summarization_prompts = PromptTemplate(
- input_variables=["title", "description", "instruction", "section", "paper"],
- template=fundamental_subprompt + instruction_subprompt + self_subprompt + output_subprompt)
- abstract_prompts = PromptTemplate(
- input_variables=["title", "description", "instruction", "section", "paper"],
- template=fundamental_subprompt + abstract_instruction_subprompt + self_subprompt + abstract_output_subprompt)
-
- if section in ["introduction", "related works", "backgrounds"]:
- # title + references + instruction
- prompts = review_prompts.format(title=title,
- description=description,
- instruction=INSTRUCTIONS[section],
- section=section,
- references=references)
- elif section in ["abstract"]:
- # title + instruction + paper
- prompts = abstract_prompts.format(title=title,
- description=description,
- instruction=INSTRUCTIONS[section],
- section=section,
- paper=paper)
- elif section in ["methodology", "experiments", "conclusion"]:
- # title + instruction + paper
- prompts = summarization_prompts.format(title=title,
- description=description,
- instruction=INSTRUCTIONS[section],
- section=section,
- paper=paper)
- else:
- raise NotImplementedError
-
- log.info(f"Generated prompts for {section}: {prompts}")
- return prompts
-
-
-######################################################################################################################
-# Literature Review
-######################################################################################################################
-
-BG_INSTRUCTIONS = {"introduction": "Please include four paragraph: Establishing the motivation for this survey. Explaining its importance and relevance to the AI community. Clearly state the coverage of this survey and the specific research questions or objectives. Briefly mention key related work for context. ",
- "related works": r"Please discuss key publications, methods, and techniques in related research area. Analyze the strengths and weaknesses of existing methods, and present the related works in a logical manner, often chronologically. Consider using a taxonomy or categorization to structure the discussion. Do not use \section{...} or \subsection{...}; use \paragraph{...} instead. ",
- "backgrounds": r"Please clearly state the central problem in this field. Explain the foundational theories, concepts, and principles that underpin your research using as many as mathematical formulas or equations (written in LaTeX). Introduce any necessary mathematical notations, equations, or algorithms that are central to this field (written them in LaTeX). Do not include \section{...} but you can have \subsection{...}. ",}
-
-
-
-def generate_bg_summary_prompts(paper_info, section):
- title = paper_info["title"]
- description = paper_info["description"]
- references = paper_info["references"]
- paper = paper_info["body"]
-
- # fundamental_subprompt - describe the basic information of paper
- # instruction_subprompt - tell AI what to do
- # references_subprompt - give AI references
- # self_subprompt - give AI existing written parts
- # output_subprompt - tell AI how to output
-
- fundamental_subprompt = f"I am writing a machine learning survey about '{title}'. {description}\n"
- instruction_subprompt = f"You need to write the {section} section. {INSTRUCTIONS[section]}\n"
- references_subprompt = f"Please read the following references: \n{references}\n"\
- f"Every time you use information from the references, you need to cite its id after the sentence; " \
- f"for example, the sentence where you use information from 1905.09788 \cite{{1905.09788}}. " \
- f"Please avoid citing the same reference in the same paragraph. \n"
- self_subprompt = f"Here is the paper that I have written: {paper}.\n"
- output_subprompt = r"Put your response (do not include \section{...}) in the following Python script:" \
- f"with open(\"{section}.tex\", \"w\") as f: f.write(r'''your_response''')"
-
- if section in ["introduction", "related works", "backgrounds"]:
- # title + references + instruction
- prompts = fundamental_subprompt + instruction_subprompt + references_subprompt + output_subprompt
- else:
- raise NotImplementedError
-
- log.info(f"Generated prompts for {section}: {prompts}")
- return prompts
-
-if __name__ == "__main__":
- import json
- with open("../prompts/instructions.json", "w") as f:
- json.dump(INSTRUCTIONS, f)
- import json
- with open("../prompts/instructions.json", "r") as f:
- ins = json.load(f)
- print(ins == INSTRUCTIONS)
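
As a reading aid, here is a minimal sketch of how the prompt builders above might be driven. The `paper_info` values below are invented placeholders; only the dictionary keys and the `generate_paper_prompts` signature come from the deleted file itself.

```python
# Hypothetical driver for the prompts module above; every field value is a placeholder.
paper_info = {
    "title": "Playing Atari games with decentralized PPO",
    "description": "Contributions: (1) ...; (2) ...; (3) ...",
    "references": "lei2022adaptive: <reference text>",
    "body": "",  # no sections written yet
}

for section in ["introduction", "related works", "backgrounds"]:
    prompt = generate_paper_prompts(paper_info, section)
    # `prompt` is a plain string; pair it with one of the SYSTEM messages above
    # when calling whatever chat-completion API the Space uses.
```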
diff --git a/spaces/avivdm1/AutoGPT/autogpt/json_utils/json_fix_llm.py b/spaces/avivdm1/AutoGPT/autogpt/json_utils/json_fix_llm.py
deleted file mode 100644
index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000
--- a/spaces/avivdm1/AutoGPT/autogpt/json_utils/json_fix_llm.py
+++ /dev/null
@@ -1,220 +0,0 @@
-"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance
-of the ChatGPT API or LLM models."""
-from __future__ import annotations
-
-import contextlib
-import json
-from typing import Any, Dict
-
-from colorama import Fore
-from regex import regex
-
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_general import correct_json
-from autogpt.llm_utils import call_ai_function
-from autogpt.logs import logger
-from autogpt.speech import say_text
-
-JSON_SCHEMA = """
-{
- "command": {
- "name": "command name",
- "args": {
- "arg name": "value"
- }
- },
- "thoughts":
- {
- "text": "thought",
- "reasoning": "reasoning",
- "plan": "- short bulleted\n- list that conveys\n- long-term plan",
- "criticism": "constructive self-criticism",
- "speak": "thoughts summary to say to user"
- }
-}
-"""
-
-CFG = Config()
-
-
-def auto_fix_json(json_string: str, schema: str) -> str:
- """Fix the given JSON string to make it parseable and fully compliant with
- the provided schema using GPT-3.
-
- Args:
- json_string (str): The JSON string to fix.
- schema (str): The schema to use to fix the JSON.
- Returns:
- str: The fixed JSON string.
- """
- # Try to fix the JSON using GPT:
- function_string = "def fix_json(json_string: str, schema:str=None) -> str:"
- args = [f"'''{json_string}'''", f"'''{schema}'''"]
- description_string = (
- "This function takes a JSON string and ensures that it"
- " is parseable and fully compliant with the provided schema. If an object"
- " or field specified in the schema isn't contained within the correct JSON,"
- " it is omitted. The function also escapes any double quotes within JSON"
- " string values to ensure that they are valid. If the JSON string contains"
- " any None or NaN values, they are replaced with null before being parsed."
- )
-
- # If it doesn't already start with a "`", add one:
- if not json_string.startswith("`"):
- json_string = "```json\n" + json_string + "\n```"
- result_string = call_ai_function(
- function_string, args, description_string, model=CFG.fast_llm_model
- )
- logger.debug("------------ JSON FIX ATTEMPT ---------------")
- logger.debug(f"Original JSON: {json_string}")
- logger.debug("-----------")
- logger.debug(f"Fixed JSON: {result_string}")
- logger.debug("----------- END OF FIX ATTEMPT ----------------")
-
- try:
- json.loads(result_string) # just check the validity
- return result_string
- except json.JSONDecodeError: # noqa: E722
- # Get the call stack:
- # import traceback
- # call_stack = traceback.format_exc()
- # print(f"Failed to fix JSON: '{json_string}' "+call_stack)
- return "failed"
-
-
-def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]:
- """Fix the given JSON string to make it parseable and fully compliant with two techniques.
-
- Args:
- json_string (str): The JSON string to fix.
-
- Returns:
- str: The fixed JSON string.
- """
-
- # Parse and print Assistant response
- assistant_reply_json = fix_and_parse_json(assistant_reply)
- if assistant_reply_json == {}:
- assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(
- assistant_reply
- )
-
- if assistant_reply_json != {}:
- return assistant_reply_json
-
- logger.error(
- "Error: The following AI output couldn't be converted to a JSON:\n",
- assistant_reply,
- )
- if CFG.speak_mode:
- say_text("I have received an invalid JSON response from the OpenAI API.")
-
- return {}
-
-
-def fix_and_parse_json(
- json_to_load: str, try_to_fix_with_gpt: bool = True
-) -> Dict[Any, Any]:
- """Fix and parse JSON string
-
- Args:
- json_to_load (str): The JSON string.
- try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT.
- Defaults to True.
-
- Returns:
- str or dict[Any, Any]: The parsed JSON.
- """
-
- with contextlib.suppress(json.JSONDecodeError):
- json_to_load = json_to_load.replace("\t", "")
- return json.loads(json_to_load)
-
- with contextlib.suppress(json.JSONDecodeError):
- json_to_load = correct_json(json_to_load)
- return json.loads(json_to_load)
- # Let's do something manually:
- # sometimes GPT responds with something BEFORE the braces:
- # "I'm sorry, I don't understand. Please try again."
- # {"text": "I'm sorry, I don't understand. Please try again.",
- # "confidence": 0.0}
- # So let's try to find the first brace and then parse the rest
- # of the string
- try:
- brace_index = json_to_load.index("{")
- maybe_fixed_json = json_to_load[brace_index:]
- last_brace_index = maybe_fixed_json.rindex("}")
- maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1]
- return json.loads(maybe_fixed_json)
- except (json.JSONDecodeError, ValueError) as e:
- return try_ai_fix(try_to_fix_with_gpt, e, json_to_load)
-
-
-def try_ai_fix(
- try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str
-) -> Dict[Any, Any]:
- """Try to fix the JSON with the AI
-
- Args:
- try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI.
- exception (Exception): The exception that was raised.
- json_to_load (str): The JSON string to load.
-
- Raises:
- exception: If try_to_fix_with_gpt is False.
-
- Returns:
- str or dict[Any, Any]: The JSON string or dictionary.
- """
- if not try_to_fix_with_gpt:
- raise exception
- if CFG.debug_mode:
- logger.warn(
- "Warning: Failed to parse AI output, attempting to fix."
- "\n If you see this warning frequently, it's likely that"
- " your prompt is confusing the AI. Try changing it up"
- " slightly."
- )
- # Now try to fix this up using the ai_functions
- ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA)
-
- if ai_fixed_json != "failed":
- return json.loads(ai_fixed_json)
- # This allows the AI to react to the error message,
- # which usually results in it correcting its ways.
- # logger.error("Failed to fix AI output, telling the AI.")
- return {}
-
-
-def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str):
- if CFG.speak_mode and CFG.debug_mode:
- say_text(
- "I have received an invalid JSON response from the OpenAI API. "
- "Trying to fix it now."
- )
- logger.error("Attempting to fix JSON by finding outermost brackets\n")
-
- try:
- json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}")
- json_match = json_pattern.search(json_string)
-
- if json_match:
- # Extract the valid JSON object from the string
- json_string = json_match.group(0)
- logger.typewriter_log(
- title="Apparently json was fixed.", title_color=Fore.GREEN
- )
- if CFG.speak_mode and CFG.debug_mode:
- say_text("Apparently json was fixed.")
- else:
- return {}
-
- except (json.JSONDecodeError, ValueError):
- if CFG.debug_mode:
- logger.error(f"Error: Invalid JSON: {json_string}\n")
- if CFG.speak_mode:
- say_text("Didn't work. I will have to ignore this response then.")
- logger.error("Error: Invalid JSON, setting it to empty JSON now.\n")
- json_string = "{}"  # fall back to an empty JSON object; keep it a string for fix_and_parse_json
-
- return fix_and_parse_json(json_string)
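
The brace-trimming idea used in `fix_and_parse_json` above (drop any prose before the first `{` and after the last `}`, then parse) is easy to test in isolation. A stand-alone sketch, without the GPT fallback or the recursive-regex variant:

```python
import json

def extract_json_block(text: str) -> dict:
    """Strip any prose before the first '{' and after the last '}', then parse the remainder."""
    start = text.index("{")
    end = text.rindex("}")
    return json.loads(text[start : end + 1])

reply = 'Sorry, here is the command: {"command": {"name": "noop", "args": {}}}'
print(extract_json_block(reply))  # {'command': {'name': 'noop', 'args': {}}}
```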
diff --git a/spaces/awacke1/AI-Quantum/README.md b/spaces/awacke1/AI-Quantum/README.md
deleted file mode 100644
index d1e5e41aafb6e47f286eb13d1f0c3219bced5395..0000000000000000000000000000000000000000
--- a/spaces/awacke1/AI-Quantum/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ✨Quantum-Programming-Streamlit-AI🧠
-emoji: ✨🧠
-colorFrom: yellow
-colorTo: green
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git "a/spaces/awacke1/CardWriterPro/pages/5_\360\237\217\213\357\270\217\342\200\215\342\231\200\357\270\217_Model_training.py" "b/spaces/awacke1/CardWriterPro/pages/5_\360\237\217\213\357\270\217\342\200\215\342\231\200\357\270\217_Model_training.py"
deleted file mode 100644
index b329c57a7c0f759c4fa6885f4cda5238a9fdfe80..0000000000000000000000000000000000000000
--- "a/spaces/awacke1/CardWriterPro/pages/5_\360\237\217\213\357\270\217\342\200\215\342\231\200\357\270\217_Model_training.py"
+++ /dev/null
@@ -1,87 +0,0 @@
-import streamlit as st
-from persist import persist, load_widget_state
-
-global variable_output
-
-def main():
-
- cs_body()
-
-
-def cs_body():
-
- st.markdown('# Training Details')
- st.write("Provide an overview of the Training Data and Training Procedure for this model")
- left, middle, right = st.columns([2,1,7])
-
- with left:
- st.write("\n")
- st.write("\n")
- st.markdown('## Training Data:')
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- with middle:
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.markdown(' \n ## Training Procedure')
- with left:
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
-
- st.markdown('#### Preprocessing:')
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.markdown('#### Speeds, Sizes, Time:')
-
- with right:
- #soutput_jinja = parse_into_jinja_markdown()
-
- st.text_area("", help ="Ideally this links to a Dataset Card.", key=persist("training_Data"))
- #st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
-
- st.text_area("",key=persist("model_preprocessing"))
- st.text_area("", help = "This section provides information about throughput, start/end time, checkpoint size if relevant, etc.", key=persist("Speeds_Sizes_Times"))
-
-
-
-
-
-
-if __name__ == '__main__':
- load_widget_state()
- main()
\ No newline at end of file
diff --git a/spaces/awacke1/Docker.Jupyterlab.Integration.HF/on_startup.sh b/spaces/awacke1/Docker.Jupyterlab.Integration.HF/on_startup.sh
deleted file mode 100644
index 448000271bbc7142681947fd1a447772f12ecfff..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Docker.Jupyterlab.Integration.HF/on_startup.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/bash
-# Write some commands here that will run on root user before startup.
-# For example, to clone transformers and install it in dev mode:
-# git clone https://github.com/huggingface/transformers.git
-# cd transformers && pip install -e ".[dev]"
\ No newline at end of file
diff --git a/spaces/awacke1/MultiRhymeLyricSmith/rhyme-with-ai/token_weighter.py b/spaces/awacke1/MultiRhymeLyricSmith/rhyme-with-ai/token_weighter.py
deleted file mode 100644
index 69f2ec2ab515cb580acfa3c6ebcc83a6ab70db88..0000000000000000000000000000000000000000
--- a/spaces/awacke1/MultiRhymeLyricSmith/rhyme-with-ai/token_weighter.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import numpy as np
-
-
-class TokenWeighter:
- def __init__(self, tokenizer):
- self.tokenizer_ = tokenizer
- self.proba = self.get_token_proba()
-
- def get_token_proba(self):
- valid_token_mask = self._filter_short_partial(self.tokenizer_.vocab)
- return valid_token_mask
-
- def _filter_short_partial(self, vocab):
- valid_token_ids = [v for k, v in vocab.items() if len(k) > 1 and "#" not in k]
- is_valid = np.zeros(len(vocab.keys()))
- is_valid[valid_token_ids] = 1
- return is_valid
\ No newline at end of file
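
A toy illustration of the masking rule implemented in `_filter_short_partial` above; the vocabulary dict is invented and stands in for a real tokenizer vocab.

```python
import numpy as np

toy_vocab = {"hello": 0, "##lo": 1, "a": 2, "world": 3}  # invented token -> id mapping
valid_ids = [v for k, v in toy_vocab.items() if len(k) > 1 and "#" not in k]
mask = np.zeros(len(toy_vocab))
mask[valid_ids] = 1
print(mask)  # [1. 0. 0. 1.] -- single characters and '##' continuation pieces get zero weight
```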
diff --git a/spaces/awacke1/PlantFractalsMathGameWithJuliaSetnStrangeAttractors/app.py b/spaces/awacke1/PlantFractalsMathGameWithJuliaSetnStrangeAttractors/app.py
deleted file mode 100644
index 51d585ff61112dc4bf410ce874909f18a5a9408f..0000000000000000000000000000000000000000
--- a/spaces/awacke1/PlantFractalsMathGameWithJuliaSetnStrangeAttractors/app.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import streamlit as st
-import numpy as np
-import matplotlib.pyplot as plt
-
-st.title('Plant Fractal')
-
-def generate_strange_attractor(num_points, a, b, c, d):
- x, y, z = 0.1, 0.0, 0.0
- points = []
- for i in range(num_points):
- x_dot = np.sin(y * a) - np.cos(x * b)
- y_dot = np.sin(z * c) - np.cos(y * a)
- z_dot = np.sin(x * d) - np.cos(z * c)
- x += 0.1 * x_dot
- y += 0.1 * y_dot
- z += 0.1 * z_dot
- points.append((x, y, z))
- x, y, z = zip(*points)
- return (x, y, z)
-
-def generate_fractal_1(num_points, a, b, c, d):
- x, y, z = 0.0, 0.0, 0.0
- points = []
- for i in range(num_points):
- x_dot = np.sin(a * y) - np.sin(b * x)
- y_dot = np.sin(c * x) - np.sin(d * y)
- z_dot = 0.2
- x += x_dot
- y += y_dot
- z += z_dot
- points.append((x, y, z))
- x, y, z = zip(*points)
- return (x, y, z)
-
-def generate_fractal_2(num_points, a, b, c, d):
- x, y, z = 0.1, 0.0, 0.0
- points = []
- for i in range(num_points):
- x_dot = np.sin(y * a) - np.cos(x * b)
- y_dot = np.sin(z * c) - np.cos(y * a)
- z_dot = np.sin(x * d) - np.cos(z * c)
- x += 0.2 * x_dot
- y += 0.2 * y_dot
- z += 0.2 * z_dot
- points.append((x, y, z))
- x, y, z = zip(*points)
- return (x, y, z)
-
-num_points = st.slider('How many points do you want to generate?', 1000, 100000, 10000)
-fractal_type = st.selectbox('Select a fractal type', ('Strange Attractor', 'Fractal 1', 'Fractal 2'))
-
-if fractal_type == 'Strange Attractor':
- a = st.slider('a', 0.0, 2.0, 1.2)
- b = st.slider('b', 0.0, 2.0, 0.6)
- c = st.slider('c', 0.0, 2.0, 1.7)
- d = st.slider('d', 0.0, 2.0, 1.5)
- x, y, z = generate_strange_attractor(num_points, a, b, c, d)
- fig = plt.figure()
- ax = fig.add_subplot(111, projection='3d')
- ax.plot(x, y, z, linewidth=1)
- ax.set_title('Strange Attractor Fractal')
- ax.set_xlabel('X')
- ax.set_ylabel('Y')
- ax.set_zlabel('Z')
- st.pyplot(fig)
-
-if fractal_type == 'Fractal 1':
- a = st.slider('a', 0.0, 2.0, 1.2)
- b = st.slider('b', 0.0, 2.0, 0.6)
- c = st.slider('c', 0.0, 2.0, 1.7)
- d = st.slider('d', 0.0, 2.0, 1.5)
- x, y, z = generate_fractal_1(num_points, a, b, c, d)
- fig = plt.figure()
- ax = fig.add_subplot(111, projection='3d')
- ax.plot(x, y, z, linewidth=1)
- ax.set_title('Fractal 1')
- ax.set_xlabel('X')
- ax.set_ylabel('Y')
- ax.set_zlabel('Z')
- st.pyplot(fig)
-
-if fractal_type == 'Fractal 2':
- a = st.slider('a', 0.0, 2.0, 1.2)
- b = st.slider('b', 0.0, 2.0, 0.6)
- c = st.slider('c', 0.0, 2.0, 1.7)
- d = st.slider('d', 0.0, 2.0, 1.5)
- x, y, z = generate_fractal_2(num_points, a, b, c, d)
- fig = plt.figure()
- ax = fig.add_subplot(111, projection='3d')
- ax.plot(x, y, z, linewidth=1)
- ax.set_title('Fractal 2')
- ax.set_xlabel('X')
- ax.set_ylabel('Y')
- ax.set_zlabel('Z')
- st.pyplot(fig)
-
-
-
diff --git a/spaces/awacke1/Token-Classification-NER-dslim-bert-base-NER/app.py b/spaces/awacke1/Token-Classification-NER-dslim-bert-base-NER/app.py
deleted file mode 100644
index 6df30f74bda732e50d16b5a5e9cabbc5690577d7..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Token-Classification-NER-dslim-bert-base-NER/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/dslim/bert-base-NER").launch()
\ No newline at end of file
diff --git a/spaces/ayush5710/Codellama-13b-integratable-chatbot/style.css b/spaces/ayush5710/Codellama-13b-integratable-chatbot/style.css
deleted file mode 100644
index c56b88671339bdf0c805ddea8ec742dede3e86a9..0000000000000000000000000000000000000000
--- a/spaces/ayush5710/Codellama-13b-integratable-chatbot/style.css
+++ /dev/null
@@ -1,38 +0,0 @@
-.iframe-container {
- position: fixed;
- width: 500px; /* Set the desired width for your iframe */
- height: 700px; /* Set the desired height for the visible iframe content */
- overflow: hidden;
- display: none;
- right: 50px;
- z-index: 999;
- top: 3.2em;
- transition: display 0.5s ease-in-out;
- border-radius: 5%;
- }
-
- .iframe-container iframe {
- position: absolute;
- top: -410px;
- width: 100%;
- height: 985px;
- /* height: 1200px; */
- overflow: hidden;
- border-radius: 5%;
- }
-
- .click {
- position: fixed;
- bottom: 35px; /* Adjust the distance from the bottom as needed */
- right: 20px; /* Adjust the distance from the right as needed */
- padding: 10px 20px;
- background-color: rgba(233, 113, 33, 0.7);
- color: #fff;
- border: none;
- border-radius: 6px;
- cursor: pointer;
- z-index: 9999;
- }
-
-
diff --git a/spaces/badayvedat/AudioSep/models/CLAP/training/params.py b/spaces/badayvedat/AudioSep/models/CLAP/training/params.py
deleted file mode 100644
index 0cc1a0e2d982e900988cf5a4b24b2e59b093537b..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/AudioSep/models/CLAP/training/params.py
+++ /dev/null
@@ -1,563 +0,0 @@
-import argparse
-
-
-def get_default_params(model_name):
- # Params from paper (https://arxiv.org/pdf/2103.00020.pdf)
- model_name = model_name.lower()
- if "vit" in model_name:
- return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.98, "eps": 1.0e-6}
- else:
- return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.999, "eps": 1.0e-8}
-
-
-def parse_args():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--train-data",
- type=str,
- default=None,
- help="Path to h5 filewith training data",
- )
- parser.add_argument(
- "--val-data",
- type=str,
- default=None,
- help="Path to h5 file with validation data",
- )
- parser.add_argument(
- "--freeze-text",
- default=False,
- action="store_true",
- help="if you need to freeze the text encoder, make this True",
- )
- parser.add_argument(
- "--freeze-text-after",
- type=int,
- default=-1,
- help="if you need to freeze the text encoder after (include) epoch x, set this param to x. Set -1 to disable it",
- )
- parser.add_argument(
- "--train-ipc",
- type=str,
- default=None,
- help="Path to npy file of the number of instance per class in training data",
- )
- parser.add_argument(
- "--val-ipc",
- type=str,
- default=None,
- help="Path to npy file of the number of instance per class in validation data",
- )
- parser.add_argument(
- "--train-num-samples",
- type=int,
- default=None,
- help="Number of samples in dataset. Required for webdataset if not available in info file.",
- )
- parser.add_argument(
- "--val-num-samples",
- type=int,
- default=None,
- help="Number of samples in dataset. Useful for webdataset if not available in info file.",
- )
- parser.add_argument(
- "--dataset-type",
- choices=["webdataset", "csv", "auto", "toy"],
- default="auto",
- help="Which type of dataset to process.",
- )
- parser.add_argument(
- "--csv-separator",
- type=str,
- default="\t",
- help="For csv-like datasets, which separator to use.",
- )
- parser.add_argument(
- "--csv-img-key",
- type=str,
- default="filepath",
- help="For csv-like datasets, the name of the key for the image paths.",
- )
- parser.add_argument(
- "--csv-caption-key",
- type=str,
- default="title",
- help="For csv-like datasets, the name of the key for the captions.",
- )
- parser.add_argument(
- "--imagenet-val",
- type=str,
- default=None,
- help="Path to imagenet val set for conducting zero shot evaluation.",
- )
- parser.add_argument(
- "--imagenet-v2",
- type=str,
- default=None,
- help="Path to imagenet v2 for conducting zero shot evaluation.",
- )
- parser.add_argument(
- "--datasetnames",
- nargs="+",
- default=None,
- help="If loading webdataset, spedify the dataset names to load. Can be some of these: Clotho, audioset, audiocaps, BBCSoundEffects",
- )
- parser.add_argument(
- "--full-train-dataset",
- nargs="+",
- default=None,
- help="Which dataset will be trained with all the subsets. (train+test)",
- )
- parser.add_argument(
- "--exclude-eval-dataset",
- nargs="+",
- default=None,
- help="Which dataset will be excluded with evaluation",
- )
- parser.add_argument(
- "--datasetinfos",
- nargs="+",
- default=None,
- help="If loading webdataset, spedify the dataset types to load. Can be some of these: train, test, valid, unbalanced_train, balanced_train, eval",
- )
- parser.add_argument(
- "--dataset-proportion",
- type=float,
- default=1.0,
- help="How much proportion of dataset we want to train.",
- )
- parser.add_argument(
- "--remotedata",
- default=False,
- action="store_true",
- help="if the dataset is remote, set this flag",
- )
- parser.add_argument(
- "--class-label-path",
- type=str,
- default=None,
- help="The path of the class label pickle or csv.",
- )
- parser.add_argument(
- "--datasetpath",
- type=str,
- default="/mnt/audio_clip/webdataset_tar",
- help="The path to the dataset",
- )
- parser.add_argument(
- "--logs",
- type=str,
- default="./logs/",
- help="Where to store tensorboard logs. Use None to avoid storing logs.",
- )
- parser.add_argument(
- "--log-local",
- action="store_true",
- default=False,
- help="log files on local master, otherwise global master only.",
- )
- parser.add_argument(
- "--name",
- type=str,
- default=None,
- help="Optional identifier for the experiment when storing logs. Otherwise use current time.",
- )
- parser.add_argument(
- "--workers", type=int, default=1, help="Number of workers per GPU."
- )
- parser.add_argument(
- "--batch-size", type=int, default=64, help="Batch size per GPU."
- )
- parser.add_argument(
- "--epochs", type=int, default=32, help="Number of epochs to train for."
- )
- parser.add_argument("--lr", type=float, default=None, help="Learning rate.")
- parser.add_argument("--beta1", type=float, default=None, help="Adam beta 1.")
- parser.add_argument("--beta2", type=float, default=None, help="Adam beta 2.")
- parser.add_argument("--eps", type=float, default=None, help="Adam epsilon.")
- parser.add_argument("--momentum", type=float, default=None, help="SGD epsilon.")
- parser.add_argument("--wd", type=float, default=0.2, help="Weight decay.")
-
- parser.add_argument(
- "--split-opt",
- action="store_true",
- default=False,
- help="Use this flag to skip the learning rate decay.",
- )
- parser.add_argument(
- "--lr-pretrained", type=float, default=None, help="Learning rate for text."
- )
- parser.add_argument(
- "--beta1-pretrained", type=float, default=None, help="Adam beta 1 for text."
- )
- parser.add_argument(
- "--beta2-pretrained", type=float, default=None, help="Adam beta 2 for text."
- )
- parser.add_argument(
- "--eps-pretrained", type=float, default=None, help="Adam epsilon for text."
- )
- parser.add_argument(
- "--wd-pretrained", type=float, default=0.2, help="Weight decay for text."
- )
- parser.add_argument(
- "--momentum-pretrained", type=float, default=0.9, help="Momentum for text."
- )
- parser.add_argument(
- "--lr-new", type=float, default=None, help="Learning rate for audio."
- )
- parser.add_argument(
- "--beta1-new", type=float, default=None, help="Adam beta 1 for audio."
- )
- parser.add_argument(
- "--beta2-new", type=float, default=None, help="Adam beta 2 for audio."
- )
- parser.add_argument(
- "--eps-new", type=float, default=None, help="Adam epsilon for audio."
- )
- parser.add_argument(
- "--wd-new", type=float, default=0.2, help="Weight decay for audio."
- )
- parser.add_argument(
- "--momentum-new", type=float, default=0.9, help="Momentum for audio."
- )
- parser.add_argument(
- "--warmup", type=int, default=10000, help="Number of steps to warmup for."
- )
- parser.add_argument(
- "--use-bn-sync",
- default=False,
- action="store_true",
- help="Whether to use batch norm sync.",
- )
- parser.add_argument(
- "--skip-scheduler",
- action="store_true",
- default=False,
- help="Use this flag to skip the learning rate decay.",
- )
- parser.add_argument(
- "--save-frequency", type=int, default=1, help="How often to save checkpoints."
- )
- parser.add_argument(
- "--save-top-performance",
- type=int,
- default=0,
- help="Save the top x performance weights if the value >0",
- )
- parser.add_argument(
- "--save-most-recent",
- action="store_true",
- default=False,
- help="Always save the most recent model trained to epoch_latest.pt.",
- )
- parser.add_argument(
- "--zeroshot-frequency", type=int, default=2, help="How often to run zero shot."
- )
- parser.add_argument(
- "--val-frequency",
- type=int,
- default=1,
- help="How often to run evaluation with val data.",
- )
- parser.add_argument(
- "--resume",
- default=None,
- type=str,
- help="path to latest checkpoint (default: none)",
- )
- parser.add_argument(
- "--precision",
- choices=["amp", "fp16", "fp32"],
- default="amp",
- help="Floating point precision.",
- )
- parser.add_argument(
- "--amodel",
- type=str,
- default="RN50",
- help="Name of the audio backbone to use.",
- )
- parser.add_argument(
- "--tmodel",
- type=str,
- default="transformer",
- help="Name of the text backbone to use. Can be [transformer, bert, roberta, bart]",
- )
- parser.add_argument(
- "--pretrained-audio",
- default="",
- type=str,
- help="Use a pretrained audio model weights for the audio encoder of CLAP",
- )
- parser.add_argument(
- "--pretrained-text",
- default="",
- type=str,
- help="Use a pretrained text model weights for the text encoder of CLAP",
- )
- parser.add_argument(
- "--pretrained",
- default="",
- type=str,
- help="Use a pretrained CLIP model weights with the specified tag or file path.",
- )
- parser.add_argument(
- "--pretrained-image",
- default=False,
- action="store_true",
- help="Load imagenet pretrained weights for image tower backbone if available.",
- )
- parser.add_argument(
- "--lock-image",
- default=False,
- action="store_true",
- help="Lock full image tower by disabling gradients.",
- )
- parser.add_argument(
- "--lock-image-unlocked-groups",
- type=int,
- default=0,
- help="Leave last n image tower layer groups unlocked.",
- )
- parser.add_argument(
- "--lock-image-freeze-bn-stats",
- default=False,
- action="store_true",
- help="Freeze BatchNorm running stats in image tower for any locked layers.",
- )
- parser.add_argument(
- "--local-loss",
- default=False,
- action="store_true",
- help="calculate loss w/ local features @ global (instead of realizing full global @ global matrix)",
- )
- parser.add_argument(
- "--gather-with-grad",
- default=False,
- action="store_true",
- help="enable full distributed gradient for feature gather",
- )
- parser.add_argument(
- "--force-quick-gelu",
- default=False,
- action="store_true",
- help="Force use of QuickGELU activation for non-OpenAI transformer models.",
- )
- parser.add_argument(
- "--torchscript",
- default=False,
- action="store_true",
- help="torch.jit.script the model, also uses jit version of OpenAI models if pretrained=='openai'",
- )
- parser.add_argument(
- "--trace",
- default=False,
- action="store_true",
- help="torch.jit.trace the model for inference / eval only",
- )
- # arguments for distributed training
- parser.add_argument(
- "--dist-url",
- default="env://",
- type=str,
- help="url used to set up distributed training",
- )
- parser.add_argument(
- "--dist-backend", default="nccl", type=str, help="distributed backend"
- )
- parser.add_argument(
- "--report-to",
- default="",
- type=str,
- help="Options are ['wandb', 'tensorboard', 'wandb,tensorboard']",
- )
- parser.add_argument(
- "--wandb-notes", default="", type=str, help="Notes if logging with wandb"
- )
- parser.add_argument(
- "--C", type=float, default=3.16, help="inverse regularizer for logistic reg."
- )
- parser.add_argument(
- "--debug",
- default=False,
- action="store_true",
- help="If true, more information is logged.",
- )
- parser.add_argument(
- "--copy-codebase",
- default=False,
- action="store_true",
-        help="If true, copy the entire codebase to the log directory and execute from there.",
- )
- parser.add_argument(
- "--horovod",
- default=False,
- action="store_true",
- help="Use horovod for distributed training.",
- )
- parser.add_argument(
- "--ddp-static-graph",
- default=False,
- action="store_true",
- help="Enable static graph optimization for DDP in PyTorch >= 1.11.",
- )
- parser.add_argument(
- "--no-set-device-rank",
- default=False,
- action="store_true",
- help="Don't set device index from local rank (when CUDA_VISIBLE_DEVICES restricted to one per proc).",
- )
- parser.add_argument("--seed", type=int, default=4242, help="Default random seed.")
-
- parser.add_argument(
- "--top-k-checkpoint-select-dataset",
- type=str,
- default="all",
-        help="The dataset used for selecting the top-k checkpoint.",
- )
-
- # @R10, @R@5, @R1, mAP@10
- parser.add_argument(
- "--top-k-checkpoint-select-metric",
- type=str,
- default="_R@10",
-        help="The metric used for selecting the top-k checkpoint.",
- )
- parser.add_argument(
- "--openai-model-cache-dir",
- type=str,
- default="~/.cache/clip",
- help="Directory to download OpenAI models.",
- )
- parser.add_argument(
- "--optimizer",
- type=str,
- default="adamw",
- help="can be AdamW or SGD",
- )
- parser.add_argument(
- "--parallel-eval",
- default=False,
- action="store_true",
- help="Eval in parallel (multi-GPU, multi-node).",
- )
-
- parser.add_argument(
- "--no-eval",
- default=False,
- action="store_true",
- help="Training without evaluation.",
- )
-
- parser.add_argument(
- "--lp-mlp",
- default=False,
- action="store_true",
-        help="Whether to use an MLP layer for the linear probe.",
- )
-
- parser.add_argument(
- "--lp-freeze",
- default=False,
- action="store_true",
-        help="Whether to freeze CLAP during linear probing.",
- )
-
- parser.add_argument(
- "--lp-act",
- default="None",
- type=str,
- help="Options are ['relu','elu','prelu','softmax','sigmoid']",
- )
-
- parser.add_argument(
-        "--lp-loss", type=str, default="bce", help="Loss function of the linear probe."
- )
-
- parser.add_argument(
- "--lp-metrics",
- type=str,
- default="map,mauc,acc",
- help="Metrics of Linear Probe.",
- )
-
- parser.add_argument(
- "--lp-lr", type=float, default=1e-4, help="learning rate of linear probe"
- )
- parser.add_argument(
- "--kappa",
- type=float,
- default=0,
-        help="Kappa in the weighted contrastive loss; the default of 0 turns the weighted contrastive loss off.",
- )
-
- parser.add_argument(
- "--data-filling",
- type=str,
- default="pad",
- help="type of data filling when the audio length is shorter than the max length."
- "Can be one of the following: repeat, repeatpad, pad",
- )
- parser.add_argument(
- "--data-truncating",
- type=str,
- default="rand_trunc",
- help="type of data truncation when the audio length is longer than the max length."
- "Can be one of the following: rand_trunc, fusion",
- )
-
- parser.add_argument(
- "--clap-mlploss",
- default=False,
- action="store_true",
-        help="Whether to use the MLP loss for the CLAP model.",
- )
-
- parser.add_argument(
- "--wandb-id",
- type=str,
- default=None,
-        help="The id of the wandb experiment to restore.",
- )
-
- parser.add_argument(
-        "--sleep", type=float, default=0, help="Sleep n seconds before starting training."
- )
-
- # variable length processing
- parser.add_argument(
- "--enable-fusion",
- default=False,
- action="store_true",
-        help="Enable feature fusion for variable-length data",
- )
-
- parser.add_argument(
- "--fusion-type",
- type=str,
- default="None",
- help="Type is among ['channel_map', 'daf_1d','aff_1d','iaff_1d','daf_2d','aff_2d','iaff_2d']",
- )
-
- parser.add_argument(
- "--mixup",
- default=False,
- action="store_true",
- help="Enable mixup in finetuning training.",
- )
- parser.add_argument(
- "--text-augment-selection",
- type=str,
- default=None,
- help="For selecting levels of augmented text. Type is among ['all', 'augment_only', 'none']",
- )
-
- args = parser.parse_args()
-
- # If some params are not passed, we use the default values based on model name.
- default_params = get_default_params(args.amodel)
- for name, val in default_params.items():
- if getattr(args, name) is None:
- setattr(args, name, val)
-
- return args
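
The final block above fills any optimizer hyperparameters the user left unset with model-dependent defaults. A minimal, self-contained sketch of that pattern follows; get_default_params, its keys, and the numeric values are illustrative assumptions only, not the values used by this training script.

import argparse

def get_default_params(model_name: str) -> dict:
    # Illustrative assumption: transformer-style audio backbones get a smaller
    # Adam epsilon than CNN-style ones. The real table lives elsewhere in the repo.
    if "ViT" in model_name or "HTSAT" in model_name:
        return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.98, "eps": 1.0e-6}
    return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.999, "eps": 1.0e-8}

def fill_defaults(args: argparse.Namespace, model_name: str) -> argparse.Namespace:
    # Only overwrite values that argparse left as None (i.e. the user did not pass them).
    for name, val in get_default_params(model_name).items():
        if getattr(args, name, None) is None:
            setattr(args, name, val)
    return args

if __name__ == "__main__":
    ns = argparse.Namespace(lr=None, beta1=0.9, beta2=None, eps=None)
    print(fill_defaults(ns, "HTSAT-tiny"))  # lr, beta2, eps filled in; beta1 kept
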
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/SkeletonUtils.js b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/SkeletonUtils.js
deleted file mode 100644
index 3c7ef754468eb0f5d37b9439edfbb77571bc957c..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/SkeletonUtils.js
+++ /dev/null
@@ -1,600 +0,0 @@
-/**
- * @author sunag / http://www.sunag.com.br
- */
-
-import {
- AnimationClip,
- AnimationMixer,
- Euler,
- Matrix4,
- Quaternion,
- QuaternionKeyframeTrack,
- SkeletonHelper,
- Vector2,
- Vector3,
- VectorKeyframeTrack
-} from "../../../build/three.module.js";
-
-'use strict';
-
-var SkeletonUtils = {
-
- retarget: function () {
-
- var pos = new Vector3(),
- quat = new Quaternion(),
- scale = new Vector3(),
- bindBoneMatrix = new Matrix4(),
- relativeMatrix = new Matrix4(),
- globalMatrix = new Matrix4();
-
- return function ( target, source, options ) {
-
- options = options || {};
- options.preserveMatrix = options.preserveMatrix !== undefined ? options.preserveMatrix : true;
- options.preservePosition = options.preservePosition !== undefined ? options.preservePosition : true;
- options.preserveHipPosition = options.preserveHipPosition !== undefined ? options.preserveHipPosition : false;
- options.useTargetMatrix = options.useTargetMatrix !== undefined ? options.useTargetMatrix : false;
- options.hip = options.hip !== undefined ? options.hip : "hip";
- options.names = options.names || {};
-
- var sourceBones = source.isObject3D ? source.skeleton.bones : this.getBones( source ),
- bones = target.isObject3D ? target.skeleton.bones : this.getBones( target ),
- bindBones,
- bone, name, boneTo,
- bonesPosition, i;
-
- // reset bones
-
- if ( target.isObject3D ) {
-
- target.skeleton.pose();
-
- } else {
-
- options.useTargetMatrix = true;
- options.preserveMatrix = false;
-
- }
-
- if ( options.preservePosition ) {
-
- bonesPosition = [];
-
- for ( i = 0; i < bones.length; i ++ ) {
-
- bonesPosition.push( bones[ i ].position.clone() );
-
- }
-
- }
-
- if ( options.preserveMatrix ) {
-
- // reset matrix
-
- target.updateMatrixWorld();
-
- target.matrixWorld.identity();
-
- // reset children matrix
-
- for ( i = 0; i < target.children.length; ++ i ) {
-
- target.children[ i ].updateMatrixWorld( true );
-
- }
-
- }
-
- if ( options.offsets ) {
-
- bindBones = [];
-
- for ( i = 0; i < bones.length; ++ i ) {
-
- bone = bones[ i ];
- name = options.names[ bone.name ] || bone.name;
-
- if ( options.offsets && options.offsets[ name ] ) {
-
- bone.matrix.multiply( options.offsets[ name ] );
-
- bone.matrix.decompose( bone.position, bone.quaternion, bone.scale );
-
- bone.updateMatrixWorld();
-
- }
-
- bindBones.push( bone.matrixWorld.clone() );
-
- }
-
- }
-
- for ( i = 0; i < bones.length; ++ i ) {
-
- bone = bones[ i ];
- name = options.names[ bone.name ] || bone.name;
-
- boneTo = this.getBoneByName( name, sourceBones );
-
- globalMatrix.copy( bone.matrixWorld );
-
- if ( boneTo ) {
-
- boneTo.updateMatrixWorld();
-
- if ( options.useTargetMatrix ) {
-
- relativeMatrix.copy( boneTo.matrixWorld );
-
- } else {
-
- relativeMatrix.getInverse( target.matrixWorld );
- relativeMatrix.multiply( boneTo.matrixWorld );
-
- }
-
- // ignore scale to extract rotation
-
- scale.setFromMatrixScale( relativeMatrix );
- relativeMatrix.scale( scale.set( 1 / scale.x, 1 / scale.y, 1 / scale.z ) );
-
- // apply to global matrix
-
- globalMatrix.makeRotationFromQuaternion( quat.setFromRotationMatrix( relativeMatrix ) );
-
- if ( target.isObject3D ) {
-
- var boneIndex = bones.indexOf( bone ),
- wBindMatrix = bindBones ? bindBones[ boneIndex ] : bindBoneMatrix.getInverse( target.skeleton.boneInverses[ boneIndex ] );
-
- globalMatrix.multiply( wBindMatrix );
-
- }
-
- globalMatrix.copyPosition( relativeMatrix );
-
- }
-
- if ( bone.parent && bone.parent.isBone ) {
-
- bone.matrix.getInverse( bone.parent.matrixWorld );
- bone.matrix.multiply( globalMatrix );
-
- } else {
-
- bone.matrix.copy( globalMatrix );
-
- }
-
- if ( options.preserveHipPosition && name === options.hip ) {
-
- bone.matrix.setPosition( pos.set( 0, bone.position.y, 0 ) );
-
- }
-
- bone.matrix.decompose( bone.position, bone.quaternion, bone.scale );
-
- bone.updateMatrixWorld();
-
- }
-
- if ( options.preservePosition ) {
-
- for ( i = 0; i < bones.length; ++ i ) {
-
- bone = bones[ i ];
- name = options.names[ bone.name ] || bone.name;
-
- if ( name !== options.hip ) {
-
- bone.position.copy( bonesPosition[ i ] );
-
- }
-
- }
-
- }
-
- if ( options.preserveMatrix ) {
-
- // restore matrix
-
- target.updateMatrixWorld( true );
-
- }
-
- };
-
- }(),
-
- retargetClip: function ( target, source, clip, options ) {
-
- options = options || {};
- options.useFirstFramePosition = options.useFirstFramePosition !== undefined ? options.useFirstFramePosition : false;
- options.fps = options.fps !== undefined ? options.fps : 30;
- options.names = options.names || [];
-
- if ( ! source.isObject3D ) {
-
- source = this.getHelperFromSkeleton( source );
-
- }
-
- var numFrames = Math.round( clip.duration * ( options.fps / 1000 ) * 1000 ),
- delta = 1 / options.fps,
- convertedTracks = [],
- mixer = new AnimationMixer( source ),
- bones = this.getBones( target.skeleton ),
- boneDatas = [],
- positionOffset,
- bone, boneTo, boneData,
- name, i, j;
-
- mixer.clipAction( clip ).play();
- mixer.update( 0 );
-
- source.updateMatrixWorld();
-
- for ( i = 0; i < numFrames; ++ i ) {
-
- var time = i * delta;
-
- this.retarget( target, source, options );
-
- for ( j = 0; j < bones.length; ++ j ) {
-
- name = options.names[ bones[ j ].name ] || bones[ j ].name;
-
- boneTo = this.getBoneByName( name, source.skeleton );
-
- if ( boneTo ) {
-
- bone = bones[ j ];
- boneData = boneDatas[ j ] = boneDatas[ j ] || { bone: bone };
-
- if ( options.hip === name ) {
-
- if ( ! boneData.pos ) {
-
- boneData.pos = {
- times: new Float32Array( numFrames ),
- values: new Float32Array( numFrames * 3 )
- };
-
- }
-
- if ( options.useFirstFramePosition ) {
-
- if ( i === 0 ) {
-
- positionOffset = bone.position.clone();
-
- }
-
- bone.position.sub( positionOffset );
-
- }
-
- boneData.pos.times[ i ] = time;
-
- bone.position.toArray( boneData.pos.values, i * 3 );
-
- }
-
- if ( ! boneData.quat ) {
-
- boneData.quat = {
- times: new Float32Array( numFrames ),
- values: new Float32Array( numFrames * 4 )
- };
-
- }
-
- boneData.quat.times[ i ] = time;
-
- bone.quaternion.toArray( boneData.quat.values, i * 4 );
-
- }
-
- }
-
- mixer.update( delta );
-
- source.updateMatrixWorld();
-
- }
-
- for ( i = 0; i < boneDatas.length; ++ i ) {
-
- boneData = boneDatas[ i ];
-
- if ( boneData ) {
-
- if ( boneData.pos ) {
-
- convertedTracks.push( new VectorKeyframeTrack(
- ".bones[" + boneData.bone.name + "].position",
- boneData.pos.times,
- boneData.pos.values
- ) );
-
- }
-
- convertedTracks.push( new QuaternionKeyframeTrack(
- ".bones[" + boneData.bone.name + "].quaternion",
- boneData.quat.times,
- boneData.quat.values
- ) );
-
- }
-
- }
-
- mixer.uncacheAction( clip );
-
- return new AnimationClip( clip.name, - 1, convertedTracks );
-
- },
-
- getHelperFromSkeleton: function ( skeleton ) {
-
- var source = new SkeletonHelper( skeleton.bones[ 0 ] );
- source.skeleton = skeleton;
-
- return source;
-
- },
-
- getSkeletonOffsets: function () {
-
- var targetParentPos = new Vector3(),
- targetPos = new Vector3(),
- sourceParentPos = new Vector3(),
- sourcePos = new Vector3(),
- targetDir = new Vector2(),
- sourceDir = new Vector2();
-
- return function ( target, source, options ) {
-
- options = options || {};
- options.hip = options.hip !== undefined ? options.hip : "hip";
- options.names = options.names || {};
-
- if ( ! source.isObject3D ) {
-
- source = this.getHelperFromSkeleton( source );
-
- }
-
- var nameKeys = Object.keys( options.names ),
- nameValues = Object.values( options.names ),
- sourceBones = source.isObject3D ? source.skeleton.bones : this.getBones( source ),
- bones = target.isObject3D ? target.skeleton.bones : this.getBones( target ),
- offsets = [],
- bone, boneTo,
- name, i;
-
- target.skeleton.pose();
-
- for ( i = 0; i < bones.length; ++ i ) {
-
- bone = bones[ i ];
- name = options.names[ bone.name ] || bone.name;
-
- boneTo = this.getBoneByName( name, sourceBones );
-
- if ( boneTo && name !== options.hip ) {
-
- var boneParent = this.getNearestBone( bone.parent, nameKeys ),
- boneToParent = this.getNearestBone( boneTo.parent, nameValues );
-
- boneParent.updateMatrixWorld();
- boneToParent.updateMatrixWorld();
-
- targetParentPos.setFromMatrixPosition( boneParent.matrixWorld );
- targetPos.setFromMatrixPosition( bone.matrixWorld );
-
- sourceParentPos.setFromMatrixPosition( boneToParent.matrixWorld );
- sourcePos.setFromMatrixPosition( boneTo.matrixWorld );
-
- targetDir.subVectors(
- new Vector2( targetPos.x, targetPos.y ),
- new Vector2( targetParentPos.x, targetParentPos.y )
- ).normalize();
-
- sourceDir.subVectors(
- new Vector2( sourcePos.x, sourcePos.y ),
- new Vector2( sourceParentPos.x, sourceParentPos.y )
- ).normalize();
-
- var laterialAngle = targetDir.angle() - sourceDir.angle();
-
- var offset = new Matrix4().makeRotationFromEuler(
- new Euler(
- 0,
- 0,
- laterialAngle
- )
- );
-
- bone.matrix.multiply( offset );
-
- bone.matrix.decompose( bone.position, bone.quaternion, bone.scale );
-
- bone.updateMatrixWorld();
-
- offsets[ name ] = offset;
-
- }
-
- }
-
- return offsets;
-
- };
-
- }(),
-
- renameBones: function ( skeleton, names ) {
-
- var bones = this.getBones( skeleton );
-
- for ( var i = 0; i < bones.length; ++ i ) {
-
- var bone = bones[ i ];
-
- if ( names[ bone.name ] ) {
-
- bone.name = names[ bone.name ];
-
- }
-
- }
-
- return this;
-
- },
-
- getBones: function ( skeleton ) {
-
- return Array.isArray( skeleton ) ? skeleton : skeleton.bones;
-
- },
-
- getBoneByName: function ( name, skeleton ) {
-
- for ( var i = 0, bones = this.getBones( skeleton ); i < bones.length; i ++ ) {
-
- if ( name === bones[ i ].name )
-
- return bones[ i ];
-
- }
-
- },
-
- getNearestBone: function ( bone, names ) {
-
- while ( bone.isBone ) {
-
- if ( names.indexOf( bone.name ) !== - 1 ) {
-
- return bone;
-
- }
-
- bone = bone.parent;
-
- }
-
- },
-
- findBoneTrackData: function ( name, tracks ) {
-
- var regexp = /\[(.*)\]\.(.*)/,
- result = { name: name };
-
- for ( var i = 0; i < tracks.length; ++ i ) {
-
- // 1 is track name
- // 2 is track type
- var trackData = regexp.exec( tracks[ i ].name );
-
- if ( trackData && name === trackData[ 1 ] ) {
-
- result[ trackData[ 2 ] ] = i;
-
- }
-
- }
-
- return result;
-
- },
-
- getEqualsBonesNames: function ( skeleton, targetSkeleton ) {
-
- var sourceBones = this.getBones( skeleton ),
- targetBones = this.getBones( targetSkeleton ),
- bones = [];
-
- search : for ( var i = 0; i < sourceBones.length; i ++ ) {
-
- var boneName = sourceBones[ i ].name;
-
- for ( var j = 0; j < targetBones.length; j ++ ) {
-
- if ( boneName === targetBones[ j ].name ) {
-
- bones.push( boneName );
-
- continue search;
-
- }
-
- }
-
- }
-
- return bones;
-
- },
-
- clone: function ( source ) {
-
- var sourceLookup = new Map();
- var cloneLookup = new Map();
-
- var clone = source.clone();
-
- parallelTraverse( source, clone, function ( sourceNode, clonedNode ) {
-
- sourceLookup.set( clonedNode, sourceNode );
- cloneLookup.set( sourceNode, clonedNode );
-
- } );
-
- clone.traverse( function ( node ) {
-
- if ( ! node.isSkinnedMesh ) return;
-
- var clonedMesh = node;
- var sourceMesh = sourceLookup.get( node );
- var sourceBones = sourceMesh.skeleton.bones;
-
- clonedMesh.skeleton = sourceMesh.skeleton.clone();
- clonedMesh.bindMatrix.copy( sourceMesh.bindMatrix );
-
- clonedMesh.skeleton.bones = sourceBones.map( function ( bone ) {
-
- return cloneLookup.get( bone );
-
- } );
-
- clonedMesh.bind( clonedMesh.skeleton, clonedMesh.bindMatrix );
-
- } );
-
- return clone;
-
- }
-
-};
-
-
-function parallelTraverse ( a, b, callback ) {
-
- callback( a, b );
-
- for ( var i = 0; i < a.children.length; i ++ ) {
-
- parallelTraverse( a.children[ i ], b.children[ i ], callback );
-
- }
-
-}
-
-export { SkeletonUtils };
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/math/Math.js b/spaces/banana-projects/web3d/node_modules/three/src/math/Math.js
deleted file mode 100644
index 6b03033dd3ebbcad3a7af260a09ed66ca2487752..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/math/Math.js
+++ /dev/null
@@ -1,153 +0,0 @@
-/**
- * @author alteredq / http://alteredqualia.com/
- * @author mrdoob / http://mrdoob.com/
- */
-
-var _Math = {
-
- DEG2RAD: Math.PI / 180,
- RAD2DEG: 180 / Math.PI,
-
- generateUUID: ( function () {
-
- // http://stackoverflow.com/questions/105034/how-to-create-a-guid-uuid-in-javascript/21963136#21963136
-
- var lut = [];
-
- for ( var i = 0; i < 256; i ++ ) {
-
- lut[ i ] = ( i < 16 ? '0' : '' ) + ( i ).toString( 16 );
-
- }
-
- return function generateUUID() {
-
- var d0 = Math.random() * 0xffffffff | 0;
- var d1 = Math.random() * 0xffffffff | 0;
- var d2 = Math.random() * 0xffffffff | 0;
- var d3 = Math.random() * 0xffffffff | 0;
- var uuid = lut[ d0 & 0xff ] + lut[ d0 >> 8 & 0xff ] + lut[ d0 >> 16 & 0xff ] + lut[ d0 >> 24 & 0xff ] + '-' +
- lut[ d1 & 0xff ] + lut[ d1 >> 8 & 0xff ] + '-' + lut[ d1 >> 16 & 0x0f | 0x40 ] + lut[ d1 >> 24 & 0xff ] + '-' +
- lut[ d2 & 0x3f | 0x80 ] + lut[ d2 >> 8 & 0xff ] + '-' + lut[ d2 >> 16 & 0xff ] + lut[ d2 >> 24 & 0xff ] +
- lut[ d3 & 0xff ] + lut[ d3 >> 8 & 0xff ] + lut[ d3 >> 16 & 0xff ] + lut[ d3 >> 24 & 0xff ];
-
- // .toUpperCase() here flattens concatenated strings to save heap memory space.
- return uuid.toUpperCase();
-
- };
-
- } )(),
-
- clamp: function ( value, min, max ) {
-
- return Math.max( min, Math.min( max, value ) );
-
- },
-
-	// compute euclidean modulo of n % m
- // https://en.wikipedia.org/wiki/Modulo_operation
-
- euclideanModulo: function ( n, m ) {
-
- return ( ( n % m ) + m ) % m;
-
- },
-
- // Linear mapping from range to range
-
- mapLinear: function ( x, a1, a2, b1, b2 ) {
-
- return b1 + ( x - a1 ) * ( b2 - b1 ) / ( a2 - a1 );
-
- },
-
- // https://en.wikipedia.org/wiki/Linear_interpolation
-
- lerp: function ( x, y, t ) {
-
- return ( 1 - t ) * x + t * y;
-
- },
-
- // http://en.wikipedia.org/wiki/Smoothstep
-
- smoothstep: function ( x, min, max ) {
-
- if ( x <= min ) return 0;
- if ( x >= max ) return 1;
-
- x = ( x - min ) / ( max - min );
-
- return x * x * ( 3 - 2 * x );
-
- },
-
- smootherstep: function ( x, min, max ) {
-
- if ( x <= min ) return 0;
- if ( x >= max ) return 1;
-
- x = ( x - min ) / ( max - min );
-
- return x * x * x * ( x * ( x * 6 - 15 ) + 10 );
-
- },
-
- // Random integer from interval
-
- randInt: function ( low, high ) {
-
- return low + Math.floor( Math.random() * ( high - low + 1 ) );
-
- },
-
- // Random float from interval
-
- randFloat: function ( low, high ) {
-
- return low + Math.random() * ( high - low );
-
- },
-
- // Random float from <-range/2, range/2> interval
-
- randFloatSpread: function ( range ) {
-
- return range * ( 0.5 - Math.random() );
-
- },
-
- degToRad: function ( degrees ) {
-
- return degrees * _Math.DEG2RAD;
-
- },
-
- radToDeg: function ( radians ) {
-
- return radians * _Math.RAD2DEG;
-
- },
-
- isPowerOfTwo: function ( value ) {
-
- return ( value & ( value - 1 ) ) === 0 && value !== 0;
-
- },
-
- ceilPowerOfTwo: function ( value ) {
-
- return Math.pow( 2, Math.ceil( Math.log( value ) / Math.LN2 ) );
-
- },
-
- floorPowerOfTwo: function ( value ) {
-
- return Math.pow( 2, Math.floor( Math.log( value ) / Math.LN2 ) );
-
- }
-
-};
-
-
-export { _Math };
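
The deleted Math.js above is mostly small numeric helpers; the ones easiest to get wrong are the euclidean modulo and the power-of-two rounding. A short Python sketch of the same arithmetic is shown below; it is illustrative only, and the function names simply mirror the three.js helpers.

import math

def euclidean_modulo(n: float, m: float) -> float:
    # Same formula as the JS version; always lands in [0, m) for m > 0,
    # e.g. euclidean_modulo(-1, 3) == 2. (Python's % already behaves this way.)
    return ((n % m) + m) % m

def is_power_of_two(value: int) -> bool:
    # A power of two has exactly one bit set, so value & (value - 1) clears it to 0.
    return value != 0 and (value & (value - 1)) == 0

def ceil_power_of_two(value: float) -> float:
    # Smallest power of two >= value: 2 ** ceil(log2(value)).
    return 2.0 ** math.ceil(math.log2(value))

assert euclidean_modulo(-1, 3) == 2
assert is_power_of_two(1024) and not is_power_of_two(100)
assert ceil_power_of_two(300) == 512
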
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLBackground.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLBackground.js
deleted file mode 100644
index 95a4f0677f629869fff85ee581c366a187648bd7..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLBackground.js
+++ /dev/null
@@ -1,223 +0,0 @@
-/**
- * @author mrdoob / http://mrdoob.com/
- */
-
-import { BackSide, FrontSide } from '../../constants.js';
-import { BoxBufferGeometry } from '../../geometries/BoxGeometry.js';
-import { PlaneBufferGeometry } from '../../geometries/PlaneGeometry.js';
-import { ShaderMaterial } from '../../materials/ShaderMaterial.js';
-import { Color } from '../../math/Color.js';
-import { Mesh } from '../../objects/Mesh.js';
-import { ShaderLib } from '../shaders/ShaderLib.js';
-import { cloneUniforms } from '../shaders/UniformsUtils.js';
-
-function WebGLBackground( renderer, state, objects, premultipliedAlpha ) {
-
- var clearColor = new Color( 0x000000 );
- var clearAlpha = 0;
-
- var planeMesh;
- var boxMesh;
- // Store the current background texture and its `version`
- // so we can recompile the material accordingly.
- var currentBackground = null;
- var currentBackgroundVersion = 0;
-
- function render( renderList, scene, camera, forceClear ) {
-
- var background = scene.background;
-
- // Ignore background in AR
- // TODO: Reconsider this.
-
- var vr = renderer.vr;
- var session = vr.getSession && vr.getSession();
-
- if ( session && session.environmentBlendMode === 'additive' ) {
-
- background = null;
-
- }
-
- if ( background === null ) {
-
- setClear( clearColor, clearAlpha );
- currentBackground = null;
- currentBackgroundVersion = 0;
-
- } else if ( background && background.isColor ) {
-
- setClear( background, 1 );
- forceClear = true;
- currentBackground = null;
- currentBackgroundVersion = 0;
-
- }
-
- if ( renderer.autoClear || forceClear ) {
-
- renderer.clear( renderer.autoClearColor, renderer.autoClearDepth, renderer.autoClearStencil );
-
- }
-
- if ( background && ( background.isCubeTexture || background.isWebGLRenderTargetCube ) ) {
-
- if ( boxMesh === undefined ) {
-
- boxMesh = new Mesh(
- new BoxBufferGeometry( 1, 1, 1 ),
- new ShaderMaterial( {
- type: 'BackgroundCubeMaterial',
- uniforms: cloneUniforms( ShaderLib.cube.uniforms ),
- vertexShader: ShaderLib.cube.vertexShader,
- fragmentShader: ShaderLib.cube.fragmentShader,
- side: BackSide,
- depthTest: false,
- depthWrite: false,
- fog: false
- } )
- );
-
- boxMesh.geometry.removeAttribute( 'normal' );
- boxMesh.geometry.removeAttribute( 'uv' );
-
- boxMesh.onBeforeRender = function ( renderer, scene, camera ) {
-
- this.matrixWorld.copyPosition( camera.matrixWorld );
-
- };
-
- // enable code injection for non-built-in material
- Object.defineProperty( boxMesh.material, 'map', {
-
- get: function () {
-
- return this.uniforms.tCube.value;
-
- }
-
- } );
-
- objects.update( boxMesh );
-
- }
-
- var texture = background.isWebGLRenderTargetCube ? background.texture : background;
- boxMesh.material.uniforms.tCube.value = texture;
- boxMesh.material.uniforms.tFlip.value = ( background.isWebGLRenderTargetCube ) ? 1 : - 1;
-
- if ( currentBackground !== background ||
- currentBackgroundVersion !== texture.version ) {
-
- boxMesh.material.needsUpdate = true;
-
- currentBackground = background;
- currentBackgroundVersion = texture.version;
-
- }
-
- // push to the pre-sorted opaque render list
- renderList.unshift( boxMesh, boxMesh.geometry, boxMesh.material, 0, 0, null );
-
- } else if ( background && background.isTexture ) {
-
- if ( planeMesh === undefined ) {
-
- planeMesh = new Mesh(
- new PlaneBufferGeometry( 2, 2 ),
- new ShaderMaterial( {
- type: 'BackgroundMaterial',
- uniforms: cloneUniforms( ShaderLib.background.uniforms ),
- vertexShader: ShaderLib.background.vertexShader,
- fragmentShader: ShaderLib.background.fragmentShader,
- side: FrontSide,
- depthTest: false,
- depthWrite: false,
- fog: false
- } )
- );
-
- planeMesh.geometry.removeAttribute( 'normal' );
-
- // enable code injection for non-built-in material
- Object.defineProperty( planeMesh.material, 'map', {
-
- get: function () {
-
- return this.uniforms.t2D.value;
-
- }
-
- } );
-
- objects.update( planeMesh );
-
- }
-
- planeMesh.material.uniforms.t2D.value = background;
-
- if ( background.matrixAutoUpdate === true ) {
-
- background.updateMatrix();
-
- }
-
- planeMesh.material.uniforms.uvTransform.value.copy( background.matrix );
-
- if ( currentBackground !== background ||
- currentBackgroundVersion !== background.version ) {
-
- planeMesh.material.needsUpdate = true;
-
- currentBackground = background;
- currentBackgroundVersion = background.version;
-
- }
-
-
- // push to the pre-sorted opaque render list
- renderList.unshift( planeMesh, planeMesh.geometry, planeMesh.material, 0, 0, null );
-
- }
-
- }
-
- function setClear( color, alpha ) {
-
- state.buffers.color.setClear( color.r, color.g, color.b, alpha, premultipliedAlpha );
-
- }
-
- return {
-
- getClearColor: function () {
-
- return clearColor;
-
- },
- setClearColor: function ( color, alpha ) {
-
- clearColor.set( color );
- clearAlpha = alpha !== undefined ? alpha : 1;
- setClear( clearColor, clearAlpha );
-
- },
- getClearAlpha: function () {
-
- return clearAlpha;
-
- },
- setClearAlpha: function ( alpha ) {
-
- clearAlpha = alpha;
- setClear( clearColor, clearAlpha );
-
- },
- render: render
-
- };
-
-}
-
-
-export { WebGLBackground };
diff --git a/spaces/beskrovnykh/danielsearch/app.py b/spaces/beskrovnykh/danielsearch/app.py
deleted file mode 100644
index 9e9e3d88339aedfed5a57dc4c3a36e7b9500ce56..0000000000000000000000000000000000000000
--- a/spaces/beskrovnykh/danielsearch/app.py
+++ /dev/null
@@ -1,71 +0,0 @@
-from gpt_index import SimpleDirectoryReader, ServiceContext, GPTSimpleVectorIndex, LLMPredictor, \
- PromptHelper
-
-from langchain import OpenAI
-
-import gradio as gr
-import os.path
-
-from googletrans import Translator
-import openai
-
-translator = Translator()
-
-openai.api_key = os.environ["OPENAI_API_KEY"]
-
-
-def construct_context(model_name):
- max_input_size = 4096
- max_chunk_overlap = 20
- chunk_size_limit = 600
- num_outputs = 512
-
- llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.7, model_name=model_name, max_tokens=num_outputs))
- prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
- service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
- return service_context
-
-
-def construct_index(directory_path, file_name, service_context):
- documents = SimpleDirectoryReader(directory_path).load_data()
- vector_index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
- vector_index.save_to_disk(file_name)
- return vector_index
-
-
-def chatbot(query, model_name):
-    question = translator.translate(query, src="ru", dest="en").text
- input_text = f"""{question}
- According to Satsangs of Daniil Zuev, answer the question above.
- Give the answer in English following the rules:
- - You give answer in three parts
- Part 1:
- - Answer kindly, addressing the issue of the person who asks question
- - Assume that the person who asks the question does not know anything about Daniil's teaching
-    - To answer, use your knowledge of all of Daniil's Satsangs and find one idea that addresses it
- - Limit your answer to 25 words.
- Part 2:
- - You give a quote, a phrase of Daniil that demonstrates this idea.
- - Limit the quote to 25 words.
- Part 3:
- - You give a link with a timecode to the Satsang where Daniil speaks about it"""
- vector_index = GPTSimpleVectorIndex.load_from_disk(model_name + "_index.json")
-
- response = vector_index.query(input_text, response_mode="compact")
-    translation = translator.translate(str(response), dest="ru")
-
- return translation.text
-
-
-iface = gr.Interface(fn=chatbot,
- inputs=[
- gr.Textbox(lines=7, label="Enter your text"),
- gr.Dropdown(choices=["text-davinci-003", "text-curie-003", "text-babbage-003", "text-ada-003"],
- value="text-ada-003", label="Model")
- ],
- outputs="text",
- title="Umka AI")
-
-iface.launch()
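
The chatbot above follows a translate-in, answer-in-English, translate-back pattern around an English-only index. A stripped-down sketch of that wrapper is shown below, using only googletrans; answer_in_english is a hypothetical stand-in for the vector-index query step.

from googletrans import Translator

translator = Translator()

def answer_in_english(question_en: str) -> str:
    # Hypothetical placeholder for the real retrieval step
    # (GPTSimpleVectorIndex.query in the app above).
    return "A short English answer to: " + question_en

def answer_in_russian(question_ru: str) -> str:
    # Round-trip: RU -> EN for the index, then EN -> RU for the user.
    question_en = translator.translate(question_ru, src="ru", dest="en").text
    answer_en = answer_in_english(question_en)
    return translator.translate(answer_en, src="en", dest="ru").text
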
diff --git a/spaces/bigbio/dataset-explore/README.md b/spaces/bigbio/dataset-explore/README.md
deleted file mode 100644
index a00a36950b47eec8433aa41baff3f4c90fd2ec93..0000000000000000000000000000000000000000
--- a/spaces/bigbio/dataset-explore/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Dataset Explore
-emoji: 💻
-colorFrom: indigo
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/txt2img.py b/spaces/bigjoker/stable-diffusion-webui/modules/txt2img.py
deleted file mode 100644
index 3927d8538f06c1ed270c9a6cfd55d4bb15705ee5..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/modules/txt2img.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import modules.scripts
-from modules import sd_samplers
-from modules.generation_parameters_copypaste import create_override_settings_dict
-from modules.processing import StableDiffusionProcessing, Processed, StableDiffusionProcessingTxt2Img, \
- StableDiffusionProcessingImg2Img, process_images
-from modules.shared import opts, cmd_opts
-import modules.shared as shared
-import modules.processing as processing
-from modules.ui import plaintext_to_html
-
-
-def txt2img(id_task: str, prompt: str, negative_prompt: str, prompt_styles, steps: int, sampler_index: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, enable_hr: bool, denoising_strength: float, hr_scale: float, hr_upscaler: str, hr_second_pass_steps: int, hr_resize_x: int, hr_resize_y: int, override_settings_texts, *args):
- override_settings = create_override_settings_dict(override_settings_texts)
-
- p = StableDiffusionProcessingTxt2Img(
- sd_model=shared.sd_model,
- outpath_samples=opts.outdir_samples or opts.outdir_txt2img_samples,
- outpath_grids=opts.outdir_grids or opts.outdir_txt2img_grids,
- prompt=prompt,
- styles=prompt_styles,
- negative_prompt=negative_prompt,
- seed=seed,
- subseed=subseed,
- subseed_strength=subseed_strength,
- seed_resize_from_h=seed_resize_from_h,
- seed_resize_from_w=seed_resize_from_w,
- seed_enable_extras=seed_enable_extras,
- sampler_name=sd_samplers.samplers[sampler_index].name,
- batch_size=batch_size,
- n_iter=n_iter,
- steps=steps,
- cfg_scale=cfg_scale,
- width=width,
- height=height,
- restore_faces=restore_faces,
- tiling=tiling,
- enable_hr=enable_hr,
- denoising_strength=denoising_strength if enable_hr else None,
- hr_scale=hr_scale,
- hr_upscaler=hr_upscaler,
- hr_second_pass_steps=hr_second_pass_steps,
- hr_resize_x=hr_resize_x,
- hr_resize_y=hr_resize_y,
- override_settings=override_settings,
- )
-
- p.scripts = modules.scripts.scripts_txt2img
- p.script_args = args
-
- if cmd_opts.enable_console_prompts:
- print(f"\ntxt2img: {prompt}", file=shared.progress_print_out)
-
- processed = modules.scripts.scripts_txt2img.run(p, *args)
-
- if processed is None:
- processed = process_images(p)
-
- p.close()
-
- shared.total_tqdm.clear()
-
- generation_info_js = processed.js()
- if opts.samples_log_stdout:
- print(generation_info_js)
-
- if opts.do_not_show_images:
- processed.images = []
-
- return processed.images, generation_info_js, plaintext_to_html(processed.info), plaintext_to_html(processed.comments)
diff --git a/spaces/bigscience/petals-api/app.py b/spaces/bigscience/petals-api/app.py
deleted file mode 100644
index b26e822f024aebbee689f57125a765d5656dc4ff..0000000000000000000000000000000000000000
--- a/spaces/bigscience/petals-api/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import torch
-import torch.nn.functional as F
-import transformers
-import gradio as gr
-
-from src.client import DistributedBloomForCausalLM
-
-INITIAL_PEERS = ['/ip4/193.106.95.184/tcp/443/p2p/QmSXDXLeSMXjS4YerDrdn1zpGQaNzkZ9ogN2SoAEyAdDhs']
-
-import hivemind # test that DHT instances work on localhost
-dht1 = hivemind.DHT(start=True)
-dht2 = hivemind.DHT(start=True, initial_peers=dht1.get_visible_maddrs())
-
-
-tokenizer = transformers.BloomTokenizerFast.from_pretrained("bigscience/test-bloomd-6b3")
-model = DistributedBloomForCausalLM.from_pretrained("bigscience/test-bloomd-6b3", initial_peers=INITIAL_PEERS, low_cpu_mem_usage=True, torch_dtype=torch.float32)
-
-def inference(text, seq_length=1):
- input_ids = tokenizer(text, return_tensors='pt')['input_ids']
- final_tokens = input_ids
- with torch.inference_mode(), model.transformer.h.inference_session() as remote_transformer:
- for i in range(seq_length):
- h = model.transformer.word_embeddings(input_ids)
- h = model.transformer.word_embeddings_layernorm(h)
- h = remote_transformer.step(h)
- h = model.transformer.ln_f(h)
- h = F.linear(h, weight=model.transformer.word_embeddings.weight) # note: this line takes a while, will also be fixed
- next_token_ix = torch.multinomial((h[0, -1] / 0.8).softmax(-1), 1)
-
- final_tokens = torch.cat([final_tokens, next_token_ix.view(1, 1)], dim=-1)
- input_ids = next_token_ix.view(1, 1)
- return tokenizer.decode(final_tokens[0], skip_special_tokens=False)
-
-iface = gr.Interface(
-fn=inference,
-inputs=[
-gr.Textbox(lines=10, label="Input text"),
-gr.inputs.Slider(
- minimum=0,
- maximum=1000,
- step=1,
- default=42,
- label="Sequence length for generation"
- )
-],
-outputs="text"
-)
-iface.launch()
\ No newline at end of file
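
The hand-rolled generation loop above samples each next token from temperature-scaled logits. The sampling step in isolation, as a small PyTorch sketch: the 0.8 temperature mirrors the hard-coded value above, and everything else here is a toy stand-in for the distributed model.

import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.8) -> int:
    # logits: 1-D tensor of vocabulary scores for the next position (h[0, -1] above).
    probs = torch.softmax(logits / temperature, dim=-1)
    # Draw one token id from the resulting categorical distribution.
    return int(torch.multinomial(probs, num_samples=1).item())

torch.manual_seed(0)
toy_logits = torch.tensor([0.1, 2.0, -1.0, 0.5, 0.0])
print(sample_next_token(toy_logits))  # usually index 1, the highest-scoring token
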
diff --git a/spaces/bioriAsaeru/text-to-voice/1st Studio Nk 008 Siberian Mouse Wmv.md b/spaces/bioriAsaeru/text-to-voice/1st Studio Nk 008 Siberian Mouse Wmv.md
deleted file mode 100644
index b6aa66843bdda027bf820a317a23f63f3703b0ff..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/1st Studio Nk 008 Siberian Mouse Wmv.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Amtlib.framework folder zip for cs6 30 How to Get the Full Version of Adobe CS6 for Free.md b/spaces/bioriAsaeru/text-to-voice/Amtlib.framework folder zip for cs6 30 How to Get the Full Version of Adobe CS6 for Free.md
deleted file mode 100644
index 7e1c3de19997caf9f64ed5e15afe7648d828ee82..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Amtlib.framework folder zip for cs6 30 How to Get the Full Version of Adobe CS6 for Free.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
In the vast majority of cases, the solution is to properly reinstall amtlib.dll on your PC, to the Windows system folder. Alternatively, some programs, notably PC games, require that the DLL file is placed in the game/application installation folder.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Architectural Character And The History Of Architecture By George Salvan Pdf !!HOT!! Free Download.md b/spaces/bioriAsaeru/text-to-voice/Architectural Character And The History Of Architecture By George Salvan Pdf !!HOT!! Free Download.md
deleted file mode 100644
index 53223f9b92f810aebfe4fca8cfa2044421b015fd..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Architectural Character And The History Of Architecture By George Salvan Pdf !!HOT!! Free Download.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
architectural character and the history of architecture by george salvan pdf free download
-
-September 1, 2020 - 2. George Salvan Architectural Character and Architectural History 3. George Salvan Architectural design theory. What is this book about?
-This is a book about how new trends in design are emerging these days.
-It describes the influence of George Salvan, who made his design for everyone who creates their homes.
-His ideas were implemented in the design of furniture, lamps, ceramics, textiles and interior items.
-The book chronicles his life and career, and his designs become part of the history of design.
-Who is this book for?
-This is a book for anyone interested in the history of design and architecture.
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Battlefield 2 Crack Exe 15 Tips and Tricks for Running the Game Smoothly.md b/spaces/bioriAsaeru/text-to-voice/Battlefield 2 Crack Exe 15 Tips and Tricks for Running the Game Smoothly.md
deleted file mode 100644
index e0352fcac89705dde8c89d8517a066dd30cf0655..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Battlefield 2 Crack Exe 15 Tips and Tricks for Running the Game Smoothly.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
Battlefield 2 Crack Exe 1.5Download File ===== 2 crack exe 1.5Battlefield 2: End Game (also known as Battlefield 2: The End Game) is a first-person shooter (FPS) game developed by EA Digital Illusions CE for the Microsoft Windows and Xbox platforms. It was released on March 29, 2005. 1.5 Crack, Battlefield 2 1.5 Patch Torrent, Battlefield 2 .Battlefield 2: End Game (also known as Battlefield 2: The End Game) is a first-person shooter (FPS) game developed by EA Digital Illusions CE for the Microsoft Windows and Xbox platforms. It was released on March 29, 2005. 1.5 Crack, Battlefield 2 1.5 Patch Torrent, Battlefield 2 .The objectives of the Career Development Award are to establish the applicant's independent research career, to enhance scientific knowledge on organizational behavior and leadership development, and to increase the applicant's success in obtaining a faculty position. The training plan to achieve this objective is based on the applicant's past accomplishments, extensive training in organizational behavior, and the resources available through the University of California at Los Angeles. The training plan includes: course work, attendance at seminars and conferences, small group and one-on-one interactions, and finally, the publication of a book (co-authored with Dr. Enos, co-director of the Career Development Award) and submission of a career development grant. The applicant proposes to develop quantitative models that describe the development of job skills as men and women progress in their career and to apply these models to the analysis of job-related decisions. In addition to the applicant's background in organizational behavior, the applicant has a background in mathematics and a substantive understanding of leaders and organizational behavior. She has already completed an initial study of the decisions that are made by college faculty regarding their research careers. The development and publishing of this work will lead to the development of a quantitative model for the analysis of faculty job-related decision making.Q:How to create a field of array in eloquent js (ororm)I want to create this array field:assignments: [ id: 1, subject: 'Math', date: '29.11.2017', start: '09:00', end: '10:00' , { id: 2, subject: 'Science', ee730c9e81 -girls-pussy-pictures-xxx-pictures-and-hot-photospdf -nalla/eliza-and-her-monsters-download-pdf -majid/prva-hrvatska-lchf-kuharica-pdf-download -solidworks-2017-sp1-x64-with-sn-and-activator -masek/auto-power-on-and-shut-down-keygen-serial
3DM, a Chinese warez group, first claimed to have breached Denuvo's technology in a blog post published on 1 December 2014, wherein they announced that they would release cracked versions of Denuvo-protected games FIFA 15, Dragon Age: Inquisition and Lords of the Fallen.[6] Following onto this, 3DM released the version of Dragon Age: Inquisition about two weeks after that game had shipped.[6] The overall cracking progress took about a month, an unusually long time in the game cracking scene.[2][7] When asked about this development, Denuvo Software Solutions acknowledged that "every protected game eventually gets cracked".[2] However, technology website Ars Technica noted that most sales for major games happen within 30 days of release, and so publishers may consider Denuvo a success if it meant a game took significantly longer to be cracked.[8] In January 2016, 3DM's founder, Bird Sister, revealed that they were to give up on trying to break the Denuvo implementation for Just Cause 3, and warned that, due to the ongoing trend for the implementation, there would be "no free games to play in the world" in the near future.[9] Subsequently, 3DM opted to not crack any games for one year to examine whether such a move would have any influence on game sales.[10] Denuvo's marketing director, Thomas Goebl, claimed that some console-exclusive games get PC releases due to this technology.[11]
-
By October 2017, crackers were able to bypass Denuvo's protection within hours of a game's release, with notable examples being South Park: The Fractured but Whole, Middle-earth: Shadow of War, Total War: Warhammer 2 and FIFA 18, all being cracked on their release dates.[12] In another notable case, Assassin's Creed Origins, which wrapped Denuvo within security tool VMProtect as well as Ubisoft's proprietary DRM used for their Uplay distribution software, had its security features bypassed by Italian collective CPY in February 2018, three months after the game's release.[13] In December 2018, Hitman 2's protection was bypassed three days before its official release date due to exclusive pre-order access, drawing comparisons to Final Fantasy XV, which had its protection removed four days before release.[14]
-
By 2019, several products like Devil May Cry 5, Metro Exodus, Resident Evil 2, Far Cry New Dawn, Football Manager 2019 and Soul Calibur 6, were cracked within their first week of release, with Ace Combat 7 taking thirteen days.[14][15][16] In the case of Rage 2, which was released on Steam as well as Bethesda Softworks' own Bethesda Launcher, the Steam version was protected by Denuvo, whereas the Bethesda Launcher version was not, leading to the game being cracked immediately, and Denuvo being removed from the Steam release two days later.[17][18]
-
Games protected by Denuvo require an online activation.[24] According to Empress, a notable Denuvo cracker, the software assigns a unique authentication token to each copy of a game, depending on factors like the user's hardware. The DRM is integrated with the game's code, which makes it especially hard to circumvent.[25]
-
In July 2018, Denuvo Software Solutions filed a lawsuit against Voksi, a 21-year-old Bulgarian hacker who had cracked several Denuvo-protected games.[33] Voksi was arrested by Bulgarian authorities, and his website, Revolt, was taken offline.[33]
-
-
Hi, my name is Andrew. I download client game from here I create my own key in "bf1942changer.exe", then launch bf42.reg and my key goes to register. The game consists of battlefield_1942_patch_v1.6.19 battlefield_1942_incremental_patch_v1.6_to_v1.61b DesertCombat0.7FullInstall dc_final_client and mappaks Road to Rome and Secret Weapons What am i to do to still play bf1942 after 31 may. Thanks. i think my game is noCD version
-
The games graphical performance is quite badly optimized though, as I was only able to reach an unplayable 11fps with all the settings cracked up. If you compare that to other games in this list, you will see that most average around 30-40fps, so the developer really has to look into better optimization.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Download Bot Para Wyd Download 5 The Best Way to Enjoy With Your Destiny with Bots.md b/spaces/bioriAsaeru/text-to-voice/Download Bot Para Wyd Download 5 The Best Way to Enjoy With Your Destiny with Bots.md
deleted file mode 100644
index ab4f68d86d07e080b86ecb7fa6833a241c0f38df..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Download Bot Para Wyd Download 5 The Best Way to Enjoy With Your Destiny with Bots.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
GS Auto Clicker is an excellent tool for automating repetitive mouse clicking tasks. However, before you download the program, you might want to explore a few alternatives. There are plenty of choices for task automation.
-
I think I figured out why this failed on occasion for me. It's when the iCloud Drive app hasn't yet downloaded the file but it's passing it to the app anyway. I don't know why iCloud Drive would do that, but that seems to be the only case that's failing.
Delete the app. When you do this, you might lose data that was stored in the app. Then redownload the app. If you didn't get the app from the App Store, redownload the app on the app developer's website. Then check to see if the problem with the app is fixed.
-
If you have email campaigns tied to lead generation material, you have a perfect opportunity to build in a custom chatbot. After a visitor downloads your amazing whitepaper, eBook or offer, you likely send them a sequence of emails.
-
With the GS Auto Clicker download, you do not need to worry about lags or crashes, as the installation process is quick and the program is easy to use, consuming few system resources. Alternative programs are Free Auto Clicker or Auto-Clicker.
-
Although GS Auto Clicker is a simple tool, it is useful for people engaged in repetitive computing tasks. In fact, the software is quite popular among gamers, who need to constantly click the mouse buttons to earn points or a score. The program has a simple, clean interface, built around an old-fashioned UI.
-
Simply put, GS Auto Clicker is task automation software that saves you the work of clicking the screen repeatedly. Although it is not a perfect replacement for a mouse, it is useful for various tasks. For example, you can use the program in games such as Minecraft and Roblox, which require you to build from scratch to earn points.
-
The GS Auto Clicker download is an excellent choice for task automation on Windows PCs. The program comes with several features and the developer offers good support. If you are looking for software to reduce mouse usage on your computer, GS Auto Clicker is an ideal option to install. In fact, the latest version of the program comes with an improved interface to make navigation easier.
-
We would like to point out that, from time to time, a potentially malicious software program may go undetected. To keep offering you a malware-free catalog of programs and apps, our team has included the Report Software feature on every catalog page, which forwards your feedback to us.
-
The VAT element added to your Creator Earnings (and Referral Payments, if any) (the "VAT Amount") will be paid to you by way of a separate payment outside of your regular Creator Earnings, provided that you must have submitted to us copies of the following before payment of the VAT Amount will be made to you:
-
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Download film kartun chibi maruko chan bahasa indonesian full episode terbaru.md b/spaces/bioriAsaeru/text-to-voice/Download film kartun chibi maruko chan bahasa indonesian full episode terbaru.md
deleted file mode 100644
index b85b5004dd3a370ea1fdeb25a051d0db83b46268..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Download film kartun chibi maruko chan bahasa indonesian full episode terbaru.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
download film kartun chibi maruko chan bahasa indonesian
-
-
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Gandolfo Economic Dynamics Pdf 1 A Source of Dynamic Mathematical Tools for Economists with Coverage of Many of the Deepest Areas of Current Research in Economic Dynamics.md b/spaces/bioriAsaeru/text-to-voice/Gandolfo Economic Dynamics Pdf 1 A Source of Dynamic Mathematical Tools for Economists with Coverage of Many of the Deepest Areas of Current Research in Economic Dynamics.md
deleted file mode 100644
index 792f8eea59a9caf67b7517e0ca744accddaf652b..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Gandolfo Economic Dynamics Pdf 1 A Source of Dynamic Mathematical Tools for Economists with Coverage of Many of the Deepest Areas of Current Research in Economic Dynamics.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
In the field of economic dynamics Hopf bifurcations are of interest for the mathematical modelling of endogenous business cycles. Several authors have used this theorem to study the appearance of business cycles in continuous time economic models. For example, Asada [1] and Asada and Yoshida [2] treated three- and four-dimensional Hopf bifurcations by means of coefficient criteria. In recent years the tendency is to consider higher-dimensional dynamics in macroeconomic modelling (see, e.g. [3]), but complete coefficient criteria for Hopf bifurcations have not been proposed so far for -dimensional systems with .
-
In this section, we present an application of the coefficient criteria stated in the previous section to a typical model of five-dimensional macroeconomic dynamics. For the application we consider a continuous time version of the Kaldorian two-region discrete time business cycle model proposed by Asada et al. [5].
The parameters of the model are the adjustment speed of the goods market of each region, the degree of capital mobility, and the degree of interregional trade, . We note that under the specifications and functional forms adopted in the formulation of the system of equations (9), the two regional economies are assumed quite similar; any dissimilarity will be due to the possibly unequal speeds of adjustment , . For a full description of the model and its economic foundations see Asada et al. [5].
-
60 per cent of Jordanians are of Palestinian origin,a statistic which has propelled Jordan into the role of both player and pawn in regional issues such as the birth of the state of Israel,the prolonged Israel-Palestine conflict, the ascent and decline of Arab nationalism and the subsequent rise of political Islam and radicalism. Exploring Jordan's diverse Palestinian communities, Luisa Gandolfo illustrates how the Palestinian majority has been subject to discrimination,all the while also playing a defining role in shaping Jordanian politics,legal frameworks and national identity. The conflicts of 1948 and 1967,the civil unrest following Black September in 1972 and the uprisings of 1988 and 2000 have all contributed to a fractious Jordanian-Palestinian relationship. In Palestinians in Jordan,Gandolfo examines the history of this relationship,looking at the socio-political circumstances,the economic and domestic policies,the legal status of Palestinians in Jordan and the security dimension of Jordan's role in the region. She argues that policies put in place over the last century have created a society that is marked by high levels of inter-faith cohesion,as evidenced by the success and integration of minority Christian communities. She goes on to suggest that society divides along lines of ethnic and nationalist loyalty,between Jordanians and Palestinians,while domestic politics become increasingly fractious with the growth of Islamist groups that have gained grassroots appeal,especially in the refugee camps. Palestinians in Jordan looks through the kaleidoscope of Palestinian-Jordanian identities that accommodate a complex and overlapping web of different religious affiliations, mixed socio-economic conditions and the experience of exile reconciled with daily life in Jordan. At the same time,identities of these communities continue to be rooted in an attachment to the concept of Palestine,and the unifying force of the struggle against Zionism. These layers have made the versatile and fluid nature of identities essential,affording a fascinating study in inter-communal dynamics and nationalism. It is this which makes Palestinians in Jordan an important resource for those researching the Israel-Palestine conflict as well as for students of the Middle East,Politics,Anthropology and Gender with an interest in identity.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Ham Radio Deluxe 6.1.4.189 Serial Key.md b/spaces/bioriAsaeru/text-to-voice/Ham Radio Deluxe 6.1.4.189 Serial Key.md
deleted file mode 100644
index 6b4d9f25a1415b25560e1e330988ce10f1948864..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Ham Radio Deluxe 6.1.4.189 Serial Key.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/proposal_generator/rpn.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/proposal_generator/rpn.py
deleted file mode 100644
index 99cd536d2f9880d2049390c45f73eb22335e1b82..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/proposal_generator/rpn.py
+++ /dev/null
@@ -1,533 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from typing import Dict, List, Optional, Tuple, Union
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d, ShapeSpec, cat
-from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
-from detectron2.utils.events import get_event_storage
-from detectron2.utils.memory import retry_if_cuda_oom
-from detectron2.utils.registry import Registry
-
-from ..anchor_generator import build_anchor_generator
-from ..box_regression import Box2BoxTransform, _dense_box_regression_loss
-from ..matcher import Matcher
-from ..sampling import subsample_labels
-from .build import PROPOSAL_GENERATOR_REGISTRY
-from .proposal_utils import find_top_rpn_proposals
-
-RPN_HEAD_REGISTRY = Registry("RPN_HEAD")
-RPN_HEAD_REGISTRY.__doc__ = """
-Registry for RPN heads, which take feature maps and perform
-objectness classification and bounding box regression for anchors.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-The call should return a `nn.Module` object.
-"""
-
-
-"""
-Shape shorthand in this module:
-
- N: number of images in the minibatch
- L: number of feature maps per image on which RPN is run
- A: number of cell anchors (must be the same for all feature maps)
- Hi, Wi: height and width of the i-th feature map
- B: size of the box parameterization
-
-Naming convention:
-
- objectness: refers to the binary classification of an anchor as object vs. not object.
-
- deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box
- transform (see :class:`box_regression.Box2BoxTransform`), or 5d for rotated boxes.
-
- pred_objectness_logits: predicted objectness scores in [-inf, +inf]; use
- sigmoid(pred_objectness_logits) to estimate P(object).
-
- gt_labels: ground-truth binary classification labels for objectness
-
- pred_anchor_deltas: predicted box2box transform deltas
-
- gt_anchor_deltas: ground-truth box2box transform deltas
-"""
-
-
-def build_rpn_head(cfg, input_shape):
- """
- Build an RPN head defined by `cfg.MODEL.RPN.HEAD_NAME`.
- """
- name = cfg.MODEL.RPN.HEAD_NAME
- return RPN_HEAD_REGISTRY.get(name)(cfg, input_shape)
-
-
-@RPN_HEAD_REGISTRY.register()
-class StandardRPNHead(nn.Module):
- """
- Standard RPN classification and regression heads described in :paper:`Faster R-CNN`.
- Uses a 3x3 conv to produce a shared hidden state from which one 1x1 conv predicts
- objectness logits for each anchor and a second 1x1 conv predicts bounding-box deltas
- specifying how to deform each anchor into an object proposal.
- """
-
- @configurable
- def __init__(
- self, *, in_channels: int, num_anchors: int, box_dim: int = 4, conv_dims: List[int] = (-1,)
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- in_channels (int): number of input feature channels. When using multiple
- input features, they must have the same number of channels.
- num_anchors (int): number of anchors to predict for *each spatial position*
- on the feature map. The total number of anchors for each
- feature map will be `num_anchors * H * W`.
- box_dim (int): dimension of a box, which is also the number of box regression
- predictions to make for each anchor. An axis aligned box has
- box_dim=4, while a rotated box has box_dim=5.
- conv_dims (list[int]): a list of integers representing the output channels
- of N conv layers. Set it to -1 to use the same number of output channels
- as input channels.
- """
- super().__init__()
- cur_channels = in_channels
-        # Keeping the old variable names and structure for backwards compatibility.
- # Otherwise the old checkpoints will fail to load.
- if len(conv_dims) == 1:
- out_channels = cur_channels if conv_dims[0] == -1 else conv_dims[0]
- # 3x3 conv for the hidden representation
- self.conv = self._get_rpn_conv(cur_channels, out_channels)
- cur_channels = out_channels
- else:
- self.conv = nn.Sequential()
- for k, conv_dim in enumerate(conv_dims):
- out_channels = cur_channels if conv_dim == -1 else conv_dim
- if out_channels <= 0:
- raise ValueError(
- f"Conv output channels should be greater than 0. Got {out_channels}"
- )
- conv = self._get_rpn_conv(cur_channels, out_channels)
- self.conv.add_module(f"conv{k}", conv)
- cur_channels = out_channels
- # 1x1 conv for predicting objectness logits
- self.objectness_logits = nn.Conv2d(cur_channels, num_anchors, kernel_size=1, stride=1)
- # 1x1 conv for predicting box2box transform deltas
- self.anchor_deltas = nn.Conv2d(cur_channels, num_anchors * box_dim, kernel_size=1, stride=1)
-
-        # Keeping the order of weight initialization the same for backwards compatibility.
- for layer in self.modules():
- if isinstance(layer, nn.Conv2d):
- nn.init.normal_(layer.weight, std=0.01)
- nn.init.constant_(layer.bias, 0)
-
- def _get_rpn_conv(self, in_channels, out_channels):
- return Conv2d(
- in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- activation=nn.ReLU(),
- )
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- # Standard RPN is shared across levels:
- in_channels = [s.channels for s in input_shape]
-        assert len(set(in_channels)) == 1, "Each level must have the same number of channels!"
- in_channels = in_channels[0]
-
- # RPNHead should take the same input as anchor generator
- # NOTE: it assumes that creating an anchor generator does not have unwanted side effect.
- anchor_generator = build_anchor_generator(cfg, input_shape)
- num_anchors = anchor_generator.num_anchors
- box_dim = anchor_generator.box_dim
- assert (
- len(set(num_anchors)) == 1
- ), "Each level must have the same number of anchors per spatial position"
- return {
- "in_channels": in_channels,
- "num_anchors": num_anchors[0],
- "box_dim": box_dim,
- "conv_dims": cfg.MODEL.RPN.CONV_DIMS,
- }
-
- def forward(self, features: List[torch.Tensor]):
- """
- Args:
- features (list[Tensor]): list of feature maps
-
- Returns:
- list[Tensor]: A list of L elements.
- Element i is a tensor of shape (N, A, Hi, Wi) representing
- the predicted objectness logits for all anchors. A is the number of cell anchors.
- list[Tensor]: A list of L elements. Element i is a tensor of shape
- (N, A*box_dim, Hi, Wi) representing the predicted "deltas" used to transform anchors
- to proposals.
- """
- pred_objectness_logits = []
- pred_anchor_deltas = []
- for x in features:
- t = self.conv(x)
- pred_objectness_logits.append(self.objectness_logits(t))
- pred_anchor_deltas.append(self.anchor_deltas(t))
- return pred_objectness_logits, pred_anchor_deltas
-
-
-@PROPOSAL_GENERATOR_REGISTRY.register()
-class RPN(nn.Module):
- """
- Region Proposal Network, introduced by :paper:`Faster R-CNN`.
- """
-
- @configurable
- def __init__(
- self,
- *,
- in_features: List[str],
- head: nn.Module,
- anchor_generator: nn.Module,
- anchor_matcher: Matcher,
- box2box_transform: Box2BoxTransform,
- batch_size_per_image: int,
- positive_fraction: float,
- pre_nms_topk: Tuple[float, float],
- post_nms_topk: Tuple[float, float],
- nms_thresh: float = 0.7,
- min_box_size: float = 0.0,
- anchor_boundary_thresh: float = -1.0,
- loss_weight: Union[float, Dict[str, float]] = 1.0,
- box_reg_loss_type: str = "smooth_l1",
- smooth_l1_beta: float = 0.0,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- in_features (list[str]): list of names of input features to use
- head (nn.Module): a module that predicts logits and regression deltas
- for each level from a list of per-level features
- anchor_generator (nn.Module): a module that creates anchors from a
- list of features. Usually an instance of :class:`AnchorGenerator`
- anchor_matcher (Matcher): label the anchors by matching them with ground truth.
-            box2box_transform (Box2BoxTransform): defines the transform from anchor boxes to
- instance boxes
- batch_size_per_image (int): number of anchors per image to sample for training
- positive_fraction (float): fraction of foreground anchors to sample for training
- pre_nms_topk (tuple[float]): (train, test) that represents the
- number of top k proposals to select before NMS, in
- training and testing.
- post_nms_topk (tuple[float]): (train, test) that represents the
- number of top k proposals to select after NMS, in
- training and testing.
- nms_thresh (float): NMS threshold used to de-duplicate the predicted proposals
- min_box_size (float): remove proposal boxes with any side smaller than this threshold,
- in the unit of input image pixels
- anchor_boundary_thresh (float): legacy option
- loss_weight (float|dict): weights to use for losses. Can be single float for weighting
- all rpn losses together, or a dict of individual weightings. Valid dict keys are:
- "loss_rpn_cls" - applied to classification loss
- "loss_rpn_loc" - applied to box regression loss
- box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou".
- smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to
- use L1 loss. Only used when `box_reg_loss_type` is "smooth_l1"
- """
- super().__init__()
- self.in_features = in_features
- self.rpn_head = head
- self.anchor_generator = anchor_generator
- self.anchor_matcher = anchor_matcher
- self.box2box_transform = box2box_transform
- self.batch_size_per_image = batch_size_per_image
- self.positive_fraction = positive_fraction
- # Map from self.training state to train/test settings
- self.pre_nms_topk = {True: pre_nms_topk[0], False: pre_nms_topk[1]}
- self.post_nms_topk = {True: post_nms_topk[0], False: post_nms_topk[1]}
- self.nms_thresh = nms_thresh
- self.min_box_size = float(min_box_size)
- self.anchor_boundary_thresh = anchor_boundary_thresh
- if isinstance(loss_weight, float):
- loss_weight = {"loss_rpn_cls": loss_weight, "loss_rpn_loc": loss_weight}
- self.loss_weight = loss_weight
- self.box_reg_loss_type = box_reg_loss_type
- self.smooth_l1_beta = smooth_l1_beta
-
- @classmethod
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
- in_features = cfg.MODEL.RPN.IN_FEATURES
- ret = {
- "in_features": in_features,
- "min_box_size": cfg.MODEL.PROPOSAL_GENERATOR.MIN_SIZE,
- "nms_thresh": cfg.MODEL.RPN.NMS_THRESH,
- "batch_size_per_image": cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE,
- "positive_fraction": cfg.MODEL.RPN.POSITIVE_FRACTION,
- "loss_weight": {
- "loss_rpn_cls": cfg.MODEL.RPN.LOSS_WEIGHT,
- "loss_rpn_loc": cfg.MODEL.RPN.BBOX_REG_LOSS_WEIGHT * cfg.MODEL.RPN.LOSS_WEIGHT,
- },
- "anchor_boundary_thresh": cfg.MODEL.RPN.BOUNDARY_THRESH,
- "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS),
- "box_reg_loss_type": cfg.MODEL.RPN.BBOX_REG_LOSS_TYPE,
- "smooth_l1_beta": cfg.MODEL.RPN.SMOOTH_L1_BETA,
- }
-
- ret["pre_nms_topk"] = (cfg.MODEL.RPN.PRE_NMS_TOPK_TRAIN, cfg.MODEL.RPN.PRE_NMS_TOPK_TEST)
- ret["post_nms_topk"] = (cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN, cfg.MODEL.RPN.POST_NMS_TOPK_TEST)
-
- ret["anchor_generator"] = build_anchor_generator(cfg, [input_shape[f] for f in in_features])
- ret["anchor_matcher"] = Matcher(
- cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True
- )
- ret["head"] = build_rpn_head(cfg, [input_shape[f] for f in in_features])
- return ret
-
- def _subsample_labels(self, label):
- """
- Randomly sample a subset of positive and negative examples, and overwrite
- the label vector to the ignore value (-1) for all elements that are not
- included in the sample.
-
- Args:
-            label (Tensor): a vector of -1, 0, 1. Will be modified in-place and returned.
- """
- pos_idx, neg_idx = subsample_labels(
- label, self.batch_size_per_image, self.positive_fraction, 0
- )
- # Fill with the ignore label (-1), then set positive and negative labels
- label.fill_(-1)
- label.scatter_(0, pos_idx, 1)
- label.scatter_(0, neg_idx, 0)
- return label
-
- @torch.jit.unused
- @torch.no_grad()
- def label_and_sample_anchors(
- self, anchors: List[Boxes], gt_instances: List[Instances]
- ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:
- """
- Args:
- anchors (list[Boxes]): anchors for each feature map.
- gt_instances: the ground-truth instances for each image.
-
- Returns:
- list[Tensor]:
- List of #img tensors. i-th element is a vector of labels whose length is
- the total number of anchors across all feature maps R = sum(Hi * Wi * A).
- Label values are in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative
- class; 1 = positive class.
- list[Tensor]:
- i-th element is a Rx4 tensor. The values are the matched gt boxes for each
- anchor. Values are undefined for those anchors not labeled as 1.
- """
- anchors = Boxes.cat(anchors)
-
- gt_boxes = [x.gt_boxes for x in gt_instances]
- image_sizes = [x.image_size for x in gt_instances]
- del gt_instances
-
- gt_labels = []
- matched_gt_boxes = []
- for image_size_i, gt_boxes_i in zip(image_sizes, gt_boxes):
- """
- image_size_i: (h, w) for the i-th image
- gt_boxes_i: ground-truth boxes for i-th image
- """
-
- match_quality_matrix = retry_if_cuda_oom(pairwise_iou)(gt_boxes_i, anchors)
- matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix)
- # Matching is memory-expensive and may result in CPU tensors. But the result is small
- gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device)
- del match_quality_matrix
-
- if self.anchor_boundary_thresh >= 0:
- # Discard anchors that go out of the boundaries of the image
- # NOTE: This is legacy functionality that is turned off by default in Detectron2
- anchors_inside_image = anchors.inside_box(image_size_i, self.anchor_boundary_thresh)
- gt_labels_i[~anchors_inside_image] = -1
-
- # A vector of labels (-1, 0, 1) for each anchor
- gt_labels_i = self._subsample_labels(gt_labels_i)
-
- if len(gt_boxes_i) == 0:
- # These values won't be used anyway since the anchor is labeled as background
- matched_gt_boxes_i = torch.zeros_like(anchors.tensor)
- else:
- # TODO wasted indexing computation for ignored boxes
- matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor
-
- gt_labels.append(gt_labels_i) # N,AHW
- matched_gt_boxes.append(matched_gt_boxes_i)
- return gt_labels, matched_gt_boxes
-
- @torch.jit.unused
- def losses(
- self,
- anchors: List[Boxes],
- pred_objectness_logits: List[torch.Tensor],
- gt_labels: List[torch.Tensor],
- pred_anchor_deltas: List[torch.Tensor],
- gt_boxes: List[torch.Tensor],
- ) -> Dict[str, torch.Tensor]:
- """
- Return the losses from a set of RPN predictions and their associated ground-truth.
-
- Args:
- anchors (list[Boxes or RotatedBoxes]): anchors for each feature map, each
- has shape (Hi*Wi*A, B), where B is box dimension (4 or 5).
- pred_objectness_logits (list[Tensor]): A list of L elements.
- Element i is a tensor of shape (N, Hi*Wi*A) representing
- the predicted objectness logits for all anchors.
- gt_labels (list[Tensor]): Output of :meth:`label_and_sample_anchors`.
- pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape
- (N, Hi*Wi*A, 4 or 5) representing the predicted "deltas" used to transform anchors
- to proposals.
- gt_boxes (list[Tensor]): Output of :meth:`label_and_sample_anchors`.
-
- Returns:
- dict[loss name -> loss value]: A dict mapping from loss name to loss value.
- Loss names are: `loss_rpn_cls` for objectness classification and
- `loss_rpn_loc` for proposal localization.
- """
- num_images = len(gt_labels)
- gt_labels = torch.stack(gt_labels) # (N, sum(Hi*Wi*Ai))
-
- # Log the number of positive/negative anchors per-image that's used in training
- pos_mask = gt_labels == 1
- num_pos_anchors = pos_mask.sum().item()
- num_neg_anchors = (gt_labels == 0).sum().item()
- storage = get_event_storage()
- storage.put_scalar("rpn/num_pos_anchors", num_pos_anchors / num_images)
- storage.put_scalar("rpn/num_neg_anchors", num_neg_anchors / num_images)
-
- localization_loss = _dense_box_regression_loss(
- anchors,
- self.box2box_transform,
- pred_anchor_deltas,
- gt_boxes,
- pos_mask,
- box_reg_loss_type=self.box_reg_loss_type,
- smooth_l1_beta=self.smooth_l1_beta,
- )
-
- valid_mask = gt_labels >= 0
- objectness_loss = F.binary_cross_entropy_with_logits(
- cat(pred_objectness_logits, dim=1)[valid_mask],
- gt_labels[valid_mask].to(torch.float32),
- reduction="sum",
- )
- normalizer = self.batch_size_per_image * num_images
- losses = {
- "loss_rpn_cls": objectness_loss / normalizer,
- # The original Faster R-CNN paper uses a slightly different normalizer
- # for loc loss. But it doesn't matter in practice
- "loss_rpn_loc": localization_loss / normalizer,
- }
- losses = {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()}
- return losses
-
- def forward(
- self,
- images: ImageList,
- features: Dict[str, torch.Tensor],
- gt_instances: Optional[List[Instances]] = None,
- ):
- """
- Args:
- images (ImageList): input images of length `N`
- features (dict[str, Tensor]): input data as a mapping from feature
- map name to tensor. Axis 0 represents the number of images `N` in
- the input data; axes 1-3 are channels, height, and width, which may
- vary between feature maps (e.g., if a feature pyramid is used).
- gt_instances (list[Instances], optional): a length `N` list of `Instances`s.
- Each `Instances` stores ground-truth instances for the corresponding image.
-
- Returns:
- proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits"
- loss: dict[Tensor] or None
- """
- features = [features[f] for f in self.in_features]
- anchors = self.anchor_generator(features)
-
- pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features)
- # Transpose the Hi*Wi*A dimension to the middle:
- pred_objectness_logits = [
- # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A)
- score.permute(0, 2, 3, 1).flatten(1)
- for score in pred_objectness_logits
- ]
- pred_anchor_deltas = [
- # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B)
- x.view(x.shape[0], -1, self.anchor_generator.box_dim, x.shape[-2], x.shape[-1])
- .permute(0, 3, 4, 1, 2)
- .flatten(1, -2)
- for x in pred_anchor_deltas
- ]
-
- if self.training:
- assert gt_instances is not None, "RPN requires gt_instances in training!"
- gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances)
- losses = self.losses(
- anchors, pred_objectness_logits, gt_labels, pred_anchor_deltas, gt_boxes
- )
- else:
- losses = {}
- proposals = self.predict_proposals(
- anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes
- )
- return proposals, losses
-
- def predict_proposals(
- self,
- anchors: List[Boxes],
- pred_objectness_logits: List[torch.Tensor],
- pred_anchor_deltas: List[torch.Tensor],
- image_sizes: List[Tuple[int, int]],
- ):
- """
- Decode all the predicted box regression deltas to proposals. Find the top proposals
- by applying NMS and removing boxes that are too small.
-
- Returns:
- proposals (list[Instances]): list of N Instances. The i-th Instances
- stores post_nms_topk object proposals for image i, sorted by their
- objectness score in descending order.
- """
- # The proposals are treated as fixed for joint training with roi heads.
- # This approach ignores the derivative w.r.t. the proposal boxes’ coordinates that
- # are also network responses.
- with torch.no_grad():
- pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas)
- return find_top_rpn_proposals(
- pred_proposals,
- pred_objectness_logits,
- image_sizes,
- self.nms_thresh,
- self.pre_nms_topk[self.training],
- self.post_nms_topk[self.training],
- self.min_box_size,
- self.training,
- )
-
- def _decode_proposals(self, anchors: List[Boxes], pred_anchor_deltas: List[torch.Tensor]):
- """
- Transform anchors into proposals by applying the predicted anchor deltas.
-
- Returns:
- proposals (list[Tensor]): A list of L tensors. Tensor i has shape
- (N, Hi*Wi*A, B)
- """
- N = pred_anchor_deltas[0].shape[0]
- proposals = []
- # For each feature map
- for anchors_i, pred_anchor_deltas_i in zip(anchors, pred_anchor_deltas):
- B = anchors_i.tensor.size(1)
- pred_anchor_deltas_i = pred_anchor_deltas_i.reshape(-1, B)
- # Expand anchors to shape (N*Hi*Wi*A, B)
- anchors_i = anchors_i.tensor.unsqueeze(0).expand(N, -1, -1).reshape(-1, B)
- proposals_i = self.box2box_transform.apply_deltas(pred_anchor_deltas_i, anchors_i)
- # Append feature map proposals with shape (N, Hi*Wi*A, B)
- proposals.append(proposals_i.view(N, -1, B))
- return proposals
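For reference, the permute/flatten reshaping that `RPN.forward` applies to the head outputs can be checked in isolation. The sketch below uses only plain PyTorch with made-up sizes (`N`, `A`, `B`, `Hi`, `Wi` are illustrative, not taken from any config); it mirrors just the reshaping step, not the rest of the proposal pipeline.

```python
# Minimal sketch, assuming plain PyTorch and illustrative sizes.
import torch

N, A, B, Hi, Wi = 2, 3, 4, 8, 8          # batch, anchors per cell, box dim, feature map size (made up)
logits = torch.randn(N, A, Hi, Wi)       # stand-in for per-level objectness logits
deltas = torch.randn(N, A * B, Hi, Wi)   # stand-in for per-level anchor deltas

# (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A)
flat_logits = logits.permute(0, 2, 3, 1).flatten(1)

# (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B)
flat_deltas = (
    deltas.view(N, A, B, Hi, Wi)
    .permute(0, 3, 4, 1, 2)
    .flatten(1, -2)
)

assert flat_logits.shape == (N, Hi * Wi * A)
assert flat_deltas.shape == (N, Hi * Wi * A, B)
```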
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/tracking/bbox_iou_tracker.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/tracking/bbox_iou_tracker.py
deleted file mode 100644
index 598081cb542ce64dd1d100c0d3e12a59f57b8e0e..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/tracking/bbox_iou_tracker.py
+++ /dev/null
@@ -1,276 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2004-present Facebook. All Rights Reserved.
-import copy
-import numpy as np
-from typing import List
-import torch
-
-from detectron2.config import configurable
-from detectron2.structures import Boxes, Instances
-from detectron2.structures.boxes import pairwise_iou
-
-from ..config.config import CfgNode as CfgNode_
-from .base_tracker import TRACKER_HEADS_REGISTRY, BaseTracker
-
-
-@TRACKER_HEADS_REGISTRY.register()
-class BBoxIOUTracker(BaseTracker):
- """
- A bounding box tracker to assign ID based on IoU between current and previous instances
- """
-
- @configurable
- def __init__(
- self,
- *,
- video_height: int,
- video_width: int,
- max_num_instances: int = 200,
- max_lost_frame_count: int = 0,
- min_box_rel_dim: float = 0.02,
- min_instance_period: int = 1,
- track_iou_threshold: float = 0.5,
- **kwargs,
- ):
- """
- Args:
-            video_height: height of the video frame
-            video_width: width of the video frame
-            max_num_instances: maximum number of IDs allowed to be tracked
-            max_lost_frame_count: maximum number of frames an ID can lose tracking;
-                                  once this number is exceeded, the ID is considered
-                                  lost forever
-            min_box_rel_dim: minimum box dimension relative to the frame size; bboxes
-                             smaller than this are removed from tracking
-            min_instance_period: an instance is only shown after it has been tracked
-                                 for this number of frames since first appearing in the video
-            track_iou_threshold: IoU threshold; bbox pairs below this value are
-                                 removed from tracking
- """
- super().__init__(**kwargs)
- self._video_height = video_height
- self._video_width = video_width
- self._max_num_instances = max_num_instances
- self._max_lost_frame_count = max_lost_frame_count
- self._min_box_rel_dim = min_box_rel_dim
- self._min_instance_period = min_instance_period
- self._track_iou_threshold = track_iou_threshold
-
- @classmethod
- def from_config(cls, cfg: CfgNode_):
- """
- Old style initialization using CfgNode
-
- Args:
- cfg: D2 CfgNode, config file
- Return:
- dictionary storing arguments for __init__ method
- """
- assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS
- assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS
- video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT")
- video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH")
- max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200)
- max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0)
- min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02)
- min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1)
- track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5)
- return {
- "_target_": "detectron2.tracking.bbox_iou_tracker.BBoxIOUTracker",
- "video_height": video_height,
- "video_width": video_width,
- "max_num_instances": max_num_instances,
- "max_lost_frame_count": max_lost_frame_count,
- "min_box_rel_dim": min_box_rel_dim,
- "min_instance_period": min_instance_period,
- "track_iou_threshold": track_iou_threshold,
- }
-
- def update(self, instances: Instances) -> Instances:
- """
- See BaseTracker description
- """
- instances = self._initialize_extra_fields(instances)
- if self._prev_instances is not None:
- # calculate IoU of all bbox pairs
- iou_all = pairwise_iou(
- boxes1=instances.pred_boxes,
- boxes2=self._prev_instances.pred_boxes,
- )
- # sort IoU in descending order
- bbox_pairs = self._create_prediction_pairs(instances, iou_all)
- # assign previous ID to current bbox if IoU > track_iou_threshold
- self._reset_fields()
- for bbox_pair in bbox_pairs:
- idx = bbox_pair["idx"]
- prev_id = bbox_pair["prev_id"]
- if (
- idx in self._matched_idx
- or prev_id in self._matched_ID
- or bbox_pair["IoU"] < self._track_iou_threshold
- ):
- continue
- instances.ID[idx] = prev_id
- instances.ID_period[idx] = bbox_pair["prev_period"] + 1
- instances.lost_frame_count[idx] = 0
- self._matched_idx.add(idx)
- self._matched_ID.add(prev_id)
- self._untracked_prev_idx.remove(bbox_pair["prev_idx"])
- instances = self._assign_new_id(instances)
- instances = self._merge_untracked_instances(instances)
- self._prev_instances = copy.deepcopy(instances)
- return instances
-
- def _create_prediction_pairs(self, instances: Instances, iou_all: np.ndarray) -> List:
- """
- For all instances in previous and current frames, create pairs. For each
-        pair, store index of the instance in current frame predictions, index in
- previous predictions, ID in previous predictions, IoU of the bboxes in this
- pair, period in previous predictions.
-
- Args:
- instances: D2 Instances, for predictions of the current frame
- iou_all: IoU for all bboxes pairs
- Return:
-            A list of dicts, one per bbox pair, storing the fields described above
- """
- bbox_pairs = []
- for i in range(len(instances)):
- for j in range(len(self._prev_instances)):
- bbox_pairs.append(
- {
- "idx": i,
- "prev_idx": j,
- "prev_id": self._prev_instances.ID[j],
- "IoU": iou_all[i, j],
- "prev_period": self._prev_instances.ID_period[j],
- }
- )
- return bbox_pairs
-
- def _initialize_extra_fields(self, instances: Instances) -> Instances:
- """
- If input instances don't have ID, ID_period, lost_frame_count fields,
- this method is used to initialize these fields.
-
- Args:
- instances: D2 Instances, for predictions of the current frame
- Return:
- D2 Instances with extra fields added
- """
- if not instances.has("ID"):
- instances.set("ID", [None] * len(instances))
- if not instances.has("ID_period"):
- instances.set("ID_period", [None] * len(instances))
- if not instances.has("lost_frame_count"):
- instances.set("lost_frame_count", [None] * len(instances))
- if self._prev_instances is None:
- instances.ID = list(range(len(instances)))
- self._id_count += len(instances)
- instances.ID_period = [1] * len(instances)
- instances.lost_frame_count = [0] * len(instances)
- return instances
-
- def _reset_fields(self):
- """
-        Before each update call, reset fields first
- """
- self._matched_idx = set()
- self._matched_ID = set()
- self._untracked_prev_idx = set(range(len(self._prev_instances)))
-
- def _assign_new_id(self, instances: Instances) -> Instances:
- """
- For each untracked instance, assign a new id
-
- Args:
- instances: D2 Instances, for predictions of the current frame
- Return:
- D2 Instances with new ID assigned
- """
- untracked_idx = set(range(len(instances))).difference(self._matched_idx)
- for idx in untracked_idx:
- instances.ID[idx] = self._id_count
- self._id_count += 1
- instances.ID_period[idx] = 1
- instances.lost_frame_count[idx] = 0
- return instances
-
- def _merge_untracked_instances(self, instances: Instances) -> Instances:
- """
- For untracked previous instances, under certain condition, still keep them
- in tracking and merge with the current instances.
-
- Args:
- instances: D2 Instances, for predictions of the current frame
- Return:
- D2 Instances merging current instances and instances from previous
- frame decided to keep tracking
- """
- untracked_instances = Instances(
- image_size=instances.image_size,
- pred_boxes=[],
- pred_classes=[],
- scores=[],
- ID=[],
- ID_period=[],
- lost_frame_count=[],
- )
- prev_bboxes = list(self._prev_instances.pred_boxes)
- prev_classes = list(self._prev_instances.pred_classes)
- prev_scores = list(self._prev_instances.scores)
- prev_ID_period = self._prev_instances.ID_period
- if instances.has("pred_masks"):
- untracked_instances.set("pred_masks", [])
- prev_masks = list(self._prev_instances.pred_masks)
- if instances.has("pred_keypoints"):
- untracked_instances.set("pred_keypoints", [])
- prev_keypoints = list(self._prev_instances.pred_keypoints)
- if instances.has("pred_keypoint_heatmaps"):
- untracked_instances.set("pred_keypoint_heatmaps", [])
- prev_keypoint_heatmaps = list(self._prev_instances.pred_keypoint_heatmaps)
- for idx in self._untracked_prev_idx:
- x_left, y_top, x_right, y_bot = prev_bboxes[idx]
- if (
- (1.0 * (x_right - x_left) / self._video_width < self._min_box_rel_dim)
- or (1.0 * (y_bot - y_top) / self._video_height < self._min_box_rel_dim)
- or self._prev_instances.lost_frame_count[idx] >= self._max_lost_frame_count
- or prev_ID_period[idx] <= self._min_instance_period
- ):
- continue
- untracked_instances.pred_boxes.append(list(prev_bboxes[idx].numpy()))
- untracked_instances.pred_classes.append(int(prev_classes[idx]))
- untracked_instances.scores.append(float(prev_scores[idx]))
- untracked_instances.ID.append(self._prev_instances.ID[idx])
- untracked_instances.ID_period.append(self._prev_instances.ID_period[idx])
- untracked_instances.lost_frame_count.append(
- self._prev_instances.lost_frame_count[idx] + 1
- )
- if instances.has("pred_masks"):
- untracked_instances.pred_masks.append(prev_masks[idx].numpy().astype(np.uint8))
- if instances.has("pred_keypoints"):
- untracked_instances.pred_keypoints.append(
- prev_keypoints[idx].numpy().astype(np.uint8)
- )
- if instances.has("pred_keypoint_heatmaps"):
- untracked_instances.pred_keypoint_heatmaps.append(
- prev_keypoint_heatmaps[idx].numpy().astype(np.float32)
- )
- untracked_instances.pred_boxes = Boxes(torch.FloatTensor(untracked_instances.pred_boxes))
- untracked_instances.pred_classes = torch.IntTensor(untracked_instances.pred_classes)
- untracked_instances.scores = torch.FloatTensor(untracked_instances.scores)
- if instances.has("pred_masks"):
- untracked_instances.pred_masks = torch.IntTensor(untracked_instances.pred_masks)
- if instances.has("pred_keypoints"):
- untracked_instances.pred_keypoints = torch.IntTensor(untracked_instances.pred_keypoints)
- if instances.has("pred_keypoint_heatmaps"):
- untracked_instances.pred_keypoint_heatmaps = torch.FloatTensor(
- untracked_instances.pred_keypoint_heatmaps
- )
-
- return Instances.cat(
- [
- instances,
- untracked_instances,
- ]
- )
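The ID carry-over in `BBoxIOUTracker.update` is essentially a greedy IoU match between current and previous boxes. The sketch below assumes plain PyTorch and XYXY boxes; the box coordinates, the previous-frame IDs and the 0.5 cut-off (standing in for `track_iou_threshold`) are made up, and `pairwise_iou_xyxy` is only a local stand-in for `detectron2.structures.boxes.pairwise_iou`.

```python
# Minimal sketch, assuming plain PyTorch and hypothetical XYXY boxes/IDs.
import torch

def pairwise_iou_xyxy(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """IoU between every box in `a` (M, 4) and every box in `b` (K, 4)."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])   # top-left of intersections, (M, K, 2)
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])   # bottom-right of intersections, (M, K, 2)
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

curr = torch.tensor([[10.0, 10.0, 50.0, 50.0], [200.0, 200.0, 240.0, 260.0]])
prev = torch.tensor([[12.0, 11.0, 52.0, 49.0], [100.0, 100.0, 140.0, 140.0]])
prev_ids = [7, 8]                                    # hypothetical IDs from the previous frame
iou = pairwise_iou_xyxy(curr, prev)

# Greedy assignment: walk all pairs in descending IoU order, carry an ID over when the
# IoU clears the threshold and neither box has been matched yet.
ids = [None] * len(curr)
matched_curr, matched_prev = set(), set()
pairs = sorted(
    ((iou[i, j].item(), i, j) for i in range(len(curr)) for j in range(len(prev))),
    reverse=True,
)
for score, i, j in pairs:
    if score < 0.5 or i in matched_curr or j in matched_prev:
        continue
    ids[i] = prev_ids[j]
    matched_curr.add(i)
    matched_prev.add(j)

# Unmatched current boxes would get fresh IDs; unmatched previous boxes may be kept for a
# few frames, as _merge_untracked_instances does.
print(ids)  # [7, None]
```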
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/testing.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/testing.py
deleted file mode 100644
index 3f5b9dbe4438e1f5c6976b45bafed8966aee2dd9..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/testing.py
+++ /dev/null
@@ -1,478 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import io
-import numpy as np
-import os
-import re
-import tempfile
-import unittest
-from typing import Callable
-import torch
-import torch.onnx.symbolic_helper as sym_help
-from packaging import version
-from torch._C import ListType
-from torch.onnx import register_custom_op_symbolic
-
-from detectron2 import model_zoo
-from detectron2.config import CfgNode, LazyConfig, instantiate
-from detectron2.data import DatasetCatalog
-from detectron2.data.detection_utils import read_image
-from detectron2.modeling import build_model
-from detectron2.structures import Boxes, Instances, ROIMasks
-from detectron2.utils.file_io import PathManager
-
-
-"""
-Internal utilities for tests. Don't use except for writing tests.
-"""
-
-
-def get_model_no_weights(config_path):
- """
- Like model_zoo.get, but do not load any weights (even pretrained)
- """
- cfg = model_zoo.get_config(config_path)
- if isinstance(cfg, CfgNode):
- if not torch.cuda.is_available():
- cfg.MODEL.DEVICE = "cpu"
- return build_model(cfg)
- else:
- return instantiate(cfg.model)
-
-
-def random_boxes(num_boxes, max_coord=100, device="cpu"):
- """
- Create a random Nx4 boxes tensor, with coordinates < max_coord.
- """
- boxes = torch.rand(num_boxes, 4, device=device) * (max_coord * 0.5)
- boxes.clamp_(min=1.0) # tiny boxes cause numerical instability in box regression
- # Note: the implementation of this function in torchvision is:
- # boxes[:, 2:] += torch.rand(N, 2) * 100
- # but it does not guarantee non-negative widths/heights constraints:
- # boxes[:, 2] >= boxes[:, 0] and boxes[:, 3] >= boxes[:, 1]:
- boxes[:, 2:] += boxes[:, :2]
- return boxes
-
-
-def get_sample_coco_image(tensor=True):
- """
- Args:
- tensor (bool): if True, returns 3xHxW tensor.
- else, returns a HxWx3 numpy array.
-
- Returns:
- an image, in BGR color.
- """
- try:
- file_name = DatasetCatalog.get("coco_2017_val_100")[0]["file_name"]
- if not PathManager.exists(file_name):
- raise FileNotFoundError()
- except IOError:
- # for public CI to run
- file_name = PathManager.get_local_path(
- "http://images.cocodataset.org/train2017/000000000009.jpg"
- )
- ret = read_image(file_name, format="BGR")
- if tensor:
- ret = torch.from_numpy(np.ascontiguousarray(ret.transpose(2, 0, 1)))
- return ret
-
-
-def convert_scripted_instances(instances):
- """
- Convert a scripted Instances object to a regular :class:`Instances` object
- """
- assert hasattr(
- instances, "image_size"
- ), f"Expect an Instances object, but got {type(instances)}!"
- ret = Instances(instances.image_size)
- for name in instances._field_names:
- val = getattr(instances, "_" + name, None)
- if val is not None:
- ret.set(name, val)
- return ret
-
-
-def assert_instances_allclose(input, other, *, rtol=1e-5, msg="", size_as_tensor=False):
- """
- Args:
- input, other (Instances):
- size_as_tensor: compare image_size of the Instances as tensors (instead of tuples).
- Useful for comparing outputs of tracing.
- """
- if not isinstance(input, Instances):
- input = convert_scripted_instances(input)
- if not isinstance(other, Instances):
- other = convert_scripted_instances(other)
-
- if not msg:
- msg = "Two Instances are different! "
- else:
- msg = msg.rstrip() + " "
-
- size_error_msg = msg + f"image_size is {input.image_size} vs. {other.image_size}!"
- if size_as_tensor:
- assert torch.equal(
- torch.tensor(input.image_size), torch.tensor(other.image_size)
- ), size_error_msg
- else:
- assert input.image_size == other.image_size, size_error_msg
- fields = sorted(input.get_fields().keys())
- fields_other = sorted(other.get_fields().keys())
- assert fields == fields_other, msg + f"Fields are {fields} vs {fields_other}!"
-
- for f in fields:
- val1, val2 = input.get(f), other.get(f)
- if isinstance(val1, (Boxes, ROIMasks)):
- # boxes in the range of O(100) and can have a larger tolerance
- assert torch.allclose(val1.tensor, val2.tensor, atol=100 * rtol), (
- msg + f"Field {f} differs too much!"
- )
- elif isinstance(val1, torch.Tensor):
- if val1.dtype.is_floating_point:
- mag = torch.abs(val1).max().cpu().item()
- assert torch.allclose(val1, val2, atol=mag * rtol), (
- msg + f"Field {f} differs too much!"
- )
- else:
- assert torch.equal(val1, val2), msg + f"Field {f} is different!"
- else:
- raise ValueError(f"Don't know how to compare type {type(val1)}")
-
-
-def reload_script_model(module):
- """
- Save a jit module and load it back.
- Similar to the `getExportImportCopy` function in torch/testing/
- """
- buffer = io.BytesIO()
- torch.jit.save(module, buffer)
- buffer.seek(0)
- return torch.jit.load(buffer)
-
-
-def reload_lazy_config(cfg):
- """
- Save an object by LazyConfig.save and load it back.
- This is used to test that a config still works the same after
- serialization/deserialization.
- """
- with tempfile.TemporaryDirectory(prefix="detectron2") as d:
- fname = os.path.join(d, "d2_cfg_test.yaml")
- LazyConfig.save(cfg, fname)
- return LazyConfig.load(fname)
-
-
-def min_torch_version(min_version: str) -> bool:
- """
- Returns True when torch's version is at least `min_version`.
- """
- try:
- import torch
- except ImportError:
- return False
-
- installed_version = version.parse(torch.__version__.split("+")[0])
- min_version = version.parse(min_version)
- return installed_version >= min_version
-
-
-def has_dynamic_axes(onnx_model):
- """
- Return True when all ONNX input/output have only dynamic axes for all ranks
- """
- return all(
- not dim.dim_param.isnumeric()
- for inp in onnx_model.graph.input
- for dim in inp.type.tensor_type.shape.dim
- ) and all(
- not dim.dim_param.isnumeric()
- for out in onnx_model.graph.output
- for dim in out.type.tensor_type.shape.dim
- )
-
-
-def register_custom_op_onnx_export(
- opname: str, symbolic_fn: Callable, opset_version: int, min_version: str
-) -> None:
- """
- Register `symbolic_fn` as PyTorch's symbolic `opname`-`opset_version` for ONNX export.
-    The registration is performed only when the current PyTorch version is < `min_version`.
- IMPORTANT: symbolic must be manually unregistered after the caller function returns
- """
- if min_torch_version(min_version):
- return
- register_custom_op_symbolic(opname, symbolic_fn, opset_version)
- print(f"_register_custom_op_onnx_export({opname}, {opset_version}) succeeded.")
-
-
-def unregister_custom_op_onnx_export(opname: str, opset_version: int, min_version: str) -> None:
- """
- Unregister PyTorch's symbolic `opname`-`opset_version` for ONNX export.
-    The un-registration is performed only when the PyTorch version is < `min_version`.
- IMPORTANT: The symbolic must have been manually registered by the caller, otherwise
- the incorrect symbolic may be unregistered instead.
- """
-
-    # TODO: _unregister_custom_op_symbolic is introduced in PyTorch>=1.10
- # Remove after PyTorch 1.10+ is used by ALL detectron2's CI
- try:
- from torch.onnx import unregister_custom_op_symbolic as _unregister_custom_op_symbolic
- except ImportError:
-
- def _unregister_custom_op_symbolic(symbolic_name, opset_version):
- import torch.onnx.symbolic_registry as sym_registry
- from torch.onnx.symbolic_helper import _onnx_main_opset, _onnx_stable_opsets
-
- def _get_ns_op_name_from_custom_op(symbolic_name):
- try:
- from torch.onnx.utils import get_ns_op_name_from_custom_op
-
- ns, op_name = get_ns_op_name_from_custom_op(symbolic_name)
- except ImportError as import_error:
- if not bool(
- re.match(r"^[a-zA-Z0-9-_]*::[a-zA-Z-_]+[a-zA-Z0-9-_]*$", symbolic_name)
- ):
- raise ValueError(
- f"Invalid symbolic name {symbolic_name}. Must be `domain::name`"
- ) from import_error
-
- ns, op_name = symbolic_name.split("::")
- if ns == "onnx":
- raise ValueError(f"{ns} domain cannot be modified.") from import_error
-
- if ns == "aten":
- ns = ""
-
- return ns, op_name
-
- def _unregister_op(opname: str, domain: str, version: int):
- try:
- sym_registry.unregister_op(op_name, ns, ver)
- except AttributeError as attribute_error:
- if sym_registry.is_registered_op(opname, domain, version):
- del sym_registry._registry[(domain, version)][opname]
- if not sym_registry._registry[(domain, version)]:
- del sym_registry._registry[(domain, version)]
- else:
- raise RuntimeError(
- f"The opname {opname} is not registered."
- ) from attribute_error
-
- ns, op_name = _get_ns_op_name_from_custom_op(symbolic_name)
- for ver in _onnx_stable_opsets + [_onnx_main_opset]:
- if ver >= opset_version:
- _unregister_op(op_name, ns, ver)
-
- if min_torch_version(min_version):
- return
- _unregister_custom_op_symbolic(opname, opset_version)
- print(f"_unregister_custom_op_onnx_export({opname}, {opset_version}) succeeded.")
-
-
-skipIfOnCPUCI = unittest.skipIf(
- os.environ.get("CI") and not torch.cuda.is_available(),
- "The test is too slow on CPUs and will be executed on CircleCI's GPU jobs.",
-)
-
-
-def skipIfUnsupportedMinOpsetVersion(min_opset_version, current_opset_version=None):
- """
- Skips tests for ONNX Opset versions older than min_opset_version.
- """
-
- def skip_dec(func):
- def wrapper(self):
- try:
- opset_version = self.opset_version
- except AttributeError:
- opset_version = current_opset_version
- if opset_version < min_opset_version:
- raise unittest.SkipTest(
- f"Unsupported opset_version {opset_version}"
- f", required is {min_opset_version}"
- )
- return func(self)
-
- return wrapper
-
- return skip_dec
-
-
-def skipIfUnsupportedMinTorchVersion(min_version):
- """
- Skips tests for PyTorch versions older than min_version.
- """
- reason = f"module 'torch' has __version__ {torch.__version__}" f", required is: {min_version}"
- return unittest.skipIf(not min_torch_version(min_version), reason)
-
-
-# TODO: Remove after PyTorch 1.11.1+ is used by detectron2's CI
-def _pytorch1111_symbolic_opset9_to(g, self, *args):
- """aten::to() symbolic that must be used for testing with PyTorch < 1.11.1."""
-
- def is_aten_to_device_only(args):
- if len(args) == 4:
- # aten::to(Tensor, Device, bool, bool, memory_format)
- return (
- args[0].node().kind() == "prim::device"
- or args[0].type().isSubtypeOf(ListType.ofInts())
- or (
- sym_help._is_value(args[0])
- and args[0].node().kind() == "onnx::Constant"
- and isinstance(args[0].node()["value"], str)
- )
- )
- elif len(args) == 5:
- # aten::to(Tensor, Device, ScalarType, bool, bool, memory_format)
- # When dtype is None, this is a aten::to(device) call
- dtype = sym_help._get_const(args[1], "i", "dtype")
- return dtype is None
- elif len(args) in (6, 7):
- # aten::to(Tensor, ScalarType, Layout, Device, bool, bool, memory_format)
- # aten::to(Tensor, ScalarType, Layout, Device, bool, bool, bool, memory_format)
- # When dtype is None, this is a aten::to(device) call
- dtype = sym_help._get_const(args[0], "i", "dtype")
- return dtype is None
- return False
-
- # ONNX doesn't have a concept of a device, so we ignore device-only casts
- if is_aten_to_device_only(args):
- return self
-
- if len(args) == 4:
- # TestONNXRuntime::test_ones_bool shows args[0] of aten::to can be onnx::Constant[Tensor]
- # In this case, the constant value is a tensor not int,
- # so sym_help._maybe_get_const(args[0], 'i') would not work.
- dtype = args[0]
- if sym_help._is_value(args[0]) and args[0].node().kind() == "onnx::Constant":
- tval = args[0].node()["value"]
- if isinstance(tval, torch.Tensor):
- if len(tval.shape) == 0:
- tval = tval.item()
- dtype = int(tval)
- else:
- dtype = tval
-
- if sym_help._is_value(dtype) or isinstance(dtype, torch.Tensor):
- # aten::to(Tensor, Tensor, bool, bool, memory_format)
- dtype = args[0].type().scalarType()
- return g.op("Cast", self, to_i=sym_help.cast_pytorch_to_onnx[dtype])
- else:
- # aten::to(Tensor, ScalarType, bool, bool, memory_format)
- # memory_format is ignored
- return g.op("Cast", self, to_i=sym_help.scalar_type_to_onnx[dtype])
- elif len(args) == 5:
- # aten::to(Tensor, Device, ScalarType, bool, bool, memory_format)
- dtype = sym_help._get_const(args[1], "i", "dtype")
- # memory_format is ignored
- return g.op("Cast", self, to_i=sym_help.scalar_type_to_onnx[dtype])
- elif len(args) == 6:
- # aten::to(Tensor, ScalarType, Layout, Device, bool, bool, memory_format)
- dtype = sym_help._get_const(args[0], "i", "dtype")
- # Layout, device and memory_format are ignored
- return g.op("Cast", self, to_i=sym_help.scalar_type_to_onnx[dtype])
- elif len(args) == 7:
- # aten::to(Tensor, ScalarType, Layout, Device, bool, bool, bool, memory_format)
- dtype = sym_help._get_const(args[0], "i", "dtype")
- # Layout, device and memory_format are ignored
- return g.op("Cast", self, to_i=sym_help.scalar_type_to_onnx[dtype])
- else:
- return sym_help._onnx_unsupported("Unknown aten::to signature")
-
-
-# TODO: Remove after PyTorch 1.11.1+ is used by detectron2's CI
-def _pytorch1111_symbolic_opset9_repeat_interleave(g, self, repeats, dim=None, output_size=None):
-
- # from torch.onnx.symbolic_helper import ScalarType
- from torch.onnx.symbolic_opset9 import expand, unsqueeze
-
- input = self
- # if dim is None flatten
- # By default, use the flattened input array, and return a flat output array
- if sym_help._is_none(dim):
- input = sym_help._reshape_helper(g, self, g.op("Constant", value_t=torch.tensor([-1])))
- dim = 0
- else:
- dim = sym_help._maybe_get_scalar(dim)
-
- repeats_dim = sym_help._get_tensor_rank(repeats)
- repeats_sizes = sym_help._get_tensor_sizes(repeats)
- input_sizes = sym_help._get_tensor_sizes(input)
- if repeats_dim is None:
- raise RuntimeError(
- "Unsupported: ONNX export of repeat_interleave for unknown " "repeats rank."
- )
- if repeats_sizes is None:
- raise RuntimeError(
- "Unsupported: ONNX export of repeat_interleave for unknown " "repeats size."
- )
- if input_sizes is None:
- raise RuntimeError(
- "Unsupported: ONNX export of repeat_interleave for unknown " "input size."
- )
-
- input_sizes_temp = input_sizes.copy()
- for idx, input_size in enumerate(input_sizes):
- if input_size is None:
- input_sizes[idx], input_sizes_temp[idx] = 0, -1
-
- # Cases where repeats is an int or single value tensor
- if repeats_dim == 0 or (repeats_dim == 1 and repeats_sizes[0] == 1):
- if not sym_help._is_tensor(repeats):
- repeats = g.op("Constant", value_t=torch.LongTensor(repeats))
- if input_sizes[dim] == 0:
- return sym_help._onnx_opset_unsupported_detailed(
- "repeat_interleave",
- 9,
- 13,
- "Unsupported along dimension with unknown input size",
- )
- else:
- reps = input_sizes[dim]
- repeats = expand(g, repeats, g.op("Constant", value_t=torch.tensor([reps])), None)
-
- # Cases where repeats is a 1 dim Tensor
- elif repeats_dim == 1:
- if input_sizes[dim] == 0:
- return sym_help._onnx_opset_unsupported_detailed(
- "repeat_interleave",
- 9,
- 13,
- "Unsupported along dimension with unknown input size",
- )
- if repeats_sizes[0] is None:
- return sym_help._onnx_opset_unsupported_detailed(
- "repeat_interleave", 9, 13, "Unsupported for cases with dynamic repeats"
- )
- assert (
- repeats_sizes[0] == input_sizes[dim]
- ), "repeats must have the same size as input along dim"
- reps = repeats_sizes[0]
- else:
- raise RuntimeError("repeats must be 0-dim or 1-dim tensor")
-
- final_splits = list()
- r_splits = sym_help._repeat_interleave_split_helper(g, repeats, reps, 0)
- if isinstance(r_splits, torch._C.Value):
- r_splits = [r_splits]
- i_splits = sym_help._repeat_interleave_split_helper(g, input, reps, dim)
- if isinstance(i_splits, torch._C.Value):
- i_splits = [i_splits]
- input_sizes[dim], input_sizes_temp[dim] = -1, 1
- for idx, r_split in enumerate(r_splits):
- i_split = unsqueeze(g, i_splits[idx], dim + 1)
- r_concat = [
- g.op("Constant", value_t=torch.LongTensor(input_sizes_temp[: dim + 1])),
- r_split,
- g.op("Constant", value_t=torch.LongTensor(input_sizes_temp[dim + 1 :])),
- ]
- r_concat = g.op("Concat", *r_concat, axis_i=0)
- i_split = expand(g, i_split, r_concat, None)
- i_split = sym_help._reshape_helper(
- g,
- i_split,
- g.op("Constant", value_t=torch.LongTensor(input_sizes)),
- allowzero=0,
- )
- final_splits.append(i_split)
- return g.op("Concat", *final_splits, axis_i=dim)
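`reload_script_model` above amounts to a TorchScript save/load round trip through an in-memory buffer. A minimal, self-contained sketch of that pattern (the `TinyModel` module is hypothetical, used only to exercise the round trip):

```python
# Minimal sketch, assuming only PyTorch; TinyModel is a made-up scriptable module.
import io

import torch
from torch import nn

class TinyModel(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.relu() + 1.0

scripted = torch.jit.script(TinyModel())

buffer = io.BytesIO()
torch.jit.save(scripted, buffer)   # serialize the scripted module to the in-memory buffer
buffer.seek(0)
reloaded = torch.jit.load(buffer)  # load it back, as reload_script_model does

x = torch.randn(4)
assert torch.allclose(scripted(x), reloaded(x))
```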
diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/viewer.py b/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/viewer.py
deleted file mode 100644
index d2326c38205c6eaddb4f567e3b088329187af258..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/viewer.py
+++ /dev/null
@@ -1,1160 +0,0 @@
-"""A pyglet-based interactive 3D scene viewer.
-"""
-import copy
-import os
-import sys
-from threading import Thread, RLock
-import time
-
-import imageio
-import numpy as np
-import OpenGL
-import trimesh
-
-try:
- from Tkinter import Tk, tkFileDialog as filedialog
-except Exception:
- try:
- from tkinter import Tk, filedialog as filedialog
- except Exception:
- pass
-
-from .constants import (TARGET_OPEN_GL_MAJOR, TARGET_OPEN_GL_MINOR,
- MIN_OPEN_GL_MAJOR, MIN_OPEN_GL_MINOR,
- TEXT_PADDING, DEFAULT_SCENE_SCALE,
- DEFAULT_Z_FAR, DEFAULT_Z_NEAR, RenderFlags, TextAlign)
-from .light import DirectionalLight
-from .node import Node
-from .camera import PerspectiveCamera, OrthographicCamera, IntrinsicsCamera
-from .trackball import Trackball
-from .renderer import Renderer
-from .mesh import Mesh
-
-import pyglet
-from pyglet import clock
-pyglet.options['shadow_window'] = False
-
-
-class Viewer(pyglet.window.Window):
- """An interactive viewer for 3D scenes.
-
- The viewer's camera is separate from the scene's, but will take on
- the parameters of the scene's main view camera and start in the same pose.
- If the scene does not have a camera, a suitable default will be provided.
-
- Parameters
- ----------
- scene : :class:`Scene`
- The scene to visualize.
- viewport_size : (2,) int
- The width and height of the initial viewing window.
- render_flags : dict
- A set of flags for rendering the scene. Described in the note below.
- viewer_flags : dict
- A set of flags for controlling the viewer's behavior.
- Described in the note below.
- registered_keys : dict
- A map from ASCII key characters to tuples containing:
-
- - A function to be called whenever the key is pressed,
- whose first argument will be the viewer itself.
- - (Optionally) A list of additional positional arguments
- to be passed to the function.
- - (Optionally) A dict of keyword arguments to be passed
- to the function.
-
- kwargs : dict
- Any keyword arguments left over will be interpreted as belonging to
- either the :attr:`.Viewer.render_flags` or :attr:`.Viewer.viewer_flags`
- dictionaries. Those flag sets will be updated appropriately.
-
- Note
- ----
- The basic commands for moving about the scene are given as follows:
-
- - **Rotating about the scene**: Hold the left mouse button and
- drag the cursor.
- - **Rotating about the view axis**: Hold ``CTRL`` and the left mouse
- button and drag the cursor.
- - **Panning**:
-
- - Hold SHIFT, then hold the left mouse button and drag the cursor, or
- - Hold the middle mouse button and drag the cursor.
-
- - **Zooming**:
-
- - Scroll the mouse wheel, or
- - Hold the right mouse button and drag the cursor.
-
- Other keyboard commands are as follows:
-
- - ``a``: Toggles rotational animation mode.
- - ``c``: Toggles backface culling.
- - ``f``: Toggles fullscreen mode.
- - ``h``: Toggles shadow rendering.
- - ``i``: Toggles axis display mode
- (no axes, world axis, mesh axes, all axes).
- - ``l``: Toggles lighting mode
- (scene lighting, Raymond lighting, or direct lighting).
- - ``m``: Toggles face normal visualization.
- - ``n``: Toggles vertex normal visualization.
- - ``o``: Toggles orthographic mode.
- - ``q``: Quits the viewer.
- - ``r``: Starts recording a GIF, and pressing again stops recording
- and opens a file dialog.
- - ``s``: Opens a file dialog to save the current view as an image.
- - ``w``: Toggles wireframe mode
- (scene default, flip wireframes, all wireframe, or all solid).
- - ``z``: Resets the camera to the initial view.
-
- Note
- ----
- The valid keys for ``render_flags`` are as follows:
-
- - ``flip_wireframe``: `bool`, If `True`, all objects will have their
- wireframe modes flipped from what their material indicates.
- Defaults to `False`.
- - ``all_wireframe``: `bool`, If `True`, all objects will be rendered
- in wireframe mode. Defaults to `False`.
- - ``all_solid``: `bool`, If `True`, all objects will be rendered in
- solid mode. Defaults to `False`.
- - ``shadows``: `bool`, If `True`, shadows will be rendered.
- Defaults to `False`.
- - ``vertex_normals``: `bool`, If `True`, vertex normals will be
- rendered as blue lines. Defaults to `False`.
- - ``face_normals``: `bool`, If `True`, face normals will be rendered as
- blue lines. Defaults to `False`.
- - ``cull_faces``: `bool`, If `True`, backfaces will be culled.
- Defaults to `True`.
- - ``point_size`` : float, The point size in pixels. Defaults to 1px.
-
- Note
- ----
- The valid keys for ``viewer_flags`` are as follows:
-
- - ``rotate``: `bool`, If `True`, the scene's camera will rotate
- about an axis. Defaults to `False`.
- - ``rotate_rate``: `float`, The rate of rotation in radians per second.
- Defaults to `PI / 3.0`.
- - ``rotate_axis``: `(3,) float`, The axis in world coordinates to rotate
- about. Defaults to ``[0,0,1]``.
- - ``view_center``: `(3,) float`, The position to rotate the scene about.
- Defaults to the scene's centroid.
- - ``use_raymond_lighting``: `bool`, If `True`, an additional set of three
- directional lights that move with the camera will be added to the scene.
- Defaults to `False`.
- - ``use_direct_lighting``: `bool`, If `True`, an additional directional
- light that moves with the camera and points out of it will be added to
- the scene. Defaults to `False`.
- - ``lighting_intensity``: `float`, The overall intensity of the
- viewer's additional lights (when they're in use). Defaults to 3.0.
- - ``use_perspective_cam``: `bool`, If `True`, a perspective camera will
- be used. Otherwise, an orthographic camera is used. Defaults to `True`.
- - ``save_directory``: `str`, A directory to open the file dialogs in.
- Defaults to `None`.
- - ``window_title``: `str`, A title for the viewer's application window.
- Defaults to `"Scene Viewer"`.
- - ``refresh_rate``: `float`, A refresh rate for rendering, in Hertz.
- Defaults to `30.0`.
- - ``fullscreen``: `bool`, Whether to make viewer fullscreen.
- Defaults to `False`.
- - ``show_world_axis``: `bool`, Whether to show the world axis.
- Defaults to `False`.
- - ``show_mesh_axes``: `bool`, Whether to show the individual mesh axes.
- Defaults to `False`.
- - ``caption``: `list of dict`, Text caption(s) to display on the viewer.
- Defaults to `None`.
-
- Note
- ----
- Animation can be accomplished by running the viewer with ``run_in_thread``
- enabled. Then, just run a loop in your main thread, updating the scene as
- needed. Before updating the scene, be sure to acquire the
- :attr:`.Viewer.render_lock`, and release it when your update is done.
- """
-
- def __init__(self, scene, viewport_size=None,
- render_flags=None, viewer_flags=None,
- registered_keys=None, run_in_thread=False,
- auto_start=True,
- **kwargs):
-
- #######################################################################
- # Save attributes and flags
- #######################################################################
- if viewport_size is None:
- viewport_size = (640, 480)
- self._scene = scene
- self._viewport_size = viewport_size
- self._render_lock = RLock()
- self._is_active = False
- self._should_close = False
- self._run_in_thread = run_in_thread
- self._auto_start = auto_start
-
- self._default_render_flags = {
- 'flip_wireframe': False,
- 'all_wireframe': False,
- 'all_solid': False,
- 'shadows': False,
- 'vertex_normals': False,
- 'face_normals': False,
- 'cull_faces': True,
- 'point_size': 1.0,
- }
- self._default_viewer_flags = {
- 'mouse_pressed': False,
- 'rotate': False,
- 'rotate_rate': np.pi / 3.0,
- 'rotate_axis': np.array([0.0, 0.0, 1.0]),
- 'view_center': None,
- 'record': False,
- 'use_raymond_lighting': False,
- 'use_direct_lighting': False,
- 'lighting_intensity': 3.0,
- 'use_perspective_cam': True,
- 'save_directory': None,
- 'window_title': 'Scene Viewer',
- 'refresh_rate': 30.0,
- 'fullscreen': False,
- 'show_world_axis': False,
- 'show_mesh_axes': False,
- 'caption': None
- }
- self._render_flags = self._default_render_flags.copy()
- self._viewer_flags = self._default_viewer_flags.copy()
- self._viewer_flags['rotate_axis'] = (
- self._default_viewer_flags['rotate_axis'].copy()
- )
-
- if render_flags is not None:
- self._render_flags.update(render_flags)
- if viewer_flags is not None:
- self._viewer_flags.update(viewer_flags)
-
- for key in kwargs:
- if key in self.render_flags:
- self._render_flags[key] = kwargs[key]
- elif key in self.viewer_flags:
- self._viewer_flags[key] = kwargs[key]
-
- # TODO MAC OS BUG FOR SHADOWS
- if sys.platform == 'darwin':
- self._render_flags['shadows'] = False
-
- self._registered_keys = {}
- if registered_keys is not None:
- self._registered_keys = {
- ord(k.lower()): registered_keys[k] for k in registered_keys
- }
-
- #######################################################################
- # Save internal settings
- #######################################################################
-
- # Set up caption stuff
- self._message_text = None
- self._ticks_till_fade = 2.0 / 3.0 * self.viewer_flags['refresh_rate']
- self._message_opac = 1.0 + self._ticks_till_fade
-
- # Set up raymond lights and direct lights
- self._raymond_lights = self._create_raymond_lights()
- self._direct_light = self._create_direct_light()
-
- # Set up axes
- self._axes = {}
- self._axis_mesh = Mesh.from_trimesh(
- trimesh.creation.axis(origin_size=0.1, axis_radius=0.05,
- axis_length=1.0), smooth=False)
- if self.viewer_flags['show_world_axis']:
- self._set_axes(world=self.viewer_flags['show_world_axis'],
- mesh=self.viewer_flags['show_mesh_axes'])
-
- #######################################################################
- # Set up camera node
- #######################################################################
- self._camera_node = None
- self._prior_main_camera_node = None
- self._default_camera_pose = None
- self._default_persp_cam = None
- self._default_orth_cam = None
- self._trackball = None
- self._saved_frames = []
-
- # Extract main camera from scene and set up our mirrored copy
- znear = None
- zfar = None
- if scene.main_camera_node is not None:
- n = scene.main_camera_node
- camera = copy.copy(n.camera)
- if isinstance(camera, (PerspectiveCamera, IntrinsicsCamera)):
- self._default_persp_cam = camera
- znear = camera.znear
- zfar = camera.zfar
- elif isinstance(camera, OrthographicCamera):
- self._default_orth_cam = camera
- znear = camera.znear
- zfar = camera.zfar
- self._default_camera_pose = scene.get_pose(scene.main_camera_node)
- self._prior_main_camera_node = n
-
- # Set defaults as needed
- if zfar is None:
- zfar = max(scene.scale * 10.0, DEFAULT_Z_FAR)
- if znear is None or znear == 0:
- if scene.scale == 0:
- znear = DEFAULT_Z_NEAR
- else:
- znear = min(scene.scale / 10.0, DEFAULT_Z_NEAR)
-
- if self._default_persp_cam is None:
- self._default_persp_cam = PerspectiveCamera(
- yfov=np.pi / 3.0, znear=znear, zfar=zfar
- )
- if self._default_orth_cam is None:
- xmag = ymag = scene.scale
- if scene.scale == 0:
- xmag = ymag = 1.0
- self._default_orth_cam = OrthographicCamera(
- xmag=xmag, ymag=ymag,
- znear=znear,
- zfar=zfar
- )
- if self._default_camera_pose is None:
- self._default_camera_pose = self._compute_initial_camera_pose()
-
- # Pick camera
- if self.viewer_flags['use_perspective_cam']:
- camera = self._default_persp_cam
- else:
- camera = self._default_orth_cam
-
- self._camera_node = Node(
- matrix=self._default_camera_pose, camera=camera
- )
- scene.add_node(self._camera_node)
- scene.main_camera_node = self._camera_node
- self._reset_view()
-
- #######################################################################
- # Initialize OpenGL context and renderer
- #######################################################################
- self._renderer = Renderer(
- self._viewport_size[0], self._viewport_size[1],
- self.render_flags['point_size']
- )
- self._is_active = True
-
- if self.run_in_thread:
- self._thread = Thread(target=self._init_and_start_app)
- self._thread.start()
- else:
- if auto_start:
- self._init_and_start_app()
-
- def start(self):
- self._init_and_start_app()
-
- @property
- def scene(self):
- """:class:`.Scene` : The scene being visualized.
- """
- return self._scene
-
- @property
- def viewport_size(self):
- """(2,) int : The width and height of the viewing window.
- """
- return self._viewport_size
-
- @property
- def render_lock(self):
- """:class:`threading.RLock` : If acquired, prevents the viewer from
- rendering until released.
-
- Run :meth:`.Viewer.render_lock.acquire` before making updates to
- the scene in a different thread, and run
- :meth:`.Viewer.render_lock.release` once you're done to let the viewer
- continue.
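-
- Examples
- --------
- An illustrative sketch, not part of the original docstring; ``scene``
- and the trimesh mesh ``tm`` are assumed to exist already:
-
- >>> v = Viewer(scene, run_in_thread=True)
- >>> v.render_lock.acquire()
- >>> scene.add(Mesh.from_trimesh(tm))
- >>> v.render_lock.release()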
- """
- return self._render_lock
-
- @property
- def is_active(self):
- """bool : `True` if the viewer is active, or `False` if it has
- been closed.
- """
- return self._is_active
-
- @property
- def run_in_thread(self):
- """bool : Whether the viewer was run in a separate thread.
- """
- return self._run_in_thread
-
- @property
- def render_flags(self):
- """dict : Flags for controlling the renderer's behavior.
-
- - ``flip_wireframe``: `bool`, If `True`, all objects will have their
- wireframe modes flipped from what their material indicates.
- Defaults to `False`.
- - ``all_wireframe``: `bool`, If `True`, all objects will be rendered
- in wireframe mode. Defaults to `False`.
- - ``all_solid``: `bool`, If `True`, all objects will be rendered in
- solid mode. Defaults to `False`.
- - ``shadows``: `bool`, If `True`, shadows will be rendered.
- Defaults to `False`.
- - ``vertex_normals``: `bool`, If `True`, vertex normals will be
- rendered as blue lines. Defaults to `False`.
- - ``face_normals``: `bool`, If `True`, face normals will be rendered as
- blue lines. Defaults to `False`.
- - ``cull_faces``: `bool`, If `True`, backfaces will be culled.
- Defaults to `True`.
- - ``point_size`` : float, The point size in pixels. Defaults to 1px.
-
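- Examples
- --------
- An illustrative sketch, not part of the original docstring; an
- existing ``scene`` is assumed:
-
- >>> v = Viewer(scene, run_in_thread=True, render_flags={'cull_faces': False})
- >>> v.render_lock.acquire()
- >>> v.render_flags['all_wireframe'] = True
- >>> v.render_lock.release()
-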
- """
- return self._render_flags
-
- @render_flags.setter
- def render_flags(self, value):
- self._render_flags = value
-
- @property
- def viewer_flags(self):
- """dict : Flags for controlling the viewer's behavior.
-
- The valid keys for ``viewer_flags`` are as follows:
-
- - ``rotate``: `bool`, If `True`, the scene's camera will rotate
- about an axis. Defaults to `False`.
- - ``rotate_rate``: `float`, The rate of rotation in radians per second.
- Defaults to `PI / 3.0`.
- - ``rotate_axis``: `(3,) float`, The axis in world coordinates to
- rotate about. Defaults to ``[0,0,1]``.
- - ``view_center``: `(3,) float`, The position to rotate the scene
- about. Defaults to the scene's centroid.
- - ``use_raymond_lighting``: `bool`, If `True`, an additional set of
- three directional lights that move with the camera will be added to
- the scene. Defaults to `False`.
- - ``use_direct_lighting``: `bool`, If `True`, an additional directional
- light that moves with the camera and points out of it will be
- added to the scene. Defaults to `False`.
- - ``lighting_intensity``: `float`, The overall intensity of the
- viewer's additional lights (when they're in use). Defaults to 3.0.
- - ``use_perspective_cam``: `bool`, If `True`, a perspective camera will
- be used. Otherwise, an orthographic camera is used. Defaults to
- `True`.
- - ``save_directory``: `str`, A directory to open the file dialogs in.
- Defaults to `None`.
- - ``window_title``: `str`, A title for the viewer's application window.
- Defaults to `"Scene Viewer"`.
- - ``refresh_rate``: `float`, A refresh rate for rendering, in Hertz.
- Defaults to `30.0`.
- - ``fullscreen``: `bool`, Whether to make viewer fullscreen.
- Defaults to `False`.
- - ``show_world_axis``: `bool`, Whether to show the world axis.
- Defaults to `False`.
- - ``show_mesh_axes``: `bool`, Whether to show the individual mesh axes.
- Defaults to `False`.
- - ``caption``: `list of dict`, Text caption(s) to display on
- the viewer. Defaults to `None`.
-
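- Examples
- --------
- An illustrative sketch, not part of the original docstring; an
- existing ``scene`` is assumed:
-
- >>> Viewer(scene, viewer_flags={'rotate': True, 'show_world_axis': True})
-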
- """
- return self._viewer_flags
-
- @viewer_flags.setter
- def viewer_flags(self, value):
- self._viewer_flags = value
-
- @property
- def registered_keys(self):
- """dict : Map from ASCII key character to a handler function.
-
- This is a map from ASCII key characters to tuples containing:
-
- - A function to be called whenever the key is pressed,
- whose first argument will be the viewer itself.
- - (Optionally) A list of additional positional arguments
- to be passed to the function.
- - (Optionally) A dict of keyword arguments to be passed
- to the function.
-
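- Examples
- --------
- An illustrative sketch, not part of the original docstring; the
- callback name and message below are made up:
-
- >>> def print_msg(viewer, msg):
- ... print(msg)
- >>> Viewer(scene, registered_keys={'j': (print_msg, ['hello'])})
-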
- """
- return self._registered_keys
-
- @registered_keys.setter
- def registered_keys(self, value):
- self._registered_keys = value
-
- def close_external(self):
- """Close the viewer from another thread.
-
- This function will wait for the actual close, so you can immediately
- manipulate the scene afterwards.
- """
- self._should_close = True
- while self.is_active:
- time.sleep(1.0 / self.viewer_flags['refresh_rate'])
-
- def save_gif(self, filename=None):
- """Save the stored GIF frames to a file.
-
- To use this asynchronously, run the viewer with both the ``record``
- and ``run_in_thread`` flags set.
- Kill the viewer after your desired time with
- :meth:`.Viewer.close_external`, and then call :meth:`.Viewer.save_gif`.
-
- Parameters
- ----------
- filename : str
- The file to save the GIF to. If not specified,
- a file dialog will be opened to ask the user where
- to save the GIF file.
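-
- Examples
- --------
- An illustrative sketch, not part of the original docstring; the
- output filename is made up:
-
- >>> v = Viewer(scene, run_in_thread=True, record=True)
- >>> # ... let it render for a while, then stop and save ...
- >>> v.close_external()
- >>> v.save_gif('turntable.gif')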
- """
- if filename is None:
- filename = self._get_save_filename(['gif', 'all'])
- if filename is not None:
- self.viewer_flags['save_directory'] = os.path.dirname(filename)
- imageio.mimwrite(filename, self._saved_frames,
- fps=self.viewer_flags['refresh_rate'],
- palettesize=128, subrectangles=True)
- self._saved_frames = []
-
- def on_close(self):
- """Exit the event loop when the window is closed.
- """
- # Remove our camera and restore the prior one
- if self._camera_node is not None:
- self.scene.remove_node(self._camera_node)
- if self._prior_main_camera_node is not None:
- self.scene.main_camera_node = self._prior_main_camera_node
-
- # Delete any lighting nodes that we've attached
- if self.viewer_flags['use_raymond_lighting']:
- for n in self._raymond_lights:
- if self.scene.has_node(n):
- self.scene.remove_node(n)
- if self.viewer_flags['use_direct_lighting']:
- if self.scene.has_node(self._direct_light):
- self.scene.remove_node(self._direct_light)
-
- # Delete any axis nodes that we've attached
- self._remove_axes()
-
- # Delete renderer
- if self._renderer is not None:
- self._renderer.delete()
- self._renderer = None
-
- # Force clean-up of OpenGL context data
- try:
- OpenGL.contextdata.cleanupContext()
- self.close()
- except Exception:
- pass
- finally:
- self._is_active = False
- super(Viewer, self).on_close()
- pyglet.app.exit()
-
- def on_draw(self):
- """Redraw the scene into the viewing window.
- """
- if self._renderer is None:
- return
-
- if self.run_in_thread or not self._auto_start:
- self.render_lock.acquire()
-
- # Make OpenGL context current
- self.switch_to()
-
- # Render the scene
- self.clear()
- self._render()
-
- if self._message_text is not None:
- self._renderer.render_text(
- self._message_text,
- self.viewport_size[0] - TEXT_PADDING,
- TEXT_PADDING,
- font_pt=20,
- color=np.array([0.1, 0.7, 0.2,
- np.clip(self._message_opac, 0.0, 1.0)]),
- align=TextAlign.BOTTOM_RIGHT
- )
-
- if self.viewer_flags['caption'] is not None:
- for caption in self.viewer_flags['caption']:
- xpos, ypos = self._location_to_x_y(caption['location'])
- self._renderer.render_text(
- caption['text'],
- xpos,
- ypos,
- font_name=caption['font_name'],
- font_pt=caption['font_pt'],
- color=caption['color'],
- scale=caption['scale'],
- align=caption['location']
- )
-
- if self.run_in_thread or not self._auto_start:
- self.render_lock.release()
-
- def on_resize(self, width, height):
- """Resize the camera and trackball when the window is resized.
- """
- if self._renderer is None:
- return
-
- self._viewport_size = (width, height)
- self._trackball.resize(self._viewport_size)
- self._renderer.viewport_width = self._viewport_size[0]
- self._renderer.viewport_height = self._viewport_size[1]
- self.on_draw()
-
- def on_mouse_press(self, x, y, buttons, modifiers):
- """Record an initial mouse press.
- """
- self._trackball.set_state(Trackball.STATE_ROTATE)
- if (buttons == pyglet.window.mouse.LEFT):
- ctrl = (modifiers & pyglet.window.key.MOD_CTRL)
- shift = (modifiers & pyglet.window.key.MOD_SHIFT)
- if (ctrl and shift):
- self._trackball.set_state(Trackball.STATE_ZOOM)
- elif ctrl:
- self._trackball.set_state(Trackball.STATE_ROLL)
- elif shift:
- self._trackball.set_state(Trackball.STATE_PAN)
- elif (buttons == pyglet.window.mouse.MIDDLE):
- self._trackball.set_state(Trackball.STATE_PAN)
- elif (buttons == pyglet.window.mouse.RIGHT):
- self._trackball.set_state(Trackball.STATE_ZOOM)
-
- self._trackball.down(np.array([x, y]))
-
- # Stop animating while using the mouse
- self.viewer_flags['mouse_pressed'] = True
-
- def on_mouse_drag(self, x, y, dx, dy, buttons, modifiers):
- """Record a mouse drag.
- """
- self._trackball.drag(np.array([x, y]))
-
- def on_mouse_release(self, x, y, button, modifiers):
- """Record a mouse release.
- """
- self.viewer_flags['mouse_pressed'] = False
-
- def on_mouse_scroll(self, x, y, dx, dy):
- """Record a mouse scroll.
- """
- if self.viewer_flags['use_perspective_cam']:
- self._trackball.scroll(dy)
- else:
- spfc = 0.95
- spbc = 1.0 / 0.95
- sf = 1.0
- if dy > 0:
- sf = spfc * dy
- elif dy < 0:
- sf = - spbc * dy
-
- c = self._camera_node.camera
- xmag = max(c.xmag * sf, 1e-8)
- ymag = max(c.ymag * sf, 1e-8 * c.ymag / c.xmag)
- c.xmag = xmag
- c.ymag = ymag
-
- def on_key_press(self, symbol, modifiers):
- """Record a key press.
- """
- # First, check for registered key callbacks
- if symbol in self.registered_keys:
- tup = self.registered_keys[symbol]
- callback = None
- args = []
- kwargs = {}
- if not isinstance(tup, (list, tuple, np.ndarray)):
- callback = tup
- else:
- callback = tup[0]
- if len(tup) == 2:
- args = tup[1]
- if len(tup) == 3:
- kwargs = tup[2]
- callback(self, *args, **kwargs)
- return
-
- # Otherwise, use default key functions
-
- # A toggles scene rotation
- self._message_text = None
- if symbol == pyglet.window.key.A:
- self.viewer_flags['rotate'] = not self.viewer_flags['rotate']
- if self.viewer_flags['rotate']:
- self._message_text = 'Rotation On'
- else:
- self._message_text = 'Rotation Off'
-
- # C toggles backface culling
- elif symbol == pyglet.window.key.C:
- self.render_flags['cull_faces'] = (
- not self.render_flags['cull_faces']
- )
- if self.render_flags['cull_faces']:
- self._message_text = 'Cull Faces On'
- else:
- self._message_text = 'Cull Faces Off'
-
- # F toggles fullscreen mode
- elif symbol == pyglet.window.key.F:
- self.viewer_flags['fullscreen'] = (
- not self.viewer_flags['fullscreen']
- )
- self.set_fullscreen(self.viewer_flags['fullscreen'])
- self.activate()
- if self.viewer_flags['fullscreen']:
- self._message_text = 'Fullscreen On'
- else:
- self._message_text = 'Fullscreen Off'
-
- # H toggles shadows (disabled on macOS)
- elif symbol == pyglet.window.key.H and sys.platform != 'darwin':
- self.render_flags['shadows'] = not self.render_flags['shadows']
- if self.render_flags['shadows']:
- self._message_text = 'Shadows On'
- else:
- self._message_text = 'Shadows Off'
-
- elif symbol == pyglet.window.key.I:
- if (self.viewer_flags['show_world_axis'] and not
- self.viewer_flags['show_mesh_axes']):
- self.viewer_flags['show_world_axis'] = False
- self.viewer_flags['show_mesh_axes'] = True
- self._set_axes(False, True)
- self._message_text = 'Mesh Axes On'
- elif (not self.viewer_flags['show_world_axis'] and
- self.viewer_flags['show_mesh_axes']):
- self.viewer_flags['show_world_axis'] = True
- self.viewer_flags['show_mesh_axes'] = True
- self._set_axes(True, True)
- self._message_text = 'All Axes On'
- elif (self.viewer_flags['show_world_axis'] and
- self.viewer_flags['show_mesh_axes']):
- self.viewer_flags['show_world_axis'] = False
- self.viewer_flags['show_mesh_axes'] = False
- self._set_axes(False, False)
- self._message_text = 'All Axes Off'
- else:
- self.viewer_flags['show_world_axis'] = True
- self.viewer_flags['show_mesh_axes'] = False
- self._set_axes(True, False)
- self._message_text = 'World Axis On'
-
- # L toggles the lighting mode
- elif symbol == pyglet.window.key.L:
- if self.viewer_flags['use_raymond_lighting']:
- self.viewer_flags['use_raymond_lighting'] = False
- self.viewer_flags['use_direct_lighting'] = True
- self._message_text = 'Direct Lighting'
- elif self.viewer_flags['use_direct_lighting']:
- self.viewer_flags['use_raymond_lighting'] = False
- self.viewer_flags['use_direct_lighting'] = False
- self._message_text = 'Default Lighting'
- else:
- self.viewer_flags['use_raymond_lighting'] = True
- self.viewer_flags['use_direct_lighting'] = False
- self._message_text = 'Raymond Lighting'
-
- # M toggles face normals
- elif symbol == pyglet.window.key.M:
- self.render_flags['face_normals'] = (
- not self.render_flags['face_normals']
- )
- if self.render_flags['face_normals']:
- self._message_text = 'Face Normals On'
- else:
- self._message_text = 'Face Normals Off'
-
- # N toggles vertex normals
- elif symbol == pyglet.window.key.N:
- self.render_flags['vertex_normals'] = (
- not self.render_flags['vertex_normals']
- )
- if self.render_flags['vertex_normals']:
- self._message_text = 'Vert Normals On'
- else:
- self._message_text = 'Vert Normals Off'
-
- # O toggles orthographic camera mode
- elif symbol == pyglet.window.key.O:
- self.viewer_flags['use_perspective_cam'] = (
- not self.viewer_flags['use_perspective_cam']
- )
- if self.viewer_flags['use_perspective_cam']:
- camera = self._default_persp_cam
- self._message_text = 'Perspective View'
- else:
- camera = self._default_orth_cam
- self._message_text = 'Orthographic View'
-
- cam_pose = self._camera_node.matrix.copy()
- cam_node = Node(matrix=cam_pose, camera=camera)
- self.scene.remove_node(self._camera_node)
- self.scene.add_node(cam_node)
- self.scene.main_camera_node = cam_node
- self._camera_node = cam_node
-
- # Q quits the viewer
- elif symbol == pyglet.window.key.Q:
- self.on_close()
-
- # R toggles recording of GIF frames
- elif symbol == pyglet.window.key.R:
- if self.viewer_flags['record']:
- self.save_gif()
- self.set_caption(self.viewer_flags['window_title'])
- else:
- self.set_caption(
- '{} (RECORDING)'.format(self.viewer_flags['window_title'])
- )
- self.viewer_flags['record'] = not self.viewer_flags['record']
-
- # S saves the current frame as an image
- elif symbol == pyglet.window.key.S:
- self._save_image()
-
- # W toggles through wireframe modes
- elif symbol == pyglet.window.key.W:
- if self.render_flags['flip_wireframe']:
- self.render_flags['flip_wireframe'] = False
- self.render_flags['all_wireframe'] = True
- self.render_flags['all_solid'] = False
- self._message_text = 'All Wireframe'
- elif self.render_flags['all_wireframe']:
- self.render_flags['flip_wireframe'] = False
- self.render_flags['all_wireframe'] = False
- self.render_flags['all_solid'] = True
- self._message_text = 'All Solid'
- elif self.render_flags['all_solid']:
- self.render_flags['flip_wireframe'] = False
- self.render_flags['all_wireframe'] = False
- self.render_flags['all_solid'] = False
- self._message_text = 'Default Wireframe'
- else:
- self.render_flags['flip_wireframe'] = True
- self.render_flags['all_wireframe'] = False
- self.render_flags['all_solid'] = False
- self._message_text = 'Flip Wireframe'
-
- # Z resets the camera viewpoint
- elif symbol == pyglet.window.key.Z:
- self._reset_view()
-
- if self._message_text is not None:
- self._message_opac = 1.0 + self._ticks_till_fade
-
- @staticmethod
- def _time_event(dt, self):
- """The timer callback.
- """
- # Don't run old dead events after we've already closed
- if not self._is_active:
- return
-
- if self.viewer_flags['record']:
- self._record()
- if (self.viewer_flags['rotate'] and not
- self.viewer_flags['mouse_pressed']):
- self._rotate()
-
- # Manage message opacity
- if self._message_text is not None:
- if self._message_opac > 1.0:
- self._message_opac -= 1.0
- else:
- self._message_opac *= 0.90
- if self._message_opac < 0.05:
- self._message_opac = 1.0 + self._ticks_till_fade
- self._message_text = None
-
- if self._should_close:
- self.on_close()
- else:
- self.on_draw()
-
- def _reset_view(self):
- """Reset the view to a good initial state.
-
- The view is initially along the positive x-axis at a
- sufficient distance from the scene.
- """
- scale = self.scene.scale
- if scale == 0.0:
- scale = DEFAULT_SCENE_SCALE
- centroid = self.scene.centroid
-
- if self.viewer_flags['view_center'] is not None:
- centroid = self.viewer_flags['view_center']
-
- self._camera_node.matrix = self._default_camera_pose
- self._trackball = Trackball(
- self._default_camera_pose, self.viewport_size, scale, centroid
- )
-
- def _get_save_filename(self, file_exts):
- file_types = {
- 'png': ('png files', '*.png'),
- 'jpg': ('jpeg files', '*.jpg'),
- 'gif': ('gif files', '*.gif'),
- 'all': ('all files', '*'),
- }
- filetypes = [file_types[x] for x in file_exts]
- try:
- root = Tk()
- save_dir = self.viewer_flags['save_directory']
- if save_dir is None:
- save_dir = os.getcwd()
- filename = filedialog.asksaveasfilename(
- initialdir=save_dir, title='Select file save location',
- filetypes=filetypes
- )
- except Exception:
- return None
-
- root.destroy()
- if filename == ():
- return None
- return filename
-
- def _save_image(self):
- filename = self._get_save_filename(['png', 'jpg', 'gif', 'all'])
- if filename is not None:
- self.viewer_flags['save_directory'] = os.path.dirname(filename)
- imageio.imwrite(filename, self._renderer.read_color_buf())
-
- def _record(self):
- """Save another frame for the GIF.
- """
- data = self._renderer.read_color_buf()
- if not np.all(data == 0.0):
- self._saved_frames.append(data)
-
- def _rotate(self):
- """Animate the scene by rotating the camera.
- """
- az = (self.viewer_flags['rotate_rate'] /
- self.viewer_flags['refresh_rate'])
- self._trackball.rotate(az, self.viewer_flags['rotate_axis'])
-
- def _render(self):
- """Render the scene into the framebuffer and flip.
- """
- scene = self.scene
- self._camera_node.matrix = self._trackball.pose.copy()
-
- # Set lighting
- vli = self.viewer_flags['lighting_intensity']
- if self.viewer_flags['use_raymond_lighting']:
- for n in self._raymond_lights:
- n.light.intensity = vli / 3.0
- if not self.scene.has_node(n):
- scene.add_node(n, parent_node=self._camera_node)
- else:
- self._direct_light.light.intensity = vli
- for n in self._raymond_lights:
- if self.scene.has_node(n):
- self.scene.remove_node(n)
-
- if self.viewer_flags['use_direct_lighting']:
- if not self.scene.has_node(self._direct_light):
- scene.add_node(
- self._direct_light, parent_node=self._camera_node
- )
- elif self.scene.has_node(self._direct_light):
- self.scene.remove_node(self._direct_light)
-
- flags = RenderFlags.NONE
- if self.render_flags['flip_wireframe']:
- flags |= RenderFlags.FLIP_WIREFRAME
- elif self.render_flags['all_wireframe']:
- flags |= RenderFlags.ALL_WIREFRAME
- elif self.render_flags['all_solid']:
- flags |= RenderFlags.ALL_SOLID
-
- if self.render_flags['shadows']:
- flags |= RenderFlags.SHADOWS_DIRECTIONAL | RenderFlags.SHADOWS_SPOT
- if self.render_flags['vertex_normals']:
- flags |= RenderFlags.VERTEX_NORMALS
- if self.render_flags['face_normals']:
- flags |= RenderFlags.FACE_NORMALS
- if not self.render_flags['cull_faces']:
- flags |= RenderFlags.SKIP_CULL_FACES
-
- self._renderer.render(self.scene, flags)
-
- def _init_and_start_app(self):
- # Try multiple configs, starting with the target OpenGL version and
- # multisampling, and drop these options one by one if creation fails.
- # Note: multisampling is not available on all hardware.
- from pyglet.gl import Config
- confs = [Config(sample_buffers=1, samples=4,
- depth_size=24,
- double_buffer=True,
- major_version=TARGET_OPEN_GL_MAJOR,
- minor_version=TARGET_OPEN_GL_MINOR),
- Config(depth_size=24,
- double_buffer=True,
- major_version=TARGET_OPEN_GL_MAJOR,
- minor_version=TARGET_OPEN_GL_MINOR),
- Config(sample_buffers=1, samples=4,
- depth_size=24,
- double_buffer=True,
- major_version=MIN_OPEN_GL_MAJOR,
- minor_version=MIN_OPEN_GL_MINOR),
- Config(depth_size=24,
- double_buffer=True,
- major_version=MIN_OPEN_GL_MAJOR,
- minor_version=MIN_OPEN_GL_MINOR)]
- for conf in confs:
- try:
- super(Viewer, self).__init__(config=conf, resizable=True,
- width=self._viewport_size[0],
- height=self._viewport_size[1])
- break
- except pyglet.window.NoSuchConfigException:
- pass
-
- if not self.context:
- raise ValueError('Unable to initialize an OpenGL 3+ context')
- clock.schedule_interval(
- Viewer._time_event, 1.0 / self.viewer_flags['refresh_rate'], self
- )
- self.switch_to()
- self.set_caption(self.viewer_flags['window_title'])
- pyglet.app.run()
-
- def _compute_initial_camera_pose(self):
- centroid = self.scene.centroid
- if self.viewer_flags['view_center'] is not None:
- centroid = self.viewer_flags['view_center']
- scale = self.scene.scale
- if scale == 0.0:
- scale = DEFAULT_SCENE_SCALE
-
- s2 = 1.0 / np.sqrt(2.0)
- cp = np.eye(4)
- cp[:3,:3] = np.array([
- [0.0, -s2, s2],
- [1.0, 0.0, 0.0],
- [0.0, s2, s2]
- ])
- hfov = np.pi / 6.0
- dist = scale / (2.0 * np.tan(hfov))
- cp[:3,3] = dist * np.array([1.0, 0.0, 1.0]) + centroid
-
- return cp
-
- def _create_raymond_lights(self):
- thetas = np.pi * np.array([1.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0])
- phis = np.pi * np.array([0.0, 2.0 / 3.0, 4.0 / 3.0])
-
- nodes = []
-
- for phi, theta in zip(phis, thetas):
- xp = np.sin(theta) * np.cos(phi)
- yp = np.sin(theta) * np.sin(phi)
- zp = np.cos(theta)
-
- z = np.array([xp, yp, zp])
- z = z / np.linalg.norm(z)
- x = np.array([-z[1], z[0], 0.0])
- if np.linalg.norm(x) == 0:
- x = np.array([1.0, 0.0, 0.0])
- x = x / np.linalg.norm(x)
- y = np.cross(z, x)
-
- matrix = np.eye(4)
- matrix[:3,:3] = np.c_[x,y,z]
- nodes.append(Node(
- light=DirectionalLight(color=np.ones(3), intensity=1.0),
- matrix=matrix
- ))
-
- return nodes
-
- def _create_direct_light(self):
- light = DirectionalLight(color=np.ones(3), intensity=1.0)
- n = Node(light=light, matrix=np.eye(4))
- return n
-
- def _set_axes(self, world, mesh):
- scale = self.scene.scale
- if world:
- if 'scene' not in self._axes:
- n = Node(mesh=self._axis_mesh, scale=np.ones(3) * scale * 0.3)
- self.scene.add_node(n)
- self._axes['scene'] = n
- else:
- if 'scene' in self._axes:
- self.scene.remove_node(self._axes['scene'])
- self._axes.pop('scene')
-
- if mesh:
- old_nodes = []
- existing_axes = set([self._axes[k] for k in self._axes])
- for node in self.scene.mesh_nodes:
- if node not in existing_axes:
- old_nodes.append(node)
-
- for node in old_nodes:
- if node in self._axes:
- continue
- n = Node(
- mesh=self._axis_mesh,
- scale=np.ones(3) * node.mesh.scale * 0.5
- )
- self.scene.add_node(n, parent_node=node)
- self._axes[node] = n
- else:
- to_remove = set()
- for main_node in self._axes:
- if main_node in self.scene.mesh_nodes:
- self.scene.remove_node(self._axes[main_node])
- to_remove.add(main_node)
- for main_node in to_remove:
- self._axes.pop(main_node)
-
- def _remove_axes(self):
- for main_node in self._axes:
- axis_node = self._axes[main_node]
- self.scene.remove_node(axis_node)
- self._axes = {}
-
- def _location_to_x_y(self, location):
- if location == TextAlign.CENTER:
- return (self.viewport_size[0] / 2.0, self.viewport_size[1] / 2.0)
- elif location == TextAlign.CENTER_LEFT:
- return (TEXT_PADDING, self.viewport_size[1] / 2.0)
- elif location == TextAlign.CENTER_RIGHT:
- return (self.viewport_size[0] - TEXT_PADDING,
- self.viewport_size[1] / 2.0)
- elif location == TextAlign.BOTTOM_LEFT:
- return (TEXT_PADDING, TEXT_PADDING)
- elif location == TextAlign.BOTTOM_RIGHT:
- return (self.viewport_size[0] - TEXT_PADDING, TEXT_PADDING)
- elif location == TextAlign.BOTTOM_CENTER:
- return (self.viewport_size[0] / 2.0, TEXT_PADDING)
- elif location == TextAlign.TOP_LEFT:
- return (TEXT_PADDING, self.viewport_size[1] - TEXT_PADDING)
- elif location == TextAlign.TOP_RIGHT:
- return (self.viewport_size[0] - TEXT_PADDING,
- self.viewport_size[1] - TEXT_PADDING)
- elif location == TextAlign.TOP_CENTER:
- return (self.viewport_size[0] / 2.0,
- self.viewport_size[1] - TEXT_PADDING)
-
-
-__all__ = ['Viewer']
diff --git a/spaces/caffeinum/VToonify/vtoonify_model.py b/spaces/caffeinum/VToonify/vtoonify_model.py
deleted file mode 100644
index 8a506c2da195acafa2e6a18b3ef0874a58b5b15f..0000000000000000000000000000000000000000
--- a/spaces/caffeinum/VToonify/vtoonify_model.py
+++ /dev/null
@@ -1,284 +0,0 @@
-from __future__ import annotations
-import gradio as gr
-import pathlib
-import sys
-sys.path.insert(0, 'vtoonify')
-
-from util import load_psp_standalone, get_video_crop_parameter, tensor2cv2
-import torch
-import torch.nn as nn
-import numpy as np
-import dlib
-import cv2
-from model.vtoonify import VToonify
-from model.bisenet.model import BiSeNet
-import torch.nn.functional as F
-from torchvision import transforms
-from model.encoder.align_all_parallel import align_face
-import gc
-import huggingface_hub
-import os
-
-MODEL_REPO = 'PKUWilliamYang/VToonify'
-
-class Model():
- def __init__(self, device):
- super().__init__()
-
- self.device = device
- self.style_types = {
- 'cartoon1': ['vtoonify_d_cartoon/vtoonify_s026_d0.5.pt', 26],
- 'cartoon1-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 26],
- 'cartoon2-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 64],
- 'cartoon3-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 153],
- 'cartoon4': ['vtoonify_d_cartoon/vtoonify_s299_d0.5.pt', 299],
- 'cartoon4-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 299],
- 'cartoon5-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 8],
- 'comic1-d': ['vtoonify_d_comic/vtoonify_s_d.pt', 28],
- 'comic2-d': ['vtoonify_d_comic/vtoonify_s_d.pt', 18],
- 'arcane1': ['vtoonify_d_arcane/vtoonify_s000_d0.5.pt', 0],
- 'arcane1-d': ['vtoonify_d_arcane/vtoonify_s_d.pt', 0],
- 'arcane2': ['vtoonify_d_arcane/vtoonify_s077_d0.5.pt', 77],
- 'arcane2-d': ['vtoonify_d_arcane/vtoonify_s_d.pt', 77],
- 'caricature1': ['vtoonify_d_caricature/vtoonify_s039_d0.5.pt', 39],
- 'caricature2': ['vtoonify_d_caricature/vtoonify_s068_d0.5.pt', 68],
- 'pixar': ['vtoonify_d_pixar/vtoonify_s052_d0.5.pt', 52],
- 'pixar-d': ['vtoonify_d_pixar/vtoonify_s_d.pt', 52],
- 'illustration1-d': ['vtoonify_d_illustration/vtoonify_s054_d_c.pt', 54],
- 'illustration2-d': ['vtoonify_d_illustration/vtoonify_s004_d_c.pt', 4],
- 'illustration3-d': ['vtoonify_d_illustration/vtoonify_s009_d_c.pt', 9],
- 'illustration4-d': ['vtoonify_d_illustration/vtoonify_s043_d_c.pt', 43],
- 'illustration5-d': ['vtoonify_d_illustration/vtoonify_s086_d_c.pt', 86],
- }
-
- self.landmarkpredictor = self._create_dlib_landmark_model()
- self.parsingpredictor = self._create_parsing_model()
- self.pspencoder = self._load_encoder()
- self.transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- self.vtoonify, self.exstyle = self._load_default_model()
- self.color_transfer = False
- self.style_name = 'cartoon1'
- self.video_limit_cpu = 100
- self.video_limit_gpu = 300
-
- @staticmethod
- def _create_dlib_landmark_model():
- return dlib.shape_predictor(huggingface_hub.hf_hub_download(MODEL_REPO,
- 'models/shape_predictor_68_face_landmarks.dat'))
-
- def _create_parsing_model(self):
- parsingpredictor = BiSeNet(n_classes=19)
- parsingpredictor.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO, 'models/faceparsing.pth'),
- map_location=lambda storage, loc: storage))
- parsingpredictor.to(self.device).eval()
- return parsingpredictor
-
- def _load_encoder(self) -> nn.Module:
- style_encoder_path = huggingface_hub.hf_hub_download(MODEL_REPO,'models/encoder.pt')
- return load_psp_standalone(style_encoder_path, self.device)
-
- def _load_default_model(self) -> tuple[torch.Tensor, str]:
- vtoonify = VToonify(backbone = 'dualstylegan')
- vtoonify.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO,
- 'models/vtoonify_d_cartoon/vtoonify_s026_d0.5.pt'),
- map_location=lambda storage, loc: storage)['g_ema'])
- vtoonify.to(self.device)
- tmp = np.load(huggingface_hub.hf_hub_download(MODEL_REPO,'models/vtoonify_d_cartoon/exstyle_code.npy'), allow_pickle=True).item()
- exstyle = torch.tensor(tmp[list(tmp.keys())[26]]).to(self.device)
- with torch.no_grad():
- exstyle = vtoonify.zplus2wplus(exstyle)
- return vtoonify, exstyle
-
- def load_model(self, style_type: str) -> tuple[torch.Tensor, str]:
- if 'illustration' in style_type:
- self.color_transfer = True
- else:
- self.color_transfer = False
- if style_type not in self.style_types.keys():
- return None, 'Oops, wrong Style Type. Please select a valid model.'
- self.style_name = style_type
- model_path, ind = self.style_types[style_type]
- style_path = os.path.join('models',os.path.dirname(model_path),'exstyle_code.npy')
- self.vtoonify.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO,'models/'+model_path),
- map_location=lambda storage, loc: storage)['g_ema'])
- tmp = np.load(huggingface_hub.hf_hub_download(MODEL_REPO, style_path), allow_pickle=True).item()
- exstyle = torch.tensor(tmp[list(tmp.keys())[ind]]).to(self.device)
- with torch.no_grad():
- exstyle = self.vtoonify.zplus2wplus(exstyle)
- return exstyle, 'Model %s loaded.'%(style_type)
-
- def detect_and_align(self, frame, top, bottom, left, right, return_para=False):
- message = 'Error: no face detected! Please retry or change the photo.'
- paras = get_video_crop_parameter(frame, self.landmarkpredictor, [left, right, top, bottom])
- instyle = None
- h, w, scale = 0, 0, 0
- if paras is not None:
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR image, we apply gaussian blur to it to avoid over-sharp stylization results
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- with torch.no_grad():
- I = align_face(frame, self.landmarkpredictor)
- if I is not None:
- I = self.transform(I).unsqueeze(dim=0).to(self.device)
- instyle = self.pspencoder(I)
- instyle = self.vtoonify.zplus2wplus(instyle)
- message = 'Successfully rescaled the frame to (%d, %d)'%(bottom-top, right-left)
- else:
- frame = np.zeros((256,256,3), np.uint8)
- else:
- frame = np.zeros((256,256,3), np.uint8)
- if return_para:
- return frame, instyle, message, w, h, top, bottom, left, right, scale
- return frame, instyle, message
-
- #@torch.inference_mode()
- def detect_and_align_image(self, image: str, top: int, bottom: int, left: int, right: int
- ) -> tuple[np.ndarray, torch.Tensor, str]:
- if image is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: failed to load an empty file.'
- frame = cv2.imread(image)
- if frame is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: failed to load the image.'
- frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
- return self.detect_and_align(frame, top, bottom, left, right)
-
- def detect_and_align_video(self, video: str, top: int, bottom: int, left: int, right: int
- ) -> tuple[np.ndarray, torch.Tensor, str]:
- if video is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: failed to load an empty file.'
- video_cap = cv2.VideoCapture(video)
- if video_cap.get(7) == 0:
- video_cap.release()
- return np.zeros((256,256,3), np.uint8), torch.zeros(1,18,512).to(self.device), 'Error: failed to load the video.'
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- video_cap.release()
- return self.detect_and_align(frame, top, bottom, left, right)
-
- def detect_and_align_full_video(self, video: str, top: int, bottom: int, left: int, right: int) -> tuple[str, torch.Tensor, str]:
- message = 'Error: no face detected! Please retry or change the video.'
- instyle = None
- if video is None:
- return 'default.mp4', instyle, 'Error: failed to load an empty file.'
- video_cap = cv2.VideoCapture(video)
- if video_cap.get(7) == 0:
- video_cap.release()
- return 'default.mp4', instyle, 'Error: failed to load the video.'
- num = min(self.video_limit_gpu, int(video_cap.get(7)))
- if self.device == 'cpu':
- num = min(self.video_limit_cpu, num)
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- frame, instyle, message, w, h, top, bottom, left, right, scale = self.detect_and_align(frame, top, bottom, left, right, True)
- if instyle is None:
- return 'default.mp4', instyle, message
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- videoWriter = cv2.VideoWriter('input.mp4', fourcc, video_cap.get(5), (int(right-left), int(bottom-top)))
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
- for i in range(num-1):
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
-
- videoWriter.release()
- video_cap.release()
-
- return 'input.mp4', instyle, 'Successfully rescaled the video to (%d, %d)'%(bottom-top, right-left)
-
- def image_toonify(self, aligned_face: np.ndarray, instyle: torch.Tensor, exstyle: torch.Tensor, style_degree: float, style_type: str) -> tuple[np.ndarray, str]:
- #print(style_type + ' ' + self.style_name)
- if instyle is None or aligned_face is None:
- return np.zeros((256,256,3), np.uint8), 'Oops, something is wrong with the input. Please go to Step 2 and Rescale Image/First Frame again.'
- if self.style_name != style_type:
- exstyle, _ = self.load_model(style_type)
- if exstyle is None:
- return np.zeros((256,256,3), np.uint8), 'Oops, something is wrong with the style type. Please go to Step 1 and load the model again.'
- with torch.no_grad():
- if self.color_transfer:
- s_w = exstyle
- else:
- s_w = instyle.clone()
- s_w[:,:7] = exstyle[:,:7]
-
- x = self.transform(aligned_face).unsqueeze(dim=0).to(self.device)
- x_p = F.interpolate(self.parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- inputs = torch.cat((x, x_p/16.), dim=1)
- y_tilde = self.vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- print('*** Toonify %dx%d image with style of %s'%(y_tilde.shape[2], y_tilde.shape[3], style_type))
- return ((y_tilde[0].cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8), 'Successfully toonified the image with the style of %s'%(self.style_name)
-
- def video_tooniy(self, aligned_video: str, instyle: torch.Tensor, exstyle: torch.Tensor, style_degree: float, style_type: str) -> tuple[str, str]:
- #print(style_type + ' ' + self.style_name)
- if aligned_video is None:
- return 'default.mp4', 'Oops, something is wrong with the input. Please go to Step 2 and Rescale Video again.'
- video_cap = cv2.VideoCapture(aligned_video)
- if instyle is None or aligned_video is None or video_cap.get(7) == 0:
- video_cap.release()
- return 'default.mp4', 'Oops, something is wrong with the input. Please go to Step 2 and Rescale Video again.'
- if self.style_name != style_type:
- exstyle, _ = self.load_model(style_type)
- if exstyle is None:
- return 'default.mp4', 'Oops, something is wrong with the style type. Please go to Step 1 and load the model again.'
- num = min(self.video_limit_gpu, int(video_cap.get(7)))
- if self.device == 'cpu':
- num = min(self.video_limit_cpu, num)
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- videoWriter = cv2.VideoWriter('output.mp4', fourcc,
- video_cap.get(5), (int(video_cap.get(3)*4),
- int(video_cap.get(4)*4)))
-
- batch_frames = []
- if video_cap.get(3) != 0:
- if self.device == 'cpu':
- batch_size = max(1, int(4 * 256* 256/ video_cap.get(3) / video_cap.get(4)))
- else:
- batch_size = min(max(1, int(4 * 400 * 360/ video_cap.get(3) / video_cap.get(4))), 4)
- else:
- batch_size = 1
- print('*** Toonify using batch size of %d on %dx%d video of %d frames with style of %s'%(batch_size, int(video_cap.get(3)*4), int(video_cap.get(4)*4), num, style_type))
- with torch.no_grad():
- if self.color_transfer:
- s_w = exstyle
- else:
- s_w = instyle.clone()
- s_w[:,:7] = exstyle[:,:7]
- for i in range(num):
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- batch_frames += [self.transform(frame).unsqueeze(dim=0).to(self.device)]
- if len(batch_frames) == batch_size or (i+1) == num:
- x = torch.cat(batch_frames, dim=0)
- batch_frames = []
- with torch.no_grad():
- x_p = F.interpolate(self.parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- inputs = torch.cat((x, x_p/16.), dim=1)
- y_tilde = self.vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- for k in range(y_tilde.size(0)):
- videoWriter.write(tensor2cv2(y_tilde[k].cpu()))
- gc.collect()
-
- videoWriter.release()
- video_cap.release()
- return 'output.mp4', 'Successfully toonified a video of %d frames with the style of %s'%(num, self.style_name)
-
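-# Illustrative end-to-end usage, not part of the original file; the image
-# path, crop margins and style name below are assumptions:
-#
-# model = Model('cuda' if torch.cuda.is_available() else 'cpu')
-# exstyle, msg = model.load_model('cartoon4')
-# aligned, instyle, msg = model.detect_and_align_image('face.jpg', 200, 200, 200, 200)
-# result, msg = model.image_toonify(aligned, instyle, exstyle, 0.5, 'cartoon4')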
-
diff --git a/spaces/ccds/vits_onnx/Dockerfile b/spaces/ccds/vits_onnx/Dockerfile
deleted file mode 100644
index 787d5fba65aa264d1a60675f2169e79f22c1b8ef..0000000000000000000000000000000000000000
--- a/spaces/ccds/vits_onnx/Dockerfile
+++ /dev/null
@@ -1,36 +0,0 @@
-# created: 2023-01-09
-# onnxruntime==1.13.1 does not support Python 3.11
-# FROM python:3.11.1-slim-bullseye as compile-image
-FROM python:3.9.15-slim-bullseye as compile-image
-
-ENV POETRY_VERSION=1.5.1
-
-RUN export DEBIAN_FRONTEND=noninteractive && \
- apt-get update && \
- apt-get install cmake build-essential -y --no-install-recommends && \
- pip install poetry==$POETRY_VERSION
-
-
-
-COPY ./pyproject.toml ./app/init_jptalk.py ./poetry.lock ./
-RUN poetry export -f requirements.txt -o requirements.txt && \
- python -m venv /opt/venv && \
- /opt/venv/bin/pip install --no-cache-dir -U pip && \
- /opt/venv/bin/pip install --no-cache-dir -r requirements.txt && \
- /opt/venv/bin/python3 init_jptalk.py
-
-# FROM python:3.11.1-slim-bullseye as final
-FROM python:3.9.15-slim-bullseye as final
-EXPOSE 7860
-COPY --from=compile-image /opt/venv /opt/venv
-# COPY ./app/init_jptalk.py /app/init_jptalk.py
-ENV TZ=Asia/Shanghai PATH="/opt/venv/bin:$PATH"
-COPY ./app /app
-WORKDIR /
-# used for Hugging Face Spaces: the app directory must be writable
-RUN mkdir -p /app/.model && \
- chmod 777 -R /app
-
-
-
-CMD ["python", "-m","app.main"]
\ No newline at end of file
diff --git a/spaces/ccolas/TastyPiano/src/music/pipeline/__init__.py b/spaces/ccolas/TastyPiano/src/music/pipeline/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/chansung/LLM-As-Chatbot/chats/koalpaca.py b/spaces/chansung/LLM-As-Chatbot/chats/koalpaca.py
deleted file mode 100644
index aff5e2a102ea10bc5a65161e1df4507dcaec116b..0000000000000000000000000000000000000000
--- a/spaces/chansung/LLM-As-Chatbot/chats/koalpaca.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import copy
-import json
-import global_vars
-from chats import pre, post
-from pingpong import PingPong
-from gens.batch_gen import get_output_batch
-
-from pingpong.context import CtxLastWindowStrategy
-
-def build_prompts(ppmanager, user_message, global_context, win_size=3):
- dummy_ppm = copy.deepcopy(ppmanager)
-
- dummy_ppm.ctx = global_context
- for pingpong in dummy_ppm.pingpongs:
- pong = pingpong.pong
- first_sentence = pong.split("\n")[0]
- if first_sentence != "" and \
- pre.contains_image_markdown(first_sentence):
- pong = ' '.join(pong.split("\n")[1:]).strip()
- pingpong.pong = pong
-
- lws = CtxLastWindowStrategy(win_size)
-
- prompt = lws(dummy_ppm)
- return prompt
-
-def text_stream(ppmanager, streamer):
- for new_text in streamer:
- ppmanager.append_pong(new_text)
- yield ppmanager, ppmanager.build_uis()
-
- yield ppmanager, ppmanager.build_uis()
-
-def summarize(
- ppmanager, prompt_to_summarize, win_size,
- temperature, top_p, top_k, repetition_penalty, max_new_tokens,
- num_beams, use_cache, do_sample, eos_token_id, pad_token_id
-):
- ctx = ppmanager.ctx
- last_pong = ppmanager.pingpongs[-1].pong
- ppmanager.add_pingpong(PingPong(prompt_to_summarize, ""))
- prompt = ppmanager.build_prompts(from_idx=-win_size)
-
- _, gen_config_summarization = pre.build_gen_config(
- temperature, top_p, top_k, repetition_penalty, max_new_tokens,
- num_beams, use_cache, do_sample, eos_token_id, pad_token_id
- )
- summarize_output = get_output_batch(
- global_vars.model, global_vars.tokenizer, [prompt], gen_config_summarization
- )[0].split("### 응답:")[-1].strip()
- ppmanager.ctx = summarize_output
- ppmanager.pop_pingpong()
- return ppmanager
-
-def chat_stream(
- idx, local_data, user_message, state, model_num,
- global_context, ctx_num_lconv, ctx_sum_prompt,
- res_temp, res_topp, res_topk, res_rpen, res_mnts, res_beams, res_cache, res_sample, res_eosid, res_padid,
-):
- res = [
- state["ppmanager_type"].from_json(json.dumps(ppm))
- for ppm in local_data
- ]
-
- ppm = res[idx]
-
- # add_ping returns a prompt structured in Alpaca form
- ppm.add_pingpong(
- PingPong(user_message, "")
- )
- prompt = build_prompts(ppm, user_message, global_context, ctx_num_lconv)
-
- # prepare text generating streamer & start generating
- gen_kwargs, streamer = pre.build(
- prompt, model_num,
- res_temp, res_topp, res_topk, res_rpen, res_mnts,
- res_beams, res_cache, res_sample, res_eosid, res_padid,
- return_token_type_ids=False
- )
- pre.start_gen(gen_kwargs, model_num)
-
- # handling stream
- for ppmanager, uis in text_stream(ppm, streamer):
- yield "", uis, prompt, str(res)
-
- ppm = post.strip_pong(ppm)
- yield "", ppm.build_uis(), prompt, str(res)
-
- # summarization
- # ppm.add_pingpong(
- # PingPong(None, "")
- # )
- # yield "", ppm.build_uis(), prompt, state
- # ppm.pop_pingpong()
-
- # ppm = summarize(
- # ppm, ctx_sum_prompt, ctx_num_lconv,
- # sum_temp, sum_topp, sum_topk, sum_rpen, sum_mnts,
- # sum_beams, sum_cache, sum_sample, sum_eosid, sum_padid
- # )
- yield "", ppm.build_uis(), prompt, str(res)
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/data/datasets/voc_classes.py b/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/data/datasets/voc_classes.py
deleted file mode 100644
index 89354b3fdb19195f63f76ed56c86565323de5434..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/data/datasets/voc_classes.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-# VOC_CLASSES = ( '__background__', # always index 0
-VOC_CLASSES = (
- "aeroplane",
- "bicycle",
- "bird",
- "boat",
- "bottle",
- "bus",
- "car",
- "cat",
- "chair",
- "cow",
- "diningtable",
- "dog",
- "horse",
- "motorbike",
- "person",
- "pottedplant",
- "sheep",
- "sofa",
- "train",
- "tvmonitor",
-)
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/finetune_large_lv60_100.sh b/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/finetune_large_lv60_100.sh
deleted file mode 100644
index 3d2423df970c8e3a4f373372ce42763b3240b4a4..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/finetune_large_lv60_100.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/usr/bin/env bash
-python run_asr.py \
---output_dir="./wav2vec2-large-lv60-100h" \
---num_train_epochs="30" \
---per_device_train_batch_size="16" \
---per_device_eval_batch_size="16" \
---evaluation_strategy="steps" \
---save_total_limit="3" \
---save_steps="500" \
---eval_steps="100" \
---logging_steps="50" \
---learning_rate="5e-4" \
---warmup_steps="3000" \
---model_name_or_path="facebook/wav2vec2-large-lv60" \
---fp16 \
---dataset_name="librispeech_asr" \
---dataset_config_name="clean" \
---train_split_name="train.100" \
---preprocessing_num_workers="32" \
---group_by_length \
---freeze_feature_extractor
diff --git a/spaces/chendl/compositional_test/transformers/examples/tensorflow/text-classification/run_text_classification.py b/spaces/chendl/compositional_test/transformers/examples/tensorflow/text-classification/run_text_classification.py
deleted file mode 100644
index 64799eda3c0283529fe829858fbb4e3e1ca1107b..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/tensorflow/text-classification/run_text_classification.py
+++ /dev/null
@@ -1,563 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Fine-tuning the library models for sequence classification."""
-# You can also adapt this script to your own text classification task. Pointers for this are left as comments.
-
-import json
-import logging
-import os
-import sys
-from dataclasses import dataclass, field
-from pathlib import Path
-from typing import Optional
-
-import numpy as np
-from datasets import load_dataset
-
-from transformers import (
- AutoConfig,
- AutoTokenizer,
- HfArgumentParser,
- PretrainedConfig,
- PushToHubCallback,
- TFAutoModelForSequenceClassification,
- TFTrainingArguments,
- create_optimizer,
- set_seed,
-)
-from transformers.utils import CONFIG_NAME, TF2_WEIGHTS_NAME, send_example_telemetry
-
-
-os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1" # Reduce the amount of console output from TF
-import tensorflow as tf # noqa: E402
-
-
-logger = logging.getLogger(__name__)
-
-
-# region Helper classes
-class SavePretrainedCallback(tf.keras.callbacks.Callback):
- # Hugging Face models have a save_pretrained() method that saves both the weights and the necessary
- # metadata to allow them to be loaded as a pretrained model in future. This is a simple Keras callback
- # that saves the model with this method after each epoch.
- def __init__(self, output_dir, **kwargs):
- super().__init__()
- self.output_dir = output_dir
-
- def on_epoch_end(self, epoch, logs=None):
- self.model.save_pretrained(self.output_dir)
-
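-# Illustrative usage, not in the original script: pass an instance to Keras,
-# e.g. model.fit(..., callbacks=[SavePretrainedCallback(output_dir="./out")]).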
-
-# endregion
-
-
-# region Command-line arguments
-@dataclass
-class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
-
- Using `HfArgumentParser` we can turn this class
- into argparse arguments to be able to specify them on
- the command line.
- """
-
- train_file: Optional[str] = field(
- default=None, metadata={"help": "A csv or a json file containing the training data."}
- )
- validation_file: Optional[str] = field(
- default=None, metadata={"help": "A csv or a json file containing the validation data."}
- )
- test_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the test data."})
-
- max_seq_length: int = field(
- default=128,
- metadata={
- "help": (
- "The maximum total input sequence length after tokenization. Sequences longer "
- "than this will be truncated, sequences shorter will be padded."
- )
- },
- )
- overwrite_cache: bool = field(
- default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."}
- )
- pad_to_max_length: bool = field(
- default=False,
- metadata={
- "help": (
- "Whether to pad all samples to `max_seq_length`. "
- "If False, will pad the samples dynamically when batching to the maximum length in the batch. "
- "Data will always be padded when using TPUs."
- )
- },
- )
- max_train_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of training examples to this "
- "value if set."
- )
- },
- )
- max_val_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of validation examples to this "
- "value if set."
- )
- },
- )
- max_test_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of test examples to this "
- "value if set."
- )
- },
- )
-
- def __post_init__(self):
- train_extension = self.train_file.split(".")[-1].lower() if self.train_file is not None else None
- validation_extension = (
- self.validation_file.split(".")[-1].lower() if self.validation_file is not None else None
- )
- test_extension = self.test_file.split(".")[-1].lower() if self.test_file is not None else None
- extensions = {train_extension, validation_extension, test_extension}
- extensions.discard(None)
- assert len(extensions) != 0, "Need to supply at least one of --train_file, --validation_file or --test_file!"
- assert len(extensions) == 1, "All input files should have the same file extension, either csv or json!"
- assert "csv" in extensions or "json" in extensions, "Input files should have either .csv or .json extensions!"
- self.input_file_extension = extensions.pop()
-
-
-@dataclass
-class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
- """
-
- model_name_or_path: str = field(
- metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
- )
- config_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
- )
- tokenizer_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
- )
- cache_dir: Optional[str] = field(
- default=None,
- metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
- )
- model_revision: str = field(
- default="main",
- metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
- )
- use_auth_token: bool = field(
- default=False,
- metadata={
- "help": (
- "Will use the token generated when running `huggingface-cli login` (necessary to use this script "
- "with private models)."
- )
- },
- )
-
-
-# endregion
-
-
-def main():
- # region Argument parsing
- # See all possible arguments in src/transformers/training_args.py
- # or by passing the --help flag to this script.
- # We now keep distinct sets of args, for a cleaner separation of concerns.
-
- parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))
- if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
- # If we pass only one argument to the script and it's the path to a json file,
- # let's parse it to get our arguments.
- model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
- else:
- model_args, data_args, training_args = parser.parse_args_into_dataclasses()
-
- # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
- # information sent is the one passed as arguments along with your Python/PyTorch versions.
- send_example_telemetry("run_text_classification", model_args, data_args, framework="tensorflow")
-
- output_dir = Path(training_args.output_dir)
- output_dir.mkdir(parents=True, exist_ok=True)
- # endregion
-
- # region Checkpoints
- # Detecting last checkpoint.
- checkpoint = None
- if len(os.listdir(training_args.output_dir)) > 0 and not training_args.overwrite_output_dir:
- if (output_dir / CONFIG_NAME).is_file() and (output_dir / TF2_WEIGHTS_NAME).is_file():
- checkpoint = output_dir
- logger.info(
- f"Checkpoint detected, resuming training from checkpoint in {training_args.output_dir}. To avoid this"
- " behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
- )
- else:
- raise ValueError(
- f"Output directory ({training_args.output_dir}) already exists and is not empty. "
- "Use --overwrite_output_dir to continue regardless."
- )
-
- # endregion
-
- # region Logging
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- handlers=[logging.StreamHandler(sys.stdout)],
- )
- logger.setLevel(logging.INFO)
-
- logger.info(f"Training/evaluation parameters {training_args}")
- # endregion
-
- # region Loading data
- # For CSV/JSON files, this script will use the 'label' field as the label and the 'sentence1' and optionally
- # 'sentence2' fields as inputs if they exist. If not, the first two fields not named label are used if at least two
- # columns are provided. Note that the term 'sentence' can be slightly misleading, as they often contain more than
- # a single grammatical sentence, when the task requires it.
- #
- # If the CSVs/JSONs contain only one non-label column, the script does single sentence classification on this
- # single column. You can easily tweak this behavior (see below)
- #
- # In distributed training, the load_dataset function guarantees that only one local process can concurrently
- # download the dataset.
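- # As an illustration (not part of the original script), a JSON-lines training
- # file compatible with this logic could look like:
- # {"sentence1": "The movie was great.", "label": "positive"}
- # {"sentence1": "I fell asleep halfway through.", "label": "negative"}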
- data_files = {"train": data_args.train_file, "validation": data_args.validation_file, "test": data_args.test_file}
- data_files = {key: file for key, file in data_files.items() if file is not None}
-
- for key in data_files.keys():
- logger.info(f"Loading a local file for {key}: {data_files[key]}")
-
- if data_args.input_file_extension == "csv":
- # Loading a dataset from local csv files
- datasets = load_dataset(
- "csv",
- data_files=data_files,
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- else:
- # Loading a dataset from local json files
- datasets = load_dataset("json", data_files=data_files, cache_dir=model_args.cache_dir)
- # See more about loading any type of standard or custom dataset at
- # https://huggingface.co/docs/datasets/loading_datasets.html.
- # endregion
-
- # region Label preprocessing
- # If you've passed us a training set, we try to infer your labels from it
- if "train" in datasets:
- # By default we assume that if your label column looks like a float then you're doing regression,
- # and if not then you're doing classification. This is something you may want to change!
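- # For instance (an illustration, not from the original script): a float label
- # column such as 3.7 / 1.2 yields regression with num_labels == 1, while
- # string labels such as "positive" / "negative" become classification classes.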
- is_regression = datasets["train"].features["label"].dtype in ["float32", "float64"]
- if is_regression:
- num_labels = 1
- else:
- # A useful fast method:
- # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.unique
- label_list = datasets["train"].unique("label")
- label_list.sort() # Let's sort it for determinism
- num_labels = len(label_list)
- # If you haven't passed a training set, we read label info from the saved model (this happens later)
- else:
- num_labels = None
- label_list = None
- is_regression = None
- # endregion
-
- # region Load model config and tokenizer
- if checkpoint is not None:
- config_path = training_args.output_dir
- elif model_args.config_name:
- config_path = model_args.config_name
- else:
- config_path = model_args.model_name_or_path
- if num_labels is not None:
- config = AutoConfig.from_pretrained(
- config_path,
- num_labels=num_labels,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- else:
- config = AutoConfig.from_pretrained(
- config_path,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- tokenizer = AutoTokenizer.from_pretrained(
- model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- # endregion
-
- # region Dataset preprocessing
- # Again, we try to have some nice defaults but don't hesitate to tweak to your use case.
- column_names = {col for cols in datasets.column_names.values() for col in cols}
- non_label_column_names = [name for name in column_names if name != "label"]
- if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names:
- sentence1_key, sentence2_key = "sentence1", "sentence2"
- elif "sentence1" in non_label_column_names:
- sentence1_key, sentence2_key = "sentence1", None
- else:
- if len(non_label_column_names) >= 2:
- sentence1_key, sentence2_key = non_label_column_names[:2]
- else:
- sentence1_key, sentence2_key = non_label_column_names[0], None
-
- if data_args.max_seq_length > tokenizer.model_max_length:
- logger.warning(
- f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the"
- f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
- )
- max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
-
- # Ensure that our labels match the model's, if the model has some pre-specified labels
- if "train" in datasets:
- if not is_regression and config.label2id != PretrainedConfig(num_labels=num_labels).label2id:
- label_name_to_id = config.label2id
- if sorted(label_name_to_id.keys()) == sorted(label_list):
- label_to_id = label_name_to_id # Use the model's labels
- else:
- logger.warning(
- "Your model seems to have been trained with labels, but they don't match the dataset: ",
- f"model labels: {sorted(label_name_to_id.keys())}, dataset labels:"
- f" {sorted(label_list)}.\nIgnoring the model labels as a result.",
- )
- label_to_id = {v: i for i, v in enumerate(label_list)}
- elif not is_regression:
- label_to_id = {v: i for i, v in enumerate(label_list)}
- else:
- label_to_id = None
- # Now we've established our label2id, let's overwrite the model config with it.
- config.label2id = label_to_id
- if config.label2id is not None:
- config.id2label = {id: label for label, id in label_to_id.items()}
- else:
- config.id2label = None
- else:
- label_to_id = config.label2id # Just load the data from the model
-
- if "validation" in datasets and config.label2id is not None:
- validation_label_list = datasets["validation"].unique("label")
- for val_label in validation_label_list:
- assert val_label in label_to_id, f"Label {val_label} is in the validation set but not the training set!"
-
- def preprocess_function(examples):
- # Tokenize the texts
- args = (
- (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key])
- )
- result = tokenizer(*args, max_length=max_seq_length, truncation=True)
-
- # Map labels to IDs
- if config.label2id is not None and "label" in examples:
- result["label"] = [(config.label2id[l] if l != -1 else -1) for l in examples["label"]]
- return result
-
- datasets = datasets.map(preprocess_function, batched=True, load_from_cache_file=not data_args.overwrite_cache)
-
- # endregion
-
- with training_args.strategy.scope():
- # region Load pretrained model
- # Set seed before initializing model
- set_seed(training_args.seed)
- #
- # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
- # download model & vocab.
- if checkpoint is None:
- model_path = model_args.model_name_or_path
- else:
- model_path = checkpoint
- model = TFAutoModelForSequenceClassification.from_pretrained(
- model_path,
- config=config,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- # endregion
-
- # region Convert data to a tf.data.Dataset
- dataset_options = tf.data.Options()
- dataset_options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
- num_replicas = training_args.strategy.num_replicas_in_sync
-
- tf_data = {}
- max_samples = {
- "train": data_args.max_train_samples,
- "validation": data_args.max_val_samples,
- "test": data_args.max_test_samples,
- }
- for key in ("train", "validation", "test"):
- if key not in datasets:
- tf_data[key] = None
- continue
- if (
- (key == "train" and not training_args.do_train)
- or (key == "validation" and not training_args.do_eval)
- or (key == "test" and not training_args.do_predict)
- ):
- tf_data[key] = None
- continue
- if key in ("train", "validation"):
- assert "label" in datasets[key].features, f"Missing labels from {key} data!"
- if key == "train":
- shuffle = True
- batch_size = training_args.per_device_train_batch_size * num_replicas
- else:
- shuffle = False
- batch_size = training_args.per_device_eval_batch_size * num_replicas
- samples_limit = max_samples[key]
- dataset = datasets[key]
- if samples_limit is not None:
- dataset = dataset.select(range(samples_limit))
-
- # model.prepare_tf_dataset() wraps a Hugging Face dataset in a tf.data.Dataset which is ready to use in
- # training. This is the recommended way to use a Hugging Face dataset when training with Keras. You can also
- # use the lower-level dataset.to_tf_dataset() method, but you will have to specify things like column names
- # yourself if you use this method, whereas they are automatically inferred from the model input names when
- # using model.prepare_tf_dataset()
- # For more info see the docs:
- # https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset
- # https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.to_tf_dataset
-
- data = model.prepare_tf_dataset(
- dataset,
- shuffle=shuffle,
- batch_size=batch_size,
- tokenizer=tokenizer,
- )
- data = data.with_options(dataset_options)
- tf_data[key] = data
- # endregion
-
- # region Optimizer, loss and compilation
-
- if training_args.do_train:
- num_train_steps = len(tf_data["train"]) * training_args.num_train_epochs
- if training_args.warmup_steps > 0:
- num_warmup_steps = training_args.warmup_steps
- elif training_args.warmup_ratio > 0:
- num_warmup_steps = int(num_train_steps * training_args.warmup_ratio)
- else:
- num_warmup_steps = 0
-
- optimizer, schedule = create_optimizer(
- init_lr=training_args.learning_rate,
- num_train_steps=num_train_steps,
- num_warmup_steps=num_warmup_steps,
- adam_beta1=training_args.adam_beta1,
- adam_beta2=training_args.adam_beta2,
- adam_epsilon=training_args.adam_epsilon,
- weight_decay_rate=training_args.weight_decay,
- adam_global_clipnorm=training_args.max_grad_norm,
- )
- else:
- optimizer = None
- if is_regression:
- metrics = []
- else:
- metrics = ["accuracy"]
- model.compile(optimizer=optimizer, metrics=metrics)
- # endregion
-
- # region Preparing push_to_hub and model card
- push_to_hub_model_id = training_args.push_to_hub_model_id
- model_name = model_args.model_name_or_path.split("/")[-1]
- if not push_to_hub_model_id:
- push_to_hub_model_id = f"{model_name}-finetuned-text-classification"
-
- model_card_kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "text-classification"}
-
- if training_args.push_to_hub:
- callbacks = [
- PushToHubCallback(
- output_dir=training_args.output_dir,
- hub_model_id=push_to_hub_model_id,
- hub_token=training_args.push_to_hub_token,
- tokenizer=tokenizer,
- **model_card_kwargs,
- )
- ]
- else:
- callbacks = []
- # endregion
-
- # region Training and validation
- if tf_data["train"] is not None:
- model.fit(
- tf_data["train"],
- validation_data=tf_data["validation"],
- epochs=int(training_args.num_train_epochs),
- callbacks=callbacks,
- )
- if tf_data["validation"] is not None:
- logger.info("Computing metrics on validation data...")
- if is_regression:
- loss = model.evaluate(tf_data["validation"])
- logger.info(f"Eval loss: {loss:.5f}")
- else:
- loss, accuracy = model.evaluate(tf_data["validation"])
- logger.info(f"Eval loss: {loss:.5f}, Eval accuracy: {accuracy * 100:.4f}%")
- if training_args.output_dir is not None:
- output_eval_file = os.path.join(training_args.output_dir, "all_results.json")
- eval_dict = {"eval_loss": loss}
- if not is_regression:
- eval_dict["eval_accuracy"] = accuracy
- with open(output_eval_file, "w") as writer:
- writer.write(json.dumps(eval_dict))
- # endregion
-
- # region Prediction
- if tf_data["test"] is not None:
- logger.info("Doing predictions on test dataset...")
- predictions = model.predict(tf_data["test"])["logits"]
- predicted_class = np.squeeze(predictions) if is_regression else np.argmax(predictions, axis=1)
- output_test_file = os.path.join(training_args.output_dir, "test_results.txt")
- with open(output_test_file, "w") as writer:
- writer.write("index\tprediction\n")
- for index, item in enumerate(predicted_class):
- if is_regression:
- writer.write(f"{index}\t{item:3.3f}\n")
- else:
- item = config.id2label[item]
- writer.write(f"{index}\t{item}\n")
- logger.info(f"Wrote predictions to {output_test_file}!")
- # endregion
-
- if training_args.output_dir is not None and not training_args.push_to_hub:
- # If we're not pushing to hub, at least save a local copy when we're done
- model.save_pretrained(training_args.output_dir)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/chenxc/qweqwe/Dockerfile b/spaces/chenxc/qweqwe/Dockerfile
deleted file mode 100644
index d52ca2b3b0ca603af96a75376d63439f244e7934..0000000000000000000000000000000000000000
--- a/spaces/chenxc/qweqwe/Dockerfile
+++ /dev/null
@@ -1,32 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Install git so the project can be cloned from GitHub later
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory
-WORKDIR /workspace/app
-
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Use lightweight alpine as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the built binary from the build stage into this image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-# Set environment variables
-ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtx5SAFASFADS4iO"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
-# Copy from the build stage into the image
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/migrations/00002-migration-2.psql.sql b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/migrations/00002-migration-2.psql.sql
deleted file mode 100644
index 01e4b222af541efb9022d2eeb69e39239faecb34..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/migrations/00002-migration-2.psql.sql
+++ /dev/null
@@ -1,3 +0,0 @@
-CREATE TABLE table2 (
- name TEXT PRIMARY KEY
-);
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/tz/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/tz/__init__.py
deleted file mode 100644
index af1352c47292f4eebc5cae8da45641b5544558e3..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/tz/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# -*- coding: utf-8 -*-
-from .tz import *
-from .tz import __doc__
-
-__all__ = ["tzutc", "tzoffset", "tzlocal", "tzfile", "tzrange",
- "tzstr", "tzical", "tzwin", "tzwinlocal", "gettz",
- "enfold", "datetime_ambiguous", "datetime_exists",
- "resolve_imaginary", "UTC", "DeprecatedTzFormatWarning"]
-
-
-class DeprecatedTzFormatWarning(Warning):
- """Warning raised when time zones are parsed from deprecated formats."""
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/designspaceLib/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/designspaceLib/__init__.py
deleted file mode 100644
index 1c71fd002e8afcf4432db0e62b864c78b659d1fc..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/designspaceLib/__init__.py
+++ /dev/null
@@ -1,3283 +0,0 @@
-from __future__ import annotations
-
-import collections
-import copy
-import itertools
-import math
-import os
-import posixpath
-from io import BytesIO, StringIO
-from textwrap import indent
-from typing import Any, Dict, List, MutableMapping, Optional, Tuple, Union, cast
-
-from fontTools.misc import etree as ET
-from fontTools.misc import plistlib
-from fontTools.misc.loggingTools import LogMixin
-from fontTools.misc.textTools import tobytes, tostr
-
-"""
- designSpaceDocument
-
- - read and write designspace files
-"""
-
-__all__ = [
- "AxisDescriptor",
- "AxisLabelDescriptor",
- "AxisMappingDescriptor",
- "BaseDocReader",
- "BaseDocWriter",
- "DesignSpaceDocument",
- "DesignSpaceDocumentError",
- "DiscreteAxisDescriptor",
- "InstanceDescriptor",
- "LocationLabelDescriptor",
- "RangeAxisSubsetDescriptor",
- "RuleDescriptor",
- "SourceDescriptor",
- "ValueAxisSubsetDescriptor",
- "VariableFontDescriptor",
-]
-
-# ElementTree allows to find namespace-prefixed elements, but not attributes
-# so we have to do it ourselves for 'xml:lang'
-XML_NS = "{http://www.w3.org/XML/1998/namespace}"
-XML_LANG = XML_NS + "lang"
-
-
-def posix(path):
- """Normalize paths using forward slash to work also on Windows."""
- new_path = posixpath.join(*path.split(os.path.sep))
- if path.startswith("/"):
- # The above transformation loses absolute paths
- new_path = "/" + new_path
- elif path.startswith(r"\\"):
- # The above transformation loses leading slashes of UNC path mounts
- new_path = "//" + new_path
- return new_path
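-
- # Illustrative behaviour of posix(), assuming a Windows os.path.sep of "\\" (examples only, not part of the API):
- # posix("masters\\Regular.ufo") -> "masters/Regular.ufo"
- # posix(r"\\server\share\Regular.ufo") -> "//server/share/Regular.ufo"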
-
-
-def posixpath_property(private_name):
- """Generate a propery that holds a path always using forward slashes."""
-
- def getter(self):
- # Normal getter
- return getattr(self, private_name)
-
- def setter(self, value):
- # The setter rewrites paths using forward slashes
- if value is not None:
- value = posix(value)
- setattr(self, private_name, value)
-
- return property(getter, setter)
-
-
-class DesignSpaceDocumentError(Exception):
- def __init__(self, msg, obj=None):
- self.msg = msg
- self.obj = obj
-
- def __str__(self):
- return str(self.msg) + (": %r" % self.obj if self.obj is not None else "")
-
-
-class AsDictMixin(object):
- def asdict(self):
- d = {}
- for attr, value in self.__dict__.items():
- if attr.startswith("_"):
- continue
- if hasattr(value, "asdict"):
- value = value.asdict()
- elif isinstance(value, list):
- value = [v.asdict() if hasattr(v, "asdict") else v for v in value]
- d[attr] = value
- return d
-
-
-class SimpleDescriptor(AsDictMixin):
- """Containers for a bunch of attributes"""
-
- # XXX this is ugly. The 'print' is inappropriate here, and instead of
- # assert, it should simply return True/False
- def compare(self, other):
- # test if this object contains the same data as the other
- for attr in self._attrs:
- try:
- assert getattr(self, attr) == getattr(other, attr)
- except AssertionError:
- print(
- "failed attribute",
- attr,
- getattr(self, attr),
- "!=",
- getattr(other, attr),
- )
-
- def __repr__(self):
- attrs = [f"{a}={repr(getattr(self, a))}," for a in self._attrs]
- attrs = indent("\n".join(attrs), " ")
- return f"{self.__class__.__name__}(\n{attrs}\n)"
-
-
-class SourceDescriptor(SimpleDescriptor):
- """Simple container for data related to the source
-
- .. code:: python
-
- doc = DesignSpaceDocument()
- s1 = SourceDescriptor()
- s1.path = masterPath1
- s1.name = "master.ufo1"
- s1.font = defcon.Font("master.ufo1")
- s1.location = dict(weight=0)
- s1.familyName = "MasterFamilyName"
- s1.styleName = "MasterStyleNameOne"
- s1.localisedFamilyName = dict(fr="Caractère")
- s1.mutedGlyphNames.append("A")
- s1.mutedGlyphNames.append("Z")
- doc.addSource(s1)
-
- """
-
- flavor = "source"
- _attrs = [
- "filename",
- "path",
- "name",
- "layerName",
- "location",
- "copyLib",
- "copyGroups",
- "copyFeatures",
- "muteKerning",
- "muteInfo",
- "mutedGlyphNames",
- "familyName",
- "styleName",
- "localisedFamilyName",
- ]
-
- filename = posixpath_property("_filename")
- path = posixpath_property("_path")
-
- def __init__(
- self,
- *,
- filename=None,
- path=None,
- font=None,
- name=None,
- location=None,
- designLocation=None,
- layerName=None,
- familyName=None,
- styleName=None,
- localisedFamilyName=None,
- copyLib=False,
- copyInfo=False,
- copyGroups=False,
- copyFeatures=False,
- muteKerning=False,
- muteInfo=False,
- mutedGlyphNames=None,
- ):
- self.filename = filename
- """string. A relative path to the source file, **as it is in the document**.
-
- MutatorMath + VarLib.
- """
- self.path = path
- """The absolute path, calculated from filename."""
-
- self.font = font
- """Any Python object. Optional. Points to a representation of this
- source font that is loaded in memory, as a Python object (e.g. a
- ``defcon.Font`` or a ``fontTools.ttFont.TTFont``).
-
- The default document reader will not fill-in this attribute, and the
- default writer will not use this attribute. It is up to the user of
- ``designspaceLib`` to either load the resource identified by
- ``filename`` and store it in this field, or write the contents of
- this field to the disk and make ``filename`` point to that.
- """
-
- self.name = name
- """string. Optional. Unique identifier name for this source.
-
- MutatorMath + varLib.
- """
-
- self.designLocation = (
- designLocation if designLocation is not None else location or {}
- )
- """dict. Axis values for this source, in design space coordinates.
-
- MutatorMath + varLib.
-
- This may be only part of the full design location.
- See :meth:`getFullDesignLocation()`
-
- .. versionadded:: 5.0
- """
-
- self.layerName = layerName
- """string. The name of the layer in the source to look for
- outline data. Default ``None`` which means ``foreground``.
- """
- self.familyName = familyName
- """string. Family name of this source. Though this data
- can be extracted from the font, it can be efficient to have it right
- here.
-
- varLib.
- """
- self.styleName = styleName
- """string. Style name of this source. Though this data
- can be extracted from the font, it can be efficient to have it right
- here.
-
- varLib.
- """
- self.localisedFamilyName = localisedFamilyName or {}
- """dict. A dictionary of localised family name strings, keyed by
- language code.
-
- If present, will be used to build localized names for all instances.
-
- .. versionadded:: 5.0
- """
-
- self.copyLib = copyLib
- """bool. Indicates if the contents of the font.lib need to
- be copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyInfo = copyInfo
- """bool. Indicates if the non-interpolating font.info needs
- to be copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyGroups = copyGroups
- """bool. Indicates if the groups need to be copied to the
- instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyFeatures = copyFeatures
- """bool. Indicates if the feature text needs to be
- copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.muteKerning = muteKerning
- """bool. Indicates if the kerning data from this source
- needs to be muted (i.e. not be part of the calculations).
-
- MutatorMath only.
- """
- self.muteInfo = muteInfo
- """bool. Indicated if the interpolating font.info data for
- this source needs to be muted.
-
- MutatorMath only.
- """
- self.mutedGlyphNames = mutedGlyphNames or []
- """list. Glyphnames that need to be muted in the
- instances.
-
- MutatorMath only.
- """
-
- @property
- def location(self):
- """dict. Axis values for this source, in design space coordinates.
-
- MutatorMath + varLib.
-
- .. deprecated:: 5.0
- Use the more explicit alias for this property :attr:`designLocation`.
- """
- return self.designLocation
-
- @location.setter
- def location(self, location: Optional[AnisotropicLocationDict]):
- self.designLocation = location or {}
-
- def setFamilyName(self, familyName, languageCode="en"):
- """Setter for :attr:`localisedFamilyName`
-
- .. versionadded:: 5.0
- """
- self.localisedFamilyName[languageCode] = tostr(familyName)
-
- def getFamilyName(self, languageCode="en"):
- """Getter for :attr:`localisedFamilyName`
-
- .. versionadded:: 5.0
- """
- return self.localisedFamilyName.get(languageCode)
-
- def getFullDesignLocation(
- self, doc: "DesignSpaceDocument"
- ) -> AnisotropicLocationDict:
- """Get the complete design location of this source, from its
- :attr:`designLocation` and the document's axis defaults.
-
- .. versionadded:: 5.0
- """
- result: AnisotropicLocationDict = {}
- for axis in doc.axes:
- if axis.name in self.designLocation:
- result[axis.name] = self.designLocation[axis.name]
- else:
- result[axis.name] = axis.map_forward(axis.default)
- return result
-
-
-class RuleDescriptor(SimpleDescriptor):
- """Represents the rule descriptor element: a set of glyph substitutions to
- trigger conditionally in some parts of the designspace.
-
- .. code:: python
-
- r1 = RuleDescriptor()
- r1.name = "unique.rule.name"
- r1.conditionSets.append([dict(name="weight", minimum=-10, maximum=10), dict(...)])
- r1.conditionSets.append([dict(...), dict(...)])
- r1.subs.append(("a", "a.alt"))
-
- .. code:: xml
-
-     <rules>
-       <rule name="unique.rule.name">
-         <conditionset>
-           <condition name="weight" minimum="-10" maximum="10" />
-         </conditionset>
-         <sub name="a" with="a.alt" />
-       </rule>
-     </rules>
-
- """
-
- _attrs = ["name", "conditionSets", "subs"] # what do we need here
-
- def __init__(self, *, name=None, conditionSets=None, subs=None):
- self.name = name
- """string. Unique name for this rule. Can be used to reference this rule data."""
- # list of lists of dict(name='aaaa', minimum=0, maximum=1000)
- self.conditionSets = conditionSets or []
- """a list of conditionsets.
-
- - Each conditionset is a list of conditions.
- - Each condition is a dict with ``name``, ``minimum`` and ``maximum`` keys.
- """
- # list of substitutions stored as tuples of glyphnames ("a", "a.alt")
- self.subs = subs or []
- """list of substitutions.
-
- - Each substitution is stored as a tuple of glyphnames, e.g. ("a", "a.alt").
- - Note: By default, rules are applied first, before other text
- shaping/OpenType layout, as they are part of the
- `Required Variation Alternates OpenType feature `_.
- See ref:`rules-element` § Attributes.
- """
-
-
-def evaluateRule(rule, location):
- """Return True if any of the rule's conditionsets matches the given location."""
- return any(evaluateConditions(c, location) for c in rule.conditionSets)
-
-
-def evaluateConditions(conditions, location):
- """Return True if all the conditions matches the given location.
-
- - If a condition has no minimum, check for < maximum.
- - If a condition has no maximum, check for > minimum.
- """
- for cd in conditions:
- value = location[cd["name"]]
- if cd.get("minimum") is None:
- if value > cd["maximum"]:
- return False
- elif cd.get("maximum") is None:
- if cd["minimum"] > value:
- return False
- elif not cd["minimum"] <= value <= cd["maximum"]:
- return False
- return True
-
-
-def processRules(rules, location, glyphNames):
- """Apply these rules at this location to these glyphnames.
-
- Return a new list of glyphNames with substitutions applied.
-
- - rule order matters
- """
- newNames = []
- for rule in rules:
- if evaluateRule(rule, location):
- for name in glyphNames:
- swap = False
- for a, b in rule.subs:
- if name == a:
- swap = True
- break
- if swap:
- newNames.append(b)
- else:
- newNames.append(name)
- glyphNames = newNames
- newNames = []
- return glyphNames
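-
-
- # A minimal sketch of how the rule helpers above combine (hypothetical rule and location, not from a real document):
- #
- # rule = RuleDescriptor(name="example", conditionSets=[[dict(name="weight", minimum=500, maximum=1000)]],
- #                       subs=[("dollar", "dollar.alt")])
- # evaluateRule(rule, {"weight": 600})                        # -> True
- # processRules([rule], {"weight": 600}, ["dollar", "euro"])  # -> ["dollar.alt", "euro"]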
-
-
-AnisotropicLocationDict = Dict[str, Union[float, Tuple[float, float]]]
-SimpleLocationDict = Dict[str, float]
-
-
-class AxisMappingDescriptor(SimpleDescriptor):
- """Represents the axis mapping element: mapping an input location
- to an output location in the designspace.
-
- .. code:: python
-
- m1 = AxisMappingDescriptor()
- m1.inputLocation = {"weight": 900, "width": 150}
- m1.outputLocation = {"weight": 870}
-
- .. code:: xml
-
-     <mappings>
-       <mapping>
-         <input>
-           <dimension name="weight" xvalue="900" />
-           <dimension name="width" xvalue="150" />
-         </input>
-         <output>
-           <dimension name="weight" xvalue="870" />
-         </output>
-       </mapping>
-     </mappings>
-
- """
-
- _attrs = ["inputLocation", "outputLocation"]
-
- def __init__(self, *, inputLocation=None, outputLocation=None):
- self.inputLocation: SimpleLocationDict = inputLocation or {}
- """dict. Axis values for the input of the mapping, in design space coordinates.
-
- varLib.
-
- .. versionadded:: 5.1
- """
- self.outputLocation: SimpleLocationDict = outputLocation or {}
- """dict. Axis values for the output of the mapping, in design space coordinates.
-
- varLib.
-
- .. versionadded:: 5.1
- """
-
-
-class InstanceDescriptor(SimpleDescriptor):
- """Simple container for data related to the instance
-
-
- .. code:: python
-
- i2 = InstanceDescriptor()
- i2.path = instancePath2
- i2.familyName = "InstanceFamilyName"
- i2.styleName = "InstanceStyleName"
- i2.name = "instance.ufo2"
- # anisotropic location
- i2.designLocation = dict(weight=500, width=(400,300))
- i2.postScriptFontName = "InstancePostscriptName"
- i2.styleMapFamilyName = "InstanceStyleMapFamilyName"
- i2.styleMapStyleName = "InstanceStyleMapStyleName"
- i2.lib['com.coolDesignspaceApp.specimenText'] = 'Hamburgerwhatever'
- doc.addInstance(i2)
- """
-
- flavor = "instance"
- _defaultLanguageCode = "en"
- _attrs = [
- "filename",
- "path",
- "name",
- "locationLabel",
- "designLocation",
- "userLocation",
- "familyName",
- "styleName",
- "postScriptFontName",
- "styleMapFamilyName",
- "styleMapStyleName",
- "localisedFamilyName",
- "localisedStyleName",
- "localisedStyleMapFamilyName",
- "localisedStyleMapStyleName",
- "glyphs",
- "kerning",
- "info",
- "lib",
- ]
-
- filename = posixpath_property("_filename")
- path = posixpath_property("_path")
-
- def __init__(
- self,
- *,
- filename=None,
- path=None,
- font=None,
- name=None,
- location=None,
- locationLabel=None,
- designLocation=None,
- userLocation=None,
- familyName=None,
- styleName=None,
- postScriptFontName=None,
- styleMapFamilyName=None,
- styleMapStyleName=None,
- localisedFamilyName=None,
- localisedStyleName=None,
- localisedStyleMapFamilyName=None,
- localisedStyleMapStyleName=None,
- glyphs=None,
- kerning=True,
- info=True,
- lib=None,
- ):
- self.filename = filename
- """string. Relative path to the instance file, **as it is
- in the document**. The file may or may not exist.
-
- MutatorMath + VarLib.
- """
- self.path = path
- """string. Absolute path to the instance file, calculated from
- the document path and the string in the filename attr. The file may
- or may not exist.
-
- MutatorMath.
- """
- self.font = font
- """Same as :attr:`SourceDescriptor.font`
-
- .. seealso:: :attr:`SourceDescriptor.font`
- """
- self.name = name
- """string. Unique identifier name of the instance, used to
- identify it if it needs to be referenced from elsewhere in the
- document.
- """
- self.locationLabel = locationLabel
- """Name of a :class:`LocationLabelDescriptor`. If
- provided, the instance should have the same location as the
- LocationLabel.
-
- .. seealso::
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.designLocation: AnisotropicLocationDict = (
- designLocation if designLocation is not None else (location or {})
- )
- """dict. Axis values for this instance, in design space coordinates.
-
- MutatorMath + varLib.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.userLocation: SimpleLocationDict = userLocation or {}
- """dict. Axis values for this instance, in user space coordinates.
-
- MutatorMath + varLib.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.familyName = familyName
- """string. Family name of this instance.
-
- MutatorMath + varLib.
- """
- self.styleName = styleName
- """string. Style name of this instance.
-
- MutatorMath + varLib.
- """
- self.postScriptFontName = postScriptFontName
- """string. Postscript fontname for this instance.
-
- MutatorMath + varLib.
- """
- self.styleMapFamilyName = styleMapFamilyName
- """string. StyleMap familyname for this instance.
-
- MutatorMath + varLib.
- """
- self.styleMapStyleName = styleMapStyleName
- """string. StyleMap stylename for this instance.
-
- MutatorMath + varLib.
- """
- self.localisedFamilyName = localisedFamilyName or {}
- """dict. A dictionary of localised family name
- strings, keyed by language code.
- """
- self.localisedStyleName = localisedStyleName or {}
- """dict. A dictionary of localised stylename
- strings, keyed by language code.
- """
- self.localisedStyleMapFamilyName = localisedStyleMapFamilyName or {}
- """A dictionary of localised style map
- familyname strings, keyed by language code.
- """
- self.localisedStyleMapStyleName = localisedStyleMapStyleName or {}
- """A dictionary of localised style map
- stylename strings, keyed by language code.
- """
- self.glyphs = glyphs or {}
- """dict for special master definitions for glyphs. If glyphs
- need special masters (to record the results of executed rules for
- example).
-
- MutatorMath.
-
- .. deprecated:: 5.0
- Use rules or sparse sources instead.
- """
- self.kerning = kerning
- """ bool. Indicates if this instance needs its kerning
- calculated.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.info = info
- """bool. Indicated if this instance needs the interpolating
- font.info calculated.
-
- .. deprecated:: 5.0
- """
-
- self.lib = lib or {}
- """Custom data associated with this instance."""
-
- @property
- def location(self):
- """dict. Axis values for this instance.
-
- MutatorMath + varLib.
-
- .. deprecated:: 5.0
- Use the more explicit alias for this property :attr:`designLocation`.
- """
- return self.designLocation
-
- @location.setter
- def location(self, location: Optional[AnisotropicLocationDict]):
- self.designLocation = location or {}
-
- def setStyleName(self, styleName, languageCode="en"):
- """These methods give easier access to the localised names."""
- self.localisedStyleName[languageCode] = tostr(styleName)
-
- def getStyleName(self, languageCode="en"):
- return self.localisedStyleName.get(languageCode)
-
- def setFamilyName(self, familyName, languageCode="en"):
- self.localisedFamilyName[languageCode] = tostr(familyName)
-
- def getFamilyName(self, languageCode="en"):
- return self.localisedFamilyName.get(languageCode)
-
- def setStyleMapStyleName(self, styleMapStyleName, languageCode="en"):
- self.localisedStyleMapStyleName[languageCode] = tostr(styleMapStyleName)
-
- def getStyleMapStyleName(self, languageCode="en"):
- return self.localisedStyleMapStyleName.get(languageCode)
-
- def setStyleMapFamilyName(self, styleMapFamilyName, languageCode="en"):
- self.localisedStyleMapFamilyName[languageCode] = tostr(styleMapFamilyName)
-
- def getStyleMapFamilyName(self, languageCode="en"):
- return self.localisedStyleMapFamilyName.get(languageCode)
-
- def clearLocation(self, axisName: Optional[str] = None):
- """Clear all location-related fields. Ensures that
- :attr:``designLocation`` and :attr:``userLocation`` are dictionaries
- (possibly empty if clearing everything).
-
- In order to update the location of this instance wholesale, a user
- should first clear all the fields, then change the field(s) for which
- they have data.
-
- .. code:: python
-
- instance.clearLocation()
- instance.designLocation = {'Weight': (34, 36.5), 'Width': 100}
- instance.userLocation = {'Opsz': 16}
-
- In order to update a single axis location, the user should only clear
- that axis, then edit the values:
-
- .. code:: python
-
- instance.clearLocation('Weight')
- instance.designLocation['Weight'] = (34, 36.5)
-
- Args:
- axisName: if provided, only clear the location for that axis.
-
- .. versionadded:: 5.0
- """
- self.locationLabel = None
- if axisName is None:
- self.designLocation = {}
- self.userLocation = {}
- else:
- if self.designLocation is None:
- self.designLocation = {}
- if axisName in self.designLocation:
- del self.designLocation[axisName]
- if self.userLocation is None:
- self.userLocation = {}
- if axisName in self.userLocation:
- del self.userLocation[axisName]
-
- def getLocationLabelDescriptor(
- self, doc: "DesignSpaceDocument"
- ) -> Optional[LocationLabelDescriptor]:
- """Get the :class:`LocationLabelDescriptor` instance that matches
- this instance's :attr:`locationLabel`.
-
- Raises if the named label can't be found.
-
- .. versionadded:: 5.0
- """
- if self.locationLabel is None:
- return None
- label = doc.getLocationLabel(self.locationLabel)
- if label is None:
- raise DesignSpaceDocumentError(
- "InstanceDescriptor.getLocationLabelDescriptor(): "
- f"unknown location label `{self.locationLabel}` in instance `{self.name}`."
- )
- return label
-
- def getFullDesignLocation(
- self, doc: "DesignSpaceDocument"
- ) -> AnisotropicLocationDict:
- """Get the complete design location of this instance, by combining data
- from the various location fields, default axis values and mappings, and
- top-level location labels.
-
- The source of truth for this instance's location is determined for each
- axis independently by taking the first not-None field in this list:
-
- - ``locationLabel``: the location along this axis is the same as the
- matching STAT format 4 label. No anisotropy.
- - ``designLocation[axisName]``: the explicit design location along this
- axis, possibly anisotropic.
- - ``userLocation[axisName]``: the explicit user location along this
- axis. No anisotropy.
- - ``axis.default``: default axis value. No anisotropy.
-
- .. versionadded:: 5.0
- """
- label = self.getLocationLabelDescriptor(doc)
- if label is not None:
- return doc.map_forward(label.userLocation) # type: ignore
- result: AnisotropicLocationDict = {}
- for axis in doc.axes:
- if axis.name in self.designLocation:
- result[axis.name] = self.designLocation[axis.name]
- elif axis.name in self.userLocation:
- result[axis.name] = axis.map_forward(self.userLocation[axis.name])
- else:
- result[axis.name] = axis.map_forward(axis.default)
- return result
-
- def getFullUserLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict:
- """Get the complete user location for this instance.
-
- .. seealso:: :meth:`getFullDesignLocation`
-
- .. versionadded:: 5.0
- """
- return doc.map_backward(self.getFullDesignLocation(doc))
-
-
-def tagForAxisName(name):
- # try to find or make a tag name for this axis name
- names = {
- "weight": ("wght", dict(en="Weight")),
- "width": ("wdth", dict(en="Width")),
- "optical": ("opsz", dict(en="Optical Size")),
- "slant": ("slnt", dict(en="Slant")),
- "italic": ("ital", dict(en="Italic")),
- }
- if name.lower() in names:
- return names[name.lower()]
- if len(name) < 4:
- tag = name + "*" * (4 - len(name))
- else:
- tag = name[:4]
- return tag, dict(en=name)
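-
- # Illustrative results (examples only):
- # tagForAxisName("weight") -> ("wght", {"en": "Weight"})
- # tagForAxisName("grade")  -> ("grad", {"en": "grade"})
- # tagForAxisName("fun")    -> ("fun*", {"en": "fun"})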
-
-
-class AbstractAxisDescriptor(SimpleDescriptor):
- flavor = "axis"
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- # opentype tag for this axis
- self.tag = tag
- """string. Four letter tag for this axis. Some might be
- registered at the `OpenType
- specification `__.
- Privately-defined axis tags must begin with an uppercase letter and
- use only uppercase letters or digits.
- """
- # name of the axis used in locations
- self.name = name
- """string. Name of the axis as it is used in the location dicts.
-
- MutatorMath + varLib.
- """
- # names for UI purposes, if this is not a standard axis,
- self.labelNames = labelNames or {}
- """dict. When defining a non-registered axis, it will be
- necessary to define user-facing readable names for the axis. Keyed by
- xml:lang code. Values are required to be ``unicode`` strings, even if
- they only contain ASCII characters.
- """
- self.hidden = hidden
- """bool. Whether this axis should be hidden in user interfaces.
- """
- self.map = map or []
- """list of input / output values that can describe a warp of user space
- to design space coordinates. If no map values are present, it is assumed
- user space is the same as design space, as in [(minimum, minimum),
- (maximum, maximum)].
-
- varLib.
- """
- self.axisOrdering = axisOrdering
- """STAT table field ``axisOrdering``.
-
- See: `OTSpec STAT Axis Record `_
-
- .. versionadded:: 5.0
- """
- self.axisLabels: List[AxisLabelDescriptor] = axisLabels or []
- """STAT table entries for Axis Value Tables format 1, 2, 3.
-
- See: `OTSpec STAT Axis Value Tables `_
-
- .. versionadded:: 5.0
- """
-
-
-class AxisDescriptor(AbstractAxisDescriptor):
- """Simple container for the axis data.
-
- Add more localisations?
-
- .. code:: python
-
- a1 = AxisDescriptor()
- a1.minimum = 1
- a1.maximum = 1000
- a1.default = 400
- a1.name = "weight"
- a1.tag = "wght"
- a1.labelNames['fa-IR'] = "قطر"
- a1.labelNames['en'] = "Wéíght"
- a1.map = [(1.0, 10.0), (400.0, 66.0), (1000.0, 990.0)]
- a1.axisOrdering = 1
- a1.axisLabels = [
- AxisLabelDescriptor(name="Regular", userValue=400, elidable=True)
- ]
- doc.addAxis(a1)
- """
-
- _attrs = [
- "tag",
- "name",
- "maximum",
- "minimum",
- "default",
- "map",
- "axisOrdering",
- "axisLabels",
- ]
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- minimum=None,
- default=None,
- maximum=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- super().__init__(
- tag=tag,
- name=name,
- labelNames=labelNames,
- hidden=hidden,
- map=map,
- axisOrdering=axisOrdering,
- axisLabels=axisLabels,
- )
- self.minimum = minimum
- """number. The minimum value for this axis in user space.
-
- MutatorMath + varLib.
- """
- self.maximum = maximum
- """number. The maximum value for this axis in user space.
-
- MutatorMath + varLib.
- """
- self.default = default
- """number. The default value for this axis, i.e. when a new location is
- created, this is the value this axis will get in user space.
-
- MutatorMath + varLib.
- """
-
- def serialize(self):
- # output to a dict, used in testing
- return dict(
- tag=self.tag,
- name=self.name,
- labelNames=self.labelNames,
- maximum=self.maximum,
- minimum=self.minimum,
- default=self.default,
- hidden=self.hidden,
- map=self.map,
- axisOrdering=self.axisOrdering,
- axisLabels=self.axisLabels,
- )
-
- def map_forward(self, v):
- """Maps value from axis mapping's input (user) to output (design)."""
- from fontTools.varLib.models import piecewiseLinearMap
-
- if not self.map:
- return v
- return piecewiseLinearMap(v, {k: v for k, v in self.map})
-
- def map_backward(self, v):
- """Maps value from axis mapping's output (design) to input (user)."""
- from fontTools.varLib.models import piecewiseLinearMap
-
- if isinstance(v, tuple):
- v = v[0]
- if not self.map:
- return v
- return piecewiseLinearMap(v, {v: k for k, v in self.map})
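-
- # Sketch of the user<->design mapping above, using the map from the class docstring (values illustrative):
- # axis.map = [(1.0, 10.0), (400.0, 66.0), (1000.0, 990.0)]
- # axis.map_forward(400.0)  -> 66.0   (user space -> design space)
- # axis.map_backward(66.0)  -> 400.0  (design space -> user space)
- # axis.map_forward(700.0)  -> 528.0  (piecewise-linear interpolation between the surrounding map entries)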
-
-
-class DiscreteAxisDescriptor(AbstractAxisDescriptor):
- """Container for discrete axis data.
-
- Use this for axes that do not interpolate. The main difference from a
- continuous axis is that a continuous axis has a ``minimum`` and ``maximum``,
- while a discrete axis has a list of ``values``.
-
- Example: an Italic axis with 2 stops, Roman and Italic, that are not
- compatible. The axis still allows the full font family to be bound together,
- which is useful for the STAT table; however, it can't become a variation
- axis in a VF.
-
- .. code:: python
-
- a2 = DiscreteAxisDescriptor()
- a2.values = [0, 1]
- a2.default = 0
- a2.name = "Italic"
- a2.tag = "ITAL"
- a2.labelNames['fr'] = "Italique"
- a2.map = [(0, 0), (1, -11)]
- a2.axisOrdering = 2
- a2.axisLabels = [
- AxisLabelDescriptor(name="Roman", userValue=0, elidable=True)
- ]
- doc.addAxis(a2)
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis"
- _attrs = ("tag", "name", "values", "default", "map", "axisOrdering", "axisLabels")
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- values=None,
- default=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- super().__init__(
- tag=tag,
- name=name,
- labelNames=labelNames,
- hidden=hidden,
- map=map,
- axisOrdering=axisOrdering,
- axisLabels=axisLabels,
- )
- self.default: float = default
- """The default value for this axis, i.e. when a new location is
- created, this is the value this axis will get in user space.
-
- However, this default value is less important than in continuous axes:
-
- - it doesn't define the "neutral" version of outlines from which
- deltas would apply, as this axis does not interpolate.
- - it doesn't provide the reference glyph set for the designspace, as
- fonts at each value can have different glyph sets.
- """
- self.values: List[float] = values or []
- """List of possible values for this axis. Contrary to continuous axes,
- only the values in this list can be taken by the axis, nothing in-between.
- """
-
- def map_forward(self, value):
- """Maps value from axis mapping's input to output.
-
- Returns value unchanged if no mapping entry is found.
-
- Note: for discrete axes, each value must have its mapping entry, if
- you intend that value to be mapped.
- """
- return next((v for k, v in self.map if k == value), value)
-
- def map_backward(self, value):
- """Maps value from axis mapping's output to input.
-
- Returns value unchanged if no mapping entry is found.
-
- Note: for discrete axes, each value must have its mapping entry, if
- you intend that value to be mapped.
- """
- if isinstance(value, tuple):
- value = value[0]
- return next((k for k, v in self.map if v == value), value)
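-
- # Sketch with the discrete map from the class docstring, a2.map = [(0, 0), (1, -11)] (values illustrative):
- # a2.map_forward(1)    -> -11
- # a2.map_backward(-11) -> 1
- # a2.map_forward(0.5)  -> 0.5 (no mapping entry, value returned unchanged)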
-
-
-class AxisLabelDescriptor(SimpleDescriptor):
- """Container for axis label data.
-
- Analogue of OpenType's STAT data for a single axis (formats 1, 2 and 3).
- All values are user values.
- See: `OTSpec STAT Axis value table, format 1, 2, 3 `_
-
- The STAT format of the Axis value depends on which fields are filled in,
- see :meth:`getFormat`
-
- .. versionadded:: 5.0
- """
-
- flavor = "label"
- _attrs = (
- "userMinimum",
- "userValue",
- "userMaximum",
- "name",
- "elidable",
- "olderSibling",
- "linkedUserValue",
- "labelNames",
- )
-
- def __init__(
- self,
- *,
- name,
- userValue,
- userMinimum=None,
- userMaximum=None,
- elidable=False,
- olderSibling=False,
- linkedUserValue=None,
- labelNames=None,
- ):
- self.userMinimum: Optional[float] = userMinimum
- """STAT field ``rangeMinValue`` (format 2)."""
- self.userValue: float = userValue
- """STAT field ``value`` (format 1, 3) or ``nominalValue`` (format 2)."""
- self.userMaximum: Optional[float] = userMaximum
- """STAT field ``rangeMaxValue`` (format 2)."""
- self.name: str = name
- """Label for this axis location, STAT field ``valueNameID``."""
- self.elidable: bool = elidable
- """STAT flag ``ELIDABLE_AXIS_VALUE_NAME``.
-
- See: `OTSpec STAT Flags `_
- """
- self.olderSibling: bool = olderSibling
- """STAT flag ``OLDER_SIBLING_FONT_ATTRIBUTE``.
-
- See: `OTSpec STAT Flags `_
- """
- self.linkedUserValue: Optional[float] = linkedUserValue
- """STAT field ``linkedValue`` (format 3)."""
- self.labelNames: MutableMapping[str, str] = labelNames or {}
- """User-facing translations of this location's label. Keyed by
- ``xml:lang`` code.
- """
-
- def getFormat(self) -> int:
- """Determine which format of STAT Axis value to use to encode this label.
-
- =========== ========= =========== =========== ===============
- STAT Format userValue userMinimum userMaximum linkedUserValue
- =========== ========= =========== =========== ===============
- 1 ✅ ❌ ❌ ❌
- 2 ✅ ✅ ✅ ❌
- 3 ✅ ❌ ❌ ✅
- =========== ========= =========== =========== ===============
- """
- if self.linkedUserValue is not None:
- return 3
- if self.userMinimum is not None or self.userMaximum is not None:
- return 2
- return 1
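-
- # Illustrative format selection (hypothetical labels):
- # AxisLabelDescriptor(name="Regular", userValue=400).getFormat()                                       -> 1
- # AxisLabelDescriptor(name="Condensed", userValue=80, userMinimum=50, userMaximum=90).getFormat()      -> 2
- # AxisLabelDescriptor(name="Bold", userValue=700, linkedUserValue=400).getFormat()                     -> 3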
-
- @property
- def defaultName(self) -> str:
- """Return the English name from :attr:`labelNames` or the :attr:`name`."""
- return self.labelNames.get("en") or self.name
-
-
-class LocationLabelDescriptor(SimpleDescriptor):
- """Container for location label data.
-
- Analogue of OpenType's STAT data for a free-floating location (format 4).
- All values are user values.
-
- See: `OTSpec STAT Axis value table, format 4 `_
-
- .. versionadded:: 5.0
- """
-
- flavor = "label"
- _attrs = ("name", "elidable", "olderSibling", "userLocation", "labelNames")
-
- def __init__(
- self,
- *,
- name,
- userLocation,
- elidable=False,
- olderSibling=False,
- labelNames=None,
- ):
- self.name: str = name
- """Label for this named location, STAT field ``valueNameID``."""
- self.userLocation: SimpleLocationDict = userLocation or {}
- """Location in user coordinates along each axis.
-
- If an axis is not mentioned, it is assumed to be at its default location.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullUserLocation`
- """
- self.elidable: bool = elidable
- """STAT flag ``ELIDABLE_AXIS_VALUE_NAME``.
-
- See: `OTSpec STAT Flags `_
- """
- self.olderSibling: bool = olderSibling
- """STAT flag ``OLDER_SIBLING_FONT_ATTRIBUTE``.
-
- See: `OTSpec STAT Flags `_
- """
- self.labelNames: Dict[str, str] = labelNames or {}
- """User-facing translations of this location's label. Keyed by
- xml:lang code.
- """
-
- @property
- def defaultName(self) -> str:
- """Return the English name from :attr:`labelNames` or the :attr:`name`."""
- return self.labelNames.get("en") or self.name
-
- def getFullUserLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict:
- """Get the complete user location of this label, by combining data
- from the explicit user location and default axis values.
-
- .. versionadded:: 5.0
- """
- return {
- axis.name: self.userLocation.get(axis.name, axis.default)
- for axis in doc.axes
- }
-
-
-class VariableFontDescriptor(SimpleDescriptor):
- """Container for variable fonts, sub-spaces of the Designspace.
-
- Use-cases:
-
- - From a single DesignSpace with discrete axes, define 1 variable font
- per value on the discrete axes. Before version 5, you would have needed
- 1 DesignSpace per such variable font, and a lot of data duplication.
- - From a big variable font with many axes, define subsets of that variable
- font that only include some axes and freeze other axes at a given location.
-
- .. versionadded:: 5.0
- """
-
- flavor = "variable-font"
- _attrs = ("filename", "axisSubsets", "lib")
-
- filename = posixpath_property("_filename")
-
- def __init__(self, *, name, filename=None, axisSubsets=None, lib=None):
- self.name: str = name
- """string, required. Name of this variable to identify it during the
- build process and from other parts of the document, and also as a
- filename in case the filename property is empty.
-
- VarLib.
- """
- self.filename: str = filename
- """string, optional. Relative path to the variable font file, **as it is
- in the document**. The file may or may not exist.
-
- If not specified, the :attr:`name` will be used as a basename for the file.
- """
- self.axisSubsets: List[
- Union[RangeAxisSubsetDescriptor, ValueAxisSubsetDescriptor]
- ] = (axisSubsets or [])
- """Axis subsets to include in this variable font.
-
- If an axis is not mentioned, assume that we only want the default
- location of that axis (same as a :class:`ValueAxisSubsetDescriptor`).
- """
- self.lib: MutableMapping[str, Any] = lib or {}
- """Custom data associated with this variable font."""
-
-
-class RangeAxisSubsetDescriptor(SimpleDescriptor):
- """Subset of a continuous axis to include in a variable font.
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis-subset"
- _attrs = ("name", "userMinimum", "userDefault", "userMaximum")
-
- def __init__(
- self, *, name, userMinimum=-math.inf, userDefault=None, userMaximum=math.inf
- ):
- self.name: str = name
- """Name of the :class:`AxisDescriptor` to subset."""
- self.userMinimum: float = userMinimum
- """New minimum value of the axis in the target variable font.
- If not specified, assume the same minimum value as the full axis.
- (default = ``-math.inf``)
- """
- self.userDefault: Optional[float] = userDefault
- """New default value of the axis in the target variable font.
- If not specified, assume the same default value as the full axis.
- (default = ``None``)
- """
- self.userMaximum: float = userMaximum
- """New maximum value of the axis in the target variable font.
- If not specified, assume the same maximum value as the full axis.
- (default = ``math.inf``)
- """
-
-
-class ValueAxisSubsetDescriptor(SimpleDescriptor):
- """Single value of a discrete or continuous axis to use in a variable font.
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis-subset"
- _attrs = ("name", "userValue")
-
- def __init__(self, *, name, userValue):
- self.name: str = name
- """Name of the :class:`AxisDescriptor` or :class:`DiscreteAxisDescriptor`
- to "snapshot" or "freeze".
- """
- self.userValue: float = userValue
- """Value in user coordinates at which to freeze the given axis."""
-
-
-class BaseDocWriter(object):
- _whiteSpace = " "
- axisDescriptorClass = AxisDescriptor
- discreteAxisDescriptorClass = DiscreteAxisDescriptor
- axisLabelDescriptorClass = AxisLabelDescriptor
- axisMappingDescriptorClass = AxisMappingDescriptor
- locationLabelDescriptorClass = LocationLabelDescriptor
- ruleDescriptorClass = RuleDescriptor
- sourceDescriptorClass = SourceDescriptor
- variableFontDescriptorClass = VariableFontDescriptor
- valueAxisSubsetDescriptorClass = ValueAxisSubsetDescriptor
- rangeAxisSubsetDescriptorClass = RangeAxisSubsetDescriptor
- instanceDescriptorClass = InstanceDescriptor
-
- @classmethod
- def getAxisDecriptor(cls):
- return cls.axisDescriptorClass()
-
- @classmethod
- def getAxisMappingDescriptor(cls):
- return cls.axisMappingDescriptorClass()
-
- @classmethod
- def getSourceDescriptor(cls):
- return cls.sourceDescriptorClass()
-
- @classmethod
- def getInstanceDescriptor(cls):
- return cls.instanceDescriptorClass()
-
- @classmethod
- def getRuleDescriptor(cls):
- return cls.ruleDescriptorClass()
-
- def __init__(self, documentPath, documentObject: DesignSpaceDocument):
- self.path = documentPath
- self.documentObject = documentObject
- self.effectiveFormatTuple = self._getEffectiveFormatTuple()
- self.root = ET.Element("designspace")
-
- def write(self, pretty=True, encoding="UTF-8", xml_declaration=True):
- self.root.attrib["format"] = ".".join(str(i) for i in self.effectiveFormatTuple)
-
- if (
- self.documentObject.axes
- or self.documentObject.axisMappings
- or self.documentObject.elidedFallbackName is not None
- ):
- axesElement = ET.Element("axes")
- if self.documentObject.elidedFallbackName is not None:
- axesElement.attrib[
- "elidedfallbackname"
- ] = self.documentObject.elidedFallbackName
- self.root.append(axesElement)
- for axisObject in self.documentObject.axes:
- self._addAxis(axisObject)
-
- if self.documentObject.axisMappings:
- mappingsElement = ET.Element("mappings")
- self.root.findall(".axes")[0].append(mappingsElement)
- for mappingObject in self.documentObject.axisMappings:
- self._addAxisMapping(mappingsElement, mappingObject)
-
- if self.documentObject.locationLabels:
- labelsElement = ET.Element("labels")
- for labelObject in self.documentObject.locationLabels:
- self._addLocationLabel(labelsElement, labelObject)
- self.root.append(labelsElement)
-
- if self.documentObject.rules:
- if getattr(self.documentObject, "rulesProcessingLast", False):
- attributes = {"processing": "last"}
- else:
- attributes = {}
- self.root.append(ET.Element("rules", attributes))
- for ruleObject in self.documentObject.rules:
- self._addRule(ruleObject)
-
- if self.documentObject.sources:
- self.root.append(ET.Element("sources"))
- for sourceObject in self.documentObject.sources:
- self._addSource(sourceObject)
-
- if self.documentObject.variableFonts:
- variableFontsElement = ET.Element("variable-fonts")
- for variableFont in self.documentObject.variableFonts:
- self._addVariableFont(variableFontsElement, variableFont)
- self.root.append(variableFontsElement)
-
- if self.documentObject.instances:
- self.root.append(ET.Element("instances"))
- for instanceObject in self.documentObject.instances:
- self._addInstance(instanceObject)
-
- if self.documentObject.lib:
- self._addLib(self.root, self.documentObject.lib, 2)
-
- tree = ET.ElementTree(self.root)
- tree.write(
- self.path,
- encoding=encoding,
- method="xml",
- xml_declaration=xml_declaration,
- pretty_print=pretty,
- )
-
- def _getEffectiveFormatTuple(self):
- """Try to use the version specified in the document, or a sufficiently
- recent version to be able to encode what the document contains.
- """
- minVersion = self.documentObject.formatTuple
- if (
- any(
- hasattr(axis, "values")
- or axis.axisOrdering is not None
- or axis.axisLabels
- for axis in self.documentObject.axes
- )
- or self.documentObject.locationLabels
- or any(source.localisedFamilyName for source in self.documentObject.sources)
- or self.documentObject.variableFonts
- or any(
- instance.locationLabel or instance.userLocation
- for instance in self.documentObject.instances
- )
- ):
- if minVersion < (5, 0):
- minVersion = (5, 0)
- if self.documentObject.axisMappings:
- if minVersion < (5, 1):
- minVersion = (5, 1)
- return minVersion
-
- def _makeLocationElement(self, locationObject, name=None):
- """Convert Location dict to a locationElement."""
- locElement = ET.Element("location")
- if name is not None:
- locElement.attrib["name"] = name
- validatedLocation = self.documentObject.newDefaultLocation()
- for axisName, axisValue in locationObject.items():
- if axisName in validatedLocation:
- # only accept values we know
- validatedLocation[axisName] = axisValue
- for dimensionName, dimensionValue in validatedLocation.items():
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = dimensionName
- if isinstance(dimensionValue, tuple):
- dimElement.attrib["xvalue"] = self.intOrFloat(dimensionValue[0])
- dimElement.attrib["yvalue"] = self.intOrFloat(dimensionValue[1])
- else:
- dimElement.attrib["xvalue"] = self.intOrFloat(dimensionValue)
- locElement.append(dimElement)
- return locElement, validatedLocation
-
- def intOrFloat(self, num):
- if int(num) == num:
- return "%d" % num
- return ("%f" % num).rstrip("0").rstrip(".")
-
- def _addRule(self, ruleObject):
- # conditions without minimum or maximum values are skipped; empty conditionsets and rules are not written.
- ruleElement = ET.Element("rule")
- if ruleObject.name is not None:
- ruleElement.attrib["name"] = ruleObject.name
- for conditions in ruleObject.conditionSets:
- conditionsetElement = ET.Element("conditionset")
- for cond in conditions:
- if cond.get("minimum") is None and cond.get("maximum") is None:
- # neither is defined, don't add this condition
- continue
- conditionElement = ET.Element("condition")
- conditionElement.attrib["name"] = cond.get("name")
- if cond.get("minimum") is not None:
- conditionElement.attrib["minimum"] = self.intOrFloat(
- cond.get("minimum")
- )
- if cond.get("maximum") is not None:
- conditionElement.attrib["maximum"] = self.intOrFloat(
- cond.get("maximum")
- )
- conditionsetElement.append(conditionElement)
- if len(conditionsetElement):
- ruleElement.append(conditionsetElement)
- for sub in ruleObject.subs:
- subElement = ET.Element("sub")
- subElement.attrib["name"] = sub[0]
- subElement.attrib["with"] = sub[1]
- ruleElement.append(subElement)
- if len(ruleElement):
- self.root.findall(".rules")[0].append(ruleElement)
-
- def _addAxis(self, axisObject):
- axisElement = ET.Element("axis")
- axisElement.attrib["tag"] = axisObject.tag
- axisElement.attrib["name"] = axisObject.name
- self._addLabelNames(axisElement, axisObject.labelNames)
- if axisObject.map:
- for inputValue, outputValue in axisObject.map:
- mapElement = ET.Element("map")
- mapElement.attrib["input"] = self.intOrFloat(inputValue)
- mapElement.attrib["output"] = self.intOrFloat(outputValue)
- axisElement.append(mapElement)
- if axisObject.axisOrdering or axisObject.axisLabels:
- labelsElement = ET.Element("labels")
- if axisObject.axisOrdering is not None:
- labelsElement.attrib["ordering"] = str(axisObject.axisOrdering)
- for label in axisObject.axisLabels:
- self._addAxisLabel(labelsElement, label)
- axisElement.append(labelsElement)
- if hasattr(axisObject, "minimum"):
- axisElement.attrib["minimum"] = self.intOrFloat(axisObject.minimum)
- axisElement.attrib["maximum"] = self.intOrFloat(axisObject.maximum)
- elif hasattr(axisObject, "values"):
- axisElement.attrib["values"] = " ".join(
- self.intOrFloat(v) for v in axisObject.values
- )
- axisElement.attrib["default"] = self.intOrFloat(axisObject.default)
- if axisObject.hidden:
- axisElement.attrib["hidden"] = "1"
- self.root.findall(".axes")[0].append(axisElement)
-
- def _addAxisMapping(self, mappingsElement, mappingObject):
- mappingElement = ET.Element("mapping")
- for what in ("inputLocation", "outputLocation"):
- whatObject = getattr(mappingObject, what, None)
- if whatObject is None:
- continue
- whatElement = ET.Element(what[:-8])
- mappingElement.append(whatElement)
-
- for name, value in whatObject.items():
- dimensionElement = ET.Element("dimension")
- dimensionElement.attrib["name"] = name
- dimensionElement.attrib["xvalue"] = self.intOrFloat(value)
- whatElement.append(dimensionElement)
-
- mappingsElement.append(mappingElement)
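-
- # Illustrative sketch (not from the original source): _addAxisMapping strips
- # the "Location" suffix from the attribute name ("inputLocation"[:-8] ->
- # "input"), so a mapping serializes roughly as
- #   <mapping>
- #     <input><dimension name="Weight" xvalue="400"/></input>
- #     <output><dimension name="Weight" xvalue="350"/></output>
- #   </mapping>
- # The axis name and values here are made up for the example.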
-
- def _addAxisLabel(
- self, axisElement: ET.Element, label: AxisLabelDescriptor
- ) -> None:
- labelElement = ET.Element("label")
- labelElement.attrib["uservalue"] = self.intOrFloat(label.userValue)
- if label.userMinimum is not None:
- labelElement.attrib["userminimum"] = self.intOrFloat(label.userMinimum)
- if label.userMaximum is not None:
- labelElement.attrib["usermaximum"] = self.intOrFloat(label.userMaximum)
- labelElement.attrib["name"] = label.name
- if label.elidable:
- labelElement.attrib["elidable"] = "true"
- if label.olderSibling:
- labelElement.attrib["oldersibling"] = "true"
- if label.linkedUserValue is not None:
- labelElement.attrib["linkeduservalue"] = self.intOrFloat(
- label.linkedUserValue
- )
- self._addLabelNames(labelElement, label.labelNames)
- axisElement.append(labelElement)
-
- def _addLabelNames(self, parentElement, labelNames):
- for languageCode, labelName in sorted(labelNames.items()):
- languageElement = ET.Element("labelname")
- languageElement.attrib[XML_LANG] = languageCode
- languageElement.text = labelName
- parentElement.append(languageElement)
-
- def _addLocationLabel(
- self, parentElement: ET.Element, label: LocationLabelDescriptor
- ) -> None:
- labelElement = ET.Element("label")
- labelElement.attrib["name"] = label.name
- if label.elidable:
- labelElement.attrib["elidable"] = "true"
- if label.olderSibling:
- labelElement.attrib["oldersibling"] = "true"
- self._addLabelNames(labelElement, label.labelNames)
- self._addLocationElement(labelElement, userLocation=label.userLocation)
- parentElement.append(labelElement)
-
- def _addLocationElement(
- self,
- parentElement,
- *,
- designLocation: AnisotropicLocationDict = None,
- userLocation: SimpleLocationDict = None,
- ):
- locElement = ET.Element("location")
- for axis in self.documentObject.axes:
- if designLocation is not None and axis.name in designLocation:
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = axis.name
- value = designLocation[axis.name]
- if isinstance(value, tuple):
- dimElement.attrib["xvalue"] = self.intOrFloat(value[0])
- dimElement.attrib["yvalue"] = self.intOrFloat(value[1])
- else:
- dimElement.attrib["xvalue"] = self.intOrFloat(value)
- locElement.append(dimElement)
- elif userLocation is not None and axis.name in userLocation:
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = axis.name
- value = userLocation[axis.name]
- dimElement.attrib["uservalue"] = self.intOrFloat(value)
- locElement.append(dimElement)
- if len(locElement) > 0:
- parentElement.append(locElement)
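-
- # Illustrative sketch (not from the original source): _addLocationElement
- # writes design coordinates as xvalue (plus yvalue for anisotropic tuples)
- # and user coordinates as uservalue, e.g.
- #   <location>
- #     <dimension name="Weight" xvalue="80" yvalue="75"/>
- #     <dimension name="Italic" uservalue="1"/>
- #   </location>
- # Names and numbers are invented; only axes present in the given location
- # dicts are emitted, in document axis order, with design values taking
- # precedence over user values.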
-
- def _addInstance(self, instanceObject):
- instanceElement = ET.Element("instance")
- if instanceObject.name is not None:
- instanceElement.attrib["name"] = instanceObject.name
- if instanceObject.locationLabel is not None:
- instanceElement.attrib["location"] = instanceObject.locationLabel
- if instanceObject.familyName is not None:
- instanceElement.attrib["familyname"] = instanceObject.familyName
- if instanceObject.styleName is not None:
- instanceElement.attrib["stylename"] = instanceObject.styleName
- # add localisations
- if instanceObject.localisedStyleName:
- languageCodes = list(instanceObject.localisedStyleName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedStyleNameElement = ET.Element("stylename")
- localisedStyleNameElement.attrib[XML_LANG] = code
- localisedStyleNameElement.text = instanceObject.getStyleName(code)
- instanceElement.append(localisedStyleNameElement)
- if instanceObject.localisedFamilyName:
- languageCodes = list(instanceObject.localisedFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedFamilyNameElement = ET.Element("familyname")
- localisedFamilyNameElement.attrib[XML_LANG] = code
- localisedFamilyNameElement.text = instanceObject.getFamilyName(code)
- instanceElement.append(localisedFamilyNameElement)
- if instanceObject.localisedStyleMapStyleName:
- languageCodes = list(instanceObject.localisedStyleMapStyleName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue
- localisedStyleMapStyleNameElement = ET.Element("stylemapstylename")
- localisedStyleMapStyleNameElement.attrib[XML_LANG] = code
- localisedStyleMapStyleNameElement.text = (
- instanceObject.getStyleMapStyleName(code)
- )
- instanceElement.append(localisedStyleMapStyleNameElement)
- if instanceObject.localisedStyleMapFamilyName:
- languageCodes = list(instanceObject.localisedStyleMapFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue
- localisedStyleMapFamilyNameElement = ET.Element("stylemapfamilyname")
- localisedStyleMapFamilyNameElement.attrib[XML_LANG] = code
- localisedStyleMapFamilyNameElement.text = (
- instanceObject.getStyleMapFamilyName(code)
- )
- instanceElement.append(localisedStyleMapFamilyNameElement)
-
- if self.effectiveFormatTuple >= (5, 0):
- if instanceObject.locationLabel is None:
- self._addLocationElement(
- instanceElement,
- designLocation=instanceObject.designLocation,
- userLocation=instanceObject.userLocation,
- )
- else:
- # Pre-version 5.0 code was validating and filling in the location
- # dict while writing it out, as preserved below.
- if instanceObject.location is not None:
- locationElement, instanceObject.location = self._makeLocationElement(
- instanceObject.location
- )
- instanceElement.append(locationElement)
- if instanceObject.filename is not None:
- instanceElement.attrib["filename"] = instanceObject.filename
- if instanceObject.postScriptFontName is not None:
- instanceElement.attrib[
- "postscriptfontname"
- ] = instanceObject.postScriptFontName
- if instanceObject.styleMapFamilyName is not None:
- instanceElement.attrib[
- "stylemapfamilyname"
- ] = instanceObject.styleMapFamilyName
- if instanceObject.styleMapStyleName is not None:
- instanceElement.attrib[
- "stylemapstylename"
- ] = instanceObject.styleMapStyleName
- if self.effectiveFormatTuple < (5, 0):
- # Deprecated members as of version 5.0
- if instanceObject.glyphs:
- if instanceElement.findall(".glyphs") == []:
- glyphsElement = ET.Element("glyphs")
- instanceElement.append(glyphsElement)
- glyphsElement = instanceElement.findall(".glyphs")[0]
- for glyphName, data in sorted(instanceObject.glyphs.items()):
- glyphElement = self._writeGlyphElement(
- instanceElement, instanceObject, glyphName, data
- )
- glyphsElement.append(glyphElement)
- if instanceObject.kerning:
- kerningElement = ET.Element("kerning")
- instanceElement.append(kerningElement)
- if instanceObject.info:
- infoElement = ET.Element("info")
- instanceElement.append(infoElement)
- self._addLib(instanceElement, instanceObject.lib, 4)
- self.root.findall(".instances")[0].append(instanceElement)
-
- def _addSource(self, sourceObject):
- sourceElement = ET.Element("source")
- if sourceObject.filename is not None:
- sourceElement.attrib["filename"] = sourceObject.filename
- if sourceObject.name is not None:
- if sourceObject.name.find("temp_master") != 0:
- # do not save temporary source names
- sourceElement.attrib["name"] = sourceObject.name
- if sourceObject.familyName is not None:
- sourceElement.attrib["familyname"] = sourceObject.familyName
- if sourceObject.styleName is not None:
- sourceElement.attrib["stylename"] = sourceObject.styleName
- if sourceObject.layerName is not None:
- sourceElement.attrib["layer"] = sourceObject.layerName
- if sourceObject.localisedFamilyName:
- languageCodes = list(sourceObject.localisedFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedFamilyNameElement = ET.Element("familyname")
- localisedFamilyNameElement.attrib[XML_LANG] = code
- localisedFamilyNameElement.text = sourceObject.getFamilyName(code)
- sourceElement.append(localisedFamilyNameElement)
- if sourceObject.copyLib:
- libElement = ET.Element("lib")
- libElement.attrib["copy"] = "1"
- sourceElement.append(libElement)
- if sourceObject.copyGroups:
- groupsElement = ET.Element("groups")
- groupsElement.attrib["copy"] = "1"
- sourceElement.append(groupsElement)
- if sourceObject.copyFeatures:
- featuresElement = ET.Element("features")
- featuresElement.attrib["copy"] = "1"
- sourceElement.append(featuresElement)
- if sourceObject.copyInfo or sourceObject.muteInfo:
- infoElement = ET.Element("info")
- if sourceObject.copyInfo:
- infoElement.attrib["copy"] = "1"
- if sourceObject.muteInfo:
- infoElement.attrib["mute"] = "1"
- sourceElement.append(infoElement)
- if sourceObject.muteKerning:
- kerningElement = ET.Element("kerning")
- kerningElement.attrib["mute"] = "1"
- sourceElement.append(kerningElement)
- if sourceObject.mutedGlyphNames:
- for name in sourceObject.mutedGlyphNames:
- glyphElement = ET.Element("glyph")
- glyphElement.attrib["name"] = name
- glyphElement.attrib["mute"] = "1"
- sourceElement.append(glyphElement)
- if self.effectiveFormatTuple >= (5, 0):
- self._addLocationElement(
- sourceElement, designLocation=sourceObject.location
- )
- else:
- # Pre-version 5.0 code was validating and filling in the location
- # dict while writing it out, as preserved below.
- locationElement, sourceObject.location = self._makeLocationElement(
- sourceObject.location
- )
- sourceElement.append(locationElement)
- self.root.findall(".sources")[0].append(sourceElement)
-
- def _addVariableFont(
- self, parentElement: ET.Element, vf: VariableFontDescriptor
- ) -> None:
- vfElement = ET.Element("variable-font")
- vfElement.attrib["name"] = vf.name
- if vf.filename is not None:
- vfElement.attrib["filename"] = vf.filename
- if vf.axisSubsets:
- subsetsElement = ET.Element("axis-subsets")
- for subset in vf.axisSubsets:
- subsetElement = ET.Element("axis-subset")
- subsetElement.attrib["name"] = subset.name
- # Mypy doesn't support narrowing union types via hasattr()
- # https://mypy.readthedocs.io/en/stable/type_narrowing.html
- # TODO(Python 3.10): use TypeGuard
- if hasattr(subset, "userMinimum"):
- subset = cast(RangeAxisSubsetDescriptor, subset)
- if subset.userMinimum != -math.inf:
- subsetElement.attrib["userminimum"] = self.intOrFloat(
- subset.userMinimum
- )
- if subset.userMaximum != math.inf:
- subsetElement.attrib["usermaximum"] = self.intOrFloat(
- subset.userMaximum
- )
- if subset.userDefault is not None:
- subsetElement.attrib["userdefault"] = self.intOrFloat(
- subset.userDefault
- )
- elif hasattr(subset, "userValue"):
- subset = cast(ValueAxisSubsetDescriptor, subset)
- subsetElement.attrib["uservalue"] = self.intOrFloat(
- subset.userValue
- )
- subsetsElement.append(subsetElement)
- vfElement.append(subsetsElement)
- self._addLib(vfElement, vf.lib, 4)
- parentElement.append(vfElement)
-
- def _addLib(self, parentElement: ET.Element, data: Any, indent_level: int) -> None:
- if not data:
- return
- libElement = ET.Element("lib")
- libElement.append(plistlib.totree(data, indent_level=indent_level))
- parentElement.append(libElement)
-
- def _writeGlyphElement(self, instanceElement, instanceObject, glyphName, data):
- glyphElement = ET.Element("glyph")
- if data.get("mute"):
- glyphElement.attrib["mute"] = "1"
- if data.get("unicodes") is not None:
- glyphElement.attrib["unicode"] = " ".join(
- [hex(u) for u in data.get("unicodes")]
- )
- if data.get("instanceLocation") is not None:
- locationElement, data["instanceLocation"] = self._makeLocationElement(
- data.get("instanceLocation")
- )
- glyphElement.append(locationElement)
- if glyphName is not None:
- glyphElement.attrib["name"] = glyphName
- if data.get("note") is not None:
- noteElement = ET.Element("note")
- noteElement.text = data.get("note")
- glyphElement.append(noteElement)
- if data.get("masters") is not None:
- mastersElement = ET.Element("masters")
- for m in data.get("masters"):
- masterElement = ET.Element("master")
- if m.get("glyphName") is not None:
- masterElement.attrib["glyphname"] = m.get("glyphName")
- if m.get("font") is not None:
- masterElement.attrib["source"] = m.get("font")
- if m.get("location") is not None:
- locationElement, m["location"] = self._makeLocationElement(
- m.get("location")
- )
- masterElement.append(locationElement)
- mastersElement.append(masterElement)
- glyphElement.append(mastersElement)
- return glyphElement
-
-
-class BaseDocReader(LogMixin):
- axisDescriptorClass = AxisDescriptor
- discreteAxisDescriptorClass = DiscreteAxisDescriptor
- axisLabelDescriptorClass = AxisLabelDescriptor
- axisMappingDescriptorClass = AxisMappingDescriptor
- locationLabelDescriptorClass = LocationLabelDescriptor
- ruleDescriptorClass = RuleDescriptor
- sourceDescriptorClass = SourceDescriptor
- variableFontsDescriptorClass = VariableFontDescriptor
- valueAxisSubsetDescriptorClass = ValueAxisSubsetDescriptor
- rangeAxisSubsetDescriptorClass = RangeAxisSubsetDescriptor
- instanceDescriptorClass = InstanceDescriptor
-
- def __init__(self, documentPath, documentObject):
- self.path = documentPath
- self.documentObject = documentObject
- tree = ET.parse(self.path)
- self.root = tree.getroot()
- self.documentObject.formatVersion = self.root.attrib.get("format", "3.0")
- self._axes = []
- self.rules = []
- self.sources = []
- self.instances = []
- self.axisDefaults = {}
- self._strictAxisNames = True
-
- @classmethod
- def fromstring(cls, string, documentObject):
- f = BytesIO(tobytes(string, encoding="utf-8"))
- self = cls(f, documentObject)
- self.path = None
- return self
-
- def read(self):
- self.readAxes()
- self.readLabels()
- self.readRules()
- self.readVariableFonts()
- self.readSources()
- self.readInstances()
- self.readLib()
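-
- # Illustrative usage sketch (assumes the DesignSpaceDocument class defined
- # elsewhere in this module; the file name is hypothetical):
- #   doc = DesignSpaceDocument()
- #   reader = BaseDocReader("MyFamily.designspace", doc)
- #   reader.read()  # populates doc.axes, doc.rules, doc.sources, ...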
-
- def readRules(self):
- # we also need to read any conditions that are outside of a condition set.
- rules = []
- rulesElement = self.root.find(".rules")
- if rulesElement is not None:
- processingValue = rulesElement.attrib.get("processing", "first")
- if processingValue not in {"first", "last"}:
- raise DesignSpaceDocumentError(
- " processing attribute value is not valid: %r, "
- "expected 'first' or 'last'" % processingValue
- )
- self.documentObject.rulesProcessingLast = processingValue == "last"
- for ruleElement in self.root.findall(".rules/rule"):
- ruleObject = self.ruleDescriptorClass()
- ruleName = ruleObject.name = ruleElement.attrib.get("name")
- # read any stray conditions outside a condition set
- externalConditions = self._readConditionElements(
- ruleElement,
- ruleName,
- )
- if externalConditions:
- ruleObject.conditionSets.append(externalConditions)
- self.log.info(
- "Found stray rule conditions outside a conditionset. "
- "Wrapped them in a new conditionset."
- )
- # read the conditionsets
- for conditionSetElement in ruleElement.findall(".conditionset"):
- conditionSet = self._readConditionElements(
- conditionSetElement,
- ruleName,
- )
- if conditionSet is not None:
- ruleObject.conditionSets.append(conditionSet)
- for subElement in ruleElement.findall(".sub"):
- a = subElement.attrib["name"]
- b = subElement.attrib["with"]
- ruleObject.subs.append((a, b))
- rules.append(ruleObject)
- self.documentObject.rules = rules
-
- def _readConditionElements(self, parentElement, ruleName=None):
- cds = []
- for conditionElement in parentElement.findall(".condition"):
- cd = {}
- cdMin = conditionElement.attrib.get("minimum")
- if cdMin is not None:
- cd["minimum"] = float(cdMin)
- else:
- # will allow these to be None, assume axis.minimum
- cd["minimum"] = None
- cdMax = conditionElement.attrib.get("maximum")
- if cdMax is not None:
- cd["maximum"] = float(cdMax)
- else:
- # will allow these to be None, assume axis.maximum
- cd["maximum"] = None
- cd["name"] = conditionElement.attrib.get("name")
- # sanity check: every condition must define at least a minimum or a maximum
- if cd.get("minimum") is None and cd.get("maximum") is None:
- raise DesignSpaceDocumentError(
- "condition missing required minimum or maximum in rule"
- + (" '%s'" % ruleName if ruleName is not None else "")
- )
- cds.append(cd)
- return cds
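-
- # Illustrative sketch (not from the original source): a condition element
- # such as <condition name="Weight" minimum="600"/> is read by
- # _readConditionElements above as
- #   {"name": "Weight", "minimum": 600.0, "maximum": None}
- # while an element with neither minimum nor maximum raises
- # DesignSpaceDocumentError.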
-
- def readAxes(self):
- # read the axes elements, including the warp map.
- axesElement = self.root.find(".axes")
- if axesElement is not None and "elidedfallbackname" in axesElement.attrib:
- self.documentObject.elidedFallbackName = axesElement.attrib[
- "elidedfallbackname"
- ]
- axisElements = self.root.findall(".axes/axis")
- if not axisElements:
- return
- for axisElement in axisElements:
- if (
- self.documentObject.formatTuple >= (5, 0)
- and "values" in axisElement.attrib
- ):
- axisObject = self.discreteAxisDescriptorClass()
- axisObject.values = [
- float(s) for s in axisElement.attrib["values"].split(" ")
- ]
- else:
- axisObject = self.axisDescriptorClass()
- axisObject.minimum = float(axisElement.attrib.get("minimum"))
- axisObject.maximum = float(axisElement.attrib.get("maximum"))
- axisObject.default = float(axisElement.attrib.get("default"))
- axisObject.name = axisElement.attrib.get("name")
- if axisElement.attrib.get("hidden", False):
- axisObject.hidden = True
- axisObject.tag = axisElement.attrib.get("tag")
- for mapElement in axisElement.findall("map"):
- a = float(mapElement.attrib["input"])
- b = float(mapElement.attrib["output"])
- axisObject.map.append((a, b))
- for labelNameElement in axisElement.findall("labelname"):
- # Note: elementtree reads the "xml:lang" attribute name as
- # '{http://www.w3.org/XML/1998/namespace}lang'
- for key, lang in labelNameElement.items():
- if key == XML_LANG:
- axisObject.labelNames[lang] = tostr(labelNameElement.text)
- labelElement = axisElement.find(".labels")
- if labelElement is not None:
- if "ordering" in labelElement.attrib:
- axisObject.axisOrdering = int(labelElement.attrib["ordering"])
- for label in labelElement.findall(".label"):
- axisObject.axisLabels.append(self.readAxisLabel(label))
- self.documentObject.axes.append(axisObject)
- self.axisDefaults[axisObject.name] = axisObject.default
-
- mappingsElement = self.root.find(".axes/mappings")
- self.documentObject.axisMappings = []
- if mappingsElement is not None:
- for mappingElement in mappingsElement.findall("mapping"):
- inputElement = mappingElement.find("input")
- outputElement = mappingElement.find("output")
- inputLoc = {}
- outputLoc = {}
- for dimElement in inputElement.findall(".dimension"):
- name = dimElement.attrib["name"]
- value = float(dimElement.attrib["xvalue"])
- inputLoc[name] = value
- for dimElement in outputElement.findall(".dimension"):
- name = dimElement.attrib["name"]
- value = float(dimElement.attrib["xvalue"])
- outputLoc[name] = value
- axisMappingObject = self.axisMappingDescriptorClass(
- inputLocation=inputLoc, outputLocation=outputLoc
- )
- self.documentObject.axisMappings.append(axisMappingObject)
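-
- # Illustrative sketch (not from the original source): readAxes turns, say,
- #   <axis tag="wght" name="Weight" minimum="100" maximum="900" default="400">
- #     <map input="400" output="80"/>
- #   </axis>
- # into an AxisDescriptor with minimum=100.0, maximum=900.0, default=400.0
- # and map=[(400.0, 80.0)], while a format-5 axis carrying a values=""
- # attribute becomes a DiscreteAxisDescriptor instead.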
-
- def readAxisLabel(self, element: ET.Element):
- xml_attrs = {
- "userminimum",
- "uservalue",
- "usermaximum",
- "name",
- "elidable",
- "oldersibling",
- "linkeduservalue",
- }
- unknown_attrs = set(element.attrib) - xml_attrs
- if unknown_attrs:
- raise DesignSpaceDocumentError(
- f"label element contains unknown attributes: {', '.join(unknown_attrs)}"
- )
-
- name = element.get("name")
- if name is None:
- raise DesignSpaceDocumentError("label element must have a name attribute.")
- valueStr = element.get("uservalue")
- if valueStr is None:
- raise DesignSpaceDocumentError(
- "label element must have a uservalue attribute."
- )
- value = float(valueStr)
- minimumStr = element.get("userminimum")
- minimum = float(minimumStr) if minimumStr is not None else None
- maximumStr = element.get("usermaximum")
- maximum = float(maximumStr) if maximumStr is not None else None
- linkedValueStr = element.get("linkeduservalue")
- linkedValue = float(linkedValueStr) if linkedValueStr is not None else None
- elidable = element.get("elidable") == "true"
- olderSibling = element.get("oldersibling") == "true"
- labelNames = {
- lang: label_name.text or ""
- for label_name in element.findall("labelname")
- for attr, lang in label_name.items()
- if attr == XML_LANG
- # Note: elementtree reads the "xml:lang" attribute name as
- # '{http://www.w3.org/XML/1998/namespace}lang'
- }
- return self.axisLabelDescriptorClass(
- name=name,
- userValue=value,
- userMinimum=minimum,
- userMaximum=maximum,
- elidable=elidable,
- olderSibling=olderSibling,
- linkedUserValue=linkedValue,
- labelNames=labelNames,
- )
-
- def readLabels(self):
- if self.documentObject.formatTuple < (5, 0):
- return
-
- xml_attrs = {"name", "elidable", "oldersibling"}
- for labelElement in self.root.findall(".labels/label"):
- unknown_attrs = set(labelElement.attrib) - xml_attrs
- if unknown_attrs:
- raise DesignSpaceDocumentError(
- f"Label element contains unknown attributes: {', '.join(unknown_attrs)}"
- )
-
- name = labelElement.get("name")
- if name is None:
- raise DesignSpaceDocumentError(
- "label element must have a name attribute."
- )
- designLocation, userLocation = self.locationFromElement(labelElement)
- if designLocation:
- raise DesignSpaceDocumentError(
- f'